id | title | abstract | authors | published_date | link | markdown |
---|---|---|---|---|---|---|
2310.16770 | Role of cilia activity and surrounding viscous fluid on properties of
metachronal waves | Large groups of active cilia collectively beat in a fluid medium as
metachronal waves, essential for some microorganisms' motility and for flow
generation in mucociliary clearance. Several models can predict the emergence
of metachronal waves, but what controls the properties of metachronal waves is
still unclear. Here, we investigate numerically a simple model for cilia in the
presence of noise on regular lattices in one- and two-dimensions. We
characterize the wave using spatial correlation and the frequency of collective
beating. Our results clearly show that the viscosity of the fluid medium does
not affect the wavelength; the activity of the cilia does. These numerical
results are supported by a dimensional analysis, which is expected to be robust
against the model for active force generation, unless the surrounding fluid
influences the cilia activity. Interestingly, enhancement of cilia activity
increases the wavelength and decreases the beating frequency, keeping the wave
velocity almost unchanged. These results might have significance in
understanding paramecium locomotion and mucociliary clearance diseases. | Supravat Dey, Gladys Massiera, Estelle Pitard | 2023-10-25T17:02:50Z | http://arxiv.org/abs/2310.16770v2 | # Role of cilia activity and surrounding viscous fluid on properties of metachronal waves
###### Abstract
Large groups of active cilia collectively beat in a fluid medium as metachronal waves, essential for some microorganisms' motility and for flow generation in mucociliary clearance. Several models can predict the emergence of metachronal waves, but what controls the properties of metachronal waves is still unclear. Here, we investigate numerically a simple model for cilia in the presence of noise on regular lattices in one and two dimensions. We characterize the wave using spatial correlation and the frequency of collective beating. Our results clearly show that the viscosity of the fluid medium does not affect the wavelength; the activity of the cilia does. These numerical results are supported by a dimensional analysis, which is expected to be robust against the model for active force generation, unless the surrounding fluid influences the cilia activity. Interestingly, enhancement of cilia activity increases the wavelength and decreases the beating frequency, keeping the wave velocity almost unchanged. These results might have significance in understanding paramecium locomotion and mucociliary clearance diseases.
The emergence of phase-travelling waves in dense arrays of active beating cilia, known as metachronal waves, is a complex multiscale physics problem [1; 2; 3; 4; 5; 6; 7; 8] and is a nonequilibrium one because of the internal activity-driven movements of cilia [4]. The active beating of each cilium arises from the sliding of microtubules by thousands of molecular motors, and the subsequent interaction with the surrounding fluid medium. The coupling of a large number of these oscillators leads to synchronized dynamics over larger length scales. Illustrations are abundant in nature, with ciliary living systems differing by cilia assembly geometry, cilia activity, or properties of the surrounding fluid. In respiratory tissues, the continuous cleaning of our lungs is provided by cilia beating waves that generate mucus flow [9; 10]. For certain microorganisms such as paramecium, the synchronized beating of cilia helps in their efficient locomotion [11]. The complexity of the cilia's active beating patterns and of their interactions with each other through a complex environment makes it difficult to predict the emergent wave properties, despite recent theoretical and experimental advancements.
Models of cilia arrays [12; 13; 14; 15; 16] aim to identify the conditions required for such a coordinated state and to comprehend the physical parameters that govern the properties of the metachronal wave and the subsequent mucus transport. Several models have been proposed [12; 13; 14; 17], wherein the coupling is primarily described as a viscous hydrodynamic coupling. In these models, different types of active forces, from simple to complex, successfully generate continuous beating of a cilium. Numerical simulations make it possible to investigate the intricate structure of cilia by considering their beating as a filament bending wave [18; 19; 13]. Another approach is to model cilia by actuated micron-sized beads called rotors [20; 21; 22; 23] or rowers [24; 25; 12]. For a large group of cilia arranged in arrays, it has been shown that hydrodynamic coupling can lead to metachronal waves for various models of cilia [13; 15; 16].
Recently, the influence of several physical parameters, such as noise [25; 26] and disorder in the arrangement and orientation of cilia, on these collective behaviors has been investigated both numerically [25; 27; 28] and experimentally [28; 29], showing that spatial heterogeneity favors transport. Other important physical quantities that may play a role in the coordination are the activity and the dissipation, which have opposite impacts on the metachronal waves emerging from cilia beating. Experimentally, a decrease in beating frequency with viscosity was found [30; 31], whereas the beating amplitude and the metachronal wavelength were found to be constant up to \(\approx 50\) times the viscosity of water [32; 33]. Theoretically, the mutual influence of activity and dissipation has hardly been explored [34]. Here, our fundamental inquiry pertains to the interplay between cilia activity and the fluid medium and its impact on the overall properties of metachronal waves.
To investigate this, we study the metachronal waves in the rower model of cilia in a viscous fluid on one- and two-dimensional regular lattices in the presence of thermal noise. In the rower model [24; 25; 12], the complex active beating of a cilium is simplified into the back-and-forth motion, along an axis, of a micron-sized bead immersed in a viscous fluid, thus ensuring a low Reynolds number regime. Such an oscillating motion is driven by two harmonic potential branches, corresponding to the stroke and anti-stroke of the cilia beating, with a geometric
switching mechanism. The bead moves downhill in a potential until it reaches one of the two terminal positions, at which switching to the second branch occurs (Fig. 1A, B). This switching is like pumping energy into the system to lift the bead onto the upper side of the other potential branch at the terminal position. At a given time, the bead can be found in one of these two states, the stroke and anti-stroke of the cilia beating, represented by a discrete variable \(\sigma=\pm 1\). The driving force for a bead displacement \(y\) for a given \(\sigma\) can be written as
\[f(y,\sigma)\;=-\frac{dV(y,\sigma)}{dy}\;=-k(y-\sigma\mu/2), \tag{1}\]
where \(k\) is the force constant associated with the harmonic potentials, \(\mu\) is the distance between the minima of the two potentials, and \(\mathcal{A}\) is the beating amplitude. The energy supplied during each downhill motion in a harmonic potential, \(k\mathcal{A}^{2}/2\), and the pumping energy supplied during each switch, \(k\mu\mathcal{A}\), keep the bead oscillating in the dissipative medium. Therefore, for a given \(\mu\), the 'activity' of the bead depends on the values of \(k\) and \(\mathcal{A}\). Because of its simplicity and ability to capture the two-stroke beating of cilia, the rower model has become a method of choice for theoretical and experimental studies of synchronization in ciliary systems [16; 29].
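To make the single-rower dynamics concrete, the following minimal sketch (our own illustration, not the authors' code; parameter values are taken from the simulation details given below) integrates Eq. 1 for one bead with the geometric switch at \(y=\pm\mathcal{A}\) and compares the measured beating frequency with the closed-form single-rower frequency \(\nu_{0}\) quoted later in the text.

```python
import numpy as np

# Illustrative sketch (ours) of a single rower: overdamped downhill motion in
# one harmonic branch of Eq. (1), with a geometric switch at y = +/- A.
k, mu, A = 2.6, 2.0, 0.8        # pN/um, um, um
eta, a = 6e-3, 1.5              # Pa.s (= pN.s/um^2), bead radius in um
gamma = 6 * np.pi * eta * a     # pN.s/um, viscous drag coefficient
tau_d = gamma / k               # s, relaxation time in a harmonic branch

dt, t_max = 5e-3, 50.0
y, sigma, switch_times = 0.0, 1, []
for step in range(int(t_max / dt)):
    f = -k * (y - sigma * mu / 2.0)        # driving force, Eq. (1)
    y += dt * f / gamma                    # downhill relaxation toward sigma*mu/2
    if (sigma == 1 and y >= A) or (sigma == -1 and y <= -A):
        sigma, y = -sigma, np.sign(y) * A  # geometric switch at the terminal
        switch_times.append(step * dt)

nu_sim = 1.0 / (2.0 * np.mean(np.diff(switch_times)))   # two switches per period
nu_0 = 1.0 / (2.0 * tau_d * np.log((mu + 2 * A) / (mu - 2 * A)))
print(f"simulated frequency {nu_sim:.2f} Hz, analytical nu_0 {nu_0:.2f} Hz")
```

With these values, \(\nu_{0}\simeq 3.5\) Hz, close to the collective beating frequency \(\nu\simeq 3.4\) Hz reported below.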
We consider a system of \(N\) rowers beating in the \(y\) direction in a viscous medium. Rowers are placed regularly on one-dimensional or two-dimensional (square) lattices (see Fig. 1C) at fixed positions \(\mathbf{r_{i}}\) (for \(i=\{1,2,3,...,N\}\)). The displacement, \(y_{i}\), of a rower \(i\) is hydrodynamically coupled to the others and is given by
\[\frac{dy_{i}}{dt}=\frac{f_{i}}{\gamma}+\sum_{j\neq i}O(i,j)\,f_{j}+\xi_{i}, \tag{2}\]
where \(\gamma=6\pi\eta a\) is the viscous drag coefficient for a bead with radius \(a\) moving in a fluid medium with viscosity \(\eta\) and \(O(i,j)\) the coupling strength between rower \(i\) and \(j\). In the far-field hydrodynamic coupling approximation, for which both the distance from the surface and the distance between two adjacent rowers (lattice spacing \(\ell\)) are large compared to \(a\), \(O(i,j)\) is set by the Oseen tensor: \(O(i,j)=\frac{1}{8\pi\eta r_{ij}}\left(1+(\frac{y_{ij}}{r_{ij}})^{2}\right)\), with \(i\neq j\), and \(\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}\), the separation vector between rowers \(i\) and \(j\). The last term is due to the thermal noise, obeying the following delta-correlation: \(\langle\xi(t)\rangle=0\), \(\langle\xi(t_{1})\xi(t_{2})\rangle=2\,D\,\delta(t_{1}-t_{2})\). For simplicity, we assume no correlation between the noise acting on each of the rowers as in [25]. The noise strength or diffusivity is equal to \(D=k_{B}\,T/\gamma\), \(k_{B}\) and \(T\) being the Boltzmann constant and the temperature. The displacement of a single isolated bead shows sustained oscillations with the frequency \(\nu_{0}=1/(2\,\tau_{d}\log\left[(\mu+2\mathcal{A})/(\mu-2\mathcal{A})\right])\), where \(\tau_{d}=\gamma/k\) is the relaxation time for the bead to reach equilibrium in a harmonic potential [24]. Such two coupled rowers beat collectively with antiphase synchronization [12]. For many rowers, the interplay between the activity of the rowers and coupling through the medium generates metachronal waves [12].
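The coupled dynamics of Eq. 2 can be sketched directly. The code below is again our own minimal illustration, not the simulation code of the paper; for a 1d chain placed along the \(x\)-axis and beating along \(y\), as in Fig. 2, the \(y\)-component of the separation vanishes and the Oseen factor reduces to \(1/(8\pi\eta r_{ij})\).

```python
import numpy as np

# Illustrative Euler integration of Eq. (2) for a short 1d chain (ours, not the
# authors' code).  Rowers sit on the x-axis with spacing ell and beat along y,
# so the y-component of the separation vanishes and O(i,j) = 1/(8 pi eta r_ij).
rng = np.random.default_rng(0)
N, ell = 20, 8.0                      # rowers, lattice spacing (um)
k, mu, A = 2.6, 2.0, 0.8              # pN/um, um, um
eta, a = 6e-3, 1.5                    # Pa.s (= pN.s/um^2), bead radius (um)
gamma = 6 * np.pi * eta * a           # pN.s/um
kBT = 4.1e-3                          # pN.um at T ~ 300 K
D = kBT / gamma                       # noise strength (um^2/s)
dt = 5e-3                             # s, integration step

x = ell * np.arange(N)
r = np.abs(x[:, None] - x[None, :])
O = np.zeros((N, N))
O[r > 0] = 1.0 / (8 * np.pi * eta * r[r > 0])     # far-field Oseen coupling

y = rng.uniform(-A, A, N)
sigma = rng.choice([-1, 1], N)
for _ in range(20000):
    f = -k * (y - sigma * mu / 2.0)               # Eq. (1) for every rower
    xi = rng.normal(0.0, np.sqrt(2 * D * dt), N)  # thermal noise
    y += dt * (f / gamma + O @ f) + xi            # Euler step of Eq. (2)
    flip = ((sigma == 1) & (y >= A)) | ((sigma == -1) & (y <= -A))
    sigma[flip] *= -1                             # geometric switching
```

Recording the \(\sigma_{i}\) snapshots produced by such a run is what feeds the correlation analysis described next.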
_Simulation details_ - The Euler method with an integration step equal to 5\(\times 10^{-3}s\) has been used to evolve the coupled dynamical equation (Eq. 2), starting from random initial values for \(\{\sigma_{i},y_{i}\}\). The open boundary condition is implemented. Parameters are chosen within the experimentally relevant range [24; 35], as
Figure 1: Rower model of cilia [12]. (A) The motion of a micron-sized bead in a viscous medium under two harmonic potential branches, corresponding to \(\sigma=\pm 1\), represents the stroke and anti-stroke beating of a cilium. (B) The bead switches branches when it reaches a terminal position \(y=\pm\mathcal{A}\). (C) Rowers beating along the \(y\)-axis on a two-dimensional \(L\times L\) regular lattice. The hydrodynamic interaction between rowers \(i\) and \(j\) is modelled by the Oseen coupling, which depends on the separation vector \(\mathbf{r}_{ij}\) and the viscosity of the medium.
Figure 2: Metachronal waves in 1d. A. Snapshot of displacements of the first 100 rowers. The displacement of even (odd) sites is plotted in light (dark) color. B. Correlation function \(C(x\)=\(m\ell)\) between two rowers is plotted against separation distance \(x\). C. Kymograph of the beating state \(\sigma\), with white color (black) representing \(\sigma\)=1 (\(\sigma\)= \(-1\)). Parameters: \(\mathcal{A}=0.8\), \(\eta=6\,\)mPa-s, and \(N=200\).
follows: \(a\)=1.5 \(\mu m\), \(\ell\)=8 \(\mu m\), \(k\)=2.6 \(pN\)\(\cdot\)\(\mu m^{-1}\), \(\mu\)=2 \(\mu m\), \(\mathcal{A}\)=0.56\(-\)0.8 \(\mu m\), \(\eta\)=2\(-\)20 \(mPa\)\(\cdot\)\(s\), and \(T\)=300 \(K\). Results presented here are for large system sizes: \(N\)=\(L\)=200 (for 1d) and \(N\)=\(L^{2}\)=1600 (for 2d). Comparing with smaller systems (not shown here), we confirm that the presented results have no system-size dependence.

_Results_ - Fig. 2 shows the metachronal waves in the one-dimensional lattice. Plotting the beads' displacements against the rowers' positions at a given time reveals two spatial waves, which are visualized by connecting the displacements \(y_{i}\) by lines for all rowers at the even and odd lattice sites separately (Fig. 2A), in agreement with [12; 36]. This is a unique feature of the rower model and arises due to a degree of anti-phase synchronization between two adjacent rowers. The wave propagation is illustrated in Fig. 2C by the kymograph obtained for \(\sigma_{i}(t)\). To characterize it, we compute the spatial correlation function between two rowers as a function of their separation vector \(\mathbf{r}\):
\[C(\mathbf{r})=\frac{\sum_{ij}\langle\sigma_{i}(\mathbf{r}_{i},t)\sigma_{j}( \mathbf{r}_{j},t)\rangle\delta(\mathbf{r}-\mathbf{r}_{ij})}{\sum_{ij}\delta( \mathbf{r}-\mathbf{r}_{ij})}. \tag{3}\]
As the rowers are placed on a regular lattice with lattice spacing \(\ell\), the coordinates of \(\mathbf{r}\) are discrete and can be written as \((m\ell,n\ell)\) with \(m,n\in\{0,1,2,...,L\}\). The measurement is done after a large equilibration time \(t_{0}\), after which the system is assumed to have reached a steady state. Brackets \(\langle\cdot\rangle\) represent an average over time and ensembles. An ensemble consists of 5000 sets of \(\{\sigma_{i}(t)\}\) recorded every 2 seconds after \(t_{0}\)=2500 seconds.
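As an illustration of Eq. 3 in one dimension (our own sketch; the array names are ours), the correlation at separation \(m\ell\) is simply the product \(\sigma_{i}\sigma_{i+m}\) averaged over all pairs and over the recorded snapshots:

```python
import numpy as np

# Sketch (ours) of the 1d spatial correlation of Eq. (3).  `snapshots` is an
# array of recorded beating states of shape (n_samples, N), e.g. the 5000
# sets of sigma_i mentioned in the text.
def correlation_1d(snapshots: np.ndarray) -> np.ndarray:
    """Return C(m) for separations m = 0 .. N-1 lattice spacings."""
    n_samples, N = snapshots.shape
    C = np.empty(N)
    for m in range(N):
        C[m] = (snapshots[:, : N - m] * snapshots[:, m:]).mean()
    return C

# Quick self-check with synthetic wave-like states (random global phase per
# snapshot, neighbouring sites in anti-phase as in the rower model):
rng = np.random.default_rng(1)
N, lam = 200, 13.7                              # sites, wavelength in units of ell
theta = 2 * np.pi * np.arange(N) / lam + np.pi * np.arange(N)
snapshots = np.sign(np.cos(theta + rng.uniform(0, 2 * np.pi, (5000, 1))))
C = correlation_1d(snapshots)
# C oscillates with period ~lam in m and alternates sign between even and odd m.
```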
Fig. 2B shows the variation of \(C(x)=C(x,y=0)\) in one dimension. For odd and even \(m\) values, two oscillating curves decay to zero as the distance between rowers \(x=m\ell\) increases. While the oscillations indicate the wave nature of the collective beating, the loss of correlations at larger \(x\) suggests a damping of the coordination on a characteristic length scale \(l_{d}\). \(C(x)\) can be fitted with the simple function \(\pm e^{-x/l_{d}}\cos(2\pi x/\lambda)\), the \(+\) (\(-\)) sign being for even (odd) \(m\). This fit estimates the wavelength \(\lambda\) and decay length \(l_{d}\). For Fig. 2, \(\lambda\simeq 13.7\ell\) and \(l_{d}\simeq 9.0\ell\). In a recent work, the wavelength and decay length were measured experimentally for metachronal waves on the human bronchial epithelium [37]; these two length-scale values are comparable to ours, so our results are consistent with the experiment. The ensemble- and space-averaged beating frequency was computed, \(\nu\simeq 3.4\)\(Hz\), and combined with \(\lambda\) to infer the metachronal wave velocity \(V=\nu\lambda\simeq 370\)\(\mu m.s^{-1}\). These values are consistent with estimates that can be inferred directly from the slopes in the kymograph of Fig. 2C.
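The fitting step can be sketched as follows (our own code; the even-\(m\) branch is fitted here, and the odd-\(m\) branch is identical up to an overall minus sign). The synthetic input simply reuses the values reported for Fig. 2, to show that \(\lambda\) and \(l_{d}\) are recovered and that \(V=\nu\lambda\) lands near the quoted \(370\ \mu m\,s^{-1}\).

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch (ours) of the fit +exp(-x/l_d) * cos(2*pi*x/lambda) to the even-m
# branch of C(x = m*ell); the odd-m branch takes an overall minus sign.
ell = 8.0                                      # um, lattice spacing

def damped_wave(x, lam, l_d):
    return np.exp(-x / l_d) * np.cos(2 * np.pi * x / lam)

# Synthetic even-m correlation built from the values reported for Fig. 2
# (lambda ~ 13.7 ell, l_d ~ 9.0 ell) plus a little noise:
rng = np.random.default_rng(2)
m = np.arange(0, 60, 2)
x = m * ell
C_even = damped_wave(x, 13.7 * ell, 9.0 * ell) + 0.02 * rng.normal(size=x.size)

(lam_fit, ld_fit), _ = curve_fit(damped_wave, x, C_even, p0=(12 * ell, 8 * ell))
nu = 3.4                                       # Hz, averaged beating frequency
print(f"lambda ~ {lam_fit / ell:.1f} ell, l_d ~ {ld_fit / ell:.1f} ell, "
      f"V = nu * lambda ~ {nu * lam_fit:.0f} um/s")
```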
We then investigate the effect of the viscosity of the fluid medium and the activity of the cilia on the metachronal wave quantities \(\lambda\), \(l_{d}\), \(\nu\), and \(V\)=\(\nu\lambda\), by computing \(C(x)\) for various \(\eta\) and \(\mathcal{A}\). The plot of \(C(x)\) for different \(\mathcal{A}\) values shows that both the wavelength \(\lambda\) and the decay length \(l_{d}\) increase with \(\mathcal{A}\) (Fig. 3A and C), whereas the averaged beating frequency \(\nu\) decreases with \(\mathcal{A}\), keeping
Figure 3: Effect of viscosity and beating amplitude on metachronal waves in 1d (\(N\)=200). Correlation function \(C(x\)=\(m\ell)\) as a function of the distance between rowers, for three different \(\mathcal{A}\) values (for \(\eta\)=6 \(mPa.s\)) (A) and for three different \(\eta\) values (for \(\mathcal{A}\)=0.8 \(\mu m\)) (B). The wavelengths of metachronal waves \(\lambda\) are plotted against \(\mathcal{A}\) (C), and \(\eta\) (D), together with the corresponding beating frequency \(\nu\). Propagation velocity \(V\)=\(\lambda\nu\) is plotted against \(\mathcal{A}\) (E), and \(\eta\) (F).
Figure 4: Effect of viscosity and beating amplitude on metachronal waves in 2d (\(N\)=\(L^{2}\)=1600). Correlation function along beating direction \(C(x\)=0,\(y\)=\(n\ell)\) as a function of the distance between rowers, for three different values of \(\mathcal{A}\) (for \(\eta\)=6 \(mPa.s\)) (A) and for three different \(\eta\) values (for \(\mathcal{A}\)=0.8 \(\mu m\)) (B). The wavelengths of metachronal waves \(\lambda\) are plotted against \(\mathcal{A}\) (C), and \(\eta\) (D), together with the corresponding beating frequency \(\nu\). Propagation velocity \(V\)=\(\lambda\nu\) is plotted as a function of \(\mathcal{A}\) (E), and \(\eta\) (F).
the velocity of the wave \(V\) almost constant (Fig. 3C and E). As \(\nu_{0}\), the natural frequency of a single rower, decreases with \(\mathcal{A}\), the decrease of \(\nu\) is expected. Along the same lines, increasing \(\mathcal{A}\), which is a characteristic length of the problem, may naturally increase the length scale of the emerging collective dynamics. Thus the respective variations of \(\nu\) and \(\lambda\) can be generally expected. What is remarkable, though, is that they compensate to result in an almost constant metachronal wave velocity. Interestingly, \(C(x)\) does not depend on the value of \(\eta\) (see Fig. 3B), meaning \(\lambda\) and \(l_{d}\) are independent of \(\eta\) and implying that the spatial behavior of the emergent waves does not depend on the fluid viscosity and is only determined by the cilia activity parameters, in agreement with experimental observations [30; 33]. Below, we argue that such behavior is generally characteristic of a hydrodynamically coupled system. Finally, the frequency \(\nu\) decreases as a function of \(\eta\) and so does \(V\), as measured experimentally in [30; 33].
On a square lattice, we find that the metachronal wave propagates along the beating direction \(y\) whereas no wave is obtained in the perpendicular direction (Fig. 5), indicating longitudinal waves. In Fig. 4A and B, we plot the correlation function along the \(y\)-direction, \(C(0,y=n\ell)\), against \(n\) for various values of \(\mathcal{A}\) and \(\eta\). Similar to 1d, two spatial waves can be seen for even and odd values of \(n\) for a given parameter set. For a fixed \(\eta\), \(\lambda\) increases and \(\nu\) decreases with \(\mathcal{A}\), keeping the wave velocity \(V\) almost constant (Fig. 4C and E). On the contrary, \(\lambda\) remains constant and \(\nu\) decreases with \(\eta\), leading to a decrease in the velocity \(V\) with \(\eta\) (Fig. 4D and F). These results are consistent with the 1d results. We further note that although the qualitative behavior of the metachronal waves in 1d and 2d is similar, the values of \(\lambda\) and \(V\) are somewhat larger in 1d. This result raises interesting questions on the implications of the geometry of realistic ciliated tissues, which are mostly organized in 2d groups of cilia bundles.
In the direction perpendicular to beating, no oscillation is obtained (Fig. 5). \(C(x,0)\) either decays monotonically to zero, as for large \(\mathcal{A}\), or shows a negative correlation for small \(x=m\ell\) with odd \(m\) that eventually approaches zero for large \(m\). For a given \(\mathcal{A}\), \(C(x,0)\) does not depend on \(\eta\) (Fig. 5B), although odd and even \(m\) can follow different extinction behaviors, reminiscent of \(C(0,y)\). We compare the decay lengths of correlations along the \(x\) and \(y\) directions, \(l_{d,x}\) and \(l_{d,y}\). The decay length for the damped oscillations along \(y\), \(l_{d,y}\), can be estimated from the fitting method discussed above. The decay length \(l_{d,x}\) is estimated from the exponential fit of \(C(x=m\ell,0)\) for even \(m\) values. The ratio \(l_{d,y}/l_{d,x}\) is plotted in the insets; one notes that \(l_{d,y}/l_{d,x}\gtrsim 2\). For a fixed \(\mathcal{A}\), it remains unchanged with \(\eta\). However, for a given \(\eta\), the ratio increases with cilia activity \(\mathcal{A}\), which implies an enhancement of coherence along the beating direction compared to the perpendicular one. This anisotropic response may be related to the anisotropy of the interaction strength. Indeed, for the same \(r_{ij}\) value, \(O(i,j)\) is two times larger along the \(y\)-axis than along the \(x\)-axis.
The fact that we obtain metachronal waves with spatial properties unaffected by viscosity has not been emphasized by previous studies, to our knowledge. Nevertheless, this remarkable numerical observation is robust over an order of magnitude in \(\eta\), in both 1d and 2d simulations. To rationalize this result, one needs to look into the characteristic length and time scales of the system set by the activity and the surrounding viscous medium. The relaxation time \(\tau_{d}=6\pi\eta a/k\) for the bead motion in the viscous medium under a harmonic driving potential, which also determines the natural frequency \(\nu_{0}\) of a single rower, is a crucial timescale in our problem. In the rower model, there are two length scales, the amplitude \(\mathcal{A}\) and \(\mu\), the distance between the two branches of the potential (Fig. 1). Since we only vary \(\mathcal{A}\), we choose it as the typical length scale. We note that our conclusion below, however, does not depend on the choice of the length scale. Multiplying both sides of Eq. 2 by \(\tau_{d}/\mathcal{A}\) leads to an adimensional equation for the collective beating dynamics:
\[\frac{dy_{i}^{\prime}}{dt^{\prime}}=f_{i}^{\prime}+\sum_{j\neq i}\frac{3\,a}{4r_{ij}}\left(1+(y_{ij}/r_{ij})^{2}\right)f_{j}^{\prime}+\zeta_{i}(t^{\prime}), \tag{4}\]
where \(t^{\prime}\) and \(y^{\prime}\) are the dimensionless time \(t^{\prime}=t/\tau_{d}\) and displacement \(y^{\prime}=y/\mathcal{A}\), \(f_{j}^{\prime}=-(y_{j}^{\prime}-\mu\sigma_{j}/(2\mathcal{A}))\) is the dimensionless force acting on rower \(j\), and \(\langle\zeta(t_{1}^{\prime})\zeta(t_{2}^{\prime})\rangle=2k_{B}T/(k\mathcal{A}^{2})\delta(t_{1}^{\prime}-t_{2}^{\prime})\) is the adimensional noise correlation. As the activity parameters \(\mathcal{A}\), \(k\), and \(\mu\) are constant, Eq. 4 is \(\eta\)-independent. This suggests that the spatial properties are independent of \(\eta\). However, as \(\tau_{d}\) is affected by \(\eta\), the viscosity does impact the dynamical properties of the system. If any of the parameters \(\mathcal{A}\), \(k\), or \(\mu\) is influenced by the medium, then this observation breaks down.
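As a one-line check (ours, not spelled out in the paper), substituting \(t^{\prime}=t/\tau_{d}\) and \(y^{\prime}=y/\mathcal{A}\) into Eq. 2 and using \(f_{j}=k\mathcal{A}f_{j}^{\prime}\) gives

\[\frac{dy_{i}^{\prime}}{dt^{\prime}}=\frac{\tau_{d}}{\mathcal{A}}\frac{dy_{i}}{dt}=\underbrace{\frac{\tau_{d}}{\gamma}}_{=1/k}\frac{f_{i}}{\mathcal{A}}+\sum_{j\neq i}\underbrace{\frac{\tau_{d}}{8\pi\eta r_{ij}}}_{=3a/(4kr_{ij})}\left(1+(y_{ij}/r_{ij})^{2}\right)\frac{f_{j}}{\mathcal{A}}+\frac{\tau_{d}}{\mathcal{A}}\,\xi_{i},\]

so every explicit factor of \(\eta\) cancels between \(\tau_{d}=6\pi\eta a/k\) and the mobilities \(1/\gamma\) and \(1/(8\pi\eta r_{ij})\), while the rescaled noise strength becomes \(2D\tau_{d}/\mathcal{A}^{2}=2k_{B}T/(k\mathcal{A}^{2})\), reproducing Eq. 4.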
We argue that the independence from fluid viscosity of
Figure 5: Effect of viscosity and beating amplitude on correlations along the direction perpendicular to the beating direction, \(C(x\)=\(m\ell,y\)=0). (A) \(C(x\)=\(m\ell,y\)=0) for three different \(\mathcal{A}\) values at a fixed \(\eta\)=6 mPa\(\cdot\)s, and (B) for three values of \(\eta\) at a given \(\mathcal{A}\)=0.69 \(\mu m\).
the spatial emergent properties is more generic to systems operating at low Reynolds numbers, irrespective of the model details. For systems at low Reynolds number, both the viscous drag mobility \(1/\gamma\) and the hydrodynamic coupling between two objects are inversely proportional to \(\eta\). The thermal noise strength \(D\) is also inversely proportional to \(\eta\). As \(\tau_{d}\) is proportional to \(\eta\), normalizing Eq. 2 by \(\tau_{d}\) cancels these factors and leads to the same conclusion. Therefore, one can obtain an adimensional equation similar to Eq. 4 for any model of active cilia coupled by a viscous fluid at low Reynolds numbers. Hence, the spatial properties of the emergent waves are expected to be independent of viscosity. If active beating is strongly dependent on the fluid rheology, then the active parameters of the rower model would depend on the viscosity, and this result would fail. Experimental results [31], though, seem to indicate only a very small dependence of the cilia beating amplitude on the liquid medium. We note that additional sources of deviation from this result could be the viscoelastic nature of the fluid or a non-thermal noise, which we have not addressed in this paper. Finally, although this simple dimensional analysis cannot predict the occurrence or nature of the emergent behavior, it is powerful in predicting that the wavelength and other spatial properties will in general be viscosity independent.
In conclusion, we have presented simple, generic results about the complex dynamics of hydrodynamically coupled model cilia. Using a very simple rower model of coupled oscillators, we have focused our study on the influence of activity and dissipation on the spatial and temporal synchronization properties of cilia assemblies. Enhancement of cilia activity increases the wavelength and the beating period, keeping the wave velocity almost unchanged. On the other hand, viscosity does not affect the spatial patterns characterizing metachronal waves, such as the wavelength or correlation lengths. In contrast, the beating frequency and the wave velocity do decrease with viscosity. Deviations from such behavior may indicate the influence of additional properties of the medium not taken into account in the coupling description of the current model and could also be a signature of viscoelasticity or elasticity of the tissue itself. This could pave the way to the study of the emergence of specific functions of cilia, for example in pathological contexts [23].
|
2302.06772 | Cyber-Physical Power System Layers: Classification, Characterization,
and Interactions | This paper provides a strategy to identify layers and sub-layers of
cyber-physical power systems (CPPS) and characterize their inter- and
intra-actions. The physical layer usually consists of the power grid and
protection devices whereas the cyber layer consists of communication, and
computation and control components. Combining components of the cyber layer in
one layer complicates the process of modeling intra-actions because each
component has different failure modes. On the other hand, dividing the cyber
layers into a large number of sub-layers may unnecessarily increase the number
of system states and increase the computational burden. In this paper, we
classify system layers based on their common, coupled, and shared functions.
Also, interactions between the classified layers are identified, characterized,
and clustered based on their impact on the system. Furthermore, based on the
overall function of each layer and types of its components, intra-actions
within layers are characterized. The strategies developed in this paper for
comprehensive classification of system layers and characterization of their
inter- and intra-actions contribute toward the goal of accurate and detailed
modeling of state transition and failure and attack propagation in CPPS, which
can be used for various reliability assessment studies. | Michael Abdelmalak, Narayan Bhusal, Mukesh Gautam, Mohammed Ben-Idris | 2023-02-14T01:01:20Z | http://arxiv.org/abs/2302.06772v1 | # Cyber-Physical Power System Layers:
###### Abstract
This paper provides a strategy to identify layers and sub-layers of cyber-physical power systems (CPPS) and characterize their inter- and intra-actions. The physical layer usually consists of the power grid and protection devices whereas the cyber layer consists of communication, and computation and control components. Combining components of the cyber layer in one layer complicates the process of modeling intra-actions because each component has different failure modes. On the other hand, dividing the cyber layers into a large number of sub-layers may unnecessarily increase the number of system states and increase the computational burden. In this paper, we classify system layers based on their common, coupled, and shared functions. Also, interactions between the classified layers are identified, characterized, and clustered based on their impact on the system. Furthermore, based on the overall function of each layer and types of its components, intra-actions within layers are characterized. The strategies developed in this paper for comprehensive classification of system layers and characterization of their inter- and intra-actions contribute toward the goal of accurate and detailed modeling of state transition and failure and attack propagation in CPPS, which can be used for various reliability assessment studies.
Cyber-Physical Power Systems, Resilience, real-time simulation.
## I Introduction
Advancements in communication and automation technologies have accelerated significantly in the last decade, resulting in their widespread integration and deployment in power systems. Grid modernization has created what are now known as cyber-physical power systems (CPPSs). Such systems are composed mainly of a cyber layer, i.e., communication and control systems, and a physical layer referring to the power system. Despite the noticeable benefits of the cyber layer in achieving reliable, secure, and economical operation of power systems, increased vulnerability to cyber threats and attacks comes with the level of integration. Also, the reliance on user-friendly human-interface platforms, cloud computation, and smart artificial-intelligence devices adds further complexity to the analysis of CPPSs. Therefore, it has become a necessity to accurately model state transitions and propagation behaviors in CPPSs for improved evaluation and enhancement of their resilience and performance.
Recently published research in [1, 2, 3] provides a comprehensive review of CPPSs from the perspective of modeling, simulation, and analysis with cyber security applications. It also provides a literature survey on cyber attacks and cybersecurity measures for CPPSs. This work describes the CPPS as the coupled network of cyber and physical systems. The cyber layer consists of computation, communication, and control systems. The physical system, on the other hand, consists of a physical power grid governed by physics-based rules. In [4], key features of cyber-physical systems in a multi-layered architecture are conceptualized. This work characterizes the cyber-physical system into a physical layer, a cyber-physical layer, and a cyber layer. The physical layer consists of physical components and their dynamics, physical measurements, and physical operators. The cyber-physical layer includes programmable controllers, real-time communication networks, sensors, and actuators. The cyber layer is formed by a combination of cyber communication networks, supervisory computers, and supervisors.
The cyber layer can be identified as the layer responsible for the computation, analysis, and assessment of the power system on the regional and global scale. Defining the boundaries of a cyber layer within a CPPS model is not a straightforward process. First, the advancements in information and communication technology have resulted in embedded smart computation processors in all power system components. This raises the question of whether such computation parts belong to the system or to the component. Also, some system computational tasks, such as protection decisions, take place at the local level, whereas other wide-area analyses are handled in the energy management systems [5]. This raises the question of whether the cyber layer is composed of a single layer or should be split into several layers. The cyber layer comprises all applications required to maintain reliable and economical operation of the power system. Some of these applications, such as automatic generation control, remedial action schemes, and protection protocols, are run at the local level before information is passed to the global level. Other global applications include, but are not limited to, state estimation, real-time contingency analysis, security-constrained optimal power flow, unit commitment, and energy market optimization. Determining the proper input data for these diverse applications causes confusion about the boundaries of the cyber layer.
Whereas the power grid represents only a physics-governed physical layer, the cyber layer consists of several layers such as sensor, protection, communication, computation, and control layers. Combining the components of the cyber layers in one layer complicates the process of modeling intra-actions
because each component has different failure modes. On the other hand, dividing the cyber layers into a large number of sub-layers may unnecessarily increase the number of system states and increase the computational burden. Therefore, rigorously identifying the system's heterogeneous layers (cyber and physical) and comprehensively characterizing their inter- and intra-actions are essential to (1) establish accurate models for state transitions; (2) identify chains of failure propagation within and between layers; and (3) develop efficient and practical reliability and resilience analysis, evaluation, and enhancement methods and strategies for CPPSs. Further research is needed to mature CPPS classification, characterization, and the modeling, simulation, and analysis of interactions between and within the CPPS layers.
This paper establishes strategies to identify CPPS layers and sublayers and characterizes their inter- and intra-actions. In this paper, CPPS layers are classified based on their common, coupled, and shared functions. During classification, we start with common intended functions, of which there are many, each of which aggregates several system components. Then, we identify coupling layers (i.e., failure of coupling layers separates two or more layers) such as the communication layer, which couples the heterogeneous physical layer and remaining layers. Next, we identify shared layers such as the sensors' layer--a shared layer between the communication and protection layers. Also, interactions between the classified layers are identified and characterized; possible interactions are discussed and clustered based on their impacts on the system. Furthermore, intra-actions within each layer are characterized based on the overall function of the layer and types of its components. The strategies developed in this paper for comprehensive classification of system layers and characterization of their inter- and intra-actions contribute toward the goal of accurate and detailed modeling of state transitions and failure and attack propagation in CPPS. This is a necessary step toward developing analysis, evaluation, and enhancement methods for CPPS reliability and resilience.
The rest of the paper is structured as follows. Section II provides a survey on existing approaches of classification, characterization, and interaction of cyber-physical layers, and criteria of CPPS modeling. Section III describes the suggested classification, characterization, and interactions between CPPS layers. Section IV provides the concluding remarks.
## II Modeling of Cyber-Physical Power Systems
Modeling of cyber-physical systems across various domains has gained significant interest in the last decade. These domains include, but are not limited to, biomedical systems, transportation systems, and energy systems [6, 7]. Proper models of CPPSs are necessary for accurate, reliable, and efficient analysis and assessment [8, 9, 10]. This section summarizes the most recent modeling approaches for CPPSs and the associated dependencies across the model layers. Also, it presents a few criteria to measure the capabilities of these models within the cyber-physical domain.
### _Existing CPPS Models_
The layer classification of the CPPS model varies in the existing literature based on the study or the system. In [11, 12], a two-layer CPPS model has been provided to assess transient power system stability against control and communication failures. The first layer represents the power grid system, whereas the second refers to the cyber layer. Another two-layer CPPS model has been provided in [13], where the cyber layer is represented by three sub-layers including measurements, protection, and control. The authors of [5] have restructured the CPPS model in [13] to include an intermediate layer between the cyber and physical layers. The connecting layer handles three main applications: wide-area monitoring, protection, and control. The function of the intermediate layer has been changed in [14] to represent only the communication between the physical layer and the cyber layer. A comprehensive four-layer CPPS model has been provided in [15], representing physical, communication, control, and monitoring layers.
Fig. 1 shows the three-layer CPPS model of [16]. The bottom layer represents the physical power system; the intermediate layer refers to the coupling communication layer; and the top layer is the decision control layer. The measurement layer is assumed fully reliable, whereas the protection layer is ignored. The mathematical computations are integrated within the control layer. It is worth noting that this model captures only the states and interactions of three main layers, neglecting the intra-actions within each layer.
A more detailed CPPS model has been developed in [17, 18], as shown in Fig. 2. The model splits the cyber-physical smart grid into six hierarchical layers: a management layer, a supervisory layer, a network layer, a communication layer, a control layer, and a physical layer. The presented model aligns reasonably well with the NIST smart grid conceptual model [19]. The control layer includes sensors, actuators, and intrusion-detection devices. The communication layer is the connection medium between the control layer and various network types. Data routing and network formation are handled in the network layer. The computational data analysis performed in the supervisory layer is passed to the management layer for proper decision making. Also,
Fig. 1: CPPS model in [16]
the management layer takes into account the energy market, regulatory policies, and system operation.
### _Dependencies in CPPS Layers_
Several studies have been conducted to model dependencies among CPPS layers [20]. Graph theory, complex-network methods, finite state models, Petri net models, correlation methods, and cellular automata methods are some of the approaches used to model such dependencies [2]. Five mathematical models have been presented in [21] to analyze interdependencies of CPPS layers, including dynamic analysis, topological analysis, consequence analysis, causal analysis, and hazard identification. A graphical network model has been integrated with a chaotic key flight algorithm to assess the transition of a cyber-attack into a cascading failure scenario in power grids. To model the transition between power and cyber layers at the component level, a Markov state model has been presented in [22]. The authors of [23] have provided a Petri net model to capture the interdependencies between the information layer and the physical layer under malicious attacks. A correlation matrix approach has been introduced in [24] to study the propagation behavior of cyber-induced failures into power systems. The cyber-physical interface matrix can be calculated using the IEC 61850 communication scheme and the available failure rates of cyber-related components.
Various methods have been presented to classify dependencies in CPPS models. In [25], a classification based on the relationship between network and system elements has been introduced, including both direct/indirect element-element and element-network models. Three levels of interactions have been introduced in [2], including computational-communication interactions, communication-physical interactions, and local physical-controller-protection interactions. A comprehensive guideline to model interactions between the power system layer and ICT layers has been introduced in [21]. Such interactions are: (1) common cause, where the cause of failure in both systems is the same, e.g., a whole-substation shutdown; (2) cascading cause, where a failure in one layer propagates to another layer, e.g., a power outage of communication systems; and (3) escalating cause, where an existing failure in one layer worsens an independent failure in another layer, e.g., a protection-layer failure during a power system fault. The authors of [23] have classified interdependencies between infrastructure layers by type of interdependency, infrastructure environment, couplings among layers, infrastructure characteristics, state of operation, and type of failure.
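To illustrate how such dependencies can be encoded, the toy model below (our own sketch; every transition probability is invented for illustration and is not taken from the cited works) represents one cyber component and one physical component as a joint discrete-time Markov chain in which a cyber failure raises the physical failure probability, i.e., a cascading interaction of the kind classified above.

```python
import numpy as np

# Toy joint Markov model of a (cyber, physical) component pair (illustrative
# only; all probabilities are invented).  State 1 = up, 0 = failed.  A cyber
# failure increases the physical failure probability ("cascading cause").
states = [(1, 1), (1, 0), (0, 1), (0, 0)]
p_c_fail, p_c_repair = 0.01, 0.20            # cyber failure / repair per step
p_p_fail_ok, p_p_fail_casc = 0.005, 0.05     # physical failure prob. with / without cyber support
p_p_repair = 0.10

def trans_prob(s, s_next):
    c, p = s
    cn, pn = s_next
    pc = (1 - p_c_fail if cn == 1 else p_c_fail) if c == 1 else \
         (p_c_repair if cn == 1 else 1 - p_c_repair)
    p_fail = p_p_fail_ok if c == 1 else p_p_fail_casc
    pp = (1 - p_fail if pn == 1 else p_fail) if p == 1 else \
         (p_p_repair if pn == 1 else 1 - p_p_repair)
    return pc * pp

P = np.array([[trans_prob(s, t) for t in states] for s in states])
# Long-run (stationary) distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()
print(dict(zip(states, np.round(pi, 4))))    # long-run probability of each joint state
```

Escalating interactions could be encoded analogously by letting a repair probability, rather than a failure probability, depend on the other layer's state.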
### _Modeling Criteria_
Though extensive research has been conducted on modeling CPPSs, only a few papers have addressed the evaluation of the developed models. Selecting a particular model is a sophisticated process that requires highlighting the pros and cons of each model. Also, the compatibility of a CPPS model with a specific study or application plays a vital role in the decision process. A few main criteria are used to assess CPPS models, including: (1) accuracy, (2) scalability, (3) fidelity, (4) application-compatibility, (5) dynamics-adaptability, and (6) topological-suitability. These metrics are explained as follows.
(1) Accuracy: Modeling accuracy refers to the capability of a model to reproduce experimental data that agrees with the physical phenomena precisely. In other words, this criterion measures the consistency of a model against varying scenarios and diverse input data. It is a necessity for CPPS models to maintain consistent outcomes under various constraints and diverse factors such as geographic locations and operating conditions.
(2) Scalability: The scalability feature refers to the capability of a model to adapt to large-scale systems and provide a comprehensive representation of the system. Building a scalable CPPS model requires careful attention to sophisticated conversion procedures, available computational capabilities, different modeling domains, diverse interoperability issues, and fast-moving market technology.
(3) Fidelity: If the model outcomes match the results of real-world systems, then a CPPS model is said to maintain fidelity. In CPPSs, the high level of nonlinearity in the power system layer imposes further complexity on achieving fidelity. Due to modeling approximations, a small discrepancy can be noticed between the CPPS model and the real-world system. Minimizing such discrepancies yields high-fidelity models.
(4) Application-compatibility: The level of information and approximation of a particular model may change based on the application or problem under study. For instance, reliability-based studies of power systems do not usually require dynamic system information. A CPPS model is said to maintain a high level of application-compatibility if it
Fig. 2: Different CPPS layers with the control system [17][18].
can be used across different types of studies with minimal modifications.
(5) Dynamics-adaptability: Power systems are characterized by highly dynamic behavior. In various studies, it is required to capture short-time variations in the system dynamics. This criterion aims to quantify the capability of a CPPS model to capture the dynamical behavior, particularly transient and subtransient changes, in the power system.
(6) Topological-suitability: The NIST smart grid conceptual model describes future CPPSs in terms of seven main domains including customer, distribution, transmission, generation, market, service providers, and business services. A CPPS model shall be capable of representing these domains, their distinctive features, and their dependencies. Due to the large-scale integration of distributed energy resources and the increased number of local control centers, the system topology is changing from a centralized structure to a distributed structure. The topological-suitability criterion indicates the degree to which a CPPS model can represent the new meshed, distributed system topology.
## III Suggested Model for CPPS
A CPPS is a combination of various layers that interact together for reliable operation of the power grid. The power grid is usually represented as a physical layer, whereas the cyber layer might consist of several layers such as measurement, protection, communication, computation, and control layers. Combining various components of the cyber layers in one layer results in improper modeling of dependencies among components and layers. Also, it complicates the process of modeling intra-actions because each component has different failure modes. On the other hand, dividing the cyber layers into numerous sub-layers may increase the computational complexity due to the large number of system states. Therefore, it has become important to classify system layers accurately such that the inter- and intra-actions between and within them can be captured while reducing the modeling complexity and computational burden.
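As a back-of-the-envelope illustration of this trade-off (our own toy calculation, not from the paper), if each separately modeled layer is described by \(s\) discrete states, a model with \(k\) layers has \(s^{k}\) joint states, so finer layer splits rapidly inflate the state space that a reliability model must enumerate:

```python
# Toy illustration (ours) of the state-space growth discussed above.
def joint_states(states_per_layer: int, num_layers: int) -> int:
    return states_per_layer ** num_layers

for k in (2, 3, 5, 8):
    print(f"{k} layers x 3 states each -> {joint_states(3, k):>5} joint states")
# 2 layers -> 9, 3 -> 27, 5 -> 243, 8 -> 6561
```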
By taking the trade-off between modeling accuracy and computational complexity into consideration, a five-layer CPPS model is identified. These layers are classified based on their common, coupled, and shared functions. The main layers are the physical grid, the global protection layer, the global communication layer, the computation layer, and the monitoring and decision layer, as shown in Fig. 3. This architecture also consists of some local layers, for example, local protection, control, and communication layers that are not directly connected to the main monitoring and decision layer. A brief description of these layers is provided as follows.
### _Physical Power Grid Layer_
The conventional power grid is the main building block upon which the concept of CPPS has advanced. This layer provides the detailed description of the power system model, its configuration, electrical characteristics, and topology [26]. This layer might include devices such as measurement devices and protection devices that are directly connected to power system components for proper operation and functioning of the system [16]. Each component in the physical layer has unique fundamental functions and electrical characteristics. The physical layer can be further sub-categorized based on the type of components into power system components, protection components, and measurement components.
#### III-A1 Power System Components
This part of CPPS describes the topology of power systems using single line diagrams. The power grid is categorized based on functionality into three main categories: generation, transmission, and distribution. In normal operation, generation level should be sufficient to supply load demands under consideration of all system operating constraints.
#### III-A2 Protection Devices
The protection layer consists of all the protective devices that either prevent or reduce the impact of disturbances on operating devices. Protective devices such as relays are usually installed at various locations, including power transmission lines, bus-bars, generators, transformers, and load nodes. Protective devices are equipped with sensors and act at the local level based on predefined settings that maintain the proper coordination between various relays [5]. For instance, a primary protection relay trips and isolates a faulted transmission line. Also, some components, such as turbine-governor units connected to electric generators, require very detailed local protection schemes to operate properly. On the other hand, the global protection scheme focuses on the overall performance of the system without involvement of the local protection. It aims to detect abnormal system behavior, develop corrective actions, and respond in a quick and automatic way to prevent the propagation of a small disturbance to larger-scale events.
Fig. 3: Proposed CPPS layers.
#### III-A3 Measurement Components
Measurement devices are mainly responsible for observing the performance of power system components. Measurement devices can be classified into system (central) measurement and component (local) measurement devices. At the local level, measurements are passed to local controllers via spark communication links. For instance, generator units require an independent and massive measurement layer to monitor and maintain their performance, which could involve mechanical, electrical, or other physical measurements such as vibration sensors, rotor speed sensors, and magnetic field sensors. Global measurements, on the other hand, assess the performance of the power system as a whole. The transmission of global measurements heavily depends on two-way high-bandwidth communication technologies in order to access the information from the power grid and its components. These measurements are utilized to detect the propagation of a specific event to other components. For example, a faulted generator can be detected by measuring the variations in its reactive power flow [27].
### _Cyber Layer_
A cyber layer can be identified as the layer that utilizes information and communication technology (ICT) and computer-aided platforms to gather, assess, and control the operation of power systems. It might be composed of communication channels, computation and control platforms, and monitoring systems.
#### III-B1 Communication Channels
ICT is a vital connecting bond between measurements and the various cyber layers. Interface devices such as RTUs provide a two-way function in a CPPS: (1) to transfer measured data via the communication layer, and (2) to execute decision-making signals coming from the control layer. RTUs are installed in various locations to ensure observability of the system states [5]. Methods of communication between components vary according to system level, system scale, security constraints, priority, and hardware installation [14]. Both local and wide-area network environments are accompanied by several communication protocols to provide proper communication. High-capacity fiber optic cables are widely used to connect substations and system control centers at the transmission level at high transfer speeds [15].
#### III-B2 Computation and Control Platforms
This layer is responsible for providing the proper control actions based on various power system assessment tools. Generally, control centers receive measurements from field devices and pass them to operational processes; a decision is then made and transmitted to actuators that apply a state change to the field devices. Both local and global centers utilize supervisory control and data acquisition (SCADA) systems to handle the various computation and control algorithms [5, 28, 29, 15]. Various monitoring screens are integrated to provide real-time information on system components and their status.
Each part of the power system has its own control algorithms, variables, and tools. In generation, terminal voltage and output power are the essential primary control variables. At the local level, generators have two control schemes, automatic voltage regulation and governor control, whereas at the wide-area level, automatic generation control is used [5]. To ensure safe operation of power flow through transmission lines, two control algorithms are utilized in the transmission system: state estimation and volt-ampere reactive (VAR) compensation. Two main schemes are used at the distribution level, namely load shedding control and advanced metering infrastructure.
### _Interactions and Intra-actions_
CPPS dependencies are classified into inter-actions and intra-actions, where the former refers to dependencies between various CPPS layers and the latter focuses on dependencies within a specific layer of a CPPS model. The complex interconnectivity between CPPS layers and the deep integration of ICT across all layers make it more challenging to identify inter- and intra-dependencies. This section provides a brief explanation of these dependencies within the suggested CPPS model.
The suggested model takes previous classifications into consideration as follows. The model identifies direct and indirect correlations among layers and sublayers. For instance, an event taking place in the global communication layer might directly propagate into the physical layer, whereas a fault at local protection devices might not be directly reflected in the main computation layer. Both inter- and intra-dependencies have been characterized in the suggested model. For example, steady-state power flow studies and transient stability studies are utilized to assess the performance of power components in the physical layer. The physical layer and the decision layer interact dynamically through the global communication layer, whereas results of the computation layer are not directly reflected on the physical layer. The suggested model gives insight into common-cause, cascading, and escalating impacts. A cyber-attack taking place in any cyber layer, either local or global, might cascade into the physical layer.
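To make the propagation view concrete, the sketch below (ours; the layer names follow Fig. 3, while the edge set is a plausible reading of the text rather than a definitive model) encodes the suggested layers as a directed graph and traces which layers a disturbance starting in one layer can reach.

```python
from collections import deque

# Illustrative propagation graph over the suggested layers of Fig. 3 (ours).
# An edge u -> v means "a disturbance in u can propagate to v"; the edge set
# is a plausible reading of the text, not a definitive model.
edges = {
    "monitoring/decision":  ["global communication"],
    "computation":          ["monitoring/decision"],
    "global communication": ["physical grid", "computation", "monitoring/decision"],
    "global protection":    ["physical grid"],
    "physical grid":        ["global protection", "global communication"],
    "local layers":         ["physical grid"],   # local protection/control/communication
}

def reachable(start):
    """Layers that a failure starting in `start` can cascade into."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(reachable("global communication"))
# A cyber-side event in the communication layer can reach the physical grid
# and, through it, the protection layer, i.e., the cascading case described above.
```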
### _Evaluating the Suggested Model_
As previously mentioned, the CPPS evaluation criteria can be used to measure the degree of competence of the suggested CPPS model. First, the suggested model provides high accuracy due to the close match between the model and the real system. The suggested model can be scaled up to the level at which computational limits are not violated; co-simulation approaches can be leveraged to overcome this drawback. Also, the suggested model fulfills the fidelity criterion since it provides a more detailed CPPS representation, reducing the degree of approximation between the various layers. A high level of application-compatibility and topological-suitability is maintained. Different power system topologies, i.e., meshed and radial, and communication topologies, i.e., ring, star, and meshed, can be modeled. Finally, the suggested model can adapt to dynamic studies with a high level of detail. Various time scales can be used for
analysis and assessment.
## IV Conclusion
This paper has classified system layers based on their common, coupled, and shared functions. Also, interactions between the classified layers were identified and characterized, all possible interactions were enumerated, and they were clustered based on their impact on the system. Furthermore, based on the overall function of each layer and the types of its components, intra-actions within the layers were characterized. The strategies developed in this paper for comprehensive classification of system layers and characterization of their inter- and intra-actions contribute toward the goal of accurate and detailed modeling of state transitions and failure and attack propagation in CPPS. Such modeling is a necessary step towards the reliability and resilience analysis, evaluation, and enhancement of CPPSs.
## Acknowledgement
This work was supported by the U.S. National Science Foundation (NSF) under Grant NSF 1847578.
|
2308.13442 | Unlocking Fine-Grained Details with Wavelet-based High-Frequency
Enhancement in Transformers | Medical image segmentation is a critical task that plays a vital role in
diagnosis, treatment planning, and disease monitoring. Accurate segmentation of
anatomical structures and abnormalities from medical images can aid in the
early detection and treatment of various diseases. In this paper, we address
the local feature deficiency of the Transformer model by carefully re-designing
the self-attention map to produce accurate dense prediction in medical images.
To this end, we first apply the wavelet transformation to decompose the input
feature map into low-frequency (LF) and high-frequency (HF) subbands. The LF
segment is associated with coarse-grained features while the HF components
preserve fine-grained features such as texture and edge information. Next, we
reformulate the self-attention operation using the efficient Transformer to
perform both spatial and context attention on top of the frequency
representation. Furthermore, to intensify the importance of the boundary
information, we impose an additional attention map by creating a Gaussian
pyramid on top of the HF components. Moreover, we propose a multi-scale context
enhancement block within skip connections to adaptively model inter-scale
dependencies to overcome the semantic gap among stages of the encoder and
decoder modules. Throughout comprehensive experiments, we demonstrate the
effectiveness of our strategy on multi-organ and skin lesion segmentation
benchmarks. The implementation code will be available upon acceptance.
\href{https://github.com/mindflow-institue/WaveFormer}{GitHub}. | Reza Azad, Amirhossein Kazerouni, Alaa Sulaiman, Afshin Bozorgpour, Ehsan Khodapanah Aghdam, Abin Jose, Dorit Merhof | 2023-08-25T15:42:19Z | http://arxiv.org/abs/2308.13442v2 | # Unlocking Fine-Grained Details with Wavelet-based High-Frequency Enhancement in Transformers
###### Abstract
Medical image segmentation is a critical task that plays a vital role in diagnosis, treatment planning, and disease monitoring. Accurate segmentation of anatomical structures and abnormalities from medical images can aid in the early detection and treatment of various diseases. In this paper, we address the local feature deficiency of the Transformer model by carefully re-designing the self-attention map to produce accurate dense prediction in medical images. To this end, we first apply the wavelet transformation to decompose the input feature map into low-frequency (LF) and high-frequency (HF) subbands. The LF segment is associated with coarse-grained features, while the HF components preserve fine-grained features such as texture and edge information. Next, we reformulate the self-attention operation using the efficient Transformer to perform both spatial and context attention on top of the frequency representation. Furthermore, to intensify the importance of the boundary information, we impose an additional attention map by creating a Gaussian pyramid on top of the HF components. Moreover, we propose a multi-scale context enhancement block within skip connections to adaptively model inter-scale dependencies to overcome the semantic gap among stages of the encoder and decoder modules. Throughout comprehensive experiments, we demonstrate the effectiveness of our strategy on multi-organ and skin lesion segmentation benchmarks. The implementation code will be available upon acceptance. GitHub.
Keywords: Deep learning, High-frequency, Wavelet, Segmentation.
## 1 Introduction
In the field of computer vision, Convolutional Neural Networks (CNNs) have been the dominant architecture for various tasks for many years [14, 17]. More
recently, however, the Vision Transformer (ViT) [10] has been shown to achieve state-of-the-art (SOTA) results in diverse tasks with significantly fewer parameters than traditional CNN-based approaches. This has resulted in a shift in the field towards utilizing ViT, which is becoming increasingly popular for a wide range of computer vision tasks [3, 4]. The main success behind the ViTs is their ability to model long-range contextual dependencies by applying a grid-based self-affinities calculation on image patches (tokens). Unlike CNNs, which require stacked convolution blocks to increase the receptive field size, the ViT captures the global contextual representation within a single block. However, the ViT model usually suffers from a weak local description compared to the CNN models, which is crucial for semantic segmentation tasks in medical images.
To address the local feature deficiency of Transformer models, recent studies have explored the combination of CNN-Transformer models or pure Transformer-based designs with U-Net-like architectures [6, 13]. The strength of the U-Net lies in its symmetrical hierarchical design with a large number of feature channels. However, a pure Transformer-based design involves quadratic computational complexity of the self-attention operation with respect to the number of patches, which makes a combination of U-Net and Transformer challenging. Furthermore, due to this fixed-size scale paradigm, ViT has no strong spatial inductive bias. Therefore, extensive research endeavors aim to overcome these issues by designing efficient, linear-complexity self-attention mechanisms that make ViTs suitable for dense prediction tasks. Such designs either diminish the patch numbers (e.g., ATS [19] or A-ViT [31]), or apply downsampling or pooling operations, i.e., on images or key/value tensors (e.g., SegFormer [28], PVT [26], or MViT [11]). Furthermore, the self-attention calculation is restricted by local windowing schemas in studies such as Swin Transformer [16] or DW-ViT [18]. Swin-Unet [5] explored the linear Swin Transformer in a U-shaped structure as a Transformer-based backbone. MISSFormer [13] investigated the efficient self-attention from SegFormer as the main module for 2D medical image segmentation. These methods endorse the ability of ViTs in segmentation tasks but still suffer from boundary mismatching and poor boundary localization due to the information dropped by their enhanced, efficient self-attention processes. The Swin Transformer [16], for instance, utilizes non-overlapping windows to employ the self-attention mechanism, which may lead to the loss of detailed edges and other spatial information. The efficient self-attention [28] used in [13] reduces the spatial dimensions of the input sequence, which loses informative details and makes the segmentation results error-prone. Moreover, recent studies [25] investigated how self-attention performs as a low-pass filter when Transformer blocks are stacked successively. Therefore, stacking Transformer blocks in a multi-scale paradigm (e.g., a U-Net architecture) not only helps to model a multi-scale representation but also aggravates the loss of local texture and localization features (high-frequency details) through the network.
High-frequency components are often critical in many real-world signals, such as speech and images, and they are usually associated with fine-grained details that can provide valuable information for many vision-based tasks. However, the
Transformer model is known to favor low-frequency representations, making it challenging to capture these high-frequency components [25]. This limitation can result in vague and unsatisfactory feature extraction, leading to suboptimal performance on segmentation tasks, which require precise boundary extraction. Therefore, leveraging wavelet analysis to enhance high-frequency representations in a Transformer can provide a multi-resolution decomposition of the input data, allowing us to identify and isolate high-frequency components that provide a more comprehensive representation.
In this paper, we propose a new Wavelet-based approach for medical image segmentation in a U-shaped structure with the help of efficient Transformers, which modifies the quadratic self-attention map calculation by reformulating the self-attention map into a linear operation. We also propose incorporating a boundary attention map to further highlight the importance of edge information for distinguishing overlapped objects, termed the **F**requency **E**nhancement **T**ransformer (**FET**) block. Furthermore, we design an MSCE module within the skip connections to overcome the semantic gap among the encoder and decoder stages and to build rich texture information transfer, which is otherwise limited by the multi-scale representation in a conventional encoder-to-decoder path. Our contributions are as follows: \(\blacklozenge\) We propose a novel FET block comprising a frequency-enhanced module and a boundary-aware attention map to model both shape and texture representations in an adaptive way. \(\blacklozenge\) Applying our proposed MSCE module to the skip connections induces informative texture information from the encoder to the decoder to enrich the missing localization information regarded as a low-frequency representation. \(\blacklozenge\) In addition, our method leverages the high-frequency components after applying a Gaussian kernel to produce additional attention information that can effectively highlight the boundary and detailed information for dense prediction tasks, _e.g._, segmentation.
## 2 Proposed Method
As illustrated in Figure 1, our proposed method trains in an end-to-end strategy that incorporates the frequency analysis in a multi-scale representation within the efficient Transformer paradigm. Therefore, this section first recapitulates the seminal vision Transformer's inner structure by investigating the multi-head self-attention (MHSA) general mathematical formulation. Assume \(X\in\mathbb{R}^{H\times W\times D}\) to be the 2D input image (or feature map stream), then \(X\) can be reshaped as a sequence of patches consisting of \(n=H\times W\) image patches, where \(D\) is the dimension of each patch. Afterward, three representations are learned from the \(X\), namely \(Q\in\mathbb{R}^{n\times D}\) Queries, \(K\in\mathbb{R}^{n\times D}\) Keys, and \(V\in\mathbb{R}^{n\times D}\) Values. The multi-head attention regime utilizes \(N_{h}\) diverse Queries, Keys, and Values, where \(\{Q_{j},V_{j},K_{j}\}\in\mathbb{R}^{n\times D_{h}}\) depicts the \(j\)-th head information. Then, the MHSA follows and learns the final attention over calculated queries, keys, and values according to the following equations:
\[\mathbf{MHSA}(Q,K,V) =\mathbf{Concat}(head_{0},head_{1},...,head_{N_{h}})W^{O},\] \[head_{j} =\mathbf{Attention}(Q_{j},K_{j},V_{j}),\] \[\mathbf{Attention}(Q_{j},K_{j},V_{j}) =\mathbf{Softmax}(\frac{Q_{j}K_{j}^{T}}{\sqrt{D_{h}}})V_{j}, \tag{1}\]
where **Concat** and \(W^{O}\) denote the concatenation operation and the learnable transformation tensor, respectively. Thus, the conventional Transformer captures long-range dependencies but still suffers from several limitations that could affect the ViT's performance in dense segmentation tasks: first, the computational cost of multi-head self-attention is quadratic in patch numbers, \(\mathcal{O}(n^{2}D)\), making it unsuitable for high-resolution tasks. Second, the recent analytic work from Wang et al. [25] demonstrated the deficiency of a multi-head self-attention mechanism in capturing high-frequency details due to the included **Softmax** operation. Specifically, the lack of ability to capture high-frequency information degrades the segmentation performance with naive ViTs. Therefore, in the next section, we propose our FET module to address all aforementioned issues.
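As a concrete reference for Equation 1, below is a minimal NumPy sketch of standard multi-head self-attention; the array names, toy sizes, and per-head slicing scheme are illustrative assumptions rather than the paper's implementation. Note how the \(n\times n\) attention map makes the cost quadratic in the number of patches.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mhsa(X, Wq, Wk, Wv, Wo, num_heads):
    """Standard multi-head self-attention over n tokens of dimension D (Eq. 1).

    X: (n, D) token matrix; Wq/Wk/Wv/Wo: (D, D) projection matrices.
    """
    n, D = X.shape
    Dh = D // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for j in range(num_heads):
        s = slice(j * Dh, (j + 1) * Dh)
        Qj, Kj, Vj = Q[:, s], K[:, s], V[:, s]
        A = softmax(Qj @ Kj.T / np.sqrt(Dh))   # (n, n) attention map: quadratic in n
        heads.append(A @ Vj)                   # (n, Dh)
    return np.concatenate(heads, axis=-1) @ Wo  # (n, D)

# toy usage: 16 tokens of dimension 32 with 4 heads
rng = np.random.default_rng(0)
X = rng.standard_normal((16, 32))
W = [rng.standard_normal((32, 32)) * 0.1 for _ in range(4)]
print(mhsa(X, *W, num_heads=4).shape)  # (16, 32)
```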
### Efficient Transformer
Due to the quadratic computational complexity of seminal Transformers, a wide range of studies have been conducted to minimize this weakness. Shen et al. [23]
Figure 1: The overview of the proposed **F**requency **E**nhanced **T**ransformer (**FET**) model. Each frequency-enhanced Transformer block comprises the sequential LayerNorm, FET block, LayerNorm, and Mix-FFN modules.
revisited the dot product within the multi-head self-attention mechanism to circumvent redundant operations. From Equation 1, it can be seen that the MHSA captures the similarity between each pair of patches, which is resource-intensive. Efficient attention instead computes the self-attention as
\[\textbf{Efficient Attention}=\mathbf{\rho_{q}}(Q)(\mathbf{\rho_{k}}(K)^{T}V), \tag{2}\]
where \(\mathbf{\rho_{q}}\) and \(\mathbf{\rho_{k}}\) denote the normalization functions for \(Q\) and \(K\). In Equation 2, instead of considering the keys as \(n\) feature vectors in \(\mathbb{R}^{D}\), the module interprets them as \(d_{k}\) feature maps with only one channel. Efficient attention applies these feature maps as weights across all positions and combines the value features by weighted summation, resulting in a global context vector. This vector does not refer to any particular position but rather represents a comprehensive overview of the input features.
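The following NumPy sketch illustrates Equation 2; using softmax over the channel dimension for \(\rho_{q}\) and over the position dimension for \(\rho_{k}\) follows a common choice from Shen et al., and the toy sizes are assumptions for illustration.

```python
import numpy as np

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def efficient_attention(Q, K, V):
    """Linear-complexity attention: rho_q(Q) @ (rho_k(K)^T @ V)  (Eq. 2).

    Q, K: (n, d_k); V: (n, d_v). A (d_k, d_v) global context is built first,
    so no (n, n) attention map is ever materialized.
    """
    Qn = softmax(Q, axis=-1)    # normalize each query over its channels
    Kn = softmax(K, axis=0)     # normalize each key channel over positions
    context = Kn.T @ V          # (d_k, d_v) global context vector(s)
    return Qn @ context         # (n, d_v)

rng = np.random.default_rng(1)
n, dk, dv = 1024, 64, 64
out = efficient_attention(rng.standard_normal((n, dk)),
                          rng.standard_normal((n, dk)),
                          rng.standard_normal((n, dv)))
print(out.shape)  # (1024, 64)
```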
### Frequency Enhancement Transformer (FET)
As suggested by [30], we follow their intuition to preserve the high-frequency counterparts for medical image segmentation tasks. The Discrete Wavelet Transform (DWT) is a mapping from the spatial domain to the spatial-frequency domain. Wavelet decomposition is a powerful technique that decomposes images into high- and low-frequency components, providing a multi-resolution analysis of the input signal. In medical image segmentation, the high-frequency components of the image correspond to fine details such as edges and texture, whereas the low-frequency components correspond to large-scale structures and background information. Thus, a wavelet decomposition that analyzes both the high- and low-frequency components of medical images may enhance the accuracy of segmentation models by capturing both local and global features of the image. Applying the DWT to an image yields four distinct wavelet subbands, namely LL, LH, HL, and HH, representing the texture, horizontal details, vertical details, and diagonal information, respectively.
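The sketch below, assuming the third-party PyWavelets package and a Haar wavelet purely for illustration, shows how a single-level 2D DWT produces the LL, LH, HL, and HH subbands discussed above.

```python
import numpy as np
import pywt

# toy single-channel feature map of size H x W (H, W even)
x = np.random.default_rng(2).standard_normal((64, 64))

# single-level 2D DWT: LL holds coarse structure; LH, HL, HH hold
# horizontal, vertical, and diagonal high-frequency detail; each is H/2 x W/2
LL, (LH, HL, HH) = pywt.dwt2(x, 'haar')
print(LL.shape, LH.shape, HL.shape, HH.shape)  # (32, 32) each

# the high-frequency subbands can be stacked for boundary-oriented processing
hf = np.stack([LH, HL, HH], axis=0)            # (3, 32, 32)
print(hf.shape)
```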
The FET block (visualized in Figure 2a) is designed to address the previous limitations by highlighting the boundary information (high-frequency details) for medical image segmentation. Motivated by [30], FET utilizes the DWT to perform frequency analysis and focus on the high-frequency counterparts. First, the input 2D image (feature map) \(X\in\mathbb{R}^{H\times W\times D}\) (\(n=H\times W\)) is linearly transformed into \(\overset{\sim}{X}\in\mathbb{R}^{n\times\frac{D}{4}}\) by reducing the channel dimension. The classical DWT applies pairs of low-pass and high-pass filters along rows and columns to extract the frequency-response subbands. Next, the DWT is applied to \(\overset{\sim}{X}\) to extract the frequency responses and to downsample the input. As a result, the four subbands of the input are \(\overset{\sim}{X}=[\overset{\sim}{X}_{LL},\overset{\sim}{X}_{LH},\overset{\sim}{X}_{HL},\overset{\sim}{X}_{HH}]\in\mathbb{R}^{n\times\frac{D}{4}}\). The high-frequency components (\(\overset{\sim}{X}_{LH}\), \(\overset{\sim}{X}_{HL}\), and \(\overset{\sim}{X}_{HH}\)) are concatenated along a new dimension to preserve the underlying texture details at the fine-grained level. Then, a \(3\times 1\times 1\) convolution is applied to the resulting feature map to recalibrate it for a subsequent hierarchical Gaussian "Boundary Attention" mechanism. The process continues with another \(3\times 1\times 1\)
convolution, and then the encoded boundary features are concatenated along the channel dimension. Analogous to [30], another branch applies a \(3\times 3\) convolution to create the keys and values. Furthermore, a global context results from incorporating the keys and values. However, to compensate for the **Softmax** operation's destructive effect [25], we add the boundary attention to the value tensor so that boundary preservation is included when calculating attention. After boundary extraction, the FET block uses a query \(Q\) from the input \(X\), and a key \(K\) and value \(V\) from the DWT, to extract multi-disciplinary contextual correlations. While the leftmost branch captures the spatial dependencies, the middle branch extracts the channel representation in an efficient manner. In addition, the rightmost branch highlights the boundary information within the value representation. Finally, the FET model in Figure 1 is composed of a LayerNorm, FET block (see Figure 2a), LayerNorm, and Mix-FFN [28] modules in sequence.
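The following is a rough sketch of the boundary-attention idea: the stacked high-frequency subbands are smoothed at several Gaussian scales, pyramid-style, to form a boundary map that can be injected into the value tensor. The function name, number of scales, and use of SciPy are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def boundary_attention(hf, sigmas=(1.0, 2.0, 4.0)):
    """Build a simple multi-scale boundary map from high-frequency subbands.

    hf: (3, H, W) stacked LH/HL/HH coefficients.
    Returns an (H, W) map in [0, 1] emphasizing edges at several scales.
    """
    energy = np.abs(hf).sum(axis=0)                       # (H, W) edge energy
    pyramid = [gaussian_filter(energy, s) for s in sigmas] # Gaussian pyramid-style scales
    m = np.mean(pyramid, axis=0)
    return (m - m.min()) / (m.max() - m.min() + 1e-8)

rng = np.random.default_rng(3)
hf = rng.standard_normal((3, 32, 32))
attn = boundary_attention(hf)
# the map can then be broadcast onto the value tensor, e.g. V * (1 + attn[..., None])
print(attn.shape, float(attn.min()), float(attn.max()))
```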
### Multi-Scale Context Enhancement (MSCE)
A multi-scale fusion paradigm is considered in our design for accurate semantic segmentation to alleviate the semantic gap between the stages of U-shaped structures, as in Figure 2b. Given the multi-level features that result from the hierarchical encoder, the representations are flattened in the spatial dimension and are
Figure 2: **(a)** The **FET** Block. [LL, H, V, D] denotes the low-frequency, horizontal, vertical, and diagonal high-frequency counterparts. **(b)** The overview of **MSCE** skip connection enhancement module. LN, EffT, and SE are the LayerNorm, the efficient Transformer module, and the squeeze and excitation block, respectively.
reshaped to keep the same channel depth at each stage. Considering \(F_{i}\) as a hierarchical feature in each encoder stage \(i\in\{1,\dots,4\}\), we flatten them spatially and reshape them to obtain the same channel depth for each stage before concatenating them in the spatial dimension. Following the LayerNorm and efficient Transformer, we create the hierarchical long-range contextual correlation. Afterward, the tokens are split and reshaped to their original shape of features in each stage and are fed to the FET block to capture the amalgamated hierarchical contextual representation. We capture the global information from the represented token space as a _Global Query_ to the FET blocks.
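A schematic sketch, assuming PyTorch, of the token fusion described for MSCE: multi-level encoder features are projected to a shared channel depth, flattened, concatenated along the token dimension, jointly mixed, and split back to their original resolutions. The module and layer names are illustrative, and a plain linear layer stands in for the efficient Transformer block.

```python
import torch
import torch.nn as nn

class MSCEFusion(nn.Module):
    """Flatten multi-scale features to tokens, fuse them jointly, split them back."""
    def __init__(self, in_channels=(64, 128, 256, 512), dim=128):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, dim, kernel_size=1) for c in in_channels)
        self.norm = nn.LayerNorm(dim)
        self.mix = nn.Linear(dim, dim)   # stand-in for the efficient Transformer block

    def forward(self, feats):            # feats: list of (B, C_i, H_i, W_i)
        shapes, tokens = [], []
        for f, p in zip(feats, self.proj):
            f = p(f)                                     # (B, dim, H_i, W_i)
            shapes.append(f.shape[-2:])
            tokens.append(f.flatten(2).transpose(1, 2))  # (B, H_i*W_i, dim)
        t = self.mix(self.norm(torch.cat(tokens, dim=1)))  # joint long-range mixing
        outs, i = [], 0
        for (h, w) in shapes:                            # split back per stage
            n = h * w
            outs.append(t[:, i:i + n].transpose(1, 2).reshape(-1, t.shape[-1], h, w))
            i += n
        return outs

feats = [torch.randn(2, c, s, s) for c, s in zip((64, 128, 256, 512), (56, 28, 14, 7))]
print([o.shape for o in MSCEFusion()(feats)])
```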
## 3 Experiments
Our proposed method was implemented using the PyTorch library and executed on a single RTX 3090 GPU. A batch size of 24 and an SGD solver with a base learning rate of 0.05, a momentum of 0.9, and a weight decay of 0.0001 are used. The training was carried out for 400 epochs. For the segmentation task, both cross-entropy and Dice losses were utilized, combined as \(Loss=0.6\cdot L_{dice}+0.4\cdot L_{ce}\), following [12]. **Datasets:** First, we evaluated our method on the _Synapse_ dataset [15], which contains 30 cases of abdominal CT scans with 3,779 axial contrast-enhanced abdominal clinical CT images. Each CT volume consists of \(85\sim 198\) slices of a consistent size \(512\times 512\) with annotations for eight organ classes. We followed the same preferences for data preparation as in [7]. Second, our study on skin lesion segmentation is based on the _ISIC 2018_ [9] dataset, which was published by the International Skin Imaging Collaboration (ISIC) as a large-scale dataset of dermoscopy images. We follow [2] for the evaluation settings.
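For reference, a small sketch, assuming PyTorch, of the combined objective \(0.6\cdot L_{dice}+0.4\cdot L_{ce}\); the soft Dice formulation shown is one common variant and not necessarily the exact one used in the paper.

```python
import torch
import torch.nn.functional as F

def combined_loss(logits, target, w_dice=0.6, w_ce=0.4, eps=1e-6):
    """logits: (B, C, H, W) raw scores; target: (B, H, W) integer class labels."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * onehot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + onehot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2 * inter + eps) / (union + eps)).mean()   # soft Dice over classes
    return w_dice * dice + w_ce * ce

logits = torch.randn(2, 9, 64, 64, requires_grad=True)
target = torch.randint(0, 9, (2, 64, 64))
print(combined_loss(logits, target))
```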
**Qualitative and Quantitative Results:** In Table 1, we compare the performance of our proposed FET method with previous SOTA methods for segmenting abdominal organs using the DSC and HD metrics. Our method surpasses existing CNN-based methods by a significant margin. FET exhibits superior learning ability on the DSC metric compared to other models, achieving an increase of 1.9% over HiFormer. The quantitative results highlight FET's superiority in segmenting the kidney, pancreas, and spleen. Table 2a also endorses these results qualitatively: all other models suffer from organ deformations and under-segmentation when segmenting the liver, while FET performs smoothly.
**Skin Lesion Segmentation:** Table 2b also endorses the capability of FET compared to other well-known skin lesion segmentation methods. Specifically, our method performs better than hybrid approaches such as TMU-Net [20]. Additionally, our method proves to be more resilient to noisy elements than pure Transformer-based methods such as Swin-Unet [5], which suffer from reduced performance due to a lack of emphasis on local texture modeling. In addition, the qualitative results on the ISIC 2018 dataset (presented in Figure 3) confirm our method's capability to capture fine-grained boundary information.
Table 1: Comparison results of the proposed method on the _Synapse_ dataset. Blue indicates the best result, and red displays the second-best.
Table 2: (a) Segmentation results of the proposed method versus SOTA methods on the _Synapse_ dataset. (b) Quantitative results on ISIC2018 dataset.
Figure 3: Visual representation of the FET model on the _ISIC 2018_ dataset. Ground truth and prediction boundaries are shown in green and blue, respectively.
To comprehensively evaluate the influence of our module on capturing high-frequency information in deeper layers, we conducted an extensive analysis of the spectrum response in Figure 4. Our findings reveal that our method stands out from traditional self-attention modules by effectively preserving high-frequency information within the depths of the network.
To further assess the effectiveness of our approach in capturing both local and global information, we have included the visualization of attention maps in Figure 5. The results clearly demonstrate our method's capability to successfully detect both small and large organs.
Figure 4: Illustration of the spectral response of Standard Transformer (up) and FET (down) for capturing different frequency representation.
Figure 5: The performance of the FET model was evaluated by visualizing its attention map using Grad-CAM on the _Synapse_ dataset. The results showed that the model effectively detects both small (_i.e._, aorta and gallbladder from the right side’s top to bottom) and large organs (_i.e._, liver and right kidney from the left side’s top to bottom), demonstrating its effectiveness in capturing long-range dependencies and local features. In summary, the FET model performed well in detecting organs on the _Synapse_ dataset.
## 4 Conclusion
In this paper, we redesigned the Transformer block to adaptively recalibrate the spatial and context representations. We further imposed a secondary attention map to highlight the importance of boundary information within the Transformer block. Moreover, we modeled the inter-scale dependencies for further performance improvement by redesigning the skip connection path. The effectiveness of our module is illustrated through the experimental results.
**Acknowledgments:** This work was funded by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) under project number 191948804. We would like to thank Elnaz Khorami for her guidance on the proposed method and mathematical formulation.
|
2301.10440 | Scheduling Space Expander: An Extension of Concurrency Control for Data
Ingestion Queries | With the continuing advances of sensing devices and IoT/Telecom applications,
database systems need to process data ingestion queries that update the sensor
data frequently. However, as the rate of data ingestion queries increases,
existing protocols have exhibited degraded performance since concurrent updates
need to acquire lock to update the latest versions. To reduce the load on
system on data ingestion queries, we focus on the theory of version order; we
can test that a write is an old and unnecessary version by using version order
of data items. In this paper, we propose a novel protocol extension method,
scheduling space expander (SSE). SSE adds another control flow to conventional
protocols to omit updates on data ingestion queries. It generates an erasing
version order, which assumes that a transaction processes outdated unnecessary
versions. SSE also tests the correctness of this version order efficiently and
independently from conventional protocols. In addition, we present an
optimization of SSE called epoch-based SSE (ESSE), which tests and maintains an
erasing version order more efficiently than SSE. We extend two state-of-the-art
1VCC and MVCC protocols, Silo and MVTO with ESSE. Experimental results
demonstrate that extensions of Silo and MVTO improve 2.7x and 2.5x performance
on the TATP benchmark on a 144-core machine, and the extensions achieved
performance comparable to that of the original protocol for the TPC-C
benchmark. | Sho Nakazono, Hiroyuki Uchiyama, Yasuhiro Fujiwara, Hideyuki Kawashima | 2023-01-25T07:27:24Z | http://arxiv.org/abs/2301.10440v1 | # Scheduling Space Expander: An Extension of Concurrency Control for Data Ingestion Queries
###### Abstract.
With the continuing advances of sensing devices and IoT applications, database systems need to process data ingestion queries that update the sensor data frequently. To process data ingestion queries with transactional correctness, we propose a novel protocol extension method, the scheduling space expander (SSE). The key idea of SSE is that we can safely omit an update if the update becomes outdated and unnecessary. SSE adds another control flow to conventional protocols to test the transactional correctness of an erasing version order, which assumes that a transaction's updates are all outdated and unnecessary. In addition, we present an optimization of SSE called epoch-based SSE (ESSE), which generates, tests, and maintains the erasing version order more efficiently than SSE. Our approach makes the processing of data ingestion queries more efficient. Experimental results demonstrate that our ESSE extensions of Silo and MVTO improve performance by 2.7\(\times\) and 2.5\(\times\) on the TATP benchmark on a 144-core machine, and the extensions achieve performance comparable to that of the original protocols for the TPC-C benchmark.
Blind updates are the most significant difference between our intended applications and traditional applications. Traditional workloads such as TPC-C (Sang et al., 2017), which models wholesale warehouse management, also generate a large number of writes in the form of new orders, but these workloads request inserts, not blind updates. Inserts are scalable by partitioning since they write distinct items, but updates must in principle write to the same data item and are thus difficult to scale. In our experiments, as the rate of data ingestion queries increases, existing protocols exhibit degraded performance on the TATP benchmark, as shown in Figure 1. This is because they have to use a lock mechanism to serially order the update requests to the same data item in order to preserve transactional correctness. Such serial execution of updates causes performance degradation. If the throughput becomes less than the data velocity, we cannot operate the systems and services of IoT/Telecom applications.
To process massive amounts of data in real time, there exist methods that **omit** updates without lock mechanisms by using load shedding or backpressure (Sale et al., 2016; Sale et al., 2016). However, these methods do not sufficiently guarantee transactional correctness. For example, when a sensor updates the information of two nearby moving objects but the system partially omits one of them, actuators obtain data for only one moving object, and thus some actuators might cause an accident such as a collision. To guarantee transactional correctness, databases need to use concurrency control (CC) protocols. CC protocols handle the interleaving of concurrent operations by ensuring transactional correctness through two essential properties: serializability (the guarantee of a consistent data snapshot) (Sale et al., 2016) and linearizability (the guarantee of a non-stale data snapshot) (Sale et al., 2017). In theory, we can decide whether the omission of an update satisfies the correctness by finding a _version order_ (Brands et al., 2016) related to all updates. If a version order is found and its correctness is verified by a protocol, we can skip locking, buffer updates, and persistent logging while preserving transactional correctness. However, to the best of our knowledge, no existing methods leverage the notion of version orders. This is because the naive approach requires the expensive acyclicity checking of all possible dependency graphs based on all transactions and all possible version orders, which has been proven to be NP-complete (Sale et al., 2016).
In this paper, we propose a versatile protocol extension method, **scheduling space expander (SSE)**. The contributions of SSE are threefold.
**C1:** SSE reduces the verification cost to polynomial time by testing only a single **erasing version order**, which is generated from SSE's data structure. With an erasing version order, verification is needed only for a single subgraph of concurrent transactions. SSE also keeps the version order and testing algorithm of the conventional protocol; if an erasing version order is not found or fails the correctness testing, SSE delegates the control flow to the conventional protocol. Thus, SSE can omit updates but does not directly abort any transaction, which indicates that SSE purely expands the scheduling space of conventional protocols.
**C2:** We developed **epoch-based SSE (ESSE)** to introduce optimizations for SSE. With the epoch framework, ESSE reduces the number of target transactions in SSE's verification and encodes the footprints of these target transactions into a 4-bit data structure. As a result, a protocol expanded by ESSE can generate an erasing version order and execute the correctness testing in a latch-free manner. If a transaction passes the testing, it can omit its write operations with a few atomic operations such as compare-and-swap.
**C3:** We demonstrated that SSE and ESSE are applicable to various protocols. We applied ESSE to two state-of-the-art 1VCC and MVCC protocols (Silo (Silo, 2010) and Cicada-based MVTO (Sale et al., 2016)); then, we evaluated the performance on the TATP, YCSB (Brads et al., 2016), and TPC-C benchmarks in a 144-core environment. Figure 1 shows that ESSE successfully mitigates the performance problem of data ingestion queries and improves the performance on the TATP benchmark. This is because ESSE appropriately omits unnecessary versions, as illustrated by the experiments in Section 6.1. In Table 1 we present the differences between conventional protocols and our proposal. ESSE extends various state-of-the-art protocols such as Silo and MVTO to enable omitting write operations while preserving the transactional correctness.
The rest of this paper is organized as follows. Section 2 describes the preliminaries. Section 3 proposes the notion of safely omitable transactions and its correctness testing algorithm. Section 4 presents the scheduling space expander (SSE) scheme, which can generate safely omitable transactions. Section 5 presents ESSE, which is the optimization technique for SSE based on the epoch framework. Section 6 reports our evaluation of the proposed scheme. Finally, Section 7 describes related work, and Section 8 concludes the paper.
## 2. Preliminaries
We mainly use the notations derived from Weikum et al. (Sale et al., 2016). Table 2 shows frequently used symbols and notations. Let \(t_{i},x_{i},w_{i}(x_{i})\), and \(r_{i}(x_{j})\) be \(i\)-th transaction, a version of data item \(x\), a write operation, and a read operation, respectively. \(t_{i}\) has an ordered set of operations. \(w_{i}(x_{i})\) means \(t_{i}\) writes \(x_{i}\). \(r_{i}(x_{j})\) means \(t_{i}\) reads \(x_{j}\). \(ws_{i}\)

| Protocol | Transactional correctness | Omit write operations | Version storage |
| --- | --- | --- | --- |
| Timestamp Ordering (T/O) with Thomas Write Rule (TWR) | Not strictly serializable | **Yes** | 1VCC |
| Silo OCC | **Strictly serializable** | No | 1VCC |
| Cicada MVTO | Not strictly serializable | No | MVCC |
| Silo + ESSE | **Strictly serializable** | **Yes** | 1VCC |
| MVTO + ESSE | **Strictly serializable** | **Yes** | MVCC |

Table 1. Differences between conventional protocols and our proposal

and \(rs_{i}\) represent the sets of write and read operations of \(t_{i}\), respectively. \(c_{i}\) and \(a_{i}\) represent \(t_{i}\)'s termination operations: commit and abort, respectively.
### Transactional Correctness
We define the **transactional correctness** of our intended services as recoverability (Kang et al., 2017) and strict serializability (Kang et al., 2017). We assume no transaction reads any value written by uncommitted transactions. This constraint ensures recoverability. Strict serializability consists of serializability and linearizability (Kang et al., 2017). Serializability is necessary to provide consistent data snapshots for real-time operations of our intended IoT/Telecom applications, i.e., read results without inconsistent or partial updates. Among the multiple notions of serializability, we use multiversion view serializability (MVSR) because this property provides the widest scheduling space (Kang et al., 2017). Linearizability refers to the wall-clock ordering constraints among non-concurrent transactions. If a database guarantees linearizability, it prevents stale reads and writes, i.e., reading and writing outdated versions.
Bernstein et al. proposed the multiversion serialization graph (MVSG) (Bernstein et al., 2016) and proved that _a schedule is MVSR if and only if there exists an acyclic MVSG_. An MVSG has nodes for all committed transactions in the schedule. The edges are added according to a given schedule and a _version order_ for the schedule. There are two types of version orders: a version order for a data item and a version order for a schedule. A version order for a schedule is the union of the version orders for all data items. When \(x_{i}\) precedes \(x_{j}\) in the version order for a data item, we denote \(x_{i}<_{o}x_{j}\). With a version order for a schedule, the edges of the MVSG are added for each triple of distinct operations \(w_{j}(x_{j})\), \(r_{i}(x_{j})\), and \(w_{k}(x_{k})\), where \(t_{i}\neq t_{k}\neq t_{j}\). There are three types of edges: (1) \(t_{j}\xrightarrow{wr}t_{i}\) indicates that \(t_{j}\) writes a version \(x_{j}\) and \(t_{i}\) reads it. (2) If \(x_{j}<_{o}x_{k}\), then \(t_{i}\xrightarrow{\ll(rw)}t_{k}\) indicates that \(t_{i}\) reads a version \(x_{j}\) whose version order precedes \(t_{k}\)'s version \(x_{k}\). (3) Otherwise, \(t_{k}\xrightarrow{\ll(ww)}t_{j}\) indicates that \(t_{k}\) writes a version \(x_{k}\) whose version order precedes \(t_{j}\)'s version \(x_{j}\). Note that the original notation1 of the MVSG does not include the dependency types for transaction orders. Of course, Bernstein's definition has no problem; however, we introduce the above notations to clarify our proofs of the correctness theorems.
Footnote 1: In the original MVSG definition (Bernstein et al., 2016), all edges were denoted simply as \(\rightarrow\), and the version orders for the schedule and for each data item shared the same notation \(\ll\); thus, when \(x_{i}\) preceded \(x_{j}\), it was denoted as "\(x_{i}\ll x_{j}\) in \(\ll\)".
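To make the edge construction above concrete, the following self-contained Python sketch builds the three MVSG edge types from a toy committed schedule and a chosen version order, then checks the graph for a cycle. The schedule encoding, transaction ids, and the absence of an explicit final-read transaction are illustrative assumptions, not the paper's representation.

```python
# Each committed transaction: id -> {"reads": set of (item, writer_id), "writes": set of items}
txns = {
    0: {"reads": set(),        "writes": {"x"}},   # t0 writes x0
    1: {"reads": {("x", 0)},   "writes": {"x"}},   # t1 reads x0 and writes x1
    2: {"reads": set(),        "writes": {"x"}},   # t2 blind-writes x2
    3: {"reads": {("x", 2)},   "writes": set()},   # t3 reads x2
}

def mvsg_edges(txns, vorder):
    """Apply the triple rule: for w_j(x_j), r_i(x_j), w_k(x_k) with t_i != t_k != t_j,
    add t_j -wr-> t_i, then t_i -rw-> t_k if x_j <_o x_k, otherwise t_k -ww-> t_j."""
    edges = set()
    for i, ti in txns.items():
        for item, j in ti["reads"]:
            edges.add((j, i))                         # wr edge
            order = vorder[item]                      # writer ids, earliest version first
            for k, tk in txns.items():
                if k in (i, j) or item not in tk["writes"]:
                    continue
                if order.index(j) < order.index(k):
                    edges.add((i, k))                 # rw edge
                else:
                    edges.add((k, j))                 # ww edge
    return edges

def has_cycle(nodes, edges):
    adj = {n: [b for a, b in edges if a == n] for n in nodes}
    state = dict.fromkeys(nodes, 0)                   # 0 unseen, 1 on stack, 2 done
    def dfs(u):
        state[u] = 1
        for v in adj[u]:
            if state[v] == 1 or (state[v] == 0 and dfs(v)):
                return True
        state[u] = 2
        return False
    return any(state[n] == 0 and dfs(n) for n in nodes)

# two candidate version orders for item x: [0, 2, 1] means x0 <_o x2 <_o x1
for vorder in ({"x": [0, 2, 1]}, {"x": [0, 1, 2]}):
    e = mvsg_edges(txns, vorder)
    print(vorder, "cyclic:", has_cycle(list(txns), e))
```

Running the sketch shows that the same schedule is cyclic under one version order and acyclic under the other, which is exactly why the choice of version order matters for deciding whether a write may be omitted.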
### Data Ingestion Queries and Write Omission Technique
Data ingestion queries are used to aggregate updates from sensors and mobile devices in IoT/Telecom applications. This type of query has long been discussed in non-transactional systems such as streaming databases (Kang et al., 2017; Kang et al., 2017). A data ingestion query does not need to be transactional if it ingests data for historical analysis. However, if the data is used in real-time operations that manipulate real-world actuators, we need to use CC protocols because such operations require transactional correctness (Bernstein et al., 2016; Kang et al., 2017). Unfortunately, for write-contended workloads such as data ingestion queries, existing state-of-the-art CC protocols have been shown not to exhibit high throughput (Kang et al., 2017; Kang et al., 2017) since they need to use locking to satisfy serializability.
To solve the performance degradation problem of data ingestion queries, non-transactional streaming databases use write omission techniques such as load shedding (Kang et al., 2017). In transaction processing, such a write omission technique is known as the Thomas write rule (TWR) (TWR, 1996), which is an optimization rule for the timestamp ordering (T/O) CC protocol. With the TWR, a transaction can avoid installing a write of a data item \(x\) when the transaction's timestamp is less than the timestamp of the already installed \(x\). However, it is unclear whether or not such an omission satisfies the transactional correctness, and it is also unclear whether or not this rule can be applied to other modern protocols.
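A minimal sketch of the TWR check described above; the record fields, timestamps, and function name are illustrative assumptions rather than a rendering of any specific engine.

```python
from dataclasses import dataclass

@dataclass
class Record:
    value: object
    wts: int          # timestamp of the latest installed write

def twr_write(record, value, ts):
    """Timestamp-ordering write with the Thomas Write Rule.

    If the writer's timestamp is older than the installed version's timestamp,
    the write is simply omitted instead of being installed (or aborted).
    """
    if ts < record.wts:
        return "omitted"          # TWR: stale write, dropped under T/O
    record.value, record.wts = value, ts
    return "installed"

r = Record(value=0, wts=10)
print(twr_write(r, value=5, ts=8))    # omitted  (8 < 10)
print(twr_write(r, value=7, ts=12))   # installed
```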
## 3. Safely Omitted Transactions
In this section, we introduce the definition of **safely omittable transactions**, which allows any protocol to utilize the write omission technique, together with its validation algorithm based on the MVSG.

| Notation | Definition |
| --- | --- |
| \(t_{i}\) | \(i\)-th transaction; an ordered set of operations |
| \(x_{i}\) | a version of data item \(x\) |
| \(w_{i}(x_{i})\) | a write operation; \(t_{i}\) writes \(x_{i}\) |
| \(r_{i}(x_{j})\) | a read operation; \(t_{i}\) reads \(x_{j}\) |
| \(c_{i}\) | a commit operation of \(t_{i}\) |
| \(a_{i}\) | an abort operation of \(t_{i}\) |
| \(rs_{i}\) | a set of versions read by \(t_{i}\) |
| \(ws_{i}\) | a set of versions written by \(t_{i}\) |

Table 2. Frequently used symbols and notations

Figure 2. Examples of schedules with version orders. The dotted operations depicted with wastebaskets omit their versions. Only cases (b) and (d) have safely omittable transactions (i.e., the transactional correctness of the unpublished transactions is ensured).
We first provide the following definition:
Definition 1 (Unpublished).: _An **unpublished** transaction is a transaction that does not install or log its write set into storage._
Definition 2 (Safely Omitted).: _An unpublished transaction \(t_{j}\) is **safely omittable** if \(c_{j}\) does not affect the correctness._
The key aspect of Definition 2 is that the transaction can commit without publishing its write set. Its versions must be unread by both concurrent and future transactions. Therefore, \(t_{j}\)'s versions are unnecessary for other transactions; we can skip both buffer updates and persistent logging for safely omittable transactions.
To test whether a transaction \(t_{j}\) is safely omittable, we need to verify the correctness. This is achieved with the notion of the MVSG. Specifically, we have to generate a version order for the schedule and then test 1) the MVSG's acyclicity and 2) the wall-clock ordering among non-concurrent transactions. Figure 2 illustrates this testing with two example schedules and four version orders. Safely omittable transactions are grayed out, and unpublished updates are marked with a trash box. The pairs (a)-(b) and (c)-(d) have the same schedules but differ in their version orders and transaction lifetimes, respectively. These differences result in (b) and (d) containing safely omittable transactions, while (a) violates serializability and (c) violates linearizability. Note that the operations arrive in wall-clock order, depicted left to right, but we draw the MVSGs with version orders that are generated regardless of the arrival order.
**Serializability.** In (a) and (b), \(t_{1}\) executes a read-modify-write on \(x\) (reads \(x_{0}\) and writes \(x_{1}\) as the next version) and \(t_{2}\) executes a blind write (writes \(x_{2}\) as any version) on the same data item. If we generate a version order \(x_{2}<_{o}x_{1}\) and omit \(t_{2}\) as seen in (a), the edges of the MVSG form a cycle \(t_{1}\to t_{2}\to t_{1}\). This is because \(t_{1}\) read-modify-writes the version just after \(x_{0}\), and thus no transaction can place a version between \(x_{0}\) and \(x_{1}\). However, if we generate a version order \(x_{1}<_{o}x_{2}\) as seen in (b), \(t_{1}\) is safely omittable since the MVSG is acyclic.
**Linearizability.** In (c) and (d), there exist only blind updates. Therefore, both MVSGs are edgeless and acyclic. However, if we generate an improper version order and omit the wrong versions, the database violates linearizability. In case (c), we generate the version order \(x_{2}<_{o}x_{1}\). This version order does not match the wall-clock ordering of the transactions; \(t_{1}\) and \(t_{2}\) are non-concurrent, and thus the order of their versions must be \(x_{1}<_{o}x_{2}\). Hence, the transactional correctness of the database is lost by the write omission.
We can see the rules and limitations for creating safely omittable transactions from the above examples. Examples (a) and (b) indicate that there must be at least one _blind update_ (Kumar et al., 2017). If there is no blind update but we omit an update, the correctness testing will not pass regardless of which version order we create. In addition, examples (c) and (d) indicate that the blind update must be written by a concurrent transaction. Hence, to create safely omittable versions, we have two questions that, to the best of our knowledge, have never been comprehensively studied:
1. How to find concurrent blind updates to generate a version order?
2. How to test the correctness?
In this paper, we package the solutions to these two problems into a single CC protocol extension called SSE.
## 4. SSE: Scheduling Space Expander
In this section, we propose the **scheduling space expander (SSE)**, which adds another control flow to conventional protocols for generating safely omittable transactions. SSE solves the problems shown in the previous section as follows:
**A1:** SSE selects and manages concurrent blind updates as _pivot versions_ to generate an _erasing version order_, which assumes that there exist safely omittable transactions (Section 4.1).
**A2:** SSE uses the MVSG to test the correctness of the erasing version order efficiently (Section 4.2).
We first introduce the _erasing version order_, a version order generated by SSE to reduce the computational cost of the correctness testing (Section 4.1). We next outline how SSE tests an erasing version order (Section 4.2). We then show SSE's control flow for expanding a protocol and improving its performance on data ingestion queries (Section 4.3).
### SSE's Version Order Generation
The serializability theory indicates that a write is safely omittable if there exists a version order that yields an acyclic MVSG. However, it is impractical to run the test with all possible version orders, since testing all version orders is proven to be NP-complete (Bartos et al., 2017; Kumar et al., 2017; Kumar et al., 2017). To perform the test efficiently, SSE incorporates heuristic restrictions when generating candidate version orders. When an active transaction \(t_{j}\) arrives, SSE generates an **erasing version order**: a version order that assumes all writes in \(t_{j}\) are safely omittable. Formally, an erasing version order satisfies the following conditions: (1) SSE changes the version order only for data items that \(t_{j}\) is updating. From this condition, SSE's correctness testing can focus on the subsets of the MVSG that include the node of \(t_{j}\); if correctness is violated, it is due to a change of a version order for a data item related to \(ws_{j}\). (2) Each version \(x_{j}\) is the version just before a blind update. We add this condition to preserve the unread condition of safely omittable versions; non-latest versions become stale and are not requested by subsequent transactions. In addition, we enforce that the following version must be a blind update: as described for Figure 2(a) in Section 3, if \(x_{j}\) is placed in the middle of a read-modify-write, the MVSG always becomes cyclic.
Figure 3. Overall structure of SSE implementation.
As a concrete way to create an erasing version order, SSE selects a blind update as the _pivot version_ for each data item. A pivot version is a landmark for generating the erasing version order; it tells other transactions to "place your version just before this pivot version". For example, in SSE, a transaction \(t_{j}\) generates an erasing version order such that all its versions are located immediately before the pivot versions. If there exist pivot versions \(x_{po},y_{po},\dots\), then \(t_{j}\) creates an erasing version order \(x_{j}<_{o}x_{po},y_{j}<_{o}y_{po}\). Figure 3 shows the overall structure of our prototype implementation of pivot versions. We implemented pivot versions by adding a single indirect reference for each data item. We assume that the database has a tree-like index and that every data item is accessed from its leaf nodes. In SSE, every index leaf node has a pointer to a pivot version, which is the indirection object to the data item. Each data item is represented as a singly linked list starting from the pivot version. SSE completes the correctness testing of an erasing version order only with pivot versions; a pivot version includes the footprints of reachable transactions, as described later in Section 4.2.
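A toy Python sketch of the indirection described above: each index leaf points to a pivot-version object, and the versions of the data item hang off it as a singly linked list. The field names and the footprint set are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Version:
    value: object
    writer: int                        # id of the transaction that wrote it
    next: Optional["Version"] = None   # older version in the singly linked list

@dataclass
class PivotVersion:
    """Indirection object pointed to by the index leaf node for one data item."""
    latest: Optional[Version] = None
    footprints: set = field(default_factory=set)   # ids of reachable concurrent txns

    def install(self, value, writer):
        """Publish a new latest version (the conventional write path)."""
        self.latest = Version(value, writer, next=self.latest)

    def erasing_order_target(self):
        """A write x_j of an incoming transaction is ordered *just before* this
        pivot's latest blind update, i.e. the erasing version order is x_j <_o x_pivot."""
        return self.latest

# index leaf -> pivot -> version chain
pivot = PivotVersion()
pivot.install(value=10, writer=1)   # x_1
pivot.install(value=20, writer=2)   # x_2 becomes the pivot's latest blind update
print(pivot.erasing_order_target().writer)   # 2: place omitted writes before x_2
```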
### Correctness Testing
With an erasing version order, SSE tests the transactional correctness efficiently. To test serializability, it is sufficient to test only the MVSG paths that include the node of \(t_{j}\). To ensure linearizability, all of \(t_{j}\)'s reachable nodes that appear in the serializability test must be concurrent with \(t_{j}\). We define two types of node sets, _successors_ and _overwriters_ (abbreviated as \(s_{j}\) and \(o_{j}\)), according to the outgoing edges from \(t_{j}\), as follows:
Definition 3 (Type of Reachable Transactions).: _For the transactions directly reachable from a transaction \(t_{j}\), we define the following two sets:_
\[s_{j}\coloneqq\{t_{k}\mid t_{j}\xrightarrow{\ll(ww)}t_{k}\},\qquad o_{j}\coloneqq\{t_{k}\mid t_{j}\xrightarrow{\ll(rw)}t_{k}\}\]
The following theorem is derived from this definition:
Theorem 1 (Directly Reachable Transactions).: _If a schedule satisfies recoverability and an edge \(t_{j}\to t_{k}\) exists, then the directly reachable transaction \(t_{k}\) is in either \(o_{j}\) or \(s_{j}\)._
Proof.: An MVSG has only three types of edges: \(t_{j}\xrightarrow{wr}t_{k}\), \(t_{j}\xrightarrow{\ll(rw)}t_{k}\), and \(t_{j}\xrightarrow{\ll(ww)}t_{k}\). Because recoverability is satisfied and \(t_{j}\) is active, the edge \(t_{j}\xrightarrow{wr}t_{k}\) does not exist.
From Theorem 1, SSE can test the correctness by testing all paths starting from these two sets of transactions. Therefore, SSE separates the correctness testing into the two sub-tests described below.
```
Input: t_j
Output: whether or not committing t_j keeps strict serializability

T = { t_k, t_i | t_k ∈ s_j ∧ t_i is reachable from t_k in the MVSG }      // (A)
forall t_m in T do
    if t_m commits before t_j's beginning then                            // (B)
        return strict serializability is not satisfied
    forall y_m in ws_m do
        forall y_n in rs_j do
            if y_m = y_n then                                             // (C) wr edge to t_j
                return MVSG is cyclic
    forall y_g in rs_m do
        forall y_j in ws_j do
            if y_g <_o y_j then                                           // (D) rw edge to t_j
                return MVSG is cyclic
return MVSG is acyclic
```
**Algorithm 1**Correctness testing for \(s_{j}\)
**Testing of successors.** No studies have focused on the testing of successors because conventional protocols always write the incoming \(t_{j}\)'s versions as the latest versions, so the set of successors is empty. Algorithm 1 provides a testing procedure for \(s_{j}\). Let \(\textit{rs}_{i}\) be the set of versions read by \(t_{i}\), and let \(\textit{ws}_{i}\) be the set of versions written by \(t_{i}\). In step (A), it collects the transactions that are included in or reachable from \(s_{j}\). In step (B), for linearizability, it tests the concurrency between \(t_{j}\) and each transaction \(t_{m}\) among the collected transactions. In steps (C) and (D), the algorithm tests serializability. Because each \(t_{m}\) is a transaction included in or reachable from \(s_{j}\), a path \(t_{j}\stackrel{{\ll(\textit{ww})}}{{\rightarrow}}\cdots\to t_{m}\) already exists. Therefore, if there is no path \(t_{m}\to t_{j}\) for any \(t_{m}\), the MVSG is acyclic; steps (C) and (D) thus focus on the last such edge \(t_{m}\to t_{j}\). Note that transaction \(t_{m}\) does not have a path \(t_{m}\stackrel{{\ll(\textit{ww})}}{{\rightarrow}}t_{j}\). This type of edge is added to the MVSG only if some committed transaction reads some version in \(\textit{ws}_{j}\), and such a read operation is not permitted, in order to enforce recoverability; because \(t_{j}\) is an active transaction, no committed transaction can read version \(x_{j}\). Therefore, we only need to check the last edges of types \(t_{m}\stackrel{{\textit{wr}}}{{\rightarrow}}t_{j}\) and \(t_{m}\stackrel{{\ll(\textit{rw})}}{{\rightarrow}}t_{j}\). Accordingly, in step (C), the algorithm checks the last edges \(t_{m}\stackrel{{\textit{wr}}}{{\rightarrow}}t_{j}\). It checks whether some \(y_{n}\) in \(\textit{rs}_{j}\) is the same as some \(y_{m}\) in \(\textit{ws}_{m}\). This is because, if this condition holds, then there exists a cyclic path \(t_{j}\stackrel{{\ll(\textit{ww})}}{{\rightarrow}}\cdots\to t_{m}\stackrel{{\textit{wr}}}{{\rightarrow}}t_{j}\). Similarly, in step (D), the algorithm checks the last edges \(t_{m}\stackrel{{\ll(\textit{rw})}}{{\rightarrow}}t_{j}\) by checking whether there exists \(y_{j}\) in \(\textit{ws}_{j}\) that is newer than some \(y_{g}\) in \(\textit{rs}_{m}\). This is because there exists a cyclic path \(t_{j}\stackrel{{\ll(\textit{ww})}}{{\rightarrow}}\cdots\to t_{m}\stackrel{{\ll(\textit{rw})}}{{\rightarrow}}t_{j}\) if the condition holds. Consequently, if the tests of steps (C) and (D) pass, then no transaction in \(s_{j}\) can reach \(t_{j}\).
**Testing of overwriters.** The detailed algorithms for this subset are beyond the scope of this paper, since we can use existing algorithms, such as anti-dependency validation, from conventional protocols. For example, Silo (Silo, 2017) has one of the simplest approaches: it checks \(o_{j}=\phi\) by testing, for each version in \(\textit{rs}_{j}\), whether it has been overwritten by a newer version. If both the successor and overwriter tests succeed, we can commit \(t_{j}\) without any correctness violation, since all of \(t_{j}\)'s write operations are safely omittable.
An important fact is that the set of transactions \(t_{po}\) that wrote the pivot versions is equivalent to the set of successors \(s_{j}\). This is because the pivot versions are the versions immediately following \(t_{j}\)'s updates. Therefore, we can implement Algorithm 1 by using the read/write sets of the transactions \(t_{m}\), which include \(t_{po}\) and the transactions reachable from some \(t_{po}\). To this end, each pivot version holds the footprints of the transactions that have read or written a version greater than or equal to this pivot version.
### Control Flow
We have shown that an erasing version order helps to confirm that a write is safely omittable, and we have given an algorithm to test it. The next challenge is how to apply this version order and testing algorithm to conventional CC protocols. It is desirable not to change the performance characteristics and the specifications of conventional CC protocols drastically. However, if we simply change a conventional protocol to generate an erasing version order, the changed protocol will be useless; since an erasing version order always tries to write non-latest versions, we would never successfully update the latest versions. Even if contended updates occur on the same data item, practical applications need to update the latest version periodically. Rather than such changes, we propose **SSE**, an extension method that purely extends the scheduling space of conventional protocols. SSE is not a protocol but an extension of a protocol; instead of changing the version order and correctness testing algorithms of conventional protocols, SSE adds an erasing version order and tests its correctness independently.
Figure 4 shows the control flow of an extended protocol. SSE starts its processing upon the commit request of an active transaction \(t_{j}\). In the first step, SSE generates an erasing version order as described in Section 4.1. If no such version order is found, SSE delegates the processing of \(t_{j}\) to the conventional protocol. If an erasing version order is found, SSE then checks correctness. For linearizability, it checks concurrency between \(t_{j}\) and all related transactions. For serializability, it checks whether the MVSG has no cycle. If both tests pass, SSE omits all of \(t_{j}\)'s versions and commits \(t_{j}\). Otherwise, it delegates the processing of \(t_{j}\) to the conventional protocol. Note that SSE does not change the version order generated by the conventional protocol, and SSE can commit \(t_{j}\) but does not directly abort \(t_{j}\). Even if SSE cannot commit \(t_{j}\) with its erasing version order, it may be possible to commit \(t_{j}\) with a version order from the conventional protocol. This indicates that SSE purely expands the scheduling space of conventional protocols.
In the best case, we can omit the updates of \(t_{j}\) only by accessing the pivot versions, since these indirection data structures have all the necessary data for correctness testing. Otherwise, if an update \(w_{j}(x_{j})\) does not find a pivot version of \(x\) or some test fails, then we delegate control to the conventional protocol and quit SSE's testing. In this case, pivot versions add almost negligible overhead to conventional protocols; they add only a single indirection reference for each data item.
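The delegation logic above can be summarized in a few lines. The following C++ sketch only illustrates the control flow in Figure 4; the `Transaction` type and the hook names are placeholders, not the prototype's actual API.

```
#include <functional>

// Illustrative only: a transaction and the hooks an SSE-extended
// protocol would need. None of these names come from the paper's code.
struct Transaction { /* read set, write set, epoch, ... */ };

struct SseHooks {
    std::function<bool(Transaction&)> generate_erasing_order;   // Section 4.1
    std::function<bool(Transaction&)> test_successors;          // Algorithm 1
    std::function<bool(Transaction&)> test_overwriters;         // e.g. Silo-style
    std::function<void(Transaction&)> omit_writes_and_commit;
    std::function<void(Transaction&)> delegate_to_conventional;
};

// Returns true if t_j was committed by SSE, false if it was delegated.
// SSE never aborts t_j itself; the conventional protocol decides its fate.
bool sse_commit(Transaction& tj, SseHooks& h) {
    if (!h.generate_erasing_order(tj) ||   // no pivot version found
        !h.test_successors(tj)        ||   // MVSG cycle or non-concurrent
        !h.test_overwriters(tj)) {         // some read was overwritten
        h.delegate_to_conventional(tj);
        return false;
    }
    h.omit_writes_and_commit(tj);          // all writes are safely omittable
    return true;
}
```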
## 5. ESSE: Epoch-Based SSE
In this section we introduce the optimization method of SSE, called **epoch-based SSE (ESSE)**. We first explain that the naive implementation of SSE has performance problems in the correctness testing of successors. As described in Section 3, this testing requires both a read set (\(rs_{m}\)) and a write set (\(ws_{m}\)) for all reachable transactions \(t_{m}\), which leads to a huge overhead. Specifically, in step (A) of Algorithm 1, the memory consumption from the footprints \(rs_{m}\) and \(ws_{m}\) increases as the number of testing target transactions \(t_{m}\) increases. In addition, steps (C) and (D) have a synchronization problem: the footprints \(rs_{m}\) and \(ws_{m}\) must be concurrent data structures to commit transactions in parallel.
To solve these two performance problems, we propose ESSE, an optimized SSE implementation that uses epoch-based group commits [4; 11]. An epoch-based group commit divides the wall-clock time into _epochs_ and assigns each transaction to an epoch. The commit operations for the transactions in an epoch are delayed simultaneously. Because they have the same commit point, they can be regarded as concurrent, whereas transactions in different epochs are not concurrent. ESSE heavily utilizes this notion of concurrency within epochs. Specifically, ESSE selects pivot versions as the first blind updates of data items in each epoch. The linearizability testing then becomes simple; we can test it easily by checking that the pivot versions are written in the same epoch as \(t_{j}\). This tight coupling of epochs and pivot versions helps to mitigate the former performance problem of SSE, the huge memory consumption. There is no need to retain the footprints of the transactions in old epochs, because ESSE fails the linearizability testing before the serializability testing for these transactions.
ESSE also includes two optimizations, as follows. (1) It creates an optimization rule that uses epochs to reduce the number of testing target transactions \(t_{m}\) (Section 5.1). (2) It provides a **reachability tracker** for testing successors efficiently; it is a latch-free implementation of SSE's pivot versions that solves the latter performance problem of SSE, the necessity of synchronization (Section 5.2). After this explanation of ESSE, we show the actual application of ESSE to conventional protocols and explain what kinds of protocols are preferable for ESSE (Section 5.3).
### Optimization Rule
To reduce the number of testing target transactions, we add optimization rules that use ESSE's pivot versions. For an epoch, we define its set of pivot versions as a **pivot barrier**. Using the pivot barrier, we obtain the following theorem, which provides important properties for reducing the target transactions.
Theorem 2 (Pivot Barrier Violation).: _Let \(t_{j}\) be an active transaction in epoch \(e_{1}\), attempting to commit on ESSE, and let \(t_{p}\) denote the transactions that wrote the pivot barrier (the pivot versions in the epoch) of \(e_{1}\). The correctness testing of \(successors_{j}\) passes if no transaction satisfies both of the following conditions: (1) it is reachable from some \(t_{p}\), i.e., it reads or writes a version on the right side of the pivot barrier; and (2) \(t_{j}\) is directly reachable from it._
Proof.: To prove by contradiction, we first assume that no transaction exists that satisfies both conditions of Theorem 2 and that the
Figure 4. Control flow of SSE with conventional protocols.
testing of \(successors_{j}\) detects an MVSG cycle. Because the MVSG has a cycle, there exists a transaction \(t_{i}\) from which \(t_{j}\) is directly reachable, which satisfies the direct reachability condition in Theorem 2. By the assumption, \(t_{i}\) does not satisfy the right side of the pivot barrier condition; thus, \(t_{i}\) is not reachable from any \(t_{p}\). Therefore, the MVSG cycle is completed on the left side of the pivot barrier, in the form \(t_{j}\to\cdots\to t_{i}\to t_{j}\), without passing through any \(t_{p}\). We focus on the first edge outgoing from \(t_{j}\). Because all versions in \(ws_{j}\) are immediate predecessors of the pivot versions, no transaction can write a version that is larger than a version belonging to \(t_{j}\) but smaller than the pivot version. Therefore, to form the cycle on the left side of the pivot barrier, the type of the edge in the cycle outgoing from \(t_{j}\) must not be \(\stackrel{{\mathsf{ww}}}{{\rightarrow}}\). In addition, the type of edge must not be \(\stackrel{{\mathsf{wr}}}{{\rightarrow}}\), since \(t_{j}\) is an active transaction and it is attempting to omit its writes. Therefore, the edge type must be \(\stackrel{{\mathsf{rw}}}{{\rightarrow}}\). Then, from Theorem 1 and the assumption, \(t_{i}\) is not in \(successors_{j}\) or its reachable transactions. Therefore, \(t_{i}\) is not a target transaction of the correctness testing of \(successors_{j}\), and \(t_{j}\) cannot detect an MVSG cycle.
Theorem 2 clarifies the type of transactions introducing MVSG cycles during the testing of successors. Because an MVSG cycle always includes a transaction that reads or writes a version on the right side of the pivot barrier and from which \(t_{j}\) is reachable, ESSE only needs to store the footprints of such transactions. If \(t_{j}\) detects the existence of such transactions in the saved footprints, ESSE does not commit \(t_{j}\); in contrast, the existence of other transactions does not matter for \(t_{j}\). Therefore, we only need to manage the footprints of transactions that satisfy the two conditions in Theorem 2. Specifically, we store the footprints of transactions that satisfy the first condition (right side of the pivot barrier), since we cannot test whether a transaction satisfies the second condition without the incoming transaction \(t_{j}\). Transactions that satisfy the first condition assume that an incoming \(t_{j}\) may be reachable from them, and thus they store their read/write sets into the pivot version.
Figure 5 shows examples of pivot versions and their version order in each epoch. All versions are installed in shared memory, and all transactions \(t_{1},t_{2},t_{3},t_{4}\), and \(t_{5}\) are committed. Let \(t_{j}\) be an active transaction that belongs to epoch \(e_{1}\) and requests to write \(x_{j}\) and \(y_{j}\). For \(t_{j}\)'s correctness testing of successors, the existence of \(t_{3}\) is essential: because \(t_{3}\) satisfies both the right-side and directly reachable conditions in Theorem 2, it generates a cycle in the MVSG and violates serializability in committing \(t_{j}\) with ESSE's erasing version order. Therefore, before committing \(t_{j}\), ESSE has to detect such an MVSG cycle from the footprint of \(t_{3}\). In contrast, because \(t_{1}\) does not satisfy the right-side condition, by Theorem 2, it is unnecessary to save its footprint. In addition, by using epochs, ESSE reduces the number of necessary footprints and enables periodic garbage collection. There is no need to manage the pivot versions and pivot barriers of old epochs. Specifically, by linearizability, it is unnecessary to manage the footprints of any transactions except \(t_{5}\) after all of \(e_{1}\)'s transactions are terminated; after \(e_{1}\) finishes, we do not start any transactions belonging to \(e_{1}\), and it is thus unnecessary to manage the pivot versions and footprints of \(e_{1}\).
### Latch-free Implementation
In ESSE, a pivot version for each data item is represented as a **reachability tracker**, which is a 64-bit data structure. It is the detailed implementation of the pivot version as described in Figure 3. A reachability tracker consists of an epoch and two fields (MergedRS and MergedWS). Figure 6 shows the reachability tracker's layout.
* **Epoch.** This field stores the epoch of the pivot version as a 32-bit integer. Each transaction fetches and assigns its own epoch from the global epoch at the beginning of the transaction. When a transaction executes a blind update and it becomes a pivot version, ESSE stores the transaction's epoch in this field. This field is used for the correctness testing. Linearizability is satisfied if all collected reachability trackers have the same epoch as the active transaction's epoch.
Figure 5. Pivot versions and pivot barriers of two epochs. None of the versions is omittable, and all are installed in memory. An epoch’s pivot barrier consists of its pivot versions. The gray boxes depict the pivot versions, and the dashed lines depict the pivot barriers of each epoch. \(t_{3}\) fails SSE’s correctness testing and installs versions on both the right and left sides of the pivot barrier. Such a transaction causes an MVSG cycle for subsequent transactions.
Figure 6. Layout of a reachability tracker
Figure 7. An example of mergedWS
* **MergedRS (mRS) and MergedWS (mWS).** These fields are 16-bit Bloom filters. Each filter stores the footprints of the transactions that satisfy the two conditions in Theorem 2; it stores the union of the keys of the read or write sets of all transactions that read or write a version greater than or equal to the pivot version. Figure 7 illustrates an example layout of the mWS. Each data item \(x,y,z\) is mapped to its corresponding slot by a hash function \(h\). ESSE uses this structure to test the serializability of successors.
The reachability tracker encapsulates the functionality of SSE's pivot version into a 64-bit data structure; it contains a concurrency detector for transactions and the footprints of reachable transactions. Through this compression, ESSE accesses the reachability trackers in a latch-free manner by using atomic operations such as compare-and-swap (CAS).
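To make the layout concrete, here is a minimal C++ sketch of the 64-bit reachability tracker in Figure 6: a 32-bit epoch plus two 16-bit Bloom filters. The exact field packing, the hash function, and the helper names are our assumptions, not details taken from the paper's code.

```
#include <atomic>
#include <cstdint>

struct ReachabilityTracker {
    uint32_t epoch;  // epoch of the pivot version
    uint16_t mRS;    // merged read-set filter of reachable transactions
    uint16_t mWS;    // merged write-set filter of reachable transactions
};
static_assert(sizeof(ReachabilityTracker) == 8, "fits in one CAS word");

// Map a key to one of the 16 Bloom-filter slots (assumed hash function).
inline uint16_t slot(uint64_t key) {
    return static_cast<uint16_t>(1u << ((key * 0x9e3779b97f4a7c15ULL) >> 60));
}

// Algorithm 2, testing of successors for an active transaction t_j;
// t_j's read/write sets are assumed already folded into 16-bit filters.
inline bool successors_pass(ReachabilityTracker rt, uint32_t tj_epoch,
                            uint16_t tj_rs_bits, uint16_t tj_ws_bits) {
    if (rt.epoch != tj_epoch) return false;  // (B) not concurrent with t_j
    if (rt.mWS & tj_rs_bits)  return false;  // (C) possible wr-edge into t_j
    if (rt.mRS & tj_ws_bits)  return false;  // (D) possible rw-edge into t_j
    return true;  // filter collisions can only cause spurious failures above
}

// Trackers can be published as std::atomic<ReachabilityTracker>; on an
// 8-byte trivially copyable type this is typically lock-free, so one
// compare_exchange updates epoch, mRS, and mWS together.
using AtomicTracker = std::atomic<ReachabilityTracker>;
```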
```
Input: \(t_{j}\)    Output: whether or not \(c_{j}\) keeps serializability
forall \(x_{j}\) in \(\textit{ws}_{j}\) do
    RT := get_reachability_tracker_of(\(x\))  // (A)
    if RT.epoch \(\neq\) \(t_{j}\)'s epoch then
        return strict serializability is not satisfied  // (B)
    if any key in \(\textit{rs}_{j}\) exists in RT.mWS then  // (C)
        return MVSG may not be acyclic
    if any key in \(\textit{ws}_{j}\) exists in RT.mRS then  // (D)
        return MVSG may not be acyclic
return strict serializability is satisfied
```
**Algorithm 2** Correctness testing for \(successors_{j}\) by using reachability trackers
Algorithm 2 is a detailed implementation of Algorithm 1 based on reachability trackers. All steps (A-D) are equivalent in the two algorithms. In step (A), it collects the reachability trackers for the data items in \(ws_{j}\) in order to generate the erasing version order. In step (B), the algorithm checks linearizability by testing whether \(t_{j}\) and the reachability trackers have the same epoch. If it detects different epochs, the correctness testing for linearizability fails. In steps (C) and (D), the algorithm checks serializability by testing whether \(t_{j}\) has an incoming edge. The reachability tracker exploits Bloom filters, and thus it may produce false positives, though it produces no false negatives. When the hash function assigns different keys to the same slot, it may report that a non-existent cycle exists.
To use reachability trackers for correctness testing, we need to update them to maintain the MergedRS/WS of the transactions reachable from the pivot version. Thus, we add this update before the commit of each transaction. Algorithm 3 describes the ESSE commit protocol\({}^{2}\) of an active transaction \(t_{j}\). In step (1), it tests the strict serializability of the \(o_{j}\) and \(s_{j}\) sets. If the testing fails, the algorithm quits processing \(c_{j}\) with ESSE's version order and delegates subsequent processing to the conventional protocol. In step (2), if \(t_{j}\) does not satisfy the conditions of Theorem 2, ESSE does not store a footprint. Otherwise, ESSE stores \(t_{j}\)'s footprint in the reachability trackers. In step (3), it stores all edges from the pivot barrier to \(t_{j}\) into the mergedRS of the data item. Specifically, the algorithm sets a bit flag in mergedRS and mergedWS for each data item in \(rs_{j}\) and \(ws_{j}\), respectively. It also sets bit flags for all bits in the reachability trackers of \(rs_{j}\). In step (4), it updates the mergedWS. If \(x_{j}\) is the first blind update of the epoch, the algorithm resets the pivot version of data item \(x\) to \(x_{j}\). Otherwise, it adds \(t_{j}\)'s footprint and all edges from the versions in the pivot barrier to the mergedWS of the data item. Note that steps (3) and (4) merge the bits in the mRS/mWS of the reachability trackers accessed by \(t_{j}\) to keep reachability from the pivot versions; when \(t_{j}\) accesses \(x\) and \(y\) and commits, there exist paths from the pivot versions of \(x\) and \(y\) to \(t_{j}\), respectively, and thus the reachability trackers of both data items should store these paths by merging.
Footnote 2: Here, the “commit protocol” refers to the processing at the time when a user application does not add any operation into the transaction. It does not refer to the “commit phase” in optimistic concurrency control (OCC).
Throughout the algorithm, we access the reachability trackers atomically in a latch-free manner by using the 64-bit data layout. ESSE copies all reachability trackers to another location at the beginning of the commit protocol. After testing and modification of mRS/mWS, it performs the CAS operation for all locations to update
all fields atomically. If a CAS operation fails, ESSE retries the commit protocol from the beginning. If the bit arrays in a reachability tracker are unchanged by the modifications, we can guarantee atomicity in a lightweight way: a load, instead of a CAS, suffices to verify that no changes occurred concurrently.
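A compact sketch of that publication step is below. The function and variable names are ours, and we assume, as a simplification, that bits already merged by an earlier successful CAS need not be rolled back, since merged bits are only conservative.

```
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Snapshot each 64-bit tracker word at the start of the commit protocol,
// merge t_j's footprint into the copies, then publish every word with a
// single CAS; any failure restarts the commit protocol from the beginning.
bool publish_footprints(std::vector<std::atomic<uint64_t>*>& trackers,
                        const std::vector<uint64_t>& snapshot,
                        const std::vector<uint64_t>& updated) {
    for (std::size_t i = 0; i < trackers.size(); ++i) {
        if (updated[i] == snapshot[i]) {
            // No bits changed: a plain load suffices to verify that no
            // concurrent modification occurred.
            if (trackers[i]->load() != snapshot[i]) return false;  // retry
            continue;
        }
        uint64_t expected = snapshot[i];
        if (!trackers[i]->compare_exchange_strong(expected, updated[i]))
            return false;  // retry the whole commit protocol
    }
    return true;
}
```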
### Extension Details
To apply ESSE to a conventional protocol, we need to add some components: an epoch-based group commit, a read/write set for each transaction, the reachability tracker, and the correctness testing of overwriters. Table 3 summarizes the components required for the ESSE extension to a protocol. If the protocol already has the necessary components, ESSE can use them straightforwardly. For example, Silo has everything ESSE needs, and thus it is one of the preferable protocols. Note that Table 3 does not include the reachability tracker, because ESSE adds it to all protocols as described in Figure 3. In contrast with Silo, the traditional two-phase locking protocol (2PL) (Garf et al., 2016) has none of the necessary components. In addition, 2PL cannot utilize the version omission technique of ESSE since it writes a version to the data item immediately, before the commit of the transaction. Moreover, if 2PL is used with a logging algorithm such as ARIES (Shi et al., 2017), it persists the log immediately. The protocol of ESSE starts when the uncommitted versions are already installed and persisted. In this case, if the erasing version order is accepted, 2PL+ESSE must undo the uncommitted installed versions before unlocking. Therefore, OCC is a desirable property for ESSE in terms of performance.
We extended two optimistic protocols, Silo and MVTO, to Silo+ESSE and MVTO+ESSE, respectively. We chose these protocols since they are modern, fast protocols of the 1VCC and MVCC types, respectively. Besides 2PL, we also did not include T/O with TWR in the experiments. T/O inherently requires a centralized counter to generate monotonically increasing timestamps, and it is known that protocols with such a counter incur serious performance degradation in a many-core environment (Shi et al., 2017).
**Silo and Silo+ESSE.** Silo (Shi et al., 2017) is an optimistic protocol that obtains state-of-the-art performance on read-intensive workloads. When an application requests a transaction to commit, Silo acquires locks for all data items in the transaction's write set before installing new versions. We ported Silo's correctness testing of overwriters to Silo+ESSE. Silo's testing checks \(o_{j}=\phi\); if a data item in \(rs_{j}\) is overwritten, Silo and Silo+ESSE abort \(t_{j}\). Note that the original Silo stores records in the leaf nodes of the index directly. Our implementation of Silo and Silo+ESSE may cause an overhead of a cache miss due to ESSE's reachability tracker indirection.
**MVTO and MVTO+ESSE.** MVTO (Miy et al., 2017; Shi et al., 2017) is a timestamp-based CC protocol that has multiversion storage. We implemented MVTO on the basis of Cicada (Shi et al., 2017), the state-of-the-art MVTO protocol. It uses per-thread distributed timestamp generation with adaptive backoff, read/write sets for optimistic multi-versioning, and rapid garbage collection. Note that the original Cicada does not guarantee strict serializability but only causal consistency (Shi et al., 2017). We applied epoch-based group commit to this protocol to ensure strict serializability; the original has a 64-bit per-transaction timestamp, but we shortened it to 32 bits and added an epoch number in the upper 32 bits to synchronize between epochs and avoid stale reads. To test overwriters, we ported Silo's implementation to MVTO+ESSE; when a version \(x_{i}\) in \(rs_{j}\) was not the latest version of data item \(x\), MVTO+ESSE failed the test of \(o_{j}\) with its erasing version order and delegated its control to MVTO.
## 6. Evaluation
Our experiments used a lightweight, non-distributed, embedded, transactional key-value storage prototype written in C++. It consists of in-memory storage, CC protocols (Silo, Silo+ESSE, MVTO, and MVTO+ESSE), a tree index forked from Masstree (Masstree, 2017), and a parallel-logging manager following the SiloR (Masstree, 2017) specification. The experiments were run on a 72-core machine with four Intel Xeon E7-8870 CPUs and 1 TB of DRAM. Each CPU socket had 18 physical cores and 36 logical cores with hyperthreading. The results for over 72 threads showed sublinear scaling due to contention within the physical cores; in the result figures, we use a gray background color to indicate this situation. Each socket had a 45-MB L3 shared cache. The transaction logs were separated for each worker thread and flushed into a single solid-state drive. Because all queries were compiled at build time, neither networked clients nor SQL interpreters were used. Each worker thread had a thread-local workload generator that enabled it to input its own transactions.
We selected three workloads generated from benchmark specifications: the TATP (Tatp et al., 2017), YCSB (Bak et al., 2017), and TPC-C (Tatp et al., 2017) benchmarks. We selected TATP as the benchmark representing our intended applications, such as IoT/Telecom applications, which include data ingestion queries. We selected YCSB as the ideal scenario for ESSE in terms of performance because it includes a tremendous number of blind updates. Finally, we selected TPC-C as a counterpoint benchmark with the least performance benefit because it includes no blind updates.
**TATP benchmark.** TATP represents the workload for a telecommunication company. It includes 16% data ingestion queries that
| Protocol | Epoch-based group commit | Read/Write set | Testing method of overwriters | Optimistic CC |
| --- | --- | --- | --- | --- |
| Silo OCC | Yes | Yes | Yes | Yes |
| Cicada MVTO | No | Yes | No | Yes |
| 2PL | No | No | No | No |
| Required for ESSE | Yes | Yes | Yes | Preferable |

Table 3. A list of components required by ESSE, and the comparison of protocols. In order to extend a protocol with ESSE, we need to add the lacking components. Although OCC is not a requirement, it is a preferable property for ESSE in terms of performance.
generate a flood of blind updates for managing changes in a subscriber's current location or profile data. 70% of the remaining queries consist of GET_SUBSCRIBER_DATA and GET_ACCESS_DATA. Both retrieve the latest, correct data snapshot updated by the data ingestion queries in order to operate the telecom base stations. We implemented the benchmark in accordance with its specifications. In addition to the workload obeying the original specifications, we added workloads that vary the percentage of blind updates from the original 16% to emulate our intended IoT/Telecom applications. The query for which we varied the percentage was UPDATE_LOCATION.
**YCSB benchmark.** This workload generator is representative of conventional large-scale online benchmarks. Because the original YCSB does not support a transaction with multiple operations, we implemented a YCSB-like workload generator in our prototype, similar to DBx1000 [41]. Specifically, each transaction accessed four data items chosen randomly according to a Zipfian distribution with parameter \(\theta\). Each data item had a single primary key and an 8-byte additional column. We populated our prototype implementation as a single table with 100K data items.
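For concreteness, the following is a minimal sketch of such a key chooser (a precomputed Zipfian CDF plus binary search); it is our illustration, not the prototype's generator, and the class and parameter names are invented.

```
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Picks keys in [0, n_items) with probability proportional to 1/k^theta.
class ZipfianChooser {
public:
    ZipfianChooser(std::size_t n_items, double theta, uint64_t seed)
        : cdf_(n_items), rng_(seed), uni_(0.0, 1.0) {
        double norm = 0.0;
        for (std::size_t k = 1; k <= n_items; ++k)
            norm += 1.0 / std::pow(static_cast<double>(k), theta);
        double acc = 0.0;
        for (std::size_t k = 1; k <= n_items; ++k) {
            acc += (1.0 / std::pow(static_cast<double>(k), theta)) / norm;
            cdf_[k - 1] = acc;
        }
    }
    // Each YCSB-like transaction would call this four times.
    std::size_t next_key() {
        double u = uni_(rng_);
        std::size_t k =
            std::lower_bound(cdf_.begin(), cdf_.end(), u) - cdf_.begin();
        return k < cdf_.size() ? k : cdf_.size() - 1;  // guard tail rounding
    }
private:
    std::vector<double> cdf_;
    std::mt19937_64 rng_;
    std::uniform_real_distribution<double> uni_;
};
```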
**TPC-C benchmark.** This is an industry standard for online transaction processing. It consists of six tables and five transactions that simulate the information system of a wholesaler. Note that the TPC-C benchmark does not have any blind updates; all write operations are inserts or read-modify-writes. Thus, SSE could not commit any transactions with its erasing version order. We implemented the TPC-C full mix including all five transactions. Phantom anomalies were prevented by the same method as Silo: we scan the tree index again at the commit of each transaction.
### TATP Benchmark Results
Figures 8(a) and 8(b) show the results for the original TATP and its update-intensive modification, respectively. In both cases, ESSE improved the performance of the original protocols, and the improvement for the update-intensive modification was particularly remarkable. Because most of the blind updates are safely omittable versions and thus were not installed in physical memory, Silo+ESSE and MVTO+ESSE achieved 2.7\(\times\) and 2.5\(\times\) performance improvements, respectively. In contrast, the performance of Silo for the update-intensive modification was drastically degraded by lock contention. Although Silo is a read-lock-free protocol, Silo's write operations require locking, which reduces parallelism. MVTO exhibited the poorest performance on both the original TATP and the modification. Although MVTO does not acquire locks for write operations, its throughput degrades because it allocates memory to create new versions. SSE overcame these weaknesses of Silo and MVTO and thus improved the performance drastically.
Figure 8(c) shows the number of updates for TATP with the update-intensive modification. For Silo and Silo+ESSE, we counted the number of in-place updates with locks. For MVTO and MVTO+ESSE, we counted the number of out-of-place creations of new versions. The
Figure 8. TATP benchmark results
Figure 9. Runtime breakdowns of Figure 8(a)
results indicate that the two ESSE protocols rarely performed actual updates, because most of the writes generated safely omittable versions.
Figure 8(d) shows the results for the TATP benchmark with various data ingestion query rates and 144 fixed worker threads. As the percentage of data ingestion queries became larger than the original 16%, the throughputs of Silo and MVTO dropped. The reasons were that Silo suffered from lock contention on the same data item and MVTO suffered from the management of multiple versions in physical memory. In contrast, the ESSE protocols outperformed the originals, and their throughput was not degraded as the percentage increased. Furthermore, when the percentage exceeded 80%, the performance of these extended protocols improved. This is because SSE provides cache efficiency; in this setting, the clients requested blind writes to almost the same data items, and ESSE thus generated a tremendous number of safely omittable versions. As a result, it rarely installed new versions, and almost all read operations received the same versions, which are rarely evicted from the CPU caches.
Figure 9 shows the runtime breakdown for the results shown in Figure 8(a). With a single thread, the top consumers of CPU ticks were the INDEX block for Silo and Silo+ESSE and the BUFFER_UPDATE block for MVTO and MVTO+ESSE. The overhead of ESSE was negligible for both ESSE protocols. With 144 threads, the primary consumers of CPU ticks were still the same as with a single thread, and the overhead of ESSE was again negligible. Note that ESSE dramatically reduced the number of CPU ticks spent waiting for the LOCKING and BUFFER_UPDATE blocks. These two overheads were reduced by using the reachability trackers. When a transaction is committed using ESSE's erasing version order, the write operations avoid locks and access only the reachability trackers in a latch-free manner.
As described in Section 1, our intended applications consist of data ingestion queries and real-time operations. However, there is sometimes a need to support historical analysis (Belleelle et al., 2017) for all submitted versions. To support such historical analysis queries, we need to flush the persistent logs for each version as accumulated data. Even if we omit some versions in CC protocols, we also require log persistence for safely omittable versions. Figure 10 shows the performance results on the TATP benchmark for the protocols with the logging feature disabled. We set the percentage of data ingestion queries to 80%. We can see that the performance of the "NoLog" protocols was almost the same as that of the protocols with logging. This indicates that the performance improvement of the SSE does not come from avoiding log persistence; rather, SSE improves the performance by reducing memory consumption and avoiding lock mechanisms.
### YCSB Benchmark Results
Figure 11(a) shows the results for the YCSB-A workload with a medium contention rate (\(\theta=0.6\)). YCSB-A defines the proportion of operations as 50% reads and 50% blind writes. Prior works showed that conventional protocols perform poorly on such write-contended workloads (Han et al., 2016; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Note that YCSB requests the database population before benchmarking; all data items are inserted before measurements. Therefore, in YCSB, all write operations are blind writes. Thus, once the pivot versions for each 40-ms epoch are marked, ESSE's correctness testing rarely fails. We expect that the ESSE protocols can avoid installing a tremendous number of blind updates and improve the performance accordingly. In fact, as shown in Figure 11(a), the ESSE protocols achieved higher throughput than the original ones. In the best case with 144 threads, the throughput of MVTO+ESSE was more than 20\(\times\) better than that of the original protocol. As this workload produces more WAW conflicts than TATP, the linked lists of MVTO tend to be longer. Because the longer linked lists increased the overhead of version traversing for both reads and writes, MVTO's performance was degraded. In contrast, MVTO+ESSE reduced the length of the linked lists because it avoided unnecessary version allocation by omitting blind updates with ESSE.
Figures 11(b) and 11(c) show the results for YCSB-A when we varied the two influential parameters for ESSE. We tested these workloads with 72 threads and \(\theta=0.6\). Figure 11(b) shows the results for various epoch sizes. As described in Section 5.2, ESSE uses epoch-based group commits and tests the concurrency among transactions by using epochs. Hence, a longer epoch duration makes more transactions concurrent and reduces the number of ESSE correctness testing failures. Therefore, to investigate the effect of the epoch duration, we tested YCSB-A with various durations. As shown in Figure 11(b), the throughput of the ESSE protocols increased with the duration. As a result, we can improve the performance by increasing the epoch duration as much as the application allows. Next, Figure 11(c) shows the results for various sizes of the read/write sets. As the read/write set size increased, the performance of the ESSE protocols decreased to the level of the original protocols. Because the reachability tracker described in Section 5.2 has two Bloom filters (mRS/mWS) with a size of only 16 bits, as the number of operations in a transaction increases, the correctness testing of successors fails more often because of false positives in the filters.
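As a rough back-of-the-envelope estimate (assuming each key is mapped by a single uniform hash into one of the 16 slots, which is our assumption rather than a stated detail), the probability that a single membership query collides with at least one of \(s\) inserted keys is

\[1-\left(1-\tfrac{1}{16}\right)^{s},\]

which is already about \(0.28\) for \(s=5\) and about \(0.48\) for \(s=10\). This is consistent with the observed drop in benefit as the read/write set size grows.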
Figure 11(d) shows the results for the read-mostly YCSB-B workload under a low contention rate (\(\theta=0.2\)). There are almost no safely omittable versions in this workload, since most of the operations are read operations and contention rarely occurs. Therefore, there is little benefit from ESSE's performance improvement; on the contrary, the overhead of ESSE may be a painful factor for performance. However, Figure 11(d) shows that the ESSE protocols perform similarly to the original protocols. This indicates the low-overhead property of ESSE. In such workloads, a transaction quickly gives up on the ESSE protocol while generating an erasing version order,
Figure 10. TATP results with logging-disabled protocols
and thus it does not execute the correctness testing. As mentioned in Section 4.3, a transaction first checks the epoch number in the indirection of each data item. ESSE requires that all of these epoch numbers be the same as the transaction's epoch; however, this is rarely satisfied in this workload. Therefore, the transaction gives up ESSE's protocol quickly and delegates control to the baseline protocol.
Figure 12 shows the performance results of YCSB-B under a high contention rate (\(\theta=0.9\)). The dashed lines represent the throughput and the solid lines represent the number of aborts. The YCSB-B workload was expected to be unsuitable for testing our approach because it specifies the proportion of read operations as 95% (Bird et al., 2016): SSE and ESSE can improve the performance of blind updates, but YCSB-B rarely executes them. Nevertheless, the ESSE protocols exhibited performance comparable to that of the original protocols, and surprisingly, Silo+ESSE outperformed the original Silo. This improvement indicates that the version omission technique is also beneficial for other transactions. In this case, Silo+ESSE changed overwriting operations on the latest versions into omissions of stale versions, thus reducing the abort rate, because Silo's validation fails when the latest versions are overwritten. In Figure 12, the number of aborts for Silo was higher than the throughput with 25 threads. This was because the mere 5% of write operations forced Silo to abort many of the 95% read operations. In contrast, Silo+ESSE kept a lower abort rate than the original Silo and improved its performance on this read-mostly workload, because ESSE prevented the latest versions from being overwritten.
### TPC-C Benchmark Results
None of the five queries in the TPC-C benchmark contained blind writes except inserts. This means that there were no transaction commits with SSE's erasing version order for this workload. Although our target is IoT/Telecom applications and their data ingestion queries containing a tremendous number of blind updates, we also tested our approach on this benchmark in order to illustrate ESSE's low-overhead property.
To analyze this low-overhead property, we ran the TPC-C benchmark with a single warehouse. This high-contention scenario represents the worst case for ESSE because the reachability tracker for each data item must be frequently updated even though it is never used. Figure 13 shows the throughput with respect to the number of threads. Both Silo and Silo+ESSE scaled up to 32 cores, similar to the experimental results in the original paper. MVTO+ESSE's overhead was negligible, and its performance was comparable to that of the original protocol. MVTO's performance bottleneck on TPC-C was version traversing or buffer update, so the overhead of the reachability tracker did not affect performance. In contrast, the throughput of Silo+ESSE was about 0.75\(\times\) that of the original protocol\({}^{3}\). This was because most of the queries in TPC-C lead to read-modify-writes of the latest versions. Because transactions that perform a read-modify-write on a version larger than the pivot version may satisfy the two conditions of Theorem 2, their footprints must be stored in the reachability tracker of each data item.
Footnote 3: [https://github.com/tensorflow](https://github.com/tensorflow)
## 7. Related Work

To process such a massive number of updates efficiently, it is essential to omit updates that are unnecessary for read operations. Streaming systems use load shedding (Han et al., 2019) or backpressure (Han et al., 2019) to put a rate limit on submitting data. However, these techniques do not ensure transactional correctness; they omit the data before submitting it to databases. We then cannot use CC protocols to choose a version order that provides a correct data snapshot. In transaction processing, Thomas's write rule (Thomas, 1979) can omit the data and ensure serializability. However, to the best of our knowledge, TWR does not guarantee strict serializability. In addition, TWR is applicable only to the single-version timestamp ordering protocol, which is obsolete on modern in-memory databases. SSE also performs write omission like these methods, but while preserving transactional correctness, and it is applicable to various CC protocols.
Commutativity theory is another approach to increasing parallelism for write operations. Commutative systems such as CRDT (Cheng et al., 2017) and Doppel (Demphys et al., 2017) define a commutative operation set, such as ADD or INCR. Under commutative systems, these operations can be executed in parallel while preserving consistency. In contrast, SSE focuses on non-commutative operations on the basis of the traditional page-model interface with only READ and WRITE. As a result, SSE can optimize the performance of IoT/Telecom applications that do not include commutative operations.
Another example of a protocol that generates omittable versions is deterministic databases (Tue Stephen et al., 2013; Tue Stephen et al., 2013). A deterministic database uses the batching approach for concurrency control: centralized transaction managers collect transactions and separate them into batches. Faleiro et al. devised _lazy transaction execution_ (Faleiro et al., 2015), in which a deterministic database can generate omittable versions. By delaying the execution of operations, it executes only the blind writes that eventually become the latest versions in each batch. After that, the database removes the other versions. In contrast with lazy evaluation, however, SSE is applicable to non-deterministic protocols because it does not require a centralized transaction manager or prior knowledge of the transactions.
Multi-version concurrency control protocols (Faleiro et al., 2015; Tue Stephen et al., 2013; Tue Stephen et al., 2013) can hold multiple versions for each data item. Multiversion read avoids the high abort rates of single-version protocols, especially for workloads that include long transactions (Tue Stephen et al., 2013; Tue Stephen et al., 2013; Tue Stephen et al., 2013). All MVCC protocols can theoretically generate multiple version orders by using the MVSG. However, conventional protocols (Tue Stephen et al., 2013; Tue Stephen et al., 2013; Tue Stephen et al., 2013; Tue Stephen et al., 2013) generate only a single version order. This is because the decision process to find a suitable one from all possible orders is NP-complete (Tue Stephen et al., 2013; Tue Stephen et al., 2013), as mentioned in Section 4.2. SSE also does not give the exact solution, but it reduces the computational cost by adding only a single candidate version order, which is preferable for generating safely omittable versions.
## 8. Conclusion
We presented a protocol extension method, the scheduling space expander (SSE), and its optimized implementation named ESSE. SSE and ESSE can extend various protocols so that they can, in theory, test an additional version order for the purpose of generating safely omittable versions while preserving both strict serializability and recoverability. To evaluate the performance gain of our approach, we extended two existing protocols, Silo and MVTO, to include ESSE. We expect that SSE and ESSE can help accelerate emerging systems with data ingestion queries.
|
2310.15279 | The linear system for Sudoku and a fractional completion threshold | We study a system of linear equations associated with Sudoku latin squares.
The coefficient matrix $M$ of the normal system has various symmetries arising
from Sudoku. From this, we find the eigenvalues and eigenvectors of $M$, and
compute a generalized inverse. Then, using linear perturbation methods, we
obtain a fractional completion guarantee for sufficiently large and sparse
rectangular-box Sudoku puzzles. | Peter J. Dukes, Kate Nimegeers | 2023-10-23T18:32:45Z | http://arxiv.org/abs/2310.15279v3 | # The linear system for Sudoku and a fractional completion threshold
###### Abstract.
We study a system of linear equations associated with Sudoku latin squares. The coefficient matrix \(M\) of the normal system has various symmetries arising from Sudoku. From this, we find the eigenvalues and eigenvectors of \(M\), and compute a generalized inverse. Then, using linear perturbation methods, we obtain a fractional completion guarantee for sufficiently large and sparse rectangular-box Sudoku puzzles.
Research of Peter Dukes is supported by NSERC Discovery Grant RGPIN-2017-03891.
## 1. **Introduction**
A _latin square_ of order \(n\) is an \(n\times n\) array with entries from a set of \(n\) symbols (often taken to be \([n]:=\{1,2,\ldots,n\}\)) having the property that each symbol appears exactly once in every row and every column. A _partial latin square_ of order \(n\) is an \(n\times n\) array whose cells are either empty or filled with one of \(n\) symbols in such a way that each symbol appears at most once in every row and every column. A partial latin square can be identified with a set of ordered triples in a natural way: if symbol \(k\) appears in row \(i\) and column \(j\), we include the ordered triple \((i,j,k)\). A _completion_ of a partial latin square \(P\) is a latin square \(L\) which contains \(P\) in the sense of ordered triples; that is, every symbol occurring in \(P\) also occurs in the corresponding cell of \(L\).
It is natural to ask how dense a partial latin square can be while still having a completion. Daykin and Haggkvist conjectured [7] that a partial latin square in which any row, column and symbol is used at most \(n/4\) times should have a completion. They proved a weaker version of this claim, with \(n/4\) replaced by \(2^{-9}n^{1/2}\). Chetwynd and Haggkvist [6] and later Gustavsson [9] obtained the first such completion guarantee which was linear in \(n\). Let us say that a partial latin square is \(\epsilon\)-dense if no row, column, or symbol is used more than \(\epsilon n\) times. Bartlett [3] built on the preceding work to show that all \(\epsilon\)-dense partial latin squares have a completion for \(\epsilon=9.8\times 10^{-5}\). Then, over two papers, this was improved to roughly \(\epsilon=0.04\), provided \(n\) is large. One paper [4] of Bowditch and Dukes obtained this threshold for a fractional relaxation of the problem, and the other paper [2] by Barber, Kuhn, Lo, Osthus and Taylor showed using absorbers and balancing graphs that the fractional threshold suffices for very large instances of the (exact) completion problem.
Let \(h\) and \(w\) be integers with \(h,w\geq 2\), and put \(n=hw\). A _Sudoku latin square_ (or briefly _Sudoku_) of type \((h,w)\) is an \(n\times n\) latin square whose cells are partitioned into a \(w\times h\) pattern of \(h\times w\) subarrays where every symbol appears exactly once in each subarray. The subarrays are called _boxes_, or sometimes also called _cages_. A partial Sudoku and completion of such is defined analogously as above for latin squares. The completion problem for partial Sudoku in the case \(h=w=3\) is a famous recreational puzzle. A mathematical discussion of Sudoku solving strategies can be found in
[5, 16]. By contrast, we are interested here in the fractional relaxation of partial Sudoku completion, essentially following the approach used in [4] for latin squares.
Let us explain the fractional relaxation in more detail. Working from a partial latin square \(P\), an empty cell can be assigned a convex combination of symbols instead of a single symbol. More formally, \(P\) can be represented as a function \(f_{P}:[n]^{3}\to\{0,1\}\) in which \(f_{P}(i,j,k)\) is the number of times symbol \(k\) appears in cell \((i,j)\). A _fractional completion_ of \(P\) is a function \(f:[n]^{3}\to[0,1]\) such that, for any \(i,j,k\in[n]\),
* \(f_{P}(i,j,k)=1\) implies \(f(i,j,k)=1\); and
* \(\sum_{i=1}^{n}f(i,j,k)=\sum_{j=1}^{n}f(i,j,k)=\sum_{k=1}^{n}f(i,j,k)=1\).
Viewing this as an array, cell \((i,j)\) is assigned a fractional occurrence of symbol \(k\) with value \(f(i,j,k)\). The first condition ensures that filled cells of \(P\) are left unchanged. The second condition ensures that every symbol appears with a total occurrence of one in each column and each row, and that every cell is used with a total value of one. For the Sudoku setting, we can add an extra family of constraints, namely that for all boxes \(b\) and symbols \(k\), \(\sum_{(i,j)\in b}f(i,j,k)=1\), where the sum is over all ordered pairs \((i,j)\) belonging to box \(b\). We remark that when \(f\) is \(\{0,1\}\)-valued, it corresponds to an exact completion of \(P\), whether for general latin squares or Sudoku.
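Collecting the constraints above, a fractional completion of a partial Sudoku \(P\) of type \((h,w)\) is precisely a function \(f:[n]^{3}\to[0,1]\) with \(f(i,j,k)=1\) whenever \(f_{P}(i,j,k)=1\), and

\[\sum_{i=1}^{n}f(i,j,k)=\sum_{j=1}^{n}f(i,j,k)=\sum_{k=1}^{n}f(i,j,k)=1,\qquad\sum_{(i,j)\in b}f(i,j,k)=1,\]

for all \(i,j,k\in[n]\) and every box \(b\). This is the linear system whose fractional solutions Theorem 1.1 concerns.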
Figure 1 depicts a fractional Sudoku of type \((2,3)\), where the solid disks correspond to pre-filled cells of a partial Sudoku, and the multi-colored disks correspond to a fractional completion.
The notion of \(\epsilon\)-dense needs to be strengthened in the Sudoku setting. First, it is natural to impose a constraint on number of filled cells in any box. Otherwise, completion can be blocked by placing symbols \(1,\ldots,n-1\) in the top-left box and symbol \(n\) in line with the remaining empty cell of that box. This uses each symbol only once and each row and column at most \(\max\{h,w\}\) times. For \(h\approx w\), this is sub-linear in \(n\). Separately from this, it is also natural to prevent symbol occurrences from being too unbalanced relative to the box partition. In more detail, for a Sudoku of type \((h,w)\)
Figure 1. Illustration of a fractional completion of a partial Sudoku of type \((2,3)\)
it is possible to force a given symbol in, say, the (1,1)-entry by placing it outside of the top-left box in rows \(2,\ldots,h\) and in columns \(2,\ldots,w\). We can arrange for this to occur for different symbols, say 1 and 2, making completion impossible. But this uses no row or column more than twice and uses each symbol only \(h+w-2\) times, which could again be sub-linear in \(n\). An illustration of the obstructions for \(h=w=3\) is given in Figure 2. To address this, we can strengthen the \(\epsilon\)-dense condition for a partial Sudoku to:
* each row, column, and box has at most \(\epsilon n\) filled cells;
* each symbol occurs at most \(\epsilon h\) times in any bundle of \(h\) rows corresponding to the box partition, and likewise at most \(\epsilon w\) times in any bundle of \(w\) columns.
Our main result gives a guarantee on fractional completion of \(\epsilon\)-dense Sudoku latin squares.
**Theorem 1.1**.: For sufficiently large \(h\) and \(w\), every \((1/101)\)-dense partial Sudoku of type \((h,w)\) has a fractional completion.
It turns out that our methods to prove Theorem 1.1 really only require a weaker notion of density. Roughly speaking, we need each empty cell \((i,j)\) to have a large proportion of all symbols \(k\) available to be placed there, and analogous availability when the roles of rows, columns and symbols are permuted, and boxes are introduced. This is made more precise later.
The outline of the paper is as follows. In the next section, we reformulate Sudoku completion as a certain graph decomposition problem, and then give a linear system of equations governing the fractional relaxation. Starting with the empty Sudoku, the rank and a basis for the nullspace are computed and interpreted combinatorially. The \(\epsilon\)-dense case can be viewed as a perturbed version of the empty case, closely following the approach in [4] for latin squares. This motivates a study of the linear algebra for empty Sudoku in more detail. In Section 3, we observe that all relevant computations take place in an adjacency algebra of fixed dimension, independent of \(h\) and \(w\). Eigenvalues and eigenvectors relevant for the problem are described in Section 4.1. Then, using computer-assisted symbolic algebra, a certain generalized inverse is computed and upper-bounded in \(\infty\)-norm. This bound ensures solutions to our linear system remain nonnegative. Section 5 discusses in more detail the perturbation as we pass to the \(\epsilon\)-dense setting. By the end of this section, all the ingredients are in place to prove Theorem 1.1. In Section 6, we consider partial Sudoku completion in the case when the \(h\times w\) boxes are asymptotically thin; that is, we consider (say) fixed width \(w\) and growing height \(h\). The last section contains some concluding remarks and a brief discussion of possible extensions and generalizations.
Figure 2. Sparse partial latin squares with no Sudoku completion
## 2. **Preliminaries and set-up**
### A graph decomposition model
In a Sudoku of type \((h,w)\), let \(\operatorname{box}(i,j)\) denote the box containing cell \((i,j)\). If boxes are numbered left to right then top to bottom, then we have the formula \(\operatorname{box}(i,j)=h\lfloor\frac{i-1}{h}\rfloor+\lfloor\frac{j-1}{w} \rfloor+1\).
We define the graph \(G_{hw}\) as follows. Its vertex set is
\[V(G_{hw})=\{r_{1},\ldots,r_{n}\}\cup\{c_{1},\ldots,c_{n}\}\cup\{b_{1},\ldots,b _{n}\}\cup\{s_{1},\ldots,s_{n}\},\]
with the four sets corresponding to rows, columns, boxes, and symbols, respectively. Its edge set is
\[E(G_{hw})=\bigcup_{i,j=1}^{n}\{\{r_{i},c_{j}\},\{r_{i},s_{j}\},\{c_{i},s_{j}\}, \{b_{i},s_{j}\}\}. \tag{2.1}\]
In other words, exactly one edge is present for every combination of row-column, row-symbol, column-symbol, and box-symbol. As a point of notation, \(G_{hw}\) depends only on \(n=hw\); indeed, if we omit indices in (2.1) it is seen to be simply a blow-up of the graph \(K_{3}+e\) by independent sets of size \(n\). With this in mind, the subscript in \(G_{hw}\) can reasonably be interpreted as the product of \(h\) and \(w\), though it will be useful below to keep these parameters separate. Note that the subgraph of \(G_{hw}\) induced by rows, columns, and symbols is just the complete 3-partite graph \(K_{n,n,n}\).
A _tile_ in \(G_{hw}\) is a copy of \(K_{3}+e\) induced by four vertices \(r_{i},c_{j},b_{\ell},s_{k}\) for which \(\operatorname{box}(i,j)=\ell\). This tile represents the act of placing symbol \(k\) in cell \((i,j)\), and also keeps track of the box used. Let \(T(G_{hw})\) denote the set of all \(n^{3}\) tiles in \(G_{hw}\).
Given a partial Sudoku \(S\) of type \((h,w)\), let \(G_{S}\) denote the subgraph of \(G_{hw}\) obtained by removing the edge sets of tiles corresponding to filled cells of \(S\). In other words, \(V(G_{S})=V(G_{hw})\) and \(E(G_{S})\) contains:
* \(\{r_{i},c_{j}\}\) if and only if cell \((i,j)\) is empty;
* \(\{r_{i},s_{k}\}\) if and only if symbol \(k\) is missing in row \(i\);
* \(\{c_{j},s_{k}\}\) if and only if symbol \(k\) is missing in column \(j\);
* \(\{b_{\ell},s_{k}\}\) if and only if symbol \(k\) is missing in box \(\ell\).
Let \(T(G_{S})\) be the set of tiles in \(T(G_{hw})\) all of whose edges are in \(G_{S}\).
An equivalent but slightly different model can be obtained by including row-box and column-box edges. That is, we could change tiles into cliques \(K_{4}\), and change the host graph \(G_{hw}\) into a multigraph \(G_{hw}^{*}\) with the same vertex set and all the edges of \(G_{hw}\), and additionally including the edges
* \(\{r_{i},b_{\ell}\}\) with multiplicity \(w\) if and only if row \(i\) is incident with box \(\ell\); and
* \(\{c_{j},b_{\ell}\}\) with multiplicity \(h\) if and only if column \(j\) is incident with box \(\ell\).
Likewise, given a partial Sudoku \(S\), we could define \(G_{S}^{*}\) to be the subgraph of \(G_{hw}^{*}\) obtained by removing the edges of all 4-cliques on \(\{r_{i},c_{j},b_{\ell},s_{k}\}\) whenever symbol \(k\) occurs in cell \((i,j)\) (and box \(\ell\)) of \(S\).
For graphs \(F\) and \(G\), we say that \(G\) has an \(F\)-_decomposition_ if its edge set \(E(G)\) can be partitioned into subgraphs isomorphic to \(F\). This extends naturally to multigraphs \(G\), where now repeated edges are distinguished. That is, the number of copies of \(F\) containing two vertices \(u\neq v\) equals the multiplicity of edge \(\{u,v\}\) in \(G\). Many problems in combinatorics can be formulated in terms of
graph decompositions. For example, \(K_{3}\)-decompositions of \(K_{n,n,n}\) are equivalent to latin squares of order \(n\); see for instance [2, 4, 14]. The following is an analog for Sudoku using our graphs above.
**Proposition 2.1**.: The partial Sudoku \(S\) has a completion if and only if the graph \(G_{S}\) has an edge-decomposition into tiles, or equivalently if and only if \(G_{S}^{*}\) has a \(K_{4}\)-decomposition.
Proof.: Suppose \(S\) has a completion \(S^{\prime}\). If cell \((i,j)\) was blank in \(S\) but filled in \(S^{\prime}\), say with symbol \(k\), we use the tile defined by \(\{r_{i},c_{j},s_{k},b_{\ell}\}\), where \(\ell=\mbox{box}(i,j)\). Each such tile belongs to \(T(G_{S})\) because \((i,j)\) was blank in \(S\) and because \(k\) occurs only once in \(S^{\prime}\) in row \(i\), column \(j\) and box \(\ell\). Consider the set \(\mathcal{T}\) of these tiles induced by cells that were blank in \(S\) and filled in \(S^{\prime}\). These tiles are edge-disjoint, again because \(S^{\prime}\) has no repeated symbols in any row, column, or box. We check that \(\mathcal{T}\) gives an edge-decomposition of \(G_{S}\) into tiles. Any row-column edge of \(G_{S}\), say \(\{r_{i},c_{j}\}\), is in the tile corresponding to the symbol placed at entry \((i,j)\) of \(S^{\prime}\). Consider a row-symbol edge, say \(\{r_{i},s_{k}\}\in E(G_{S})\). The presence of this edge means \(k\) was missing from row \(i\) in \(S\). It occurs somewhere in row \(i\) of \(S^{\prime}\), say at entry \((i,j)\). This entry was blank in \(S\), so \(r_{i},c_{j},s_{k}\) define a tile in \(\mathcal{T}\), along with the box containing \((i,j)\). A similar verification holds for edges of type column-symbol and box-symbol in \(G_{S}\).
For the converse, the argument is reversible. Given a set \(\mathcal{T}\) of tiles that form an edge-decomposition of \(G_{S}\), we complete \(S\) by placing symbol \(k\) in entry \((i,j)\) whenever \(r_{i},c_{j},s_{k}\) belong to a tile of \(\mathcal{T}\). Since the tiles of \(\mathcal{T}\) are edge-disjoint, every entry \((i,j)\) is filled at most once and no row, column, or box contains repeats. Since the edges within \(\mathcal{T}\) partition those in \(G_{S}\), it follows that every blank entry of \(S\) gets filled, and every symbol occurs in every row, column, and box.
The claim about \(G_{S}^{*}\) having a \(K_{4}\)-decomposition is nearly identical. In the forward implication, we note that a row-box edge \(\{r_{i},b_{\ell}\}\) in \(G_{hw}^{*}\) occurs in total \(w\) times counting \(E(G_{S}^{*})\) and the decomposition. This is because the completion \(S^{\prime}\) has \(w\) entries in row \(i\) and box \(\ell\). Similarly, column-box edges of \(G_{hw}^{*}\) occur a total of \(h\) times.
The model using row-box and column-box edges has the advantage that all 4-cliques in \(G_{S}^{*}\) correspond to valid tiles. However, since no new information is carried by those extra edges, we henceforth work with tiles in \(G_{S}\), omitting the implied edges of type row-box and column-box.
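For concreteness, here is a small computational sketch of the correspondence in Proposition 2.1 (our illustration, not part of the formal development): it takes a completed Sudoku of type \((2,2)\), chosen arbitrarily, views it as a completion of the empty partial Sudoku, forms the tile attached to each cell, and checks that these \(n^{2}\) tiles cover every edge of \(G_{hw}\) exactly once. Cells, symbols and boxes are indexed from \(0\) in the code.

```python
import numpy as np
from itertools import product

h, w = 2, 2
n = h * w
box = lambda i, j: (i // h) * h + j // w             # 0-based box index of cell (i, j)

# A completed Sudoku of type (2,2); entry S[i][j] is the symbol in cell (i, j).
S = [[1, 2, 3, 4],
     [3, 4, 1, 2],
     [2, 1, 4, 3],
     [4, 3, 2, 1]]

def tile_edges(i, j, k):
    # The four edges of the tile on {r_i, c_j, s_k, b_box(i,j)}.
    return [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]

# Count how often each edge of G_hw is covered by the tiles coming from the cells.
cover = {}
for i, j in product(range(n), repeat=2):
    for e in tile_edges(i, j, S[i][j] - 1):
        cover[e] = cover.get(e, 0) + 1

# Every one of the 4n^2 edges should be covered exactly once (an edge-decomposition).
assert len(cover) == 4 * n * n and set(cover.values()) == {1}
print("the n^2 tiles of the completed grid decompose E(G_hw)")
```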
This paper is concerned with partial Sudoku latin squares which are nearly empty. Recall the definition of \(\epsilon\)-dense discussed in Section 1 and strengthened for the case of Sudoku. The definition leads easily to various degree bounds in \(G_{S}\), which we summarize here.
**Lemma 2.2**.: Suppose \(S\) is \(\epsilon\)-dense. Then in the graph \(G_{S}\), the number of edges from vertex
* \(c_{j}\) to the row partite set is at least \((1-\epsilon)n\);
* \(s_{k}\) to any row bundle is at least \((1-\epsilon)h\);
* \(r_{i}\) to the column partite set is at least \((1-\epsilon)n\);
* \(s_{k}\) to any column bundle is at least \((1-\epsilon)w\);
* \(r_{i},c_{j}\) or \(b_{\ell}\) to the symbol partite set is at least \((1-\epsilon)n\);
* \(s_{k}\) to the box partite set is at least \((1-\epsilon)n\).
Next, we give an alternate sparseness definition suited to our approach. Let us say that a partial Sudoku \(S\) has the \((1-\delta)\)-_availability property_ if every edge \(e\in E(G_{S})\) is contained in at least \((1-\delta)n\) tiles in \(T(G_{S})\). We note that an \(\epsilon\)-dense partial Sudoku has the \((1-3\epsilon)\)-availability property. Indeed, for an edge \(\{r_{i},c_{j}\}\), at most \(\epsilon n\) symbols are already used in row \(i\), in column \(j\), and in the box containing \((i,j)\). These could all be different sets of symbols, but even so this leaves at least \((1-3\epsilon)n\) available tiles. For an edge of the form \(\{r_{i},s_{k}\}\), at most \(\epsilon n\) columns are filled in
row \(i\), at most \(\epsilon n\) columns already have symbol \(k\), and at most \((\epsilon h)w=\epsilon n\) columns are unavailable due to the boxes along row \(i\) already having symbol \(k\). Edges \(\{c_{j},s_{k}\}\) behave analogously. Finally, an edge of type \(\{s_{k},b_{\ell}\}\) has at most \(\epsilon n\) unavailable options due to filled cells in box \(\ell\), and at most \(\epsilon(h+w)\) unavailable options due to symbol \(k\) occurring elsewhere in the row bundle or column bundle for box \(\ell\). For each of the four edge types, we have at least \((1-3\epsilon)n\) available tiles in \(T(G_{S})\).
Conversely, \(S\) having the \((1-\delta)\)-availability property implies any row, column, symbol or box that is not completely used has at most \(\delta n\) occurrences. Moreover, if symbol \(s_{k}\) occurs fewer than \(h\) times in a row bundle, it must occur at most \(\delta h\) times. Otherwise, take a box \(b_{\ell}\) in this bundle with empty cells and observe that \(\{s_{k},b_{\ell}\}\) has fewer than \(n-w(\delta h)=(1-\delta)n\) tiles available in \(T(G_{S})\). A similar statement holds for column bundles. In other words, the \((1-\delta)\)-availability property implies \(S\) is \(\delta\)-dense, except possibly for completely filled rows, columns, boxes, or any symbols fully used in a bundle. This exception is convenient if, say, one wants to finish off certain symbols in a bundle before completing the rest of the partial Sudoku. This idea is revisited in Section 6.
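The availability counts themselves are easy to compute directly. The following sketch (ours; the single filled cell is an arbitrary choice) lists, for each surviving edge of \(G_{S}\), the number of tiles of \(T(G_{S})\) containing it for a partial Sudoku of type \((2,3)\); the minimum of these counts is what the \((1-\delta)\)-availability property bounds from below.

```python
from itertools import product

h, w = 2, 3
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])

def tile_edges(i, j, k):
    return [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]

filled = {(0, 0): 0}                                  # cell (0,0) holds symbol 0
removed = {e for (i, j), k in filled.items() for e in tile_edges(i, j, k)}
kept = [e for e in edges if e not in removed]         # E(G_S)

# T(G_S): tiles all of whose edges survive in G_S.
avail = [t for t in product(range(n), repeat=3)
         if all(e not in removed for e in tile_edges(*t))]

# Number of available tiles through each surviving edge.
count = {e: 0 for e in kept}
for t in avail:
    for e in tile_edges(*t):
        count[e] += 1
print("minimum availability:", min(count.values()), "out of n =", n)
```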
### The linear system for Sudoku
Consider an empty \(n\times n\) Sudoku, to be filled with entries from \([n]\). Let \(x_{ijk}\) denote the number/fraction of symbols \(s_{k}\) placed in cell \((i,j)\), where \((i,j,k)\in[n]^{3}\). Latin square and Sudoku constraints naturally correspond to linear equations on these variables. The condition that every cell have exactly one entry becomes \(\sum_{k}x_{ijk}=1\) for each \((i,j)\in[n]^{2}\). The condition that every row contains every symbol exactly once becomes \(\sum_{j}x_{ijk}=1\) for each \((i,k)\in[n]^{2}\). Similarly, that every column contains every symbol exactly once becomes \(\sum_{i}x_{ijk}=1\) for each \((j,k)\in[n]^{2}\). Together, these \(3n^{2}\) equations yield a linear system for (fractional) latin squares. The additional condition relevant for Sudoku is that every box contains every symbol exactly once, or
\[\sum_{(i,j)\in\operatorname{box}(\ell)}x_{ijk}=1\]
for each \((k,\ell)\in[n]^{2}\).
This results in a \(4n^{2}\times n^{3}\) linear system
\[W\mathbf{x}=\mathds{1}, \tag{2.2}\]
where \(\mathds{1}\) denotes the all-ones vector and \(W\) is the \(\{0,1\}\) inclusion matrix of \(E(G_{hw})\) versus \(T(G_{hw})\); that is, \(W(e,t)=1\) if \(e\in t\) and is \(0\) otherwise. In this paper, we will mainly consider the (square) normal system, with coefficient matrix \(M=WW^{\top}\). An entrywise nonnegative solution \(\mathbf{y}\) to \(M\mathbf{y}=\mathds{1}\) implies the existence of a solution \(\mathbf{x}=W^{\top}\mathbf{y}\geq\mathbf{0}\) to (2.2). This, in turn, produces a fractional edge-decomposition of \(G_{hw}\) into tiles and a fractional Sudoku of type \((h,w)\).
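For small parameters the system (2.2) can be written down explicitly. The sketch below (our own, with \(0\)-based indexing and an edge ordering of our choosing) builds the \(4n^{2}\times n^{3}\) inclusion matrix \(W\) for \((h,w)=(2,2)\) and verifies that the uniform weights \(x_{ijk}=1/n\) solve \(W\mathbf{x}=\mathds{1}\), and that \(\mathbf{y}=\frac{1}{4n}\mathds{1}\) solves the normal system with \(W^{\top}\mathbf{y}\) recovering the same fractional Sudoku.

```python
import numpy as np
from itertools import product

h, w = 2, 2
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])
eidx = {e: m for m, e in enumerate(edges)}
tiles = list(product(range(n), repeat=3))             # (i, j, k); the box is implied

W = np.zeros((4 * n * n, n ** 3))                     # edges versus tiles
for t, (i, j, k) in enumerate(tiles):
    for e in [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]:
        W[eidx[e], t] = 1

# The uniform fractional Sudoku: every edge lies in exactly n tiles, so x = (1/n)1 works.
x = np.full(n ** 3, 1 / n)
assert np.allclose(W @ x, 1)

# The normal system M y = 1 with M = W W^T; y = (1/4n)1 gives x = W^T y = (1/n)1.
M = W @ W.T
y = np.full(4 * n * n, 1 / (4 * n))
assert np.allclose(M @ y, 1) and np.allclose(W.T @ y, x)
print("uniform weights give a fractional Sudoku of type (2,2)")
```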
The rank (over the reals) of both \(W\) and \(M\) can be found by exhibiting a basis for their range consisting of tiles in \(G_{hw}\). For convenience, set punctuation will be omitted from edges and tiles; we abbreviate these by juxtaposing vertices in different partite sets of \(V(G_{hw})\).
**Proposition 2.3**.: We have \(\operatorname{rank}(M)=\operatorname{rank}(W)=n^{3}-(n-1)^{3}+(n-1)(h-1)(w-1)\).
Proof. Let \(\mathcal{T}_{1}\) be the set of \(n^{3}-(n-1)^{3}\) tiles in \(G_{hw}\) which intersect at least one of \(r_{1},c_{1},s_{1}\). It was shown in [4, Proposition 2.3] that \(\mathcal{T}_{1}\) is linearly independent in the vector space of functions from \(E(K_{n,n,n})\) to \(\mathbb{R}\). Thus, \(\mathcal{T}_{1}\) is also linearly independent in \(\mathbb{R}^{E(G_{hw})}\). Let \(\mathcal{T}_{2}\) be any set of \((n-1)(h-1)(w-1)\) tiles of the form \(r_{i}c_{j}s_{k}b_{\ell}\), where \(k=2,\ldots,n\) and the \(b_{\ell}\) range over all boxes disjoint from both row \(1\) and column \(1\). Since the tiles in \(\mathcal{T}_{2}\) use distinct box-symbol edges which are not present in \(\mathcal{T}_{1}\), it is clear that \(\mathcal{T}_{1}\cup\mathcal{T}_{2}\) is linearly independent.
We next show that \(\mathcal{T}_{1}\cup\mathcal{T}_{2}\) generates any given column of \(W\), say the one corresponding to a tile \(\{r_{i},c_{j},s_{k},b_{\ell}\}\). Suppose \(i\leq h\) and \(j\leq w\). Then \(\ell=\mathrm{box}(i,j)=1\). We can form the target tile as a linear combination in \(\mathcal{T}_{1}\), namely as
\[r_{i}c_{j}s_{k}b_{1}=r_{1}c_{j}s_{k}b_{1}+r_{i}c_{1}s_{k}b_{1}+r_{i}c_{j}s_{1}b _{1}-r_{1}c_{1}s_{k}b_{1}-r_{1}c_{j}s_{1}b_{1}-r_{i}c_{1}s_{1}b_{1}+r_{1}c_{1}s_ {1}b_{1}.\]
Suppose next that \(i>h\) and \(j\leq w\). As above, we have the linear combination
\[r_{i}c_{j}s_{k}b_{\ell}=r_{1}c_{j}s_{k}b_{1}+r_{i}c_{1}s_{k}b_{\ell}+r_{i}c_{j}s_{1}b_{\ell}-r_{1}c_{1}s_{k}b_{1}-r_{1}c_{j}s_{1}b_{1}-r_{i}c_{1}s_{1}b_{\ell}+r_{1}c_{1}s_{1}b_{1}.\]
Similarly, \(\mathcal{T}_{1}\) generates any tile with \(i\leq h\) and \(j>w\). Suppose, then, that \(i>h\) and \(j>w\). If \(k=1\), the corresponding tile belongs to \(\mathcal{T}_{1}\), so assume \(k>1\). Put \(p=\mathrm{box}(1,j)\) and \(q=\mathrm{box}(i,1)\), and note that all tiles meeting these boxes are in the span of \(\mathcal{T}_{1}\), as shown above. Since \(i>h\), \(j>w\), and \(k>1\), we know that \(s_{k}\) and \(b_{\ell}\) occur together in some tile \(r_{i^{\prime}}c_{j^{\prime}}s_{k}b_{\ell}\in\mathcal{T}_{2}\). Using this and other tiles generated so far, we compute
\[r_{i}c_{j}s_{k}b_{\ell} =r_{i}c_{j}s_{1}b_{\ell}+r_{1}c_{j}s_{k}b_{p}+r_{i}c_{1}s_{k}b_{q}-r_{1}c_{1}s_{k}b_{1}-r_{1}c_{j}s_{1}b_{p}-r_{i}c_{1}s_{1}b_{q}+r_{1}c_{1}s_{1}b_{1}+r_{i^{\prime}}c_{j^{\prime}}s_{k}b_{\ell}\] \[-r_{i^{\prime}}c_{j^{\prime}}s_{1}b_{\ell}-r_{1}c_{j^{\prime}}s_{k}b_{p}-r_{i^{\prime}}c_{1}s_{k}b_{q}+r_{1}c_{1}s_{k}b_{1}+r_{1}c_{j^{\prime}}s_{1}b_{p}+r_{i^{\prime}}c_{1}s_{1}b_{q}-r_{1}c_{1}s_{1}b_{1}.\]
We have shown that \(\mathcal{T}_{1}\cup\mathcal{T}_{2}\) spans each column of \(W\), and hence is a basis for \(\mathrm{range}(W)\).
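Proposition 2.3 can be sanity-checked numerically for small parameters; the sketch below (ours) compares numpy's computed rank of \(W\) with the stated formula.

```python
import numpy as np
from itertools import product

def rank_of_W(h, w):
    n = h * w
    box = lambda i, j: (i // h) * h + j // w
    edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
             [('rs', i, k) for i in range(n) for k in range(n)] +
             [('cs', j, k) for j in range(n) for k in range(n)] +
             [('bs', l, k) for l in range(n) for k in range(n)])
    eidx = {e: m for m, e in enumerate(edges)}
    W = np.zeros((4 * n * n, n ** 3))
    for t, (i, j, k) in enumerate(product(range(n), repeat=3)):
        for e in [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]:
            W[eidx[e], t] = 1
    return np.linalg.matrix_rank(W)

for h, w in [(2, 2), (2, 3)]:
    n = h * w
    predicted = n ** 3 - (n - 1) ** 3 + (n - 1) * (h - 1) * (w - 1)
    print((h, w), "computed:", rank_of_W(h, w), "predicted:", predicted)
```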
Suppose now that some cells of our Sudoku have been pre-filled, resulting in the graph \(G_{S}\) as described earlier. Let \(W_{S}\) denote the \(\{0,1\}\) inclusion matrix of edges versus tiles in \(G_{S}\). Then the system
\[W_{S}\mathbf{x}=\mathds{1} \tag{2.3}\]
has a solution \(\mathbf{x}\geq\mathbf{0}\) if and only if \(G_{S}\) admits a fractional edge-decomposition into tiles. Note that for non-empty \(S\), the dimensions in the system (2.3) are smaller than in (2.2). The tile weights are given by entries \(x_{ijk}\) of \(\mathbf{x}\). By Proposition 2.1, the existence of such a solution is equivalent to our partial Sudoku \(S\) having a completion.
We may again consider the normal system with coefficient matrix \(M_{S}=W_{S}W_{S}^{\top}\). Even though many possible solutions of (2.3) are lost in doing so, the normal system has the advantage of allowing eigenvalue and perturbation methods, as was done in [4]. We extend these methods to the Sudoku setting in Sections 4 and 5 to follow.
### The kernel
For our analysis of the linear systems (2.2) and (2.3) above, it is important to study the nullspace/kernel of \(M\), or equivalently the left nullspace of \(W\). This can be viewed as the set of all edge-weightings in \(G_{hw}\) in which each tile has a vanishing total weight (over its four edges).
After some simplification, Proposition 2.3 gives
\[\mathrm{dim}\ \mathrm{ker}(M)=4n^{2}-\mathrm{rank}(M)=3n+(h+w)(n-1). \tag{2.4}\]
It suffices to find a linearly independent set of this many vectors in \(\mathrm{ker}(M)\). Such a set is described in three categories of vectors below.
(A) Choose a row \(r_{i}\) and define the vector \(\mathbf{v}\), coordinates indexed by \(E(G_{hw})\), where \(\mathbf{v}(r_{i}c_{j})=1\) for all \(j\in[n]\), \(\mathbf{v}(r_{i}s_{k})=-1\) for all \(k\in[n]\), and otherwise \(\mathbf{v}(e)=0\). Similar classes of kernel vectors exist with the roles of row, column and symbol permuted.
Consider the characteristic vector \(\mathbf{t}\) of some tile \(t\). If \(r_{i}\in t\), then since \(t\) contains exactly one column and exactly one symbol we have \(\mathbf{t}\cdot\mathbf{v}=1-1=0\). On the other hand, if \(r_{i}\not\in t\), the support of \(t\) is disjoint from the support of \(\mathbf{v}\), hence we again have \(\mathbf{t}\cdot\mathbf{v}=0\). In plain language, this kernel vector encodes the condition that the number of times a column is used with row \(i\) equals the number of times a symbol is used in row \(i\). Verification is similar for the permuted varieties.
(B) Choose a box \(b_{\ell}\) and define the vector \({\bf v}\) in which \({\bf v}(r_{i}c_{j})=1\) for all \((i,j)\in b_{\ell}\), \({\bf v}(b_{\ell}s_{k})=-1\) for all \(k\in[n]\), and otherwise \({\bf v}(e)=0\).
As before, let \({\bf t}\) be the characteristic vector of a tile \(t\). If \(b_{\ell}\in t\), then since \(t\) contains exactly one row, column and symbol, we have \({\bf t}\cdot{\bf v}=1-1=0\). On the other hand, if \(b_{\ell}\not\in t\), the supports of \({\bf t}\) and \({\bf v}\) are disjoint. This kernel vector encodes the condition that the number of entries filled in box \(\ell\) equals the number of symbols used in box \(\ell\).
(C) Choose a bundle of rows \(\{r_{hp+1},\ldots,r_{h(p+1)}\}\) and a symbol \(s_{k}\). Define the vector \({\bf v}\) with \({\bf v}(r_{i}s_{k})=1\) and \({\bf v}(b_{\ell}s_{k})=-1\) for all \(hp<i,\ell\leq h(p+1)\), and otherwise \({\bf v}(e)=0\). A similar class of vectors exists for column bundles.
Once again, let \({\bf t}\) be the characteristic vector of a tile \(t\). Suppose \(s_{k}\in t\) and \(t\) intersects the row bundle defining \({\bf v}\). Since \(t\) contains exactly one row and exactly one box meeting this bundle, we have \({\bf t}\cdot{\bf v}=1-1=0\). On the other hand, if either \(s_{k}\not\in t\) or \(t\) intersects a different row bundle, the supports of \({\bf t}\) and \({\bf v}\) are disjoint. The encoded condition states that a given symbol \(s_{k}\) appears the same number of times in a row bundle as in the corresponding box bundle. The column bundle case has a similar verification.
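All three varieties are easy to verify by machine in a small case. The sketch below (our illustration, with \(0\)-based indices) builds one vector of each type for \((h,w)=(2,3)\) and checks that it lies in the left kernel of \(W\), that is, that every tile receives total weight zero.

```python
import numpy as np
from itertools import product

h, w = 2, 3
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])
eidx = {e: m for m, e in enumerate(edges)}
W = np.zeros((4 * n * n, n ** 3))
for t, (i, j, k) in enumerate(product(range(n), repeat=3)):
    for e in [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]:
        W[eidx[e], t] = 1

def vec(plus, minus):
    v = np.zeros(4 * n * n)
    for e in plus:  v[eidx[e]] += 1
    for e in minus: v[eidx[e]] -= 1
    return v

# (A) row 0: +1 on its row-column edges, -1 on its row-symbol edges.
vA = vec([('rc', 0, j) for j in range(n)], [('rs', 0, k) for k in range(n)])
# (B) box 0: +1 on its cells, -1 on its box-symbol edges.
vB = vec([('rc', i, j) for i in range(n) for j in range(n) if box(i, j) == 0],
         [('bs', 0, k) for k in range(n)])
# (C) row bundle {0,...,h-1} and symbol 0: +1 on row-symbol, -1 on box-symbol edges.
vC = vec([('rs', i, 0) for i in range(h)], [('bs', l, 0) for l in range(h)])

for v in (vA, vB, vC):
    assert np.allclose(v @ W, 0)      # zero total weight on every tile
print("all three varieties lie in the left kernel of W, i.e. in ker(M)")
```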
The sum of all \(3n\) vectors of type (A) vanishes. Otherwise, there are no nontrivial relations among these vectors; see [4]. The vectors of type (B) have pairwise disjoint supports on the box-symbol edges, so they are linearly independent of one another and of the vectors of type (A).
We describe some nontrivial relations involving type (C) vectors. First, for each symbol \(s_{k}\), the sum of type (C) vectors over all row bundles minus the sum of type (C) vectors over all column bundles gives cancellation on the box-symbol edges and produces a vector of type (A). Next, the sum of type (B) vectors over all boxes \(\{b_{hp+1},\ldots,b_{h(p+1)}\}\) in a row bundle takes the value \(1\) on all row-column edges \(r_{i}c_{j}\) and \(-1\) on all box-symbol edges \(b_{\ell}s_{k}\) when \(hp<i,\ell\leq h(p+1)\). The sum of type (C) vectors over all \(n\) symbols takes the value \(1\) on all row-symbol edges \(r_{i}s_{k}\) and \(-1\) on all box-symbol edges \(b_{\ell}s_{k}\) when \(hp<i,\ell\leq h(p+1)\). The difference of these belongs to the span of type (A) vectors. A similar relation holds for column bundles. Taking account of these linear dependencies, we have a total of
\[(3n-1)+n+(h+w-1)(n-1)=3n+(h+w)(n-1)\]
vectors, agreeing with the dimension in (2.4). More details on these relations can be found in the second author's thesis [15].
Let \(K\) be the \(4n^{2}\times 4n^{2}\) matrix which projects onto \(\ker(M)\). We have \(K^{2}=K=K^{\top}\). Let \(K[S]\) denote the principal submatrix of \(K\) whose rows and columns correspond to the edges of \(G_{S}\). The following orthogonality relations are similar to [4, Proposition 2.5].
**Proposition 2.4.** The range of \(K[S]\) is orthogonal to the all-ones vector and to the range of \(M_{S}\). That is, (a) \(K[S]\mathds{1}={\bf 0}\); and (b) \(K[S]M_{S}=O\).
Proof. For matrices and vectors indexed by \(E(G_{hw})\), sort the indices so that those corresponding to \(E(G_{S})\) come first. Let \(L\) be the inclusion map from edges of \(G_{S}\) to edges of \(G_{hw}\) and let \(Q\) be the inclusion map from tiles of \(G_{S}\) to tiles of \(G_{hw}\). As matrices, \(L\) and \(Q\) have the structure \([I\mid O]^{\top}\).
Let \(\mathds{1}_{S}=(\mathds{1}\mid{\bf 0})^{\top}\) be the \(4n^{2}\times 1\) zero-one indicator vector of \(E(G_{S})\) in \(E(G_{hw})\). Alternatively, \(\mathds{1}_{S}\) is obtained from the \(4n^{2}\times 1\) all-ones vector by subtracting the indicator vectors of the tiles filled in \(S\). It follows that \(\mathds{1}_{S}\) is contained in the range of \(W\), and hence is orthogonal to \(\ker(W^{\top})\). We now compute
\[K[S]\mathds{1}=L^{\top}KL\mathds{1}=L^{\top}K\mathds{1}_{S}={\bf 0}.\]
This proves (a). With our matrix partition, we have
\[W=\left[\begin{array}{c|c}W_{S}&*\\ \hline O&*\end{array}\right]\]
and \(LW_{S}=WQ\). Working from these,
\[K[S]M_{S}=L^{\top}KLW_{S}W_{S}^{\top}=L^{\top}KWQW_{S}^{\top}=O,\]
since \(KW=O\). This proves (b).
Next, we recall [4, Lemma 2.6]. The idea lets us solve an under-determined system \(A\mathbf{x}=\mathbf{b}\) by inverting an additive shift of \(A\). We use this later in Section 5 with \(A\) taking the role of \(M_{S}\), \(B\) a multiple of \(K[S]\), and \(\mathbf{b}=\mathds{1}\).
**Lemma 2.5** (see [4]).: Let \(A\) and \(B\) be symmetric \(N\times N\) real matrices with \(AB=O\), \(A+B\) nonsingular, and \(B\mathbf{b}=\mathbf{0}\). Then \(A(A+B)^{-1}\mathbf{b}=\mathbf{b}\).
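A toy numeric instance of Lemma 2.5 (purely illustrative; the matrices are chosen by us to be supported on complementary coordinates) is the following.

```python
import numpy as np

# A and B symmetric, AB = O, A + B nonsingular, and Bb = 0.
A = np.diag([2.0, 3.0, 0.0])
B = np.diag([0.0, 0.0, 5.0])
b = np.array([1.0, 1.0, 0.0])

assert np.allclose(A @ B, 0) and np.allclose(B @ b, 0)
x = np.linalg.solve(A + B, b)          # solve the shifted (invertible) system
assert np.allclose(A @ x, b)           # the shift does not disturb A x = b
print("A (A+B)^{-1} b = b, as in Lemma 2.5")
```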
## 3. **The adjacency algebra**
### A coherent configuration for Sudoku
Let \(X\) be a finite set. A _coherent configuration_ on \(X\) is a partition of \(X\times X\) into a set of relations \(\mathcal{R}=\{R_{1},\ldots,R_{d}\}\) satisfying the following properties:
1. the union of some relations in \(\mathcal{R}\) equals the diagonal \(\{(x,x):x\in X\}\);
2. for each \(R\) in \(\mathcal{R}\), the transpose relation \(R^{\top}=\{(y,x):(x,y)\in R\}\) is also in \(\mathcal{R}\);
3. for each \(i,j,k\), there exists a constant \(p_{ij}^{k}\) such that for any \(x,z\) with \((x,z)\in R_{k}\), there are exactly \(p_{ij}^{k}\) elements \(y\) such that \((x,y)\in R_{i}\) and \((y,z)\in R_{j}\).
Here, we set up a coherent configuration on the ground set \(X=E(G_{hw})\). Given two rows \(r_{i}\) and \(r_{i^{\prime}}\), we write \(r_{i}\sim r_{i^{\prime}}\) if and only if \(\lfloor(i-1)/h\rfloor=\lfloor(i^{\prime}-1)/h\rfloor\). Similarly, given two columns \(c_{j}\) and \(c_{j^{\prime}}\), we write \(c_{j}\sim c_{j^{\prime}}\) iff \(\lfloor(j-1)/w\rfloor=\lfloor(j^{\prime}-1)/w\rfloor\). In other words, \(\sim\) tracks whether two rows or two columns belong to the same bundle. From the definition, it is clear that \(\sim\) is an equivalence relation on both rows and columns. Write \(r_{i}\not\cong r_{i^{\prime}}\) if \(r_{i}\sim r_{i^{\prime}}\) but \(r_{i}\neq r_{i^{\prime}}\). Define \(\not\cong\) similarly for columns.
Given two boxes \(b_{\ell}\) and \(b_{\ell^{\prime}}\), write \(b_{\ell}\smile b_{\ell^{\prime}}\) if \(\lfloor(\ell-1)/h\rfloor=\lfloor(\ell^{\prime}-1)/h\rfloor\). Informally, this keeps track of whether the two boxes occur in the same row bundle. Write \(b_{\ell}\frown b_{\ell^{\prime}}\) iff \(\ell\equiv\ell^{\prime}\pmod{h}\); this is the analog for boxes in the same column bundle. By abuse of notation, we write \(r_{i}\smile b_{\ell}\) to mean that the corresponding row and box intersect, and similarly for \(c_{j}\frown b_{\ell}\).
Ordered pairs of elements are partitioned into relations according to Table 3. A blank indicates the trivial relation.
Moving from vertices to edges, these partitions induce a partition of \(E(G_{hw})^{2}\) into relations according to Figure 4. In total, there are 69 relations.
\begin{table}
\begin{tabular}{c|cccc} & rows & cols & symbols & boxes \\ \hline rows & \(=,\not\cong,\not\sim\) & & & \(\smile,\not\sim\) \\ cols & & \(=,\not\cong,\not\sim\) & & \(\frown,\not\sim\) \\ symbols & & & \(=,\neq\) & \\ boxes & \(\smile,\not\sim\) & \(\frown,\not\sim\) & & \(=,\smile,\frown,\not\sim\) \\ \end{tabular}
\end{table}
Table 3. Relations on vertices of \(G_{hw}\)
**Proposition 3.1**.: The relations \(R_{1},\ldots,R_{69}\) given in Figure 4 define a coherent configuration on \(E(G_{hw})\).
Proof.: Generally speaking, the claim follows readily from the fact that the relations are induced from pairs of equivalence relations on vertices. However, we give a more direct check on conditions (1)-(3) for our setting \(X=E(G_{hw})\).
The diagonal relation \(\{(x,x):x\in X\}\) is a union of \(R_{1},R_{16},R_{32},R_{62}\). These correspond to the cases of equality for each of the four edge types row-column, row-symbol, column-symbol, and box-symbol, respectively. For any relation \(R_{i}\), its transpose \(R_{i}^{\top}\) is another relation in our family; the index can be read off directly from Figure 4.
Figure 4. Relations on edges of \(G_{hw}\)
Checking condition (3) amounts to showing that the number of choices \(y\in X\) with \((x,y)\in R_{i}\) and \((y,z)\in R_{j}\) depends only on \(i,j,k\) and not on the specific choice of \(x\) and \(z\) with \((x,z)\in R_{k}\). This follows from the symmetry of our vertices. In more detail, let us consider the four (or fewer) vertices in \(x\cup z\) and argue that there is a canonical choice depending on the relation label \(k\). If \(x\cup z\) contains exactly one symbol, we may assume without loss of generality that it is \(s_{1}\). On the other hand, if \(x\) and \(z\) each contain a symbol, we may assume a common symbol \(s_{1}\) or distinct symbols \(s_{1}\neq s_{2}\), depending on the relation. A similar canonical choice can be made for rows, except that unequal rows could be related under \(\sim\) (choose \(r_{1},r_{2}\)) or not (choose instead \(r_{1},r_{h+1}\)). The same holds for columns. Whenever a box \(b_{\ell}\) appears in \(x\cup z\), we can take a canonical choice that is consistent with that made for rows, columns, or potentially another box. Importantly, none of the preceding choices impact the count of elements \(y\) satisfying the given relations with \(x\) and \(z\). It follows that the quantities \(p_{ij}^{k}\) are well-defined.
With extensive case analysis, it would be technically possible to demonstrate formulas for the structure constants \(p_{ij}^{k}\). However, to avoid presenting such details and to reduce errors, we implemented the following computer-assisted procedure:
* first, we argue that \(p_{ij}^{k}\) belongs to \(\mathbb{Z}[h,w]\), and is at most quadratic in each of \(h,w\);
* next, we compute all structure constants explicitly for the nine cases \(2\leq h,w\leq 4\);
* finally, we interpolate this data to arrive at symbolic expressions for \(p_{ij}^{k}\).
We discuss these points in a little more detail.
**Proposition 3.2**.: In our coherent configuration based on \(E(G_{hw})\), each structure constant \(p_{ij}^{k}\) is a polynomial of degree at most \(2\) in each of \(h\) and \(w\).
Proof.: Fix two edges \(x,z\in E(G_{hw})\) with \((x,z)\in R_{k}\). The quantity \(p_{ij}^{k}\) counts the edges \(y\in E(G_{hw})\) with \((x,y)\in R_{i}\) and \((y,z)\in R_{j}\). This quantity is zero unless the indices \(i\) and \(j\) simultaneously allow one of the four types of edges for \(y\). Given indices \(i\) and \(j\) which admit a choice of \(y\), we must choose either a row-column pair, a row-symbol pair, a column-symbol pair, or a box-symbol pair. The two components of each pair can be selected separately, leading to a product of choices for the two components. The number of choices for a row is an element of \(\{0,1,h,h-1,h-2,n-h\}\). Similarly, the number of choices for a column is an element of \(\{0,1,w,w-1,w-2,n-w\}\). The number of choices for a symbol is an element of \(\{0,1,n,n-1,n-2\}\). Finally, the number of choices for a box is a product of an element of \(\{0,1,h,h-1\}\) with an element of \(\{0,1,w,w-1\}\).
The choice of nine cases \(2\leq h,w\leq 4\) suffices because of the degree bound in Proposition 3.2. The computation was carried out on computer by explicitly listing all \(4h^{2}w^{2}\) edges and counting incidences. This took several minutes for the larger cases. From this, the interpolation in (3) can be carried out easily using a \(9\times 9\) Vandermonde matrix based on the terms \(1,h,w,h^{2},hw,w^{2},h^{2}w,hw^{2},h^{2}w^{2}\), where \((h,w)\in\{2,3,4\}^{2}\).
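As an illustration of the interpolation step (our sketch; it recovers a known degree rather than an actual structure constant), the snippet below fits a polynomial of degree at most \(2\) in each of \(h\) and \(w\) to its values at the nine points \((h,w)\in\{2,3,4\}^{2}\), using the test polynomial \(h^{2}w-h\), which is \((n-1)h=d_{47}\) written out in \(h\) and \(w\).

```python
import numpy as np
from itertools import product

monomials = [lambda h, w: 1, lambda h, w: h, lambda h, w: w,
             lambda h, w: h * h, lambda h, w: h * w, lambda h, w: w * w,
             lambda h, w: h * h * w, lambda h, w: h * w * w, lambda h, w: h * h * w * w]
points = list(product([2, 3, 4], repeat=2))            # the nine cases 2 <= h, w <= 4

# Values of a structure-constant-like polynomial at the nine points.
p = lambda h, w: h * h * w - h                         # (n-1)h with n = hw, e.g. d_47
V = np.array([[m(h, w) for m in monomials] for (h, w) in points])
coeffs = np.linalg.solve(V, np.array([p(h, w) for (h, w) in points], dtype=float))

# Coefficients in the basis 1, h, w, h^2, hw, w^2, h^2w, hw^2, h^2w^2.
print(np.round(coeffs, 6))                             # expect -1 on h and +1 on h^2 w
```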
For each relation index \(i=1,\ldots,69\), we consider its corresponding adjacency matrix \(A_{i}\). Let \(\mathfrak{A}\) denote the \(\mathbb{R}\)-vector space spanned by the \(A_{i}\). Since the relations form a coherent configuration, \(\mathfrak{A}\) is closed under matrix multiplication, and hence forms an algebra.
If we view each relation as a graph, then \(\{R_{i}:i=1,\ldots,69\}\) is a decomposition of the line graph of \(G_{hw}\) into regular graphs. The degrees of these graphs are the nonzero row sums of the corresponding adjacency matrices. We give the degrees \(d_{i}\) for each of the relations in Table 5. These are arranged into columns according to the four edge types: row-column, row-symbol, column-symbol, and symbol-box. Consider, for example, the degree \(d_{47}\). Given a row-symbol edge, say \(r_{1}s_{1}\), this degree counts the symbol-box edges \(s_{k}b_{\ell}\) with \(s_{k}\neq s_{1}\) and \(b_{\ell}\) in the same row bundle as \(r_{1}\). There
are \(n-1\) choices for \(s_{k}\) and \(h\) choices for \(b_{\ell}\), since every row is incident with exactly \(n/w=h\) boxes. So \(d_{47}=(n-1)h\).
### The coefficient matrix \(M\)
Recall that \(W\) denotes the \(\{0,1\}\) inclusion matrix of edges versus tiles in \(G_{hw}\), and that \(M=WW^{\top}\). We computed the rank and a basis for the kernel of \(M\) in Section 2. A key next observation is that \(M\) belongs to our adjacency algebra.
**Proposition 3.3**.: The matrix \(M=WW^{\top}\) lies in \(\mathfrak{A}\), with
\[M= hw(A_{1}+A_{16}+A_{32}+A_{62})+w(A_{46}+A_{50})+h(A_{54}+A_{58})\] \[+A_{10}+A_{13}+A_{22}+A_{25}+A_{28}+A_{30}+A_{38}+A_{42}. \tag{3.1}\]
Proof.: Given two edges \(e,f\) in \(G_{hw}\), the entry \(M(e,f)\) equals the number of tiles containing both \(e\) and \(f\). Since a tile contains exactly one row, column, box, and symbol, this number is zero whenever \(e\cup f\) contains two distinct vertices of the same type. Moreover, since the box in a tile must correspond with the row-column pair, \(M(e,f)\) is zero if \(e\cup f\supset\{r_{i},b_{\ell}\}\) or \(\{c_{j},b_{\ell}\}\) with, respectively \(r_{i}\not\sim b_{\ell}\) or \(c_{j}\not\sim b_{\ell}\). It suffices to consider those remaining cases when there exist tiles extending \(e\cup f\).
If \(e=f\), we claim that there are \(n\) such tiles, regardless of the type of edge. For \(e=\{r_{i},c_{j}\}\), any of the \(n\) symbols extend \(e\) to a tile (and there is a unique box involved). For \(e=\{r_{i},s_{k}\}\), any of the \(n\) columns (each with its corresponding box) extends \(e\) to a tile. This is similar when we exchange the roles of rows and columns. Finally, for \(e=\{b_{\ell},s_{k}\}\), any of the \(hw=n\) cells \((i,j)\) with \(\operatorname{box}(i,j)=\ell\) extend \(e\) to a tile. The identity relation in our setup decomposes into the identity on the four types of edges; in terms of matrices,
\[I=A_{1}+A_{16}+A_{32}+A_{62}.\]
We have shown that the diagonal entries of \(M\), and hence the coefficient for each of these four adjacency matrices, equals \(n\).
\begin{table}
\begin{tabular}{|r r|r r|r r|r r|} \hline \(i\) & \(d_{i}\) & \(i\) & \(d_{i}\) & \(i\) & \(d_{i}\) & \(i\) & \(d_{i}\) \\ \hline
1 & 1 & 13 & \(n\) & 25 & \(n\) & 42 & \(n\) \\
2 & \(w-1\) & 14 & \(n(h-1)\) & 26 & \(n(w-1)\) & 43 & \(n(h-1)\) \\
3 & \((h-1)w\) & 15 & \(nh(w-1)\) & 27 & \(n(h-1)w\) & 44 & \(n(w-1)\) \\
4 & \(h-1\) & 16 & 1 & 30 & \(n\) & 45 & \(n(h-1)(w-1)\) \\
5 & \((h-1)(w-1)\) & 17 & \(n-1\) & 31 & \(n(n-1)\) & 50 & \(h\) \\
6 & \((h-1)^{2}w\) & 18 & \(h-1\) & 32 & 1 & 51 & \((n-1)h\) \\
7 & \(h(w-1)\) & 19 & \((n-1)(h-1)\) & 33 & \(n-1\) & 52 & \(h(w-1)\) \\
8 & \(h(w-1)^{2}\) & 20 & \(h(w-1)\) & 34 & \(w-1\) & 53 & \((n-1)h(w-1)\) \\
9 & \(n(h-1)(w-1)\) & 21 & \((n-1)h(w-1)\) & 35 & \((n-1)(w-1)\) & 58 & \(w\) \\
10 & \(n\) & 28 & \(n\) & 36 & \((h-1)w\) & 59 & \((n-1)w\) \\
11 & \(n(h-1)\) & 29 & \(n(n-1)\) & 37 & \((n-1)(h-1)w\) & 60 & \((h-1)w\) \\
12 & \(nh(w-1)\) & 46 & \(h\) & 54 & \(w\) & 61 & \((n-1)(h-1)w\) \\
22 & \(n\) & 47 & \((n-1)h\) & 55 & \((n-1)w\) & 62 & 1 \\
23 & \(n(w-1)\) & 48 & \(h(w-1)\) & 56 & \((h-1)w\) & 63 & \(n-1\) \\
24 & \(n(h-1)w\) & 49 & \((n-1)h(w-1)\) & 57 & \((n-1)(h-1)w\) & 64 & \(h-1\) \\
38 & \(n\) & & & & & 65 & \((n-1)(h-1)\) \\
39 & \(n(h-1)\) & & & & & 66 & \(w-1\) \\
40 & \(n(w-1)\) & & & & & 67 & \((n-1)(w-1)\) \\
41 & \(n(h-1)(w-1)\) & & & & & 68 & \((h-1)(w-1)\) \\ & & & & & & 69 & \((n-1)(h-1)(w-1)\) \\ \hline \end{tabular}
\end{table}
Table 5. Relation degrees \(d_{i}\); alternatively the nonzero row sums of \(A_{i}\)
Next, consider \(e=\{r_{i},s_{k}\}\) and \(f=\{b_{\ell},s_{k}\}\). In the event that \(r_{i}\smile b_{\ell}\), we obtain \(w\) possible columns \(c_{j}\) such that \(\mathrm{box}(i,j)=\ell\), and \(e\cup f\cup\{c_{j}\}\) defines a valid tile. The two relations corresponding to this choice of \(e\) and \(f\) (transposes of each other) have indices \(46\) and \(50\) in our labelling. If instead we take \(e=\{c_{j},s_{k}\}\) for \(c_{j}\frown b_{\ell}\), there are likewise exactly \(h\) extensions to a tile via some row \(r_{i}\). This choice corresponds to relations numbered \(54\) and \(58\).
Finally, in each of the following possibilities for \(\{e,f\}\), there is a unique tile \(r_{i}c_{j}s_{k}b_{\ell}\) extending \(e\cup f\), where \(\ell=\mathrm{box}(i,j)\):
\[\{r_{i}c_{j},r_{i}s_{k}\},\{r_{i}c_{j},c_{j}s_{k}\},\{r_{i}s_{k},c_{j}s_{k}\}, \{r_{i}c_{j},b_{\ell}s_{k}\}.\]
The corresponding relation labels are \(10,13,22,25,28,30,38,42\).
The structure of entries of \(M\) is depicted in Figure 6. On the left, we present \(M\) as a block matrix, whose block partition corresponds to the four edge types. Each block is an \(n^{2}\times n^{2}\) matrix which can be factored as a Kronecker product. It is convenient to slightly abuse the Kronecker product in the following way. In forming \(A\otimes B\), each factor will be indexed by one of our four Sudoku objects: rows, columns, symbols, and boxes. The product is then indexed by corresponding pairs of elements. For instance, the \((1,2)\)-block of \(M\) can be represented as \(I_{r}\otimes J_{cs}\), where \(I_{r}\) is the identity matrix indexed by \(\{r_{1},\ldots,r_{n}\}\) and \(J_{cs}\) is the all-ones matrix whose rows are indexed by \(\{c_{1},\ldots,c_{n}\}\) and columns are indexed by \(\{s_{1},\ldots,s_{n}\}\). The latter can be factored as \(\mathbf{j}_{c}\otimes\mathbf{j}_{s}^{\top}\), where \(\mathbf{j}\) is an \(n\times 1\) all-ones vector and the subscript indicates the indexing set. The \((1,2)\)-block of \(M\) has rows indexed by edges \(e=r_{i}c_{j}\), columns indexed by edges \(f=r_{i^{\prime}}s_{k^{\prime}}\), and the \((e,f)\)-entry is \(1\) if and only if \(i=i^{\prime}\). This exactly recovers the condition for \(e\) and \(f\) sharing a common tile. Other blocks of \(M\) are similar. We use \(H_{rb}\) to denote the zero-one matrix indexed by rows versus boxes in which \(H_{rb}(r_{i},b_{\ell})=1\) if and only if \(r_{i}\smile b_{\ell}\). We use \(H_{cb}\) analogously for columns. Finally, \(H_{rcb}\) is \(n^{2}\times n\), indexed by row-column edges versus boxes, and \(H_{rcb}(r_{i}c_{j},b_{\ell})=1\) if and only if \(\mathrm{box}(i,j)=\ell\).
On the right of Figure 6, we display the locations of nonzero entries as a graphic, illustrated in the case \(h=2\), \(w=3\). The diagonal has entries \(n=hw\). Blocks \((2,4)\) and \((4,2)\) correspond to \(A_{46}\) and \(A_{50}\), with coefficient \(w\). Blocks \((3,4)\) and \((4,3)\) correspond to \(A_{54}\) and \(A_{58}\), with coefficient \(h\). The other blocks correspond to the remaining terms with coefficient \(1\).
Figure 6: Illustration of the block matrix structure of \(M\)
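Parts of this block description are easy to confirm by machine. The sketch below (ours, with \(0\)-based indexing and our own edge ordering) rebuilds \(M=WW^{\top}\) for \((h,w)=(2,3)\) and checks the diagonal, the (row-symbol, box-symbol) block, and the (row-column, box-symbol) block against the description above.

```python
import numpy as np
from itertools import product

h, w = 2, 3
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])
eidx = {e: m for m, e in enumerate(edges)}
W = np.zeros((4 * n * n, n ** 3))
for t, (i, j, k) in enumerate(product(range(n), repeat=3)):
    for e in [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]:
        W[eidx[e], t] = 1
M = W @ W.T

assert np.allclose(np.diag(M), n)                     # every edge lies in n tiles
for i, k, l, kp in product(range(n), repeat=4):
    # (row-symbol, box-symbol) block: w tiles when the symbol matches and row i meets box l
    rs_bs = w if (k == kp and i // h == l // h) else 0
    assert M[eidx[('rs', i, k)], eidx[('bs', l, kp)]] == rs_bs
for i, j, l, k in product(range(n), repeat=4):
    # (row-column, box-symbol) block: a unique tile exactly when box(i,j) = l
    assert M[eidx[('rc', i, j)], eidx[('bs', l, k)]] == (1 if box(i, j) == l else 0)
print("block structure of M matches the description")
```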
## 4. **Spectral decomposition of \(M\)**
### Eigenvalues and eigenvectors
Since \(M=WW^{\top}\), we know it is symmetric and hence has real eigenvalues. We also have \(\operatorname{rank}(M)=\operatorname{rank}(W)\), so from Section 2.3, we know that zero is an eigenvalue of \(M\) with multiplicity \(3n+(h+w)(n-1)\). Moreover, \(M\) has constant row sums equal to \(4n\), since every edge belongs to \(n\) tiles, and every tile has four edges. This gives an eigenvalue \(4n\) corresponding to the one-dimensional eigenspace of constant vectors.
In this section, we compute all other eigenvalues and corresponding eigenvectors for \(M\). By (3.1), we know that \(M\in\mathfrak{A}\). Later, a generalized inverse for \(M\) is expressed with a list of coefficients in \(\mathfrak{A}\). For the discovery of these coefficients, it is helpful to have a good understanding of the spectral decomposition of \(M\). This is summarized here, with more details and verifications for eigenvectors appearing in the remainder of this subsection.
**Proposition 4.1**.: The eigenvalues of \(M\) are \(\theta_{j}=jn\), \(j=0,1,\ldots,4\). Each eigenspace has a basis of eigenvectors consisting of vectors with entries in \(\{0,\pm 1\}\).
We have discussed \(\theta_{0}=0\) and \(\theta_{4}=4n\) earlier, so we turn our focus to \(\theta_{1},\theta_{2},\theta_{3}\). Below, we describe different varieties of eigenvectors (A), (B), etc., for each of these \(\theta_{j}\). A basis for each eigenspace can be found by taking a union of linearly independent vectors over the different varieties. Making a selection of linearly independent vectors of the indicated size within each variety can be done using relations as in Section 2.3. More details can be found in [15].
We give an informal description and brief verification for each eigenvector. Checking that \(M\mathbf{v}=\theta_{j}\mathbf{v}\) can be done as follows. Take each edge \(f\in E(G_{hw})\) and extend to a tile \(t\supset f\) in all possible ways. Then, sum the values of \(\mathbf{v}\) on the four edges of \(t\), and check that this total equals \(\theta_{j}\mathbf{v}(f)\). This often equals zero, either from cancelling signs or when the support of \(\mathbf{v}\) is disjoint from the relevant tiles \(t\). Figures 7, 8 and 9 give diagrams illustrating the eigenvector varieties in the case \((h,w)=(2,3)\). In these diagrams, the four sections correspond to the four edge types: row-column (top left), row-symbol (top right), symbol-column (bottom left), and box-symbol (bottom right). Symbols \(+\) and \(-\) denote vector entries \(1\) and \(-1\), respectively, and blanks represent \(0\) in the corresponding positions.
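Before going through the varieties, Proposition 4.1 can be confirmed numerically in a small case; the sketch below (ours) computes the spectrum of \(M\) for \((h,w)=(2,3)\) and tallies the multiplicity of each \(\theta_{j}=jn\).

```python
import numpy as np
from itertools import product

h, w = 2, 3
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])
eidx = {e: m for m, e in enumerate(edges)}
W = np.zeros((4 * n * n, n ** 3))
for t, (i, j, k) in enumerate(product(range(n), repeat=3)):
    for e in [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]:
        W[eidx[e], t] = 1
M = W @ W.T

vals = np.linalg.eigvalsh(M)                  # M is symmetric, so eigvalsh applies
mult = {j: int(np.sum(np.isclose(vals, j * n))) for j in range(5)}
assert sum(mult.values()) == 4 * n * n        # only the five values jn occur
print("multiplicities of 0, n, 2n, 3n, 4n:", mult)
```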
\(\bullet\)\(\theta_{1}=n\); eigenspace dimension \(4n^{2}-(2n-3)(h+w)-5n-1\)
(A) Opposite signs on two distinct rows and two distinct columns, at least one pair of which is in a common bundle. There are \((n-1)^{2}-(h-1)(w-1)\) linearly independent vectors of this kind.
If \(r_{i_{0}},r_{i_{1}}\) are the rows and \(c_{j_{0}},c_{j_{1}}\) are the columns, then the entries are given explicitly by \(\mathbf{v}(r_{i}c_{j})=(-1)^{\alpha+\beta}\) if \((i,j)=(i_{\alpha},j_{\beta})\), and otherwise \(\mathbf{v}(e)=0\). For an edge \(f=r_{i}c_{j}\), we have \(n\) tiles extending
\(f\) corresponding to a choice of symbol \(s_{k}\). Each such tile contains at most one non-vanishing edge, namely that corresponding to \(f\). So \(M{\bf v}(f)=n{\bf v}(f)\). For \(f\) of type row-symbol, column-symbol or box-symbol, we have \(M{\bf v}(f)=0\), either from cancellation or disjoint supports. Importantly, having either \(r_{i_{0}}\sim r_{i_{1}}\) or \(c_{j_{0}}\sim c_{j_{1}}\) ensures cancellation within each box.
(B) Opposite signs on two distinct rows (or columns) in the same bundle and on two distinct symbols. There are \((n-1)(h(w-1)+w(h-1))\) linearly independent vectors of this kind.
If \(r_{i_{0}},r_{i_{1}}\) are the rows and \(s_{k_{0}},s_{k_{1}}\) are the symbols, then the entries are given explicitly by \({\bf v}(r_{i}s_{k})=(-1)^{\alpha+\gamma}\) if \((i,k)=(i_{\alpha},k_{\gamma})\), and otherwise \({\bf v}(e)=0\). Verification that \(M{\bf v}=n{\bf v}\) is similar to (A).
(C) Alternating signs on a rectangle of boxes and opposite signs on two distinct symbols. There are \((n-1)(h-1)(w-1)\) linearly independent vectors of this kind.
Suppose \(\ell_{\alpha\beta}\) are the four box indices, where \(\alpha,\beta\in\{0,1\}\) tell us the chosen row/column bundles, respectively. As in (B), let \(k_{\gamma}\) be the chosen symbol indices, \(\gamma\in\{0,1\}\). The entries of the eigenvector are given by \({\bf v}(s_{k}b_{\ell})=(-1)^{\alpha+\beta+\gamma}\) when \((k,\ell)=(k_{\gamma},\ell_{\alpha\beta})\), and otherwise \({\bf v}(e)=0\). Cancellation occurs if we sum over rows, columns, or symbols. For a symbol-box edge \(f=s_{k}b_{\ell}\), the \(n\) tiles extending \(f\) correspond to a choice of entry in box \(\ell\). This picks up the value of \({\bf v}(f)\) with a multiplicity of \(n\). So \(M{\bf v}=n{\bf v}\).
\(\bullet\)\(\theta_{2}=2n\); eigenspace dimension \((2n-h-w)+(n-1)(h+w-2)+(h-1)(w-1)=(n-3)(h+w-1)+2n\).
(A) Opposite signs on two distinct rows in the same bundle; constant on all columns and symbols. A similar variety exists with rows and columns swapped. There are \(h(w-1)+w(h-1)=2n-h-w\) linearly independent vectors of this kind.
If \(r_{i_{0}},r_{i_{1}}\) are the rows, then the eigenvector entries are \({\bf v}(r_{i}c_{j})={\bf v}(r_{i}s_{k})=(-1)^{\alpha}\) when \(i=i_{\alpha}\), and otherwise \({\bf v}(e)=0\) for all other edges. An edge \(f=r_{i}c_{j}\) or \(r_{i}s_{k}\) has exactly \(n\) extensions to a tile, each of which has two edges of a common sign. So \(M{\bf v}(f)=2n{\bf v}(f)\) in those cases. It is easy to see that \(M{\bf v}(f)=0\) on all other edges due to cancellation on rows.
(B) Opposite signs on both rows and boxes of two distinct row bundles; opposite signs on symbols. A similar variety exists with rows and columns swapped. There are \((n-1)(h+w-2)\) linearly independent vectors of this kind.
If \(f\) is a row-column edge, the cancellation on symbols gives \(M{\bf v}(f)=0\). Likewise, if \(f\) is a column-symbol edge, the cancellation on rows gives \(M{\bf v}(f)=0\). For \(f\) of either of the other two edge types, there are \(n\) extensions to a tile, and again the nonzero edges (if any) agree in sign.
(C) Alternating signs on a rectangle of boxes; constant on all symbols and on entries within each box. There are \((h-1)(w-1)=n-h-w+1\) linearly independent vectors of this kind.
For row-symbol or column-symbol edges, the extension to a tile leads to cancellation. For a row-column edge \(f\), there are \(n\) extensions to a tile by selecting a symbol, and each has two matching edges from the entry and box. So \(M\mathbf{v}(f)=2n\mathbf{v}(f)\). Similarly, for a box-symbol edge \(f\), we have \(M\mathbf{v}(f)=2n\mathbf{v}(f)\).
\(\bullet\)\(\theta_{3}=3n\); eigenspace dimension \(n+h+w-3\)
(A) Opposite signs on two distinct symbols; constant on all rows, columns, and boxes. There are \(n-1\) linearly independent vectors of this kind.
If \(k_{0},k_{1}\) are the two symbols, then the eigenvector entries are \(\mathbf{v}(r_{i}s_{k})=\mathbf{v}(c_{j}s_{k})=\mathbf{v}(b_{\ell}s_{k})=(-1)^{\gamma}\) when \(k=k_{\gamma}\), and otherwise \(\mathbf{v}(e)=0\) for all other edges. If \(f\) is any edge involving a symbol \(s_{k_{\gamma}}\), the \(n\) tiles extending \(f\) each have three nonzero edges of matching sign. So \(M\mathbf{v}(f)=3n\mathbf{v}(f)\). In other cases, it is easy to see that \(M\mathbf{v}(f)=0\) by cancellation.
(B) Opposite signs on both rows and boxes of two distinct row bundles; constant on all columns and symbols. A similar variety exists with rows and columns swapped. There are \((h-1)+(w-1)\) linearly independent vectors of this kind.
The verification here is similar to (A), except that row bundles take the role of symbols.
#### Kronecker product
As with our matrix \(M\), it is possible to write the eigenvectors, including the kernel vectors from Section 2.3, using \(\otimes\). Before doing so, we set up some ingredient vectors and conventions of notation. Let \(\mathbf{j}_{r}\) denote the \(n\times 1\) all-ones vector indexed by rows, and \(\mathbf{j}_{c}\), \(\mathbf{j}_{s}\), \(\mathbf{j}_{b}\) similar for columns, symbols and boxes, respectively. Let \(\mathbf{e}_{r_{i}}\) denote the \(n\times 1\) indicator vector of row \(i\). We omit the second subscript and write \(\mathbf{e}_{r}\) if \(i\) is unimportant or clear from context. Any difference \(\mathbf{e}_{r_{i}}-\mathbf{e}_{r_{i^{\prime}}}\) where \(i\neq i^{\prime}\) is denoted \(\mathbf{d}_{r}\). Similar vectors are defined for columns, symbols, and boxes, with letters \(c,s,b\) used accordingly.
We also need a few minor variants of these. A difference \(\mathbf{e}_{r_{i}}-\mathbf{e}_{r_{i^{\prime}}}\) where \(r_{i}\mathop{\not\cong}r_{i^{\prime}}\) is denoted \(\mathbf{d}_{r}^{\sim}\); an analogous vector \(\mathbf{d}_{c}^{\sim}\) is defined for columns in the same bundle. If \(\ell_{00},\ell_{01},\ell_{10},\ell_{11}\) are box indices forming a rectangle, the alternating sum of their indicator vectors is denoted \(\mathbf{d}_{b}^{\Box}\). Given two distinct row bundles, say \(B_{0}\) and \(B_{1}\), a combination \(\sum_{r\in B_{0}}\mathbf{e}_{r}-\sum_{r\in B_{1}}\mathbf{e}_{r}\) is denoted \(\mathbf{d}_{r}^{\dagger}\). Similar vectors can be defined for columns and boxes. We remark that all vectors involving the letter '\(\mathbf{d}\)' have a sum of coordinates equal to zero.
As an important convention, when we take a Kronecker product of vectors from the above, it is regarded as a \(4n^{2}\times 1\) vector with zeros inserted as needed. For instance, \(\mathbf{j}_{r}\otimes\mathbf{j}_{c}\) has entries equal to \(1\) supported on all row-column edges, and is assumed to vanish on edges of all other types (even though these do not appear in the product). Additionally, choices are assumed to be canonical and consistent within a vector. For instance, if \(\mathbf{d}_{s}\) appears twice, it is assumed to represent the same symbol indices; if \(\mathbf{d}_{r}^{\dagger}\) and \(\mathbf{d}_{b}^{\dagger}\) appear together, it is assumed the row bundle and box bundle coincide.
Although it does not convey much combinatorial understanding, eigenvectors in this form can in principle be checked using block matrix multiplication. As an example, consider \(\mathbf{v}=\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}\), a type (A) eigenvector for \(\theta_{1}=n\). The four blocks of the product \(M\mathbf{v}\) can be computed one at a time:
\[(nI_{r}\otimes I_{c})(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}) =n\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}\] \[(I_{r}\otimes J_{sc})(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}) =\mathbf{d}_{r}^{\sim}\otimes\mathbf{0}=\mathbf{0}\] \[(J_{sr}\otimes I_{c})(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}) =\mathbf{0}\otimes\mathbf{d}_{c}=\mathbf{0}\] \[(H_{rcb}^{\top}\otimes\mathbf{j}_{s})(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}) =H_{rcb}^{\top}(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c})\otimes\mathbf{j}_{s}=\mathbf{0}\otimes\mathbf{j}_{s}=\mathbf{0},\]
where the latter is because every box intersects both or neither of the row indices defining \(\mathbf{d}_{r}^{\sim}\). It follows that \(M\mathbf{v}=n\mathbf{v}\).
We next consider in more detail the projectors onto the eigenspaces of \(M\).
### Projectors and a generalized inverse for \(M\)
Since \(M\) is symmetric, the projectors \(E_{j}\) onto eigenspaces for \(\theta_{j}\) are pairwise orthogonal idempotents summing to \(I\). Moreover, we have \(E_{j}\in\mathfrak{A}\) for each \(j\) as a general fact of coherent configurations; see for instance [10].
The projectors can be computed as \(E_{i}=V_{i}(V_{i}^{\top}V_{i})^{-1}V_{i}^{\top}\), where \(V_{i}\) is a matrix whose columns are a basis of eigenvectors for \(\theta_{i}\). As a special case, since \(V_{4}\) is the all-ones vector, we have \(E_{4}=\frac{1}{4n^{2}}J\). The structure of entries for each of the other projectors is shown in Figure 11. Intensity of blue/red corresponds respectively to extreme positive/negative entries, while shades of green/yellow correspond to positive/negative entries which are smaller in magnitude.
Knowing the eigenvalues and eigenspace projectors for \(M\) can be used to compute a generalized inverse \(M^{+}\) satisfying \(MM^{+}M=M\). We explain this computation in the rest of this section.
The spectral decomposition of \(M\) is given by \(M=nE_{1}+2nE_{2}+3nE_{3}+4nE_{4}\). In what follows, \(E_{0}\) will also be denoted \(K\), since it projects onto the kernel of \(M\). Although \(M\) itself is not invertible,
\begin{table}
\begin{tabular}{|c|c|c|} \hline eigenvalue & variety & eigenvector \\ \hline
0 & (A) & \(\mathbf{e}_{r}\otimes\mathbf{j}_{c}-\mathbf{e}_{r}\otimes\mathbf{j}_{s}\) \\ & (B) & \(H_{rcb}\mathbf{e}_{b}-\mathbf{j}_{s}\otimes\mathbf{e}_{b}\) \\ & (C) & \(\mathbf{d}_{r}^{\dagger}\otimes\mathbf{e}_{s}-\mathbf{d}_{b}^{\dagger}\otimes\mathbf{e}_{s}\) \\ \hline \(n\) & (A) & \(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{c}\) \\ & (B) & \(\mathbf{d}_{r}^{\sim}\otimes\mathbf{d}_{s}\) \\ & (C) & \(\mathbf{d}_{s}\otimes\mathbf{d}_{b}^{\Box}\) \\ \hline \(2n\) & (A) & \(\mathbf{d}_{r}^{\sim}\otimes\mathbf{j}_{c}+\mathbf{d}_{r}^{\sim}\otimes\mathbf{j}_{s}\) \\ & (B) & \(\mathbf{d}_{r}^{\dagger}\otimes\mathbf{d}_{s}+\mathbf{d}_{b}^{\dagger}\otimes\mathbf{d}_{s}\) \\ & (C) & \(\mathbf{j}_{s}\otimes\mathbf{d}_{b}^{\Box}+H_{rcb}\mathbf{d}_{b}^{\Box}\) \\ \hline \(3n\) & (A) & \(\mathbf{d}_{s}\otimes\mathbf{j}_{r}+\mathbf{d}_{s}\otimes\mathbf{j}_{c}+\mathbf{d}_{s}\otimes\mathbf{j}_{b}\) \\ & (B) & \(\mathbf{d}_{r}^{\dagger}\otimes\mathbf{j}_{c}+\mathbf{d}_{r}^{\dagger}\otimes\mathbf{j}_{s}+\mathbf{d}_{b}^{\dagger}\otimes\mathbf{j}_{s}\) \\ \hline \(4n\) & & \(\mathbf{j}_{r}\otimes\mathbf{j}_{c}+\mathbf{j}_{r}\otimes\mathbf{j}_{s}+\mathbf{j}_{c}\otimes\mathbf{j}_{s}+\mathbf{j}_{s}\otimes\mathbf{j}_{b}\) \\ \hline \end{tabular}
\end{table}
Table 10. Eigenvectors of \(M\); \(\ast\) means minor variants exist
if we take \(\eta\neq 0\), say \(\eta=n/x\), we can invert the additive shift \(A=M+\eta K\) as
\[A^{-1}=\frac{1}{n}\left(xK+\sum_{j=1}^{4}\frac{1}{j}E_{j}\right). \tag{4.1}\]
This formula results from the \(E_{i}\) being orthogonal idempotents with \(E_{0}+E_{1}+\cdots+E_{4}=I\). Later on, to solve our linear system for fractional Sudoku, we make use of a generalized inverse \(M^{+}\) of the form in (4.1). It turns out that \(x=3/2\), or \(\eta=2n/3\), is a nice choice. A discussion of this choice is given in the next subsection.
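Formula (4.1) is also easy to check numerically. The sketch below (ours; it obtains the projectors from a numerical eigendecomposition rather than from the adjacency-algebra coefficients of Table 12) verifies for \((h,w)=(2,3)\) that the right-hand side of (4.1) with \(x=3/2\) inverts \(A=M+\frac{2n}{3}K\).

```python
import numpy as np
from itertools import product

h, w = 2, 3
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])
eidx = {e: m for m, e in enumerate(edges)}
W = np.zeros((4 * n * n, n ** 3))
for t, (i, j, k) in enumerate(product(range(n), repeat=3)):
    for e in [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]:
        W[eidx[e], t] = 1
M = W @ W.T

# Spectral projectors E_j onto the eigenspaces for jn, j = 0,...,4 (E_0 = K).
vals, vecs = np.linalg.eigh(M)
E = [vecs[:, np.isclose(vals, j * n)] @ vecs[:, np.isclose(vals, j * n)].T for j in range(5)]

x = 1.5                                        # the choice eta = n/x = 2n/3
A = M + (n / x) * E[0]
Ainv = (x * E[0] + sum(E[j] / j for j in range(1, 5))) / n
assert np.allclose(A @ Ainv, np.eye(4 * n * n))
print("formula (4.1) inverts A = M + (2n/3)K")
```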
With some computer-assisted algebra, we found coefficients to express \(A^{-1}\) in the basis \(\{A_{i}:i=1,\ldots,69\}\) for the adjacency algebra \(\mathfrak{A}\). These are expressed in Table 12. For convenience, we have cleared a denominator of \(9n^{3}\) and then applied an additive shift of \(5/16\). Using our Sage [17] worksheet at [https://github.com/pbd345/sudoku](https://github.com/pbd345/sudoku), the interested reader can compute various symbolic products in \(\mathfrak{A}\), including a verification that Table 12 does indeed give an inverse of \(A\).
Figure 11: Structure of entries of \(E_{0},E_{1}\) (top), \(E_{2},E_{3}\) (bottom) for \((h,w)=(2,3)\)
### Norm bounds
We work with the \(\infty\)-norm of vectors \(\|\mathbf{x}\|_{\infty}=\max\{|x_{1}|,\ldots,|x_{n}|\}\) and the induced norm on matrices
\[\|A\|_{\infty}=\max_{i}\sum_{j}|A_{ij}|.\]
It is straightforward to obtain a bound on the \(\infty\)-norm of (4.1) using the values in Tables 5 and 12. The triangle inequality gives a crude bound of order \(O(n^{-1})\), but we can get an exact value with the help of a computer. First, we store the coefficients of the projectors relative to our coherent configuration basis and make a note of their signs. For each of the four sections corresponding to the edge types, we sum the absolute values of projector coefficients times the section row sums. When we combine these as in (4.1), the result is a list of three piecewise linear functions (one duplicate occurs for two sections), each multiplied by \(n^{-1}\). These functions are
\[f_{1}(x) =3|\tfrac{x}{2}-\tfrac{3}{4}|+4|\tfrac{x}{6}-\tfrac{5}{36}|+2| \tfrac{x}{12}-\tfrac{7}{144}|+2|\tfrac{x}{12}-\tfrac{13}{144}|+2|\tfrac{x}{3}- \tfrac{11}{18}|+3|\tfrac{x}{2}-\tfrac{1}{4}|+1,\] \[f_{2}(x) =2|\tfrac{x}{2}-\tfrac{3}{4}|+4|\tfrac{x}{6}-\tfrac{5}{36}|+2| \tfrac{x}{12}-\tfrac{7}{144}|+2|\tfrac{x}{12}-\tfrac{13}{144}|+|\tfrac{x}{3}- \tfrac{11}{18}|+|\tfrac{x}{3}-\tfrac{1}{9}|+2|\tfrac{x}{2}-\tfrac{1}{4}|+1,\] \[f_{3}(x) =3|\tfrac{x}{2}-\tfrac{3}{4}|+|\tfrac{x}{4}-\tfrac{25}{48}|+6| \tfrac{x}{6}-\tfrac{5}{36}|+3|\tfrac{x}{12}-\tfrac{13}{144}|+3|\tfrac{x}{3}- \tfrac{11}{18}|+3|\tfrac{x}{2}-\tfrac{1}{4}|+1.\]
Graphs for the functions \(f_{i}(x)\) are shown in Figure 13. It turns out that \(\max\{f_{i}(x):i=1,2,3\}\) is minimized at \(x=3/2\), yielding the dominant term \(15/4n\), also an upper bound for all \(h,w\geq 2\). The results of this computation are summarized in the lemma below.
**Lemma 4.2**.: Let \(A=M+\tfrac{2n}{3}K\). Then
\[\|A^{-1}\|_{\infty}=\frac{15}{4n}-\frac{7(h+w)}{8n^{2}}-\frac{4}{9n^{2}}+ \frac{31(h+w)-21}{72n^{3}}<\frac{15}{4n}.\]
We also note the following bound on \(K=E_{0}\).
**Lemma 4.3**.: We have \(\|K\|_{\infty}\leq\frac{11}{2}-\frac{17(h+w)}{6n}+O(n^{-1})\).
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline relations & coeffs & relations & coeffs \\ \hline \(0\) & \(9n^{2}+h+w\) & \(31\) & \(9n^{2}+n+h\) \\ \(1,3,4\) & \(h+w\) & \(33\) & \(n+h\) \\ \(2,5,16,18,38,42,46,50\) & \(w\) & \(37,41\) & \(-9n/2+h+w\) \\ \(6,7,32,34,39,43,54,58\) & \(h\) & \(45,49\) & \(-9nw/2+n+w\) \\ \(9,12\) & \(-9n/2+w+1\) & \(53,57\) & \(-9nh/2+n+h\) \\ \(10,13\) & \(w+1\) & \(61\) & \(9n^{2}+n+h+w-1\) \\ \(11,14,23,26,28,30\) & \(1\) & \(62\) & \(h+w-1\) \\ \(15\) & \(9n^{2}+n+w\) & \(63\) & \(n+w-1\) \\ \(17\) & \(n+w\) & \(64\) & \(w-1\) \\ \(19,35,47,51,55,59\) & \(n\) & \(65\) & \(n+h-1\) \\ \(21,24\) & \(-9n/2+h+1\) & \(66\) & \(h-1\) \\ \(22,25\) & \(h+1\) & \(67\) & \(n-1\) \\ \(27,29\) & \(-7n/2+1\) & \(68\) & \(-1\) \\ \hline \end{tabular}
\end{table}
Table 12. Coefficients of \(9n^{3}A^{-1}+\frac{5}{16}J\)
## 5. **Perturbation**
### Changes to \(M\) resulting from pre-filled entries
Let \(S\) be a partial Sudoku of type \((h,w)\), where \(hw=n\). Recall that \(G_{S}\) is the graph obtained from \(G_{hw}\) by deleting the edges of tiles corresponding to pre-filled entries in \(S\). Suppose \(S\) has the \((1-\delta)\)-availability property. That is, suppose every edge in \(G_{S}\) is contained in at least \((1-\delta)n\) tiles in \(T(G_{S})\).
Let \(M=WW^{\top}\) and \(M_{S}=W_{S}W_{S}^{\top}\), as introduced in Section 2. To set up our perturbation argument, we are interested in quantifying the change in \(M\) resulting from pre-filling the entries of \(S\). It makes no sense to subtract \(M_{S}\) from \(M\) directly, since these matrices have different sizes. However, we can use a convenient border.
Let \(\widetilde{M}\) denote the \(4n^{2}\times 4n^{2}\) matrix, indexed by edges of \(G\), whose entries are given by
\[\widetilde{M}(e,f)=\begin{cases}M_{S}(e,f)&\text{if $e,f\in E(G_{S})$;}\\ 0&\text{if $e\in E(G_{S})$ and $f\not\in E(G_{S})$;}\\ M(e,f)&\text{if $e\not\in E(G_{S})$.}\end{cases}\]
If we sort the rows and columns so that those indexed by \(E(G_{S})\) come first, then
\[\widetilde{M}=\left[\begin{array}{c|c}M_{S}&O\\ \hline\multicolumn{2}{c}{\text{as in }M}\end{array}\right]. \tag{5.1}\]
Put \(\Delta M=M-\widetilde{M}\). We next estimate \(\|\Delta M\|_{\infty}\) under our sparseness assumption.
For an edge \(e\in E(G_{S})\), let \(U(e)\) denote the set of unavailable options
\[U(e)=\{t\in T(G_{hw}):e\in t\text{ and }f\in t\text{ for some }f\in E(G_{hw}) \setminus E(G_{S})\}.\]
Put \(u(e)=|U(e)|\) and \(\mathbf{u}=(u(e):e\in E(G_{S}))\). In more detail, if \(e\) is an edge of type row-column, say \(e=\{r_{i},c_{j}\}\), then \(U(e)\) keeps track of those symbols \(k\) which are not able to be placed in cell \((i,j)\) because \(k\) already appears in row \(i\) or column \(j\) or box box\((i,j)\). If \(e\) is an edge of type row-symbol, say \(e=\{r_{i},s_{k}\}\), then \(U(e)\) keeps track of those columns \(j\) which are unavailable for symbol \(k\) in row \(i\), either because cell \((i,j)\) was pre-filled or \(k\) appears somewhere else in column \(j\) or box box\((i,j)\). Note that several columns might be eliminated as options if \(k\) appears in a box intersecting row \(i\). Edges of type column-symbol behave in an analogous way. Finally, if \(e\) is an edge of type box-symbol, say \(e=\{b_{\ell},s_{k}\}\), then \(U(e)\) keeps track of those cells \((i,j)\) in box \(\ell\) for which
symbol \(k\) is not allowed, either because \((i,j)\) was already filled in \(S\), or because \(k\) already appears in the row or column bundle for box \(\ell\).
**Lemma 5.1**.: We have \(\mathbf{0}\leq\Delta M\mathds{1}\leq 4\mathbf{u}\) entrywise. In particular, \(\|\Delta M\|_{\infty}\leq 4\|\mathbf{u}\|_{\infty}\).
Proof.: Entry \(e\) of \((\Delta M)\mathds{1}\) equals \(\sum_{f}\Delta M(e,f)\), and every summand is nonnegative. Moreover, the summand \(\Delta M(e,f)\) counts the tiles of \(U(e)\) containing both \(e\) and \(f\). Since each such tile has four edges, it is counted at most four times over the sum, so the total is at most \(4u(e)\).
Now, if \(S\) has the \((1-\delta)\)-availability property, then by definition we have \(\|\mathbf{u}\|_{\infty}\leq\delta n\). And recall that if \(S\) has the \(\epsilon\)-dense property, then it has the \((1-3\epsilon)\)-availability property, as explained at the end of Section 2.1. These, together with Lemma 5.1 immediately give bounds on \(\Delta M\).
**Lemma 5.2**.: With \(\Delta M\) constructed from \(S\) as above, we have
1. \(\|\Delta M\|_{\infty}\leq 4\delta n\) if \(S\) has the \((1-\delta)\)-availability property; and
* \(\|\Delta M\|_{\infty}\leq 12\epsilon n\) if \(S\) is \(\epsilon\)-dense.
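The construction of \(\Delta M\) and the bound of Lemma 5.1 can be exercised on a tiny example. In the sketch below (ours; the two pre-filled cells are an arbitrary valid choice), we form \(\widetilde{M}\) as in (5.1) for a partial Sudoku of type \((2,2)\) and check that \(\mathbf{0}\leq\Delta M\mathds{1}\leq 4\mathbf{u}\) on the rows indexed by \(E(G_{S})\), while the remaining rows of \(\Delta M\) vanish.

```python
import numpy as np
from itertools import product

h, w = 2, 2
n = h * w
box = lambda i, j: (i // h) * h + j // w

edges = ([('rc', i, j) for i in range(n) for j in range(n)] +
         [('rs', i, k) for i in range(n) for k in range(n)] +
         [('cs', j, k) for j in range(n) for k in range(n)] +
         [('bs', l, k) for l in range(n) for k in range(n)])
eidx = {e: m for m, e in enumerate(edges)}
def tile_edges(i, j, k):
    return [('rc', i, j), ('rs', i, k), ('cs', j, k), ('bs', box(i, j), k)]

tiles = list(product(range(n), repeat=3))
W = np.zeros((4 * n * n, n ** 3))
for t, (i, j, k) in enumerate(tiles):
    for e in tile_edges(i, j, k):
        W[eidx[e], t] = 1
M = W @ W.T

filled = {(0, 0): 0, (2, 3): 1}                       # two pre-filled cells
removed = {eidx[e] for c, k in filled.items() for e in tile_edges(*c, k)}
kept = np.array([m not in removed for m in range(4 * n * n)])
avail = np.array([all(eidx[e] not in removed for e in tile_edges(i, j, k))
                  for (i, j, k) in tiles])

Wa = W[:, avail]                                      # tiles of T(G_S)
Ma = Wa @ Wa.T                                        # agrees with M_S on E(G_S) x E(G_S)
Mtilde = np.where(kept[:, None], np.where(kept[None, :], Ma, 0.0), M)
DM = M - Mtilde

u = n - Wa.sum(axis=1)                                # unavailable tiles through each edge
row_sums = DM @ np.ones(4 * n * n)
assert np.all(row_sums[kept] >= 0) and np.all(row_sums[kept] <= 4 * u[kept])
assert np.allclose(row_sums[~kept], 0)
print("0 <= (Delta M) 1 <= 4u holds on E(G_S)")
```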
### A guarantee on nonnegative solutions
The following can be distilled from [4, Section 3].
**Lemma 5.3**.: Let \(A\) be an \(N\times N\) invertible matrix over the reals. Suppose \(A-\Delta A\) is a perturbation. Then
1. \(A-\Delta A\) is invertible provided \(\|A^{-1}\Delta A\|_{\infty}<1\); and
2. the solution \(\mathbf{x}\) to \((A-\Delta A)\mathbf{x}=A\mathds{1}\) is entrywise nonnegative provided \(\|A^{-1}\Delta A\|_{\infty}\leq\frac{1}{2}\).
Lemma 5.3 can be proved using the expansion \((A-\Delta A)^{-1}=\sum_{k=0}^{\infty}(A^{-1}\Delta A)^{k}A^{-1}\). More details on matrix norms and the convergence of this series can be found in Horn and Johnson's book [11].
We would like to apply Lemma 5.3 to the perturbation \(M-\Delta M\), but we must take care to handle the nontrivial kernel. Of various possible approaches, one convenient thing to do is to place those columns of \(K=E_{0}\) corresponding to non-edges of \(G_{S}\) in the perturbation. In more detail, let \(A=M+\eta K\) and observe that \(A\mathds{1}=4n\mathds{1}\). That is, for this choice of \(A\), the right side of the system in Lemma 5.3 is just a scalar multiple of the all-ones vector. Define \(\Delta A=\Delta M+\eta K^{\prime}\), where
\[K^{\prime}(e,f)=\begin{cases}0&\text{if }f\in E(G_{S});\\ K(e,f)&\text{otherwise}.\end{cases}\]
Note that
\[A^{-1}(\eta K^{\prime})=\left(\frac{1}{\eta}K+\sum_{j=1}^{4}\frac{1}{jn}E_{j }\right)(\eta K^{\prime})=K^{\prime}, \tag{5.2}\]
since the columns of \(K^{\prime}\) are orthogonal to each of the other eigenspaces.
**Lemma 5.4**.: Suppose \(S\) is \(\epsilon\)-dense. Then, for large \(h\) and \(w\), \(\|K^{\prime}\|_{\infty}\leq(\epsilon+o(1))\|K\|_{\infty}\).
Proof.: Write \(K=\sum_{i=1}^{m}c_{i}A_{i}\). Fix \(e\in E(G_{hw})\). Then we have
\[\sum_{f\in E(G_{hw})}|K(e,f)|=\sum_{i}|c_{i}|d_{i}(e),\]
where \(d_{i}(e)\) is the number of edges \(f\) with \((e,f)\in R_{i}\). Recall that \(d_{i}(e)\) is zero unless \(e\) is of an edge type corresponding to the first coordinate of \(R_{i}\), and we may assume a canonical choice \(r_{1}c_{1}\), \(r_{1}s_{1}\), \(c_{1}s_{1}\), or \(s_{1}b_{1}\) for \(e\).
Let \(\overline{G_{S}}\) denote the complement of \(G_{S}\) in \(G_{hw}\). Then we have
\[\sum_{f\in E(G_{hw})}|K^{\prime}(e,f)|=\sum_{f\in E(G_{S})}|K(e,f)|=\sum_{i=1}^{ m}|c_{i}|d_{i}^{\prime}(e), \tag{5.3}\]
where \(d_{i}^{\prime}(e)\) is the number of edges \(f\in E(\overline{G_{S}})\) with \((e,f)\in R_{i}\). With the exception of \(i\in I:=\{1,2,4,16,32,62\}\), each relation \(R_{i}\) has an associated feature which, owing to our \(\epsilon\)-density assumption, limits the number of missing edges \(f\) in \(G_{S}\) with \((e,f)\in R_{i}\). These features are indicated in Table 14, along with bounds on leading terms of \(|c_{i}|d_{i}^{\prime}(e)\). A legend and upper bound on corresponding \(d_{i}^{\prime}(e)\) are given in Table 15. Terms with \(i\in I\) are of lower order. Otherwise, when we compute the sum (5.3), we obtain the same leading terms as in the computation of \(\|K\|_{\infty}\), each times \(\epsilon\). The edge type with largest total coefficient of \(\epsilon\) is the box-symbol type, or column 4 in Table 14. This results in
\[\|K^{\prime}\|_{\infty}\leq\left(\frac{11}{2}+O(h^{-1}+w^{-1})\right)\epsilon+ \frac{h+w}{2n}+O(h^{-2}+w^{-2}+n^{-1}).\qed\]
Putting together Lemmas 4.2, 4.3, 5.2 and 5.4, we obtain a bound on \(A^{-1}\Delta A\).
**Proposition 5.5**.: Suppose \(S\) is an \(\epsilon\)-dense partial Sudoku of type \((h,w)\) where \(h,w\) are large. Then \(\|A^{-1}\Delta A\|_{\infty}<101\epsilon/2+o(1)\).
\begin{table}
\begin{tabular}{|r r r|r r r|r r r|r r r|} \hline & leading & sparse & & leading & sparse & & leading & sparse & & leading & sparse \\ \(i\) & term & feature & \(i\) & term & feature & \(i\) & term & feature & \(i\) & term & feature \\ \hline
1 & \(3/2n\) & - & 13 & \(\epsilon/2\) & r & 25 & \(\epsilon/2\) & c & 42 & \(\epsilon/2\) & b \\
2 & \(1/h\) & - & 14 & \(\epsilon/6\) & rb & 26 & \(\epsilon/6\) & cb & 43 & \(\epsilon/6\) & rb \\
3 & \(\epsilon/2\) & r & 15 & \(\epsilon/12\) & all & 27 & \(\epsilon/12\) & all & 44 & \(\epsilon/6\) & cb \\
4 & \(1/w\) & - & 16 & \(1/2h\) & - & 30 & \(\epsilon/3\) & s & 45 & \(\epsilon/12\) & all \\
5 & \(\epsilon/2\) & b & 17 & \(\epsilon/2\) & r & 31 & \(\epsilon/12\) & all & 50 & \(\epsilon/2\) & srb \\
6 & \(\epsilon/3\) & rb & 18 & \(\epsilon/2\) & srb & 32 & \(1/2w\) & - & 51 & \(\epsilon/6\) & rb \\
7 & \(\epsilon/2\) & c & 19 & \(\epsilon/3\) & rb & 33 & \(\epsilon/2\) & c & 52 & \(\epsilon/6\) & s \\
8 & \(\epsilon/3\) & cb & 20 & \(\epsilon/6\) & s & 34 & \(\epsilon/2\) & scb & 53 & \(\epsilon/12\) & all \\
9 & \(\epsilon/12\) & all & 21 & \(\epsilon/12\) & all & 35 & \(\epsilon/3\) & cb & 58 & \(\epsilon/2\) & scb \\
10 & \(\epsilon/2\) & r & 28 & \(\epsilon/3\) & s & 36 & \(\epsilon/6\) & s & 59 & \(\epsilon/6\) & cb \\
11 & \(\epsilon/6\) & rb & 29 & \(\epsilon/12\) & all & 37 & \(\epsilon/12\) & all & 60 & \(\epsilon/6\) & s \\
12 & \(\epsilon/12\) & all & 46 & \(\epsilon/2\) & srb & 54 & \(\epsilon/2\) & scb & 61 & \(\epsilon/12\) & all \\
22 & \(\epsilon/2\) & c & 47 & \(\epsilon/6\) & rb & 55 & \(\epsilon/6\) & cb & 62 & \((h+w)/2n\) & - \\
23 & \(\epsilon/6\) & cb & 48 & \(\epsilon/6\) & s & 56 & \(\epsilon/6\) & s & 63 & \(\epsilon/2\) & b \\
24 & \(\epsilon/12\) & all & 49 & \(\epsilon/12\) & all & 57 & \(\epsilon/12\) & all & 64 & \(\epsilon/2\) & srb \\
38 & \(\epsilon/2\) & b & & & & & & 65 & \(\epsilon/3\) & rb \\
39 & \(\epsilon/6\) & rb & & & & & & 66 & \(\epsilon/2\) & scb \\
40 & \(\epsilon/6\) & cb & & & & & & 67 & \(\epsilon/3\) & cb \\
41 & \(\epsilon/12\) & all & & & & & & 68 & \(\epsilon/3\) & s \\ & & & & & & & & 69 & \(\epsilon/4\) & all \\ \hline \end{tabular}
\end{table}
Table 14: Terms contributing to \(\|K\|_{\infty}\)
\begin{table}
\begin{tabular}{|c l c|c l c|} \hline feature & meaning & bound & feature & meaning & bound \\ \hline r & cells filled in a row & \(\epsilon n\) & rb & cells filled in a row bundle & \(\epsilon nh\) \\ c & cells filled in a column & \(\epsilon n\) & cb & cells filled in a column bundle & \(\epsilon nw\) \\ b & cells filled in a box & \(\epsilon n\) & srb & times a symbol is in a row bundle & \(\epsilon h\) \\ s & occurrences of a symbol & \(\epsilon n\) & scb & times a symbol is in a column bundle & \(\epsilon w\) \\ all & cells filled in all of \(S\) & \(\epsilon n^{2}\) & & & \\ \hline \end{tabular}
\end{table}
Table 15: Legend of sparse features and upper bounds on the corresponding \(d_{i}^{\prime}(e)\)
Proof.: From (5.2), submultiplicativity and the triangle inequality,
\[\|A^{-1}\Delta A\|_{\infty}\leq\|A^{-1}\|_{\infty}\|\Delta M\|_{\infty}+\|K^{ \prime}\|_{\infty}<\tfrac{15}{4n}\times 12\epsilon n+\tfrac{11}{2}\epsilon+o(1)= \tfrac{101}{2}\epsilon+o(1).\qed\]
### Proof of the main result
We are now ready to prove our result on partial Sudoku completion under the \(\epsilon\)-dense assumption.
Proof of Theorem 1.1.: Apply Lemma 5.3 to \(A\) and \(\Delta A\) constructed as above. Under the assumption \(\epsilon<1/101\), Proposition 5.5 gives \(\|A^{-1}\Delta A\|_{\infty}<1/2\) for sufficiently large \(h,w\). This implies an entrywise nonnegative solution to \((A-\Delta A)\mathbf{x}=\mathds{1}\). Let \(\mathbf{x}^{\prime}\) denote the restriction of \(\mathbf{x}\) to \(E(G_{S})\). Since \(A-\Delta A\) is block lower-triangular with respect to the partition into edges and non-edges of \(G_{S}\), it follows that \((M_{S}+\eta K[S])\mathbf{x}^{\prime}=\mathds{1}\). We note that \(M_{S}\) and \(K[S]\) are symmetric and satisfy the conditions in Proposition 2.4. Therefore, Lemma 2.5 implies \(M_{S}\mathbf{x}^{\prime}=\mathds{1}\). This, in turn, implies a nonnegative solution to the linear system for completing \(S\) via the coefficient matrix \(W_{S}\).
It is worth remarking that the lower order terms in Lemmas 4.2 and 4.3 are actually negative. This means our hypothesis of large \(h\) and \(w\) is only really used to control the mild lower-order terms in \(K^{\prime}\). In general, our method is robust even for small partial Sudoku, often succeeding in practice with densities much larger than \(1/101\). For instance, the completion shown in Figure 1 came from applying the above proof method.
## 6. **Thin boxes**
In this section, we investigate in more detail the case of Sudoku of type \((h,w)\) with fixed width \(w\) and height \(h=n/w\) where \(n\) is a large multiple of \(w\). In Section 1, we observed that there exist non-completable partial \(n\times n\) Sudoku latin squares with no row, column, symbol, or box used very often; indeed, this motivated our row/column bundle condition for \(\epsilon\)-density. The idea is that symbol \(k\) can be forced in an entry \((i,j)\) using only \(O(h+w)\) pre-filled occurrences of \(k\) in the row and column bundle containing \((i,j)\). Now, when \(w\) is fixed and \(h=n/w\), this construction requires a symbol be used \(O(n)\) times. One could hope that all partial Sudoku of type \((n/w,w)\) which are \(\epsilon\)-dense in the sense of latin squares could still admit a completion, even without the column bundle condition.
Notice that this would not follow directly from our earlier work; for instance if \(w=2\), any cell prefilled with symbol \(k\) in the top-left box prohibits the placement of \(k\) in the same column of the bottom-left box. This eliminates \(n/2\) options for placing \(k\) in that box, resulting in a perturbation which is far too severe for our methods of Section 5.
The next result is a general construction giving a barrier to (fractional) completion in the context of fixed \(w\). It is similar to the second example in Figure 2.
**Proposition 6.1**.: Let \(w\) be an integer, \(w\geq 2\). For any \(\epsilon>1/(3w)\), there exists a partial Sudoku of type \((h,w)\), with no completion and such that every row, column, symbol, and box is used at most \(\epsilon hw\) times.
Proof.: Suppose \(h\geq w\). Put \(a=\lceil(h+w)/3\rceil\) and \(n=hw\). Take \(A,A^{\prime}\) as disjoint sets of \(a\) symbols, which is possible since \(n=hw\geq 2a\) holds under the assumption \(h\geq w\geq 2\). Let \(\mathcal{L}\) be the leftmost box bundle; that is, \(\mathcal{L}\) consists of the first \(w\) columns and box numbers \(hj+1\) for \(j=1,\dots,w\). Define a partial Sudoku \(S\) of type \((h,w)\) as follows: (a) put the elements of \(A^{\prime}\) in entries \((i,1)\), \(i=1,\dots,a\); (b) put the elements of \(A\) in column \(j\) of the \(j\)th box of \(\mathcal{L}\) for each \(j=2,\dots,w\); (c) put
the elements of \(A\) strictly to the right of \(\mathcal{L}\) in row \(i\) for \(i=a+1,\ldots,2a-w+1\). Several remarks are needed. First, each of the placements of (a) and (b) fit within the respective boxes because \(a\leq h\) holds under the assumption \(h\geq w\geq 2\). Next, we claim that (c) can be done in such a way that no \(h\times w\) box has repeated elements. To achieve this, we can order the elements of \(A\) somehow in the first of these rows, starting at column \(w+1\), and then shift this same ordering to the right by multiples of \(w\) in each successive row. To check that there is room for this, we note that \((a-w)w+a\leq n-w\). This holds with equality for \(h=w=a=2\) and otherwise can be easily verified with the estimate \(a\leq(h+w+2)/3\).
Now, by construction, every row, column and box contains at most \(a\) elements. Each symbol of \(A\) occurs \((w-1)+(a-w+1)=a\) times and each symbol of \(A^{\prime}\) occurs exactly once. Given a hypothetical fractional completion of \(S\), consider where the elements of \(A\) would occur in the first box. Columns \(2,\ldots,w\) are not available by (b), and rows \(1,\ldots,2a-w+1\) of the first column are blocked by (a) and (c). So, the elements of \(A\) must fit into the entries \((i,1)\) for \(2a-w+1<i\leq h\). But there are only \(h+w-2a-1<a\) such entries. This is a contradiction, and hence \(S\) has no fractional completion.
Finally, note that by taking \(h\) sufficiently large, we can ensure \(a/n<\epsilon\). This completes the proof.
Proposition 6.1 shows that, in the case of thin boxes with fixed \(w\), any result guaranteeing fractional completion with up to \(\epsilon n\) occurrences of a row, column, symbol, or box (ignoring row bundle density) must have \(\epsilon\) being a function of \(w\). With some minor adaptation, our methods can produce a result of this form. First, though, it is helpful to have a technical lemma on adding entries to latin rectangles. Let us say that an \(m\times n\) partial latin rectangle is \(\epsilon\)-_dense_ if every row has at most \(\epsilon n\) filled cells, every column has at most \(\epsilon m\) filled cells, and every symbol is used at most \(\epsilon m\) times.
**Lemma 6.2**.: Suppose \(0<\epsilon,\delta<\frac{1}{6}\). Let \(n\geq m\) and suppose we are given an \(\epsilon\)-dense \(m\times n\) partial latin rectangle \(P\). Let \(A_{1},\ldots,A_{n}\subset[n]\) with \(|A_{j}|<\delta m\) for each \(j\) and \(|\{j:k\in A_{j}\}|<\delta m\) for each \(k\). Then \(P\) is contained in a \(3(\delta+\epsilon)\)-dense \(m\times n\) partial latin rectangle \(P^{\prime}\) such that, for each \(j=1,\ldots,n\), column \(j\) of \(P^{\prime}\) contains the elements of \(A_{j}\).
Proof.: We construct \(P^{\prime}\) from \(P\) by adding symbols from \(A_{j}\) one column at a time. It can be assumed that \(A_{j}\) is disjoint from the set of symbols already in column \(j\) of \(P\) by replacing \(A_{j}\) with a smaller set. Suppose we have extended \(P\) by \(j-1\) columns for \(1\leq j\leq n\), and let \(P_{j}\) be the resulting array. We may sort the rows of \(P_{j}\) in weakly increasing order of how many symbols are in each. Let \(B_{j}\) consist of the first \(\lceil(\epsilon+2\delta)m\rceil\) row indices; those correspond to rows with the fewest number of symbols. Let \(G_{j}\) denote the bipartite graph with vertex partition \(A_{j},B_{j}\) and an edge drawn between \(k\in A_{j}\) and \(r\in B_{j}\) if and only if symbol \(k\) is available to be placed in row \(r\) of column \(j\). We claim that \(G_{j}\) has a matching that uses every element of \(A_{j}\). Consider symbol \(k\) in \(A_{j}\). It can be placed in any row that doesn't already have symbol \(k\). But \(k\) appears in at most \(\epsilon m\) rows of \(P\) and was previously added at most \(\delta m\) times in forming \(P_{j}\). Therefore,
\[\deg_{G_{j}}(k)\geq(\epsilon+2\delta)m-\epsilon m-\delta m=\delta m>|A_{j}|.\]
So our claim of the existence of the matching follows by Hall's Theorem. Let \(P^{\prime}\) be the resulting array after extending \(P_{n}\). In checking that \(P^{\prime}\) is \(3(\epsilon+\delta)\)-dense, the column condition and symbol condition follow immediately from \(P\) being \(\epsilon\)-dense and the hypotheses on the sets \(A_{j}\). Suppose row \(r\) contains more than \(3(\epsilon+\delta)n\) symbols. By choice of \(B_{j}\), when the last symbol was added to row \(r\), there were at least \(\lfloor(1-\epsilon-2\delta)m\rfloor\) other rows with at least \(3(\delta+\epsilon)n-1\geq 2(\delta+\epsilon)n\) symbols. (Here, we used \(\epsilon n\geq 1\) since otherwise \(P\) is empty.) These rows, along with row \(r\), would account for at least \(\frac{m}{2}\times 2n(\delta+\epsilon)=mn(\delta+\epsilon)\) filled entries. But this exceeds the total number of filled entries in \(P^{\prime}\), leading to a contradiction.
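The extension step in this proof is algorithmic: symbols are added one column at a time, with the matching guaranteed by Hall's theorem found, for example, by augmenting paths. The following Python fragment is a minimal illustrative sketch of this procedure; it is not taken from the paper, the function and variable names are our own, and no attempt is made at efficiency.

```python
import math

def add_symbols(P, A, m, n, eps, delta):
    """Extend an m-by-n partial latin rectangle P (a dict {(row, col): symbol})
    by inserting, for each column j, the symbols in A[j], as in Lemma 6.2."""
    P = dict(P)
    row_load = [sum(1 for (r, c) in P if r == i) for i in range(m)]
    for j in range(n):
        in_col = {P[(r, c)] for (r, c) in P if c == j}
        symbols = [k for k in A[j] if k not in in_col]
        # B_j: the ceil((eps + 2*delta)*m) rows currently holding the fewest symbols
        B = sorted(range(m), key=lambda r: row_load[r])[:math.ceil((eps + 2 * delta) * m)]

        def allowed(k, r):
            # symbol k may go in cell (r, j) if that cell is empty and
            # k does not already occur in row r
            if (r, j) in P:
                return False
            return all(P[(r, c)] != k for c in range(n) if (r, c) in P)

        match = {}   # row -> symbol assigned to it in column j

        def augment(k, seen):
            # standard augmenting-path search for a bipartite matching
            for r in B:
                if r not in seen and allowed(k, r):
                    seen.add(r)
                    if r not in match or augment(match[r], seen):
                        match[r] = k
                        return True
            return False

        for k in symbols:
            if not augment(k, set()):
                raise ValueError("no matching; density hypotheses violated")
        for r, k in match.items():
            P[(r, j)] = k
            row_load[r] += 1
    return P
```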
We are now ready for a result on fractional completion for the case of thin boxes without a density assumption on bundles. The idea is to use Lemma 6.2 to first add symbols with low availability to boxes in the same column bundle, producing a partial Sudoku with the \((1-\delta)\)-availability property. This can then be completed as before using the estimates in Lemmas 5.2 and 5.4.
**Proposition 6.3**.: For each \(w\geq 2\), there exists a constant \(\epsilon=\epsilon(w)\) such that every sufficiently large partial Sudoku of type \((n/w,w)\) in which every row, column, symbol and box is used at most \(\epsilon n\) times admits a fractional completion.
Proof.: Let \(S\) be an \(\epsilon\)-dense partial Sudoku of type \((n/w,w)\) and let \(P\) be the restriction of \(S\) to the top row bundle. Then \(P\) is an \(m\times n\) partial latin rectangle, where \(m=n/w\), and \(P\) is \(w\epsilon\)-dense. Consider the column bundles of \(S\). For the \(i\)th column bundle, let \(Z_{i}\) be the set of symbols which occur in this bundle outside of \(P\). Our strategy is to add the symbols of \(Z_{i}\) to the \(i\)th column bundle of \(P\). Let \(A_{j}\), \(j=1,\ldots,n\), be any sets of symbols whose disjoint union over the \(i\)th bundle equals \(Z_{i}\) for each \(i\). Note that \(|A_{j}|\leq|Z_{i}|\leq\epsilon nw=\epsilon mw^{2}\) and every symbol appears in at most \(\epsilon n=\epsilon mw\) of the \(A_{j}\). By Lemma 6.2, \(P\) can have the entries of \(A_{j}\) added to column \(j\) in such a way that the resulting latin rectangle \(P^{\prime}\) is \(O(w^{2})\epsilon\)-dense. Let us carry out the same procedure for each of the \(w\) row bundles of \(S\). This produces a partial Sudoku \(S^{\prime}\) with the following properties: (1) every row, column, symbol and box is used at most \(O(w^{3})\epsilon n\) times in \(S^{\prime}\); (2) for every box \(b\) of \(S^{\prime}\), the symbols that occur in the column bundle containing \(b\) also occur within \(b\) itself. In particular, (2) says that a column-symbol edge \(\{c_{j},s_{k}\}\in E(G_{S^{\prime}})\) loses no options for tiles because of symbol \(k\) occurring in a different box of the column bundle containing \(c_{j}\). Similarly, a box-symbol edge \(\{b_{\ell},s_{k}\}\in E(G_{S^{\prime}})\) loses no options for tiles. This means \(S^{\prime}\) has the \((1-\delta)\)-availability property, where \(\delta=C_{1}\epsilon\) for some constant \(C_{1}\) depending only on \(w\). Let us construct the matrix \(\Delta M\) based on \(S^{\prime}\) as in Section 5.1. By Lemma 5.2, we have \(\|\Delta M\|_{\infty}<4C_{1}\epsilon n\). We make one minor change to the perturbation set-up of Section 5.2. Define \(\Delta A=\Delta M+\eta K^{\prime\prime}\), where
\[K^{\prime\prime}(e,f)=\begin{cases}0&\text{if $e\not\in E(G_{S^{\prime}})$ or $f\in E(G_{S^{\prime}})$};\\ K(e,f)&\text{otherwise}.\end{cases}\]
For \(\|K^{\prime\prime}\|_{\infty}\), it is straightforward to adapt Lemma 5.4 to get a weaker upper bound of the form \(\|K^{\prime\prime}\|_{\infty}<C_{2}\epsilon\), where \(C_{2}\) depends on \(w\). The key observation is that the terms for relation indices \(i\in\{34,54,58,66\}\), which carry 'scb' constraints, vanish in \(K^{\prime\prime}\), because \((e,f)\in R_{i}\) with \(e\in E(G_{S^{\prime}})\) implies \(f\in E(G_{S^{\prime}})\) by our construction of \(S^{\prime}\).
Letting \(A=M+\eta K\), for (say) \(\eta=n\), the bound in Lemma 4.2 gives \(\|A^{-1}\|_{\infty}<C_{3}n^{-1}\). Choose \(\epsilon(w)=1/(2C_{3}(4C_{1}+C_{2}))\). Then with this choice of \(\epsilon\), we have
\[\|A^{-1}\Delta A\|_{\infty}\leq\|A^{-1}\|_{\infty}(\|\Delta M\|_{\infty}+\|nK ^{\prime\prime}\|_{\infty})<C_{3}(4C_{1}+C_{2})\epsilon=\tfrac{1}{2}.\]
It follows from Lemma 5.3 that the system \((A-\Delta A)\mathbf{x}=\mathds{1}\) has a nonnegative solution \(\mathbf{x}\), and we finish the proof as before by restricting \(\mathbf{x}\) to the edges of \(G_{S}\).
For simplicity, we have omitted explicit estimates on the constants \(C_{i}\). Probably the constant \(C_{1}\) related to our construction of \(S^{\prime}\) is the main place where an improvement could be made.
## 7. **Variations and concluding remarks**
Suppose we generalize our setting so that each Sudoku box/cage is an arbitrary polyomino of \(n\) cells. Most of our set-up stays the same, except that the \(n\)-to-\(1\) function \(\mathrm{box}(i,j)\) mapping cells to boxes changes, say to \(\mathrm{box}^{\prime}(i,j)\). If this change is sufficiently small, we can reasonably expect the
same perturbation methods to give a fractional completion guarantee for sparse partial Sudoku of this generalized type.
To add some precision, let us define a polyomino Sudoku as above to have \(\alpha\)-_approximate_ type \((h,w)\) if, for each box \(\ell\), the symmetric difference between \(\mathrm{box}^{-1}(\ell)\) and \((\mathrm{box}^{\prime})^{-1}(\ell)\) can be covered by \(\alpha h\) rows and \(\alpha w\) columns. The \(0\)-approximate case coincides with our standard setting of rectangular boxes. Let \(S\) be an \(\alpha\)-approximate partial Sudoku and define the matrix \(M_{S}\) as before; that is, for unused edges \(e\) and \(f\), \(M_{S}(e,f)\) equals the number of available tiles containing both \(e\) and \(f\). Let \(M^{\prime}\) denote the \(4n^{2}\times 4n^{2}\) matrix for the empty Sudoku with polyomino boxes defined by \(\mathrm{box}^{\prime}\), and let \(M\) be our usual matrix for the case of \(h\times w\) boxes. We note a few observations on \(M^{\prime}\):
* \(M^{\prime}\) agrees with \(M\) on diagonal entries (each equals \(n\)), and on all entries indexed by edges of type row-column, row-symbol, or column-symbol (since boxes are not involved);
* if \(\{e,f\}=\{r_{i}s_{k},b_{\ell}s_{k}\}\), then \(M^{\prime}(e,f)\) counts the cells shared between row \(i\) and box \(\ell\);
* if \(\{e,f\}=\{c_{j}s_{k},b_{\ell}s_{k}\}\), then \(M^{\prime}(e,f)\) counts the cells shared between column \(j\) and box \(\ell\);
* if \(\{e,f\}=\{r_{i}c_{j},b_{\ell}s_{k}\}\), then \(M^{\prime}(e,f)=1\) if \(\mathrm{box}(i,j)=\ell\) and \(M^{\prime}(e,f)=0\) otherwise.
Under the \(\alpha\)-approximate assumption, the above gives
\[\|M-M^{\prime}\|_{\infty}\leq 2(\alpha h)w+2(\alpha w)h=4\alpha n. \tag{7.1}\]
Assume \(S\) has the \((1-\delta)\)-availability property. Construct \(\widetilde{M}\) using \(M_{S}\) and \(M\) as in (5.1). From (7.1), Lemma 5.2, and the triangle inequality,
\[\|M-\widetilde{M}\|_{\infty}\leq\|M-M^{\prime}\|_{\infty}+\|M^{\prime}- \widetilde{M}\|_{\infty}<4(\alpha+\delta)n. \tag{7.2}\]
Plugging this into the perturbation methods used earlier, we are able to get a variant on our main result for the approximate rectangular setting. For brevity, we give a conservative statement without explicit constants. Here, the \(\epsilon\)-density condition on row/column bundles should be taken as an adaptation of that for rectangular boxes.
**Proposition 7.1**.: There exist positive constants \(\alpha\) and \(\epsilon\) such that every \(\epsilon\)-dense \(\alpha\)-approximate partial Sudoku with large \(h\) and \(w\) has a fractional completion.
It is possibly of interest to consider properties of the matrix \(M^{\prime}\) for specific box arrangements. A 'Pentadoku' is a \(5\times 5\) Sudoku-like puzzle whose cages are pentomino shapes. Each cage (in addition to each line) must contain the numbers from \(1\) to \(5\) exactly once. Figure 16 shows an example of a completed Pentadoku puzzle.
Many different box/cage arrangements are possible. The tilings of a \(5\times 5\) grid by _distinct_ pentominoes are enumerated and displayed at [12]. Setting \(n=5\), we computed the \(100\times 100\) matrix \(M^{\prime}\) corresponding to each of these box arrangements. The nullity of \(M^{\prime}\) was found to equal \(27\) or \(23\), depending on whether an 'I' tile is present or not, respectively. Curiously, this matches what (2.4) would produce for rectangular boxes if we were to substitute \(h+w=3\) or \(2\), respectively.
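The computation just described is straightforward to reproduce. The following numpy sketch (ours, not the authors' code) builds \(M^{\prime}=NN^{T}\) from the edge-tile incidence matrix \(N\) of an empty puzzle with a given box assignment and computes its nullity; the cage pattern used here is a deliberately simple made-up one (five vertical I-pentominoes) rather than one of the tilings from [12].

```python
import numpy as np

n = 5
# hypothetical cage assignment: box[i][j] is the cage containing cell (i, j);
# here each column is a vertical I-pentomino
box = [[j for j in range(n)] for i in range(n)]

# edges r_i c_j, r_i s_k, c_j s_k, b_l s_k
edges = ([("rc", i, j) for i in range(n) for j in range(n)] +
         [("rs", i, k) for i in range(n) for k in range(n)] +
         [("cs", j, k) for j in range(n) for k in range(n)] +
         [("bs", b, k) for b in range(n) for k in range(n)])
index = {e: t for t, e in enumerate(edges)}

# a tile (i, j, k) places symbol k in cell (i, j) and contains four edges
tiles = [(i, j, k) for i in range(n) for j in range(n) for k in range(n)]
N = np.zeros((len(edges), len(tiles)))
for t, (i, j, k) in enumerate(tiles):
    for e in (("rc", i, j), ("rs", i, k), ("cs", j, k), ("bs", box[i][j], k)):
        N[index[e], t] = 1

M_prime = N @ N.T                                           # 4n^2 x 4n^2, here 100 x 100
print(M_prime.shape[0] - np.linalg.matrix_rank(M_prime))    # the nullity
```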
A generalization which we have not considered could allow two or more simultaneous box patterns. This is natural because the row and column conditions in a latin square can already be viewed as degenerate box conditions. An example for \(n=6\) with both \(2\times 3\) and \(3\times 2\) boxes is given in Figure 17. Using a generalized notion of tile and a suitably enlarged linear system, similar methods as in this paper could apply, at least in principle.
We next make a brief remark about the coherent configuration we used in Section 3. The number of relations was quite large, and pushed many of our estimates and calculations into computer-assisted territory. It is possible that the complexity of the algebra can be reduced. For instance, our matrix \(M\) and the eigenprojectors were all symmetric, meaning that relations pair up with their transpose. Also, every row-column pair uniquely determines a box, so it is plausible that box-symbol edges could be eliminated. We made an attempt to work with the Schur complement of \(M\) relative to box-symbol edges, but some technical challenges discouraged us from going further with that approach.
Methods of algebraic graph theory have been applied to Sudoku before, but in a slightly different way. A _Sudoku graph_ has \(n^{2}\) vertices corresponding to cells, and two vertices are declared adjacent if they share the same row, column or box. Eigenvalues and eigenvectors of Sudoku graphs have been investigated in [1, 13, 18]. Although they have integral eigenvalues and Kronecker-structured \(\{\pm 1,0\}\)-valued eigenvectors, as ours, we could see no way to use the Sudoku graph alone to build the linear system for completion. Still, it would be interesting to explore the Sudoku graph in the context of completing partial Sudoku.
As a last remark, our results are only about fractional completion. For partial latin squares, the iterative absorption methods of [2] are able to convert a sparseness guarantee for fractional completion into a guarantee for (exact) completion. We do not know whether these or other methods could work for the Sudoku setting.
|
2302.02514 | Curves with few bad primes over cyclotomic $\mathbb{Z}_\ell$-extensions | Let $K$ be a number field, and $S$ a finite set of non-archimedean places of
$K$, and write $\mathcal{O}_S^\times$ for the group of $S$-units of $K$. A
famous theorem of Siegel asserts that the $S$-unit equation
$\varepsilon+\delta=1$, with $\varepsilon$, $\delta \in \mathcal{O}_S^\times$,
has only finitely many solutions. A famous theorem of Shafarevich asserts that
there are only finitely many isomorphism classes of elliptic curves over $K$
with good reduction outside $S$. Now instead of a number field, let
$K=\mathbb{Q}_{\infty,\ell}$ which denotes the $\mathbb{Z}_\ell$-cyclotomic
extension of $\mathbb{Q}$. We show that the $S$-unit equation
$\varepsilon+\delta=1$, with $\varepsilon$, $\delta \in \mathcal{O}_S^\times$,
has infinitely many solutions for $\ell \in \{2,3,5,7\}$, where $S$ consists
only of the totally ramified prime above $\ell$. Moreover, for every prime
$\ell$, we construct infinitely many elliptic or hyperelliptic curves defined
over $K$ with good reduction away from $2$ and $\ell$. For certain primes
$\ell$ we show that the Jacobians of these curves in fact belong to infinitely
many distinct isogeny classes. | Samir Siksek, Robin Visser | 2023-02-06T00:54:37Z | http://arxiv.org/abs/2302.02514v1 | # Curves with few bad primes
###### Abstract.
Let \(K\) be a number field, and \(S\) a finite set of non-archimedean places of \(K\), and write \(\mathcal{O}_{S}^{\times}\) for the group of \(S\)-units of \(K\). A famous theorem of Siegel asserts that the \(S\)-unit equation \(\varepsilon+\delta=1\), with \(\varepsilon\), \(\delta\in\mathcal{O}_{S}^{\times}\), has only finitely many solutions. A famous theorem of Shafarevich asserts that there are only finitely many isomorphism classes of elliptic curves over \(K\) with good reduction outside \(S\). Now instead of a number field, let \(K=\mathbb{Q}_{\infty,\ell}\) which denotes the \(\mathbb{Z}_{\ell}\)-cyclotomic extension of \(\mathbb{Q}\). We show that the \(S\)-unit equation \(\varepsilon+\delta=1\), with \(\varepsilon\), \(\delta\in\mathcal{O}_{S}^{\times}\), has infinitely many solutions for \(\ell\in\{2,3,5,7\}\), where \(S\) consists only of the totally ramified prime above \(\ell\). Moreover, for every prime \(\ell\), we construct infinitely many elliptic or hyperelliptic curves defined over \(K\) with good reduction away from \(2\) and \(\ell\). For certain primes \(\ell\) we show that the Jacobians of these curves in fact belong to infinitely many distinct isogeny classes.
Key words and phrases: Shafarevich conjecture, Abelian varieties, cyclic fields, cyclotomic fields, integral points. 2010 Mathematics Subject Classification: Primary 11G10, Secondary 11G05. Siksek is supported by the EPSRC grant _Moduli of Elliptic curves and Classical Diophantine Problems_ (EP/S031537/1). Visser is supported by an EPSRC studentship (EP/V520226/1).
A third is the following theorem of Zarhin [25, Corollary 4.2], which asserts that the Tate homomorphism conjecture (also a theorem of Faltings [5] over number fields) continues to hold over \(K_{\infty}\).
**Theorem** (Zarhin).: _Let \(A\), \(B\) be abelian varieties defined over \(K_{\infty,\ell}\), and denote their respective \(\ell\)-adic Tate modules by \(T_{\ell}(A)\), \(T_{\ell}(B)\). Then the natural embedding_
\[\operatorname{Hom}_{K_{\infty}}(A,B)\otimes\mathbb{Z}_{\ell}\hookrightarrow \operatorname{Hom}_{\operatorname{Gal}(\overline{K_{\infty}}/K_{\infty})}(T_{ \ell}(A),T_{\ell}(B))\]
_is a bijection._
Mazur's conjecture is now known to hold for certain elliptic curves. For example, if \(E\) is an elliptic curve defined over \(\mathbb{Q}\) then \(E(\mathbb{Q}_{\infty})\) is finitely generated thanks to theorems of Kato, Ribet and Rohrlich [7, Theorem 1.5]. From this one can deduce [7, Theorem 1.24] that \(X(\mathbb{Q}_{\infty})\) is finite for curves \(X/\mathbb{Q}\) of genus \(\geq 2\) equipped with a non-constant morphism to an elliptic curve \(X\to E\) defined over \(\mathbb{Q}\). We also note that the conjecture of Parshin and Zarhin follows easily from Mazur's conjecture and Faltings' theorem. Indeed, using the Abel-Jacobi map we can deduce from Mazur's conjecture that \(X(K_{\infty})=X(K_{r})\) for suitably large \(r\), and we know that \(X(K_{r})\) is finite by Faltings' theorem.
It is natural to wonder whether other standard conjectures and theorems concerning the arithmetic of curves and abelian varieties over number fields continue to hold over \(K_{\infty}\). The purpose of this paper is to give counterexamples to potential generalizations of certain theorems of Siegel and Shafarevich to \(K_{\infty}\). A theorem of Siegel (e.g. [1, Theorem 0.2.8]) asserts that \((\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}_{K,S})\) is finite for any number field \(K\) and any finite set of primes \(S\). We show that the corresponding statement over \(\mathbb{Q}_{\infty,\ell}\) is false, at least for \(\ell=2\), \(3\), \(5\), \(7\). We denote by \(v_{\ell}\) the totally ramified prime of \(\mathbb{Q}_{\infty,\ell}\) above \(\ell\) (the precise meaning of primes in infinite extensions of \(\mathbb{Q}\) is clarified in Section 2).
**Theorem 1**.: _Let \(\ell=2\), \(3\), \(5\) or \(7\). Let_
\[S\,=\,\begin{cases}\{v_{\ell}\}&\text{if $\ell=2$, $5$, $7$}\\ \varnothing&\text{if $\ell=3$.}\end{cases} \tag{1}\]
_Let \(\mathcal{O}_{S}\) denote the \(S\)-integers of \(\mathbb{Q}_{\infty,\ell}\). Then \((\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}_{S})\) is infinite._
**Remarks**.:
* If \(S=\varnothing\) then \(\mathcal{O}_{S}=\mathcal{O}_{\infty}\) is the set of integers of \(\mathbb{Q}_{\infty,\ell}\). In [6] it is shown that \((\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}_{\infty})=\varnothing\) for \(\ell\neq 3\). The obstruction given in [6] for \(\ell\neq 3\) is local in nature. In essence, Theorem 1 complements this result, showing that we can obtain infinitely many integral or \(S\)-integral points in the absence of the local obstruction. The proof of Theorem 1 is constructive.
* Theorem 1 strongly suggests that the conjecture of Parshin and Zarhin does not admit a straightforward generalization to the broader context of integral points on hyperbolic curves. We also remark that there is a critical difference over \(K_{\infty}\) between complete curves \(X\) of genus \(\geq 2\) and \(\mathbb{P}^{1}-\{0,1,\infty\}\). For the former, the group of \(K_{\infty}\)-points of the Jacobian is expected to be finitely generated by Mazur's conjecture. For the latter, the analogue of the Jacobian is the generalized Jacobian which is \(\mathbb{G}_{m}\times\mathbb{G}_{m}\), and its group of \(K_{\infty}\)-points is \((\mathbb{G}_{m}\times\mathbb{G}_{m})(K_{\infty})=\mathcal{O}_{\infty}^{ \times}\times\mathcal{O}_{\infty}^{\times}\), which is infinitely generated.
Variants of the proof of Theorem 1 give the following.
**Theorem 2**.: _Let \(\ell=2,3\) or \(5\). Let \(S=\{v_{\ell}\}\) and write \(\mathcal{O}_{S}\) for the \(S\)-integers of \(\mathbb{Q}_{\infty,\ell}\). Let_
\[k\,\in\,\begin{cases}\{1,2,3,4,5,6,7,8,10,12,24\}&\text{if $\ell=2,3$},\\ \{1,2,4\}&\text{if $\ell=5$}.\end{cases}\]
_Then \((\mathbb{P}^{1}-\{0,k,\infty\})(\mathcal{O}_{S})\) is infinite._
Shafarevich's conjecture asserts that for a number field \(K\), a dimension \(n\), a degree \(d\), and a finite set of places \(S\), there are only finitely many isomorphism classes of polarized abelian varieties defined over \(K\) of dimension \(n\) with degree \(d\) polarization and with good reduction away from \(S\). This conjecture was proved by Shafarevich for elliptic curves (i.e. \(n=1\)) and by Faltings [5] in complete generality. If we replace \(K\) by \(\mathbb{Q}_{\infty,\ell}\) then the Shafarevich conjecture no longer holds. For example, consider
\[E_{\varepsilon}\,:\,\varepsilon Y^{2}=X^{3}-X\]
where \(\varepsilon\in\mathcal{O}_{\infty}^{\times}\). This elliptic curve has good reduction away from the primes above 2. Moreover, \(E_{\varepsilon}\), \(E_{\delta}\) are isomorphic over \(\mathbb{Q}_{\infty}\) if and only if \(\varepsilon/\delta\) is a square in \(\mathcal{O}_{\infty}^{\times}\). As \(\mathcal{O}_{\infty}^{\times}/(\mathcal{O}_{\infty}^{\times})^{2}\) is infinite, we deduce that there are infinitely many isomorphism classes of elliptic curves over \(\mathbb{Q}_{\infty}\) with good reduction away from the primes above 2. It is however natural to wonder if a sufficiently weakened version of the Shafarevich conjecture continues to hold over \(\mathbb{Q}_{\infty}\). Indeed, the curves \(E_{\varepsilon}\) in the above construction form a single \(\overline{\mathbb{Q}}\)-isomorphism class. Thus it is natural to ask whether, for suitable \(\ell\) and finite set of primes \(S\), the set of elliptic curves over \(\mathbb{Q}_{\infty}\) with good reduction outside \(S\) forms infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes.
**Theorem 3**.: _Let \(\ell=2\), \(3\), \(5\), or \(7\). Let \(S\) be given by (1) and let \(S^{\prime}=S\cup\{v_{2}\}\) where \(v_{2}\) is the unique prime of \(\mathbb{Q}_{\infty,\ell}\) above \(2\). Then, there are infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of elliptic curves defined over \(\mathbb{Q}_{\infty,\ell}\) with good reduction away from \(S^{\prime}\) and with full 2-torsion in \(\mathbb{Q}_{\infty,\ell}\). Moreover, these elliptic curves form infinitely many distinct \(\mathbb{Q}_{\infty,\ell}\)-isogeny classes._
**Remarks**.:
* By [6, Lemma 2.1], a rational prime \(p\neq\ell\) is inert in \(\mathbb{Q}_{\infty,\,\ell}\) if and only if \(p^{\ell-1}\not\equiv 1\pmod{\ell^{2}}\). It follows from this that 2 is inert in \(\mathbb{Q}_{\infty,\ell}\) for \(\ell=3\), 5, 7 and 11.
* Faltings' proof [5] of the Mordell conjecture can be considered to have three major steps. In the first step, Faltings proves the Tate homomorphism conjecture. In the second step, Faltings derives the Shafarevich conjecture from the Tate homomorphism conjecture, and in the final step Faltings uses the 'Parshin trick' to deduce the Mordell conjecture from the Shafarevich conjecture. Although Zarhin has extended the Tate homomorphism conjecture to \(K_{\infty}\), Theorem 3 suggests that there is no plausible strategy for proving the conjecture of Parshin and Zarhin by mimicking Faltings' proof of the Mordell conjecture.
It is natural to wonder if the isogeny classes appearing in the proof of Theorem 3 are finite or infinite. Rather reassuringly they turn out to be finite.
**Theorem 4**.: _Let \(E\) be an elliptic curve over \(\mathbb{Q}_{\infty,\ell}\) without potential complex multiplication. Then the \(\mathbb{Q}_{\infty,\ell}\)-isogeny class of \(E\) is finite._
The original version of Shafarevich's conjecture [20], (also proved by Faltings [5, Korollar 1]) states that for a given number field \(K\), a genus \(g\) and a finite set of places \(S\), there are only finitely many isomorphism classes of genus \(g\) curves \(C/K\) with good reduction away from \(S\). Again this statement becomes false if we replace \(K\) by \(\mathbb{Q}_{\infty,\ell}\), for any prime \(\ell\).
**Theorem 5**.: _Let \(g\geq 2\) and let \(\ell=3\), \(5\), \(7\), \(11\) or \(13\). There are infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of genus \(g\) hyperelliptic curves over \(\mathbb{Q}_{\infty,\ell}\) with good reduction away from \(\{\upsilon_{2},\upsilon_{\ell}\}\)._
**Theorem 6**.: _Let \(\ell\geq 11\) be an odd prime and let \(g=\lfloor\frac{\ell-3}{4}\rfloor\). There are infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of genus \(g\) hyperelliptic curves over \(\mathbb{Q}_{\infty,\ell}\) with good reduction away from \(\{\upsilon_{2},\upsilon_{\ell}\}\). Moreover, if_
\[\ell\in\{11,23,59,107,167,263,347,359\},\]
_then the Jacobians of these curves form infinitely many distinct \(\mathbb{Q}_{\infty,\ell}\)-isogeny classes._
The paper is structured as follows. In Section 2 we recall basic results on units and \(S\)-units of the cyclotomic field \(\mathbb{Q}(\zeta_{\ell^{n}})\). In Sections 3-6 we employ identities between cyclotomic polynomials to give constructive proofs of Theorems 1 and 2. Section 7 gives a proof of Theorem 4, making use of a deep theorem of Kato to control the \(\mathbb{Q}_{\infty,\ell}\)-points on certain modular curves. Section 8 uses the integral and \(S\)-integral points on \(\mathbb{P}^{1}-\{0,1,\infty\}\) furnished by Theorem 1 to construct infinite families of elliptic curves over \(\mathbb{Q}_{\infty,\ell}\) for \(\ell=2\), \(3\), \(5\), \(7\), with good reduction away from \(\{\upsilon_{2},\upsilon_{\ell}\}\), which are used to give a proof of Theorem 3. Sections 9 and 10 give proofs of Theorems 5 and 6, making use of the relation, due to Kummer, between the class number of \(\mathbb{Q}(\zeta_{\ell^{n}})^{+}\), and the index of cyclotomic units in the full group of units.
We are grateful to Minhyong Kim for drawing our attention to the conjecture of Parshin and Zarhin, and to Alain Kraus and David Loeffler for useful discussions.
## 2. Units and \(S\)-units of \(\mathbb{Q}(\zeta)\)
Let \(K\) be a subfield of \(\overline{\mathbb{Q}}\). We denote the integers of \(K\) (i.e. the integral closure of \(\mathbb{Z}\) in \(K\)) by \(\mathcal{O}(K)\). Let \(p\) be a rational prime. By a **prime of \(K\) above \(p\)** we mean a map \(\upsilon:K\to\mathbb{Q}\cup\{\infty\}\) satisfying the following
* \(\upsilon(p)=1\), \(\upsilon(0)=\infty\);
* \(\upsilon|_{K^{\times}}:K^{\times}\to\mathbb{Q}\) is a homomorphism;
* \(\upsilon(1+b)=0\) whenever \(\upsilon(b)>0\).
Suppose \(K=\cup K_{n}\) where \(K_{0}\subset K_{1}\subset K_{2}\subset\cdots\) is a tower of number fields (i.e. finite extensions of \(\mathbb{Q}\)), with \(K_{0}=\mathbb{Q}\). One sees that the primes of \(K\) above \(p\) are in 1-1 correspondence with sequences \(\{\mathfrak{p}_{n}\}\) where
* \(\mathfrak{p}_{n}\) is a prime ideal of \(\mathcal{O}(K_{n})\);
* \(\mathfrak{p}_{n+1}\,|\,\mathfrak{p}_{n}\mathcal{O}(K_{n+1})\);
* \(\mathfrak{p}_{0}=p\mathbb{Z}\).
Indeed, from \(\upsilon\) one obtains the corresponding sequence \(\{\mathfrak{p}_{n}\}\) via the formula \(\mathfrak{p}_{n}=\{\alpha\in\mathcal{O}(K_{n})\,:\,\upsilon(\alpha)>0\}\). Given a sequence \(\{\mathfrak{p}_{n}\}\), we can recover the corresponding \(\upsilon\) by letting
\[\upsilon(\alpha)=\operatorname{ord}_{\mathfrak{p}_{n}}(\alpha)/\operatorname{ ord}_{\mathfrak{p}_{n}}(p)\]
whenever \(\alpha\in K_{n}^{\times}\). Given a finite set of primes \(S\) of \(K\), we define the \(S\)-integers of \(K\) to be the set \(\mathcal{O}(K,S)\) of all \(\alpha\in K\) such that \(\upsilon(\alpha)\geq 0\) for every prime \(\upsilon\notin S\). We let \(\mathcal{O}(K,S)^{\times}\) be the unit group of \(\mathcal{O}(K,S)\); this is precisely the set of \(\alpha\in K^{\times}\) such that \(\upsilon(\alpha)=0\) for every prime \(\upsilon\notin S\). If \(S=\varnothing\) then \(\mathcal{O}(K,S)=\mathcal{O}(K)\) are the integers of \(K\) and \(\mathcal{O}(K,S)^{\times}=\mathcal{O}(K)^{\times}\) are the units of \(K\).
Fix a rational prime \(\ell\). For a positive integer \(n\), let \(\zeta_{\ell^{n}}\) denote a primitive \(\ell^{n}\)-th root of \(1\) which is chosen so that
\[\zeta_{\ell^{n+1}}^{\ell}=\zeta_{\ell^{n}}.\]
Let \(\Omega_{n,\ell}=\mathbb{Q}(\zeta_{\ell^{n}})\); this has degree \(\varphi(\ell^{n})\) where \(\varphi\) is the Euler totient function. Let
\[\Omega_{\infty,\ell}=\bigcup_{n=1}^{\infty}\Omega_{n,\ell}.\]
The prime \(\ell\) is totally ramified in each \(\Omega_{n,\ell}\), and we denote by \(\lambda_{n}\) the unique prime ideal of \(\mathcal{O}(\Omega_{n,\ell})\) above \(\ell\). Thus
\[\ell\cdot\mathcal{O}(\Omega_{n,\ell})\,=\,\lambda_{n}^{\varphi(\ell^{n})}. \tag{2}\]
We write \(\upsilon_{\ell}\) for the unique prime of \(\Omega_{\infty,\ell}\) above \(\ell\). For now fix \(n\geq 1\) if \(\ell\neq 2\) and \(n\geq 2\) if \(\ell=2\). We recall that \(\lambda_{n}=(1-\zeta_{\ell^{n}})\cdot\mathcal{O}(\Omega_{n,\ell})\). If \(\ell\nmid s\) then \((1-\zeta_{\ell^{n}}^{s})\cdot\mathcal{O}(\Omega_{n,\ell})=\lambda_{n}\); we can see this by applying the automorphism \(\zeta_{\ell^{n}}\mapsto\zeta_{\ell^{n}}^{s}\) to (2).
**Lemma 7**.: _Let \(s\) be an integer and let \(t=\operatorname{ord}_{\ell}(s)\). Suppose \(t<n\). Then_
\[(1-\zeta_{\ell^{n}}^{s})\cdot\mathcal{O}(\Omega_{n,\ell})\;=\;\lambda_{n}^{ \ell^{t}}.\]
_Moreover,_
\[\upsilon_{\ell}(1-\zeta_{\ell^{n}}^{s})=\frac{1}{\ell^{n-1-t}(\ell-1)}.\]
Proof.: Write \(\zeta=\zeta_{\ell^{n}}\). Note that \(\zeta^{s}\) is a primitive \(\ell^{n-t}\)-th root of \(1\). Thus
\[(1-\zeta^{s})\cdot\mathcal{O}(\Omega_{n-t,\ell})\;=\;\lambda_{n-t}.\]
As \(\ell\) is totally ramified in \(\Omega_{n,\ell}\), we have
\[(1-\zeta^{s})\cdot\mathcal{O}(\Omega_{n,\ell})\;=\;\lambda_{n}^{[\Omega_{n, \ell}\,:\,\Omega_{n-t,\ell}]}\;=\;\lambda_{n}^{\ell^{t}}.\]
For the final part of the lemma,
\[\upsilon_{\ell}(1-\zeta^{s})=\frac{\operatorname{ord}_{\lambda_{n}}(1-\zeta^{ s})}{\operatorname{ord}_{\lambda_{n}}(\ell)}=\frac{\ell^{t}}{\varphi(\ell^{n})}= \frac{1}{\ell^{n-1-t}(\ell-1)}.\]
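As an illustration of Lemma 7, take \(\ell=3\), \(n=2\) and \(s=3\) (so \(t=1\)); the lemma then gives
\[(1-\zeta_{9}^{3})\cdot\mathcal{O}(\Omega_{2,3})=\lambda_{2}^{3},\qquad\upsilon_{3}(1-\zeta_{9}^{3})=\upsilon_{3}(1-\zeta_{3})=\frac{1}{3^{0}\cdot 2}=\frac{1}{2},\]
consistent with \(3\cdot\mathcal{O}(\Omega_{2,3})=\lambda_{2}^{6}\).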
**Cyclotomic units and \(S\)-units.** Write \(V_{n}\) for the subgroup of \(\mathcal{O}(\Omega_{n},\{v_{\ell}\})^{\times}\) generated by
\[\big{\{}\pm\zeta_{\ell^{n}},\quad 1-\zeta_{\ell^{n}}^{k}\;:\;1\leq k<\ell^{n} \big{\}}\]
and let
\[C_{n}=V_{n}\cap\mathcal{O}(\Omega_{n})^{\times}.\]
The group \(C_{n}\) is called [22, Chapter 8] the group of **cyclotomic units** in \(\Omega_{n}\). We will often find it more convenient to work with group \(V_{n}\).
**Lemma 8**.: _The abelian group \(V_{n}/\langle\pm\zeta_{\ell^{n}}\rangle\) is free with basis_
\[\big{\{}1-\zeta_{\ell^{n}}^{k}\;:\;1\leq k<\ell^{n}/2,\quad\ell\nmid k\big{\}}. \tag{3}\]
Proof.: The torsion subgroup of \(V_{n}\) is the torsion subgroup of \(\Omega_{n}^{\times}\) which is \(\langle\pm\zeta_{\ell^{n}}\rangle\). Thus \(V_{n}/\langle\pm\zeta_{\ell^{n}}\rangle\) is torsion free. By definition of \(V_{n}\), the group \(V_{n}/\langle\pm\zeta_{\ell^{n}}\rangle\) is generated by \(1-\zeta_{\ell^{n}}^{k}\) with \(\ell^{n}\nmid k\). Write \(k=\ell^{r}d\) with \(\ell\nmid d\); thus \(r<n\). Suppose \(r\geq 1\). Then,
\[1-\zeta_{\ell^{n}}^{k}=1-\zeta_{\ell^{n}}^{\ell^{r}d}=\prod_{i=0}^{\ell^{r}-1}\big{(}1-\zeta_{\ell^{r}}^{i}\zeta_{\ell^{n}}^{d}\big{)}=\prod_{i=0}^{\ell^{r}-1}\big{(}1-\zeta_{\ell^{n}}^{d+i\ell^{n-r}}\big{)},\qquad\text{using}\quad 1-X^{\ell^{r}}=\prod_{i=0}^{\ell^{r}-1}\big{(}1-\zeta_{\ell^{r}}^{i}X\big{)}.\]
It follows that \(V_{n}/\langle\pm\zeta_{\ell^{n}}\rangle\) is generated by \(1-\zeta_{\ell^{n}}^{k}\) with \(\ell\nmid k\). If \(\ell^{n}/2<k<\ell^{n}\) and \(\ell\nmid k\) then
\[1-\zeta_{\ell^{n}}^{k}=-\zeta_{\ell^{n}}^{k}\big{(}1-\zeta_{\ell^{n}}^{\ell^{n }-k}\big{)}. \tag{4}\]
Thus (3) certainly generates \(V_{n}/\langle\pm\zeta_{\ell^{n}}\rangle\). Note that (3) has cardinality \(\varphi(\ell^{n})/2\) where \(\varphi\) is the Euler totient function. It therefore suffices to show that \(V_{n}\) has rank \(\varphi(\ell^{n})/2\). A well-known theorem [22, Theorem 8.3] states that \(C_{n}\) has finite index in \(\mathcal{O}(\Omega_{n})^{\times}\) and thus, by Dirichlet's unit theorem, \(C_{n}\) has rank \(-1+\varphi(\ell^{n})/2\). We note that \(C_{n}\) is the kernel of the surjective homomorphism \(V_{n}\to\mathbb{Z}\), sending \(\mu\) to \(\operatorname{ord}_{\lambda_{n}}(\mu)\). Thus \(V_{n}\) has rank \(\varphi(\ell^{n})/2\), completing the proof.
**Lemma 9**.: _Let \(n\geq 2\) if \(\ell\neq 2\) and \(n\geq 3\) if \(\ell=2\). Then \(V_{n-1}\subset V_{n}\). Moreover,_
\[\prod_{\begin{subarray}{c}1\leq k<\ell^{n}/2\\ \ell\nmid k\end{subarray}}(1-\zeta_{\ell^{n}}^{k})^{c_{k}}\;\in\;\langle\pm \zeta_{\ell^{n}},\,V_{n-1}\rangle\]
_if and only if \(c_{k}=c_{m}\) whenever \(k\equiv\pm m\pmod{\ell^{n-1}}\)._
Proof.: The group \(V_{n-1}\) is generated, modulo roots of unity, by \(1-\zeta_{\ell^{n-1}}^{d}\) with \(\ell\nmid d\). By the proof of Lemma 8,
\[1-\zeta_{\ell^{n-1}}^{d}=1-\zeta_{\ell^{n}}^{\ell d}=\prod_{i=0}^{\ell-1}(1- \zeta_{\ell^{n}}^{d+i\ell^{n-1}}).\]
The lemma follows from Lemma 8.
Given \(a\in\mathbb{Z}_{\ell}\), it makes sense to reduce \(a\) modulo \(\ell^{n}\) and therefore it makes sense to write \(\zeta_{\ell^{n}}^{a}\). We write \(\{a\}_{n}\) for the unique integer satisfying
\[0\leq\{a\}_{n}<\ell^{n}/2,\qquad\{a\}_{n}\equiv\pm a\pmod{\ell^{n}}.\]
**Lemma 10**.: _Let \(a_{1},\ldots,a_{r}\in\mathbb{Z}_{\ell}\) and \(c_{1},\ldots,c_{r}\in\mathbb{Z}\). Suppose_
1. \(c_{1}\neq 0\)_._
2. \(a_{1}\not\equiv 0\pmod{\ell}\)_._
3. \(a_{1}\not\equiv\pm a_{2},\pm a_{3},\cdots,\pm a_{r}\pmod{\ell^{n}}\)_._
_Write_
\[\varepsilon_{n}\;=\;\prod_{1\leq i\leq r}(1-\zeta_{\ell^{n}}^{a_{i}})^{c_{i}}. \tag{5}\]
_Then, \(\varepsilon_{n}\notin\langle\pm\zeta_{\ell^{n}},V_{n-1}\rangle\) for all sufficiently large \(n\)._
Proof.: If \(a_{j}\equiv 0\pmod{\ell}\) then \((1-\zeta_{\ell^{n}}^{a_{j}})\in V_{n-1}\). We may therefore suppose \(a_{j}\not\equiv 0\pmod{\ell}\) for all \(j\). Write
\[\delta_{n}\;=\;\prod_{1\leq i\leq r}\left(1-\zeta_{\ell^{n}}^{\{a_{i}\}_{n}} \right)^{c_{i}}.\]
In view of the identity (4) it will be sufficient to show that \(\delta_{n}\notin\langle\pm\zeta_{\ell^{n}},V_{n-1}\rangle\) for \(n\) sufficiently large. Also, in view of Lemma 9, it is sufficient to show for sufficiently large \(n\) that \(\{a_{1}\}_{n}\neq\{a_{j}\}_{n}\) for all \(2\leq j\leq r\). This is equivalent to \(a_{1}\not\equiv\pm a_{j}\pmod{\ell^{n}}\) for \(2\leq j\leq r\), which is hypothesis (iii). This completes the proof.
The following corollary easily follows from Lemma 10.
**Corollary 11**.: _Let \(a_{1},\ldots,a_{r}\in\mathbb{Z}_{\ell}\) and \(c_{1},\ldots,c_{r}\in\mathbb{Z}\). Suppose_
1. \(c_{1}\equiv 1\pmod{2}\)_._
2. \(a_{1}\not\equiv 0\pmod{\ell}\)_._
3. \(a_{1}\not\equiv\pm a_{2},\pm a_{3},\cdots,\pm a_{r}\pmod{\ell^{n}}\)_._
_Let \(\varepsilon_{n}\) be as in (5). Then, \(\varepsilon_{n}\notin\langle\pm\zeta_{\ell^{n}},V_{n-1},V_{n}^{2}\rangle\) for all sufficiently large \(n\)._
**Units and \(S\)-units from cyclotomic polynomials.** For \(m\geq 1\), let \(\Phi_{m}(X)\in\mathbb{Z}[X]\) be the \(m\)**-th cyclotomic polynomial** defined by
\[\Phi_{m}(X)=\prod_{\begin{subarray}{c}1\leq i\leq m\\ (i,m)=1\end{subarray}}(X-\zeta_{m}^{i}).\]
These satisfy the identity [22, Chapter 2]
\[X^{m}-1=\prod_{d|m}\Phi_{d}(X). \tag{6}\]
It follows from the Mobius inversion formula that
\[\Phi_{m}(X)=\prod_{d|m}(X^{d}-1)^{\mu(m/d)} \tag{7}\]
where \(\mu\) denotes the Mobius function.
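For instance, (7) with \(m=12\) reads
\[\Phi_{12}(X)=\frac{(X^{12}-1)(X^{2}-1)}{(X^{6}-1)(X^{4}-1)}=X^{4}-X^{2}+1,\]
since \(\mu(1)=1\), \(\mu(2)=\mu(3)=-1\), \(\mu(6)=1\) and \(\mu(4)=\mu(12)=0\).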
**Lemma 12**.: _Let \(\ell\) be a prime and \(n\geq 1\). Let \(m\geq 1\), and suppose \(\ell^{n}\nmid m\)._
1. \(\Phi_{m}(\zeta_{\ell^{n}})\in V_{n}\subseteq\mathcal{O}(\Omega_{n,\ell},S)^{\times}\)_, where_ \(S=\{v_{\ell}\}\)_._
2. _If_ \(m\neq\ell^{u}\) _for all_ \(u\geq 0\)_, then_ \(\Phi_{m}(\zeta_{\ell^{n}})\in C_{n}\subseteq\mathcal{O}(\Omega_{n,\ell})^{ \times}\)_._
_Moreover,_
\[v_{\ell}(\Phi_{\ell^{t}}(\zeta_{\ell^{n}}))\;=\;\begin{cases}\frac{1}{\ell^{n -1}(\ell-1)}&t=0\\ \frac{1}{\ell^{n-t}}&1\leq t\leq n-1.\end{cases}\]
Proof.: Let \(t=\operatorname{ord}_{\ell}(m)<n\). Observe that \(\Phi_{m}(X)\mid(X^{m}-1)\). Hence \(\Phi_{m}(\zeta_{\ell^{n}})\cdot\mathcal{O}(\Omega_{n,\ell})\) divides \((1-\zeta_{\ell^{n}}^{m})\cdot\mathcal{O}(\Omega_{n,\ell})\). By Lemma 7 we have \((1-\zeta_{\ell^{n}}^{m})\cdot\mathcal{O}(\Omega_{n,\ell})=\lambda_{n}^{\ell^{ t}}\), giving (a).
For (b), write \(m=\ell^{t}k\) where \(k>1\). Then \(\Phi_{m}(X)\) divides the polynomial \((X^{m}-1)/(X^{\ell^{t}}-1)\). Therefore \(\Phi_{m}(\zeta)\cdot\mathcal{O}(\Omega_{n,\ell})\) divides
\[\frac{(1-\zeta_{\ell^{n}}^{m})}{(1-\zeta_{\ell^{n}}^{\ell^{t}})}\cdot\mathcal{O}(\Omega_{n,\ell})=\frac{\lambda_{n}^{\ell^{t}}}{\lambda_{n}^{\ell^{t}}}=1\cdot\mathcal{O}(\Omega_{n,\ell}).\]
Thus \(\Phi_{m}(\zeta)\) is a unit, giving (b).
The final part of the Lemma follows from Lemma 7, and the formulae
\[\Phi_{\ell^{t}}(X)=\begin{cases}X-1&t=0\\ (X^{\ell^{t}}-1)/(X^{\ell^{t-1}}-1)&t\geq 1.\end{cases}\]
**Lemma 13**.: _Let \(n\geq 2\) if \(\ell\neq 2\) and \(n\geq 3\) if \(\ell=2\). Then \(V_{n}/\langle\pm\zeta_{\ell^{n}}\rangle\) is free with basis_
\[\{\Phi_{m}(\zeta_{\ell^{n}})\;:\;1\leq m<\ell^{n}/2,\;\ell\nmid m\}.\]
Proof.: This follows from Lemma 8 thanks to identities (6) and (7).
## 3. The \(S\)-unit equation over \(\mathbb{Q}(\zeta_{\ell^{n}})^{+}\)
We continue with the notation of the previous section. In particular, let \(K\) be a subfield of \(\overline{\mathbb{Q}}\) and \(S\) be a finite set of primes of \(K\). Let \(k\) be a non-zero rational integer. We shall make frequent use of the correspondence between elements of \((\mathbb{P}^{1}-\{0,k,\infty\})(\mathcal{O}(K,S))\) and the set of solutions to the \(S\)-unit equation
\[\varepsilon+\delta=k,\qquad\varepsilon,\;\delta\in\mathcal{O}(K,S)^{\times},\]
sending \(\varepsilon\in(\mathbb{P}^{1}-\{0,k,\infty\})(\mathcal{O}(K,S))\) to \((\varepsilon,\delta)=(\varepsilon,k-\varepsilon)\).
Now, as before, let \(\ell\) be a rational prime and \(n\) a positive integer. If \(\ell=2\) suppose \(n\geq 2\). Write \(\zeta=\zeta_{\ell^{n}}\). We write \(\Omega_{n,\ell}^{+}=\mathbb{Q}(\zeta+1/\zeta)\) for the index \(2\) totally real subfield of \(\Omega_{n,\ell}\). We write
\[\Omega_{\infty,\ell}^{+}=\bigcup_{n=1}^{\infty}\Omega_{n,\ell}^{+}.\]
In this section, for suitable \(S\), we produce solutions to the \(S\)-unit equations over \(\Omega_{\infty,\ell}^{+}\).
As before, \(\Phi_{m}\) denotes the \(m\)-th cyclotomic polynomial. It is convenient to record the first few \(\Phi_{m}\):
\[\Phi_{1}=X-1,\qquad\Phi_{2}=X+1,\qquad\Phi_{3}=X^{2}+X+1,\]
\[\Phi_{4}=X^{2}+1,\qquad\Phi_{5}=X^{4}+X^{3}+X^{2}+X+1,\]
\[\Phi_{6}=X^{2}-X+1,\qquad\Phi_{7}=X^{6}+X^{5}+X^{4}+X^{3}+X^{2}+X+1,\]
\[\Phi_{8}=X^{4}+1,\qquad\Phi_{9}=X^{6}+X^{3}+1,\qquad\Phi_{10}=X^{4}-X^{3}+X^{ 2}-X+1.\]
We shall call a polynomial \(F\in\mathbb{Z}[X]\)**super-cyclotomic** if it is of the form \(X^{m}f_{1}f_{2}\cdots f_{k}\) where each \(f_{i}(X)\) is a cyclotomic polynomial. We know, thanks to Lemma 12, that if \(F\) is super-cyclotomic and \(\ell\) is a prime, then \(F(\zeta_{\ell^{n}})\in\mathcal{O}(\Omega_{n},\{v_{\ell}\})^{\times}\) for \(n\) sufficiently large. We wrote a short computer program that lists all super-cyclotomic polynomials of degree at most \(20\) and searches for ternary relations of the form \(F-G=kH\) with \(F\), \(G\), \(H\) super-cyclotomic, \(\gcd(F,G,H)=1\)
and \(k\) is a positive integer. Note that any such relation \(F-G=kH\) gives points \(\varepsilon_{n}=F(\zeta_{\ell^{n}})/H(\zeta_{\ell^{n}})\in(\mathbb{P}^{1}-\{0,k, \infty\})\big{(}\mathcal{O}(\Omega_{n},\{v_{\ell}\})\big{)}\), for \(n\) sufficiently large. We found the following ternary relations between super-cyclotomic polynomials.
\[\Phi_{2}(X)^{2}-\Phi_{3}(X)=X; \tag{8}\]
\[\Phi_{2}(X)^{2}-\Phi_{4}(X)=2X; \tag{9}\]
\[\Phi_{2}(X)^{2}-\Phi_{6}(X)=3X; \tag{10}\]
\[\Phi_{2}(X)^{2}-\Phi_{1}(X)^{2}=4X; \tag{11}\]
\[\Phi_{2}(X)^{4}-\Phi_{10}(X)=5X\Phi_{3}(X); \tag{12}\]
\[\Phi_{2}(X)^{2}\Phi_{3}(X)-\Phi_{1}(X)^{2}\Phi_{6}(X)=6X\Phi_{4}(X); \tag{13}\]
\[\Phi_{7}(X)-\Phi_{1}(X)^{6}=7X\Phi_{6}(X)^{2}; \tag{14}\]
\[\Phi_{2}(X)^{4}-\Phi_{1}(X)^{4}=8X\Phi_{4}(X); \tag{15}\]
\[\Phi_{2}(X)^{4}\Phi_{5}(X)-\Phi_{1}(X)^{4}\Phi_{10}(X)=10X\Phi_{4}(X)^{3}. \tag{16}\]
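The search for such relations is easy to reproduce in outline. The following Python/sympy fragment is a rough sketch of this kind of search; it is not the program used for the results above, and the degree bound is kept small here for speed (the search described above went up to degree 20).

```python
from sympy import symbols, Poly, cyclotomic_poly

X = symbols('X')
DEG = 4   # small illustrative bound; raising it recovers more of the relations above

# cyclotomic polynomials Phi_m of degree at most DEG
phis = [Poly(cyclotomic_poly(m, X), X) for m in range(1, 40)
        if Poly(cyclotomic_poly(m, X), X).degree() <= DEG]

def products(i, current, out):
    """All products of the phis (with repetition) of degree at most DEG."""
    out.add(current)
    for j in range(i, len(phis)):
        nxt = current * phis[j]
        if nxt.degree() <= DEG:
            products(j, nxt, out)

prods = set()
products(0, Poly(1, X), prods)

# super-cyclotomic polynomials: X^a times such a product
supers = [p * Poly(X**a, X) for p in prods for a in range(DEG + 1 - p.degree())]

for F in supers:
    for G in supers:
        D = F - G
        if D.is_zero or D.degree() < 1:
            continue
        for H in supers:
            if H.degree() != D.degree():
                continue
            k = D.LC()   # H is monic, so any multiplier must be the leading coefficient of D
            if k > 0 and k == int(k) and D == H * int(k):
                if F.gcd(G).gcd(H).degree() == 0:
                    print(F.as_expr(), "-", G.as_expr(), "=", int(k), "*", H.as_expr())
```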
From the identities (6) and (7) one easily sees that \(F(X^{k})\) is super-cyclotomic for any super-cyclotomic polynomial \(F\) and any positive integer \(k\), thus each of the nine identities above in fact yields an infinite family of identities. We pose the following open problems:
* Are there ternary linear relations between super-cyclotomic polynomials that are outside these nine families?
* Classify all ternary linear relations between super-cyclotomic polynomials.
**Lemma 14**.: _Let \(c:\Omega_{\ell}\to\Omega_{\ell}\) denote complex conjugation. Let \(n\geq 1\) and let \(\zeta=\zeta_{\ell^{n}}\) be an \(\ell^{n}\)-th root of \(1\). Let \(m\geq 1\) and suppose \(\ell^{n}\nmid m\). Then_
\[\frac{\Phi_{m}(\zeta)^{c}}{\Phi_{m}(\zeta)}\,=\,\begin{cases}\zeta^{-\varphi(m )}&m\geq 2\\ -\zeta^{-1}&m=1.\end{cases}\]
Proof.: Note that \(\zeta^{c}=\zeta^{-1}\). So
\[\frac{\Phi_{1}(\zeta)^{c}}{\Phi_{1}(\zeta)}=\frac{\zeta^{-1}-1}{\zeta-1}=- \zeta^{-1},\qquad\frac{\Phi_{2}(\zeta)^{c}}{\Phi_{2}(\zeta)}=\frac{\zeta^{-1} +1}{\zeta+1}=\zeta^{-1}.\]
Let \(m\geq 3\). The polynomial \(\Phi_{m}\) is monic of degree \(\varphi(m)\), and its roots are the primitive \(m\)-th roots of \(1\) which come in distinct pairs \(\eta\), \(\eta^{-1}\). Thus the trailing coefficient is \(1\). It follows that \(X^{\varphi(m)}\Phi_{m}(X^{-1})\) is monic and has the same roots as \(\Phi_{m}\), therefore
\[\Phi_{m}(X)=X^{\varphi(m)}\Phi_{m}(X^{-1}).\]
Hence
\[\frac{\Phi_{m}(\zeta)^{c}}{\Phi_{m}(\zeta)}=\frac{\Phi_{m}(\zeta^{-1})}{\Phi_ {m}(\zeta)}=\zeta^{-\varphi(m)}.\]
**Lemma 15**.: _Let \(S=\{v_{\ell}\}\). Let_
\[k\in\{1,2,3,4,5,6,7,8,10\}.\]
_Then \((\mathbb{P}^{1}-\{0,k,\infty\})\big{(}\mathcal{O}(\Omega_{\infty,\ell}^{+},S) \big{)}\) is infinite._
Proof.: The proof makes use of identities (8)-(16). We prove the lemma for \(k=10\) using the identity (16); the other cases are similar. Let \(n\geq 3\) and let \(\zeta=\zeta_{\ell^{n}}\). Let
\[\varepsilon=\frac{\Phi_{2}(\zeta)^{4}\Phi_{5}(\zeta)}{\zeta\Phi_{4}(\zeta)^{3}}, \qquad\delta=\frac{-\Phi_{1}(\zeta)^{4}\Phi_{10}(\zeta)}{\zeta\Phi_{4}(\zeta)^ {3}}.\]
By the identity, \(\varepsilon+\delta=10\). By Lemma 12 we know that \(\varepsilon\), \(\delta\) are \(S\)-units. A priori, \(\varepsilon\), \(\delta\) belong to \(\Omega_{\infty,\ell}\). However, an easy application of Lemma 14 shows that \(\varepsilon^{c}=\varepsilon\) and \(\delta^{c}=\delta\), so \(\varepsilon\), \(\delta\in\Omega_{\infty,\ell}^{+}\). It follows that \(\varepsilon\) is an \(\mathcal{O}(\Omega_{\infty,\ell}^{+},S)\)-point on \(\mathbb{P}^{1}-\{0,10,\infty\}\). This point depends on \(\zeta=\zeta_{\ell^{n}}\). Let us make sure that we really obtain infinitely many such points as we vary \(n\). Write
\[\varepsilon_{n}\;=\;\frac{\Phi_{2}(\zeta_{\ell^{n}})^{4}\Phi_{5}(\zeta_{\ell^ {n}})}{\zeta_{\ell^{n}}\Phi_{4}(\zeta_{\ell^{n}})^{3}}\;=\;\frac{(1-\zeta_{ \ell^{n}}^{2})^{7}(1-\zeta_{\ell^{n}}^{5})}{\zeta_{\ell^{n}}(1-\zeta_{\ell^{n} })^{5}(1-\zeta_{\ell^{n}}^{4})^{3}}\in V_{n}.\]
To show that we obtain infinitely many distinct \(\varepsilon_{n}\) it is enough to show that \(\varepsilon_{n}\notin V_{n-1}\) for \(n\) sufficiently large. This follows by an easy application of Lemma 9; to illustrate this let \(\ell=5\) and suppose \(\varepsilon_{n}\in V_{n-1}\). Note that \(1-\zeta_{5^{n}}^{5}\in V_{n-1}\). It follows that
\[(1-\zeta_{5^{n}})^{-5}(1-\zeta_{5^{n}}^{2})^{7}(1-\zeta_{5^{n}}^{4})^{-3}\; \in\;\langle\pm\zeta_{\ell^{n}},V_{n-1}\rangle.\]
Now in the product on the left the exponent of \(1-\zeta_{5^{n}}\) is \(-5\) whereas the exponent of \(1-\zeta_{5^{n}}^{1+5^{n-1}}\) is \(0\), contradicting Lemma 9. The proof is similar for \(\ell=2\), and for \(\ell\neq 2\), \(5\). It follows that we have infinitely many \(\mathcal{O}(\Omega_{\infty,\ell}^{+},S)\)-points on \(\mathbb{P}^{1}-\{0,10,\infty\}\).
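For completeness, the claim in the proof above that \(\varepsilon^{c}=\varepsilon\) can be checked directly from Lemma 14: since \(\Phi_{2}(\zeta^{-1})=\zeta^{-1}\Phi_{2}(\zeta)\), \(\Phi_{5}(\zeta^{-1})=\zeta^{-4}\Phi_{5}(\zeta)\) and \(\Phi_{4}(\zeta^{-1})=\zeta^{-2}\Phi_{4}(\zeta)\), we have
\[\varepsilon^{c}=\frac{\Phi_{2}(\zeta^{-1})^{4}\,\Phi_{5}(\zeta^{-1})}{\zeta^{-1}\Phi_{4}(\zeta^{-1})^{3}}=\frac{\zeta^{-8}\,\Phi_{2}(\zeta)^{4}\Phi_{5}(\zeta)}{\zeta^{-7}\,\Phi_{4}(\zeta)^{3}}=\frac{\Phi_{2}(\zeta)^{4}\Phi_{5}(\zeta)}{\zeta\,\Phi_{4}(\zeta)^{3}}=\varepsilon,\]
and the same computation applies to \(\delta\).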
**Proof of Theorem 2 for \(\ell=2\) and \(3\).** For \(\ell=2\), \(3\), we have \(\Omega_{\infty,\ell}^{+}=\mathbb{Q}_{\infty,\ell}\). Indeed, if \(\ell=2\) then \(\mathbb{Q}_{n,2}=\Omega_{n+2,2}^{+}\) and if \(\ell=3\) then \(\mathbb{Q}_{n,3}=\Omega_{n+1,3}^{+}\). Therefore Theorem 2 with \(\ell=2\) and \(3\) follows immediately from Lemma 15 for \(k\in\{1,2,3,4,5,6,7,8,10\}\).
Also, if \(\ell=2\), then the infinitely many solutions to \(\varepsilon+\delta=6\) yield infinitely many solutions to \(2\varepsilon+2\delta=12\) and \(4\varepsilon+4\delta=24\). And if \(\ell=3\), then the infinitely many solutions to \(\varepsilon+\delta=4\) yield infinitely many solutions to \(3\varepsilon+3\delta=12\), and similarly the infinitely many solutions to \(\varepsilon+\delta=8\) yield infinitely many solutions to \(3\varepsilon+3\delta=24\). This proves Theorem 2 for \(\ell=2\), \(3\) and \(k\in\{12,24\}\).
**Proof of Theorem 1 for \(\ell=2\).** Theorem 1 for \(\ell=2\) is simply a special case of Theorem 2.
## 4. The unit equation over \(\mathbb{Q}(\zeta_{\ell^{n}})^{+}\)
For roots of unity \(\alpha\), \(\beta\), we let
\[E(\alpha,\beta) =\frac{\alpha^{2}+\alpha^{-2}}{\left(\alpha\beta^{-1}+\alpha^{-1} \beta\right)\left(\alpha\beta+\alpha^{-1}\beta^{-1}\right)}=\frac{\Phi_{8}( \alpha)}{\Phi_{4}(\alpha\beta)\Phi_{4}(\alpha/\beta)},\] \[F(\alpha,\beta) =\frac{\beta^{2}+\beta^{-2}}{\left(\alpha\beta^{-1}+\alpha^{-1} \beta\right)\left(\alpha\beta+\alpha^{-1}\beta^{-1}\right)}=\frac{\Phi_{8}( \beta)}{\Phi_{4}(\alpha\beta)\Phi_{4}(\beta/\alpha)}.\]
We easily check that
\[E(\alpha,\beta)+F(\alpha,\beta)=1. \tag{17}\]
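Indeed, expanding the common denominator gives
\[\big{(}\alpha\beta^{-1}+\alpha^{-1}\beta\big{)}\big{(}\alpha\beta+\alpha^{-1}\beta^{-1}\big{)}=\alpha^{2}+\alpha^{-2}+\beta^{2}+\beta^{-2},\]
which is precisely the sum of the two numerators.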
**Lemma 16**.: _Suppose \(\ell\) is odd and \(n\geq 1\). Let \(\zeta=\zeta_{\ell^{n}}\). Let \(i\), \(j\) be integers satisfying \(i\), \(j\), \(i+j\), \(i-j\not\equiv 0\pmod{\ell^{n}}\). Then \(E(\zeta^{i},\zeta^{j})\), \(F(\zeta^{i},\zeta^{j})\in\mathcal{O}(\Omega_{n,\ell}^{+})^{\times}\), and satisfy the unit equation_
\[\varepsilon+\delta=1,\qquad\varepsilon,\;\delta\in\mathcal{O}(\Omega_{n,\ell}^{+ })^{\times}. \tag{18}\]
_Moreover,_
\[\upsilon_{\ell}(E(\zeta^{i},\zeta^{j})-F(\zeta^{i},\zeta^{j}))=\frac{\ell^{\text{ ord}_{\ell}(i+j)}+\ell^{\text{ord}_{\ell}(i-j)}}{\ell^{n-1}(\ell-1)} \tag{19}\]
Proof.: It is clear that \(E(\zeta^{i},\zeta^{j})\), \(F(\zeta^{i},\zeta^{j})\) are fixed by complex conjugation \(\zeta\mapsto\zeta^{-1}\) and so belong to \(\Omega_{n,\ell}^{+}\). By Lemma 12, \(E(\zeta^{i},\zeta^{j})\) and \(F(\zeta^{i},\zeta^{j})\) are units. It remains to check (19). We observe
\[E(\zeta^{i},\zeta^{j})-F(\zeta^{i},\zeta^{j})=\frac{(\zeta^{i-j}-\zeta^{j-i})( \zeta^{i+j}-\zeta^{-i-j})}{(\zeta^{i-j}+\zeta^{j-i})(\zeta^{i+j}+\zeta^{-i-j}) }=\frac{(\zeta^{2(i-j)}-1)(\zeta^{2(i+j)}-1)}{\Phi_{4}(\zeta^{i-j})\Phi_{4}( \zeta^{i+j})}.\]
The denominator is a unit by Lemma 12. Now (19) follows from Lemma 7.
**Lemma 17**.: _Let \(\ell\) be an odd prime. Then \((\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}(\Omega_{\infty,\ell}^{+}))\) is infinite._
Proof.: We deduce this from Lemma 16. Let us take for example \(i=2\) and \(j=1\). Let \(n\geq 2\) and let
\[\varepsilon_{n}=E(\zeta_{\ell^{n}}^{2},\zeta_{\ell^{n}}),\qquad\delta_{n}=F( \zeta_{\ell^{n}}^{2},\zeta_{\ell^{n}}).\]
By Lemma 16, \(\varepsilon_{n}\), \(\delta_{n}\in\mathcal{O}(\Omega_{\infty,\ell}^{+})^{\times}\) and satisfy \(\varepsilon_{n}+\delta_{n}=1\). Thus \(\varepsilon_{n}\in(\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}(\Omega_{\infty, \ell}^{+}))\). Moreover,
\[\upsilon_{\ell}(2\varepsilon_{n}-1)=\upsilon_{\ell}(\varepsilon_{n}-\delta_{n })=\begin{cases}\frac{2}{\ell^{n-1}(\ell-1)}&\ell>3\\ \frac{2}{3^{n-1}}&\ell=3,\end{cases}\]
by (19). Thus \(\varepsilon_{n}\neq\varepsilon_{m}\) whenever \(n\neq m\). Hence \((\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}(\Omega_{\infty,\ell}^{+}))\) is infinite.
**Remark**.: Lemma 17 applies only for \(\ell\) odd; for \(\ell=2\) it is easy to show that the statement is false. Indeed, let \(\eta_{n}\) be the prime ideal of \(\mathcal{O}(\Omega_{n,2}^{+})\) above \(2\). Then \(\mathcal{O}(\Omega_{n,2}^{+})/\eta_{n}\cong\mathbb{F}_{2}\), and a solution to \(\varepsilon+\delta=1\) with \(\varepsilon\), \(\delta\in\mathcal{O}(\Omega_{n,2}^{+})^{\times}\) reduced modulo \(\eta_{n}\) gives \(1+1\equiv 1\pmod{2}\), which is impossible.
**Proof of Theorem 1 for \(\ell=3\).** We recall that \(\mathbb{Q}_{\infty,3}=\Omega_{\infty,3}^{+}\). Therefore Theorem 1 for \(\ell=3\) follows immediately from Lemma 17.
## 5. The \(S\)-unit equation over \(\mathbb{Q}_{\infty,5}\)
The purpose of this section is to prove Theorems 1 and 2 for \(\ell=5\). These in fact follow immediately from the following lemma.
**Lemma 18**.: _Let \(\upsilon_{5}\) be the unique prime of \(\mathbb{Q}_{\infty,5}\) above \(5\), and write \(S=\{\upsilon_{5}\}\). Then_
1. \((\mathbb{P}^{1}-\{0,k,\infty\})(\mathcal{O}(\mathbb{Q}_{\infty,5},S))\) _is infinite for_ \(k=1\)_,_ \(4\)_;_
2. \((\mathbb{P}^{1}-\{0,2,\infty\})(\mathcal{O}(\mathbb{Q}_{\infty,5}))\) _is infinite._
Proof.: Let \(a\in\mathbb{Z}_{5}^{\times}\) be the element satisfying
\[a^{2}=-1,\qquad a\equiv 2\pmod{5};\]
such an element exists and is unique by Hensel's Lemma. Let \(\sigma:\Omega_{\infty,5}\to\Omega_{\infty,5}\) be the field automorphism satisfying
\[\sigma(\zeta_{5^{n}})\;=\;\zeta_{5^{n}}^{a}\]
for \(n\geq 1\). Note that \(\sigma\) is an automorphism of order \(4\), and fixes a subfield of \(\Omega_{\infty,5}\) of index \(4\). This subfield is precisely \(\mathbb{Q}_{\infty,5}\).
Let
\[F=(x_{1}x_{2}^{2}+x_{3}x_{4}^{2})(x_{1}^{2}x_{4}+x_{2}x_{3}^{2}),\]
\[G=(x_{1}^{2}x_{2}+x_{3}^{2}x_{4})(x_{1}x_{4}^{2}+x_{2}^{2}x_{3}),\]
\[H=(x_{1}-x_{3})(x_{2}-x_{4})(x_{1}x_{2}-x_{3}x_{4})(x_{1}x_{4}-x_{2}x_{3}).\]
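The relation \(F-G=H\) asserted in the next paragraph is a polynomial identity that can be verified mechanically, e.g. with the following SymPy snippet (an illustrative check only, not part of the proof):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
F = (x1*x2**2 + x3*x4**2) * (x1**2*x4 + x2*x3**2)
G = (x1**2*x2 + x3**2*x4) * (x1*x4**2 + x2**2*x3)
H = (x1 - x3) * (x2 - x4) * (x1*x2 - x3*x4) * (x1*x4 - x2*x3)
assert sp.expand(F - G - H) == 0
```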
Observe \(F\), \(G\), \(H\) are invariant under the \(4\)-cycle \((x_{1},x_{2},x_{3},x_{4})\). One can check that \(F-G=H\). Let \(n\geq 2\) and write \(\zeta=\zeta_{5^{n}}\). Let
\[\varepsilon_{n}=\frac{F(\zeta,\zeta^{a},\zeta^{a^{2}},\zeta^{a^{3}})}{H(\zeta, \zeta^{a},\zeta^{a^{2}},\zeta^{a^{3}})},\qquad\delta_{n}=-\frac{G(\zeta,\zeta^{ a},\zeta^{a^{2}},\zeta^{a^{3}})}{H(\zeta,\zeta^{a},\zeta^{a^{2}},\zeta^{a^{3}})}.\]
From the identity \(F-G=H\) we have \(\varepsilon_{n}+\delta_{n}=1\). We shall show that \(\varepsilon_{n}\), \(\delta_{n}\in\mathcal{O}(\mathbb{Q}_{\infty,5},S)^{\times}\).
Since \(\sigma\) cyclically permutes \(\zeta,\zeta^{a},\zeta^{-1},\zeta^{-a}\) we conclude that \(f(\zeta,\zeta^{a},\zeta^{-1},\zeta^{-a})\in\mathbb{Q}_{\infty,5}\) for \(f=F\), \(G\), \(H\). Thus \(\varepsilon_{n}\), \(\delta_{n}\in\mathbb{Q}_{\infty,5}\). Moreover,
\[F = x_{2}x_{3}^{3}x_{4}^{2}\cdot\Phi_{2}(x_{1}x_{2}^{2}/x_{3}x_{4}^{ 2})\Phi_{2}(x_{1}^{2}x_{4}/x_{2}x_{3}^{2}),\] \[G = x_{2}^{2}x_{3}^{3}x_{4}\cdot\Phi_{2}(x_{1}^{2}x_{2}/x_{3}^{2}x_{ 4})\Phi_{2}(x_{1}x_{4}^{2}/x_{2}^{2}x_{3}),\] \[H = x_{2}x_{3}^{3}x_{4}^{2}\cdot\Phi_{1}(x_{1}/x_{3})\cdot\Phi_{1}(x _{2}/x_{4})\cdot\Phi_{1}(x_{1}x_{2}/x_{3}x_{4})\cdot\Phi_{1}(x_{1}x_{4}/x_{2} x_{3}).\]
Hence
\[\varepsilon_{n} = \frac{\Phi_{2}(\zeta^{2+4a})\Phi_{2}(\zeta^{4-2a})}{\Phi_{1}( \zeta^{2})\Phi_{1}(\zeta^{2a})\Phi_{1}(\zeta^{2+2a})\Phi_{1}(\zeta^{2-2a})}\] \[= \frac{(1-\zeta^{4+8a})(1-\zeta^{8-4a})}{(1-\zeta^{2})(1-\zeta^{2a })(1-\zeta^{2+2a})(1-\zeta^{2-2a})(1-\zeta^{2+4a})(1-\zeta^{4-2a})}.\]
and
\[\delta_{n} = \frac{-\zeta^{2a}\Phi_{2}(\zeta^{4+2a})\Phi_{2}(\zeta^{2-4a})}{ \Phi_{1}(\zeta^{2})\Phi_{1}(\zeta^{2a})\Phi_{1}(\zeta^{2+2a})\Phi_{1}(\zeta^{2 -2a})}\] \[= \frac{-\zeta^{2a}(1-\zeta^{8+4a})(1-\zeta^{4-8a})}{(1-\zeta^{2}) (1-\zeta^{2a})\big{(}1-\zeta^{2+2a}\big{)}(1-\zeta^{2-2a})(1-\zeta^{4+2a}) \big{(}1-\zeta^{4-2a}\big{)}}.\]
We checked, using the fact that \(a\equiv 7\pmod{25}\), that the exponents of \(\zeta\) in the above expressions for \(\varepsilon_{n}\) and \(\delta_{n}\) all have \(5\)-adic valuation \(0\) or \(1\). It follows from this that \(\varepsilon_{n}\), \(\delta_{n}\in V_{n}\subseteq\mathcal{O}(\Omega_{n},S)^{\times}\) for \(n\geq 2\). Hence \(\varepsilon_{n}\), \(\delta_{n}\in\mathbb{Q}_{\infty,5}\cap\mathcal{O}(\Omega_{n},S)^{\times}= \mathcal{O}(\mathbb{Q}_{\infty,5},S)^{\times}\) for \(n\geq 2\). To complete the proof of the lemma for \(k=1\) it is enough to show that \(\varepsilon_{n}\neq\varepsilon_{m}\) for \(n>m\), and for this it is enough to show that \(\varepsilon_{n}\notin(\pm\zeta_{5^{n}},V_{n-1})\) for \(n\geq 2\). Since \(a\equiv 7\pmod{25}\) we see that
\[4+8a\equiv 10,\qquad 8-4a\equiv 5,\qquad 2+4a\equiv 5,\qquad 4-2a\equiv 15 \pmod{25}.\]
Thus the factors
\[1-\zeta^{4+8a},\qquad 1-\zeta^{8-4a},\qquad 1-\zeta^{2+4a},\qquad 1-\zeta^{4-2a}\]
all belong to \(V_{n-1}\). Hence it is enough to show that
\[(1-\zeta^{2})(1-\zeta^{2a})(1-\zeta^{2+2a})(1-\zeta^{2-2a}) \tag{20}\]
does not belong to \(\langle\pm\zeta_{5^{n}},V_{n-1}\rangle\). However, the exponents \(2\), \(2a\), \(2+2a\), \(2-2a\) are respectively \(2\), \(4\), \(1\), \(3\) modulo \(5\), and hence certainly distinct modulo \(5^{n-1}\). It follows from Lemma 9 that the product (20) does not belong to \(\langle\pm\zeta_{5^{n}},V_{n-1}\rangle\) completing the proof for \(k=1\).
The proof for \(k=2\) is similar, and is based on the identity \(F-G=2H\) where
\[F\;=\;(x_{1}^{2}+x_{1}x_{3}+x_{3}^{2})(x_{2}^{2}+x_{2}x_{4}+x_{4}^{2}) \;=\;x_{3}^{2}x_{4}^{2}\cdot\Phi_{3}(x_{1}/x_{3})\cdot\Phi_{3}(x_{2}/x_{4}),\] \[G\;=\;(x_{1}^{2}-x_{1}x_{3}+x_{3}^{2})(x_{2}^{2}-x_{2}x_{4}+x_{4}^ {2}) \;=\;x_{3}^{2}x_{4}^{2}\cdot\Phi_{6}(x_{1}/x_{3})\cdot\Phi_{6}(x_{2}/x_{4}),\] \[H\;=\;(x_{1}x_{4}+x_{2}x_{3})(x_{1}x_{2}+x_{3}x_{4})\;=\;x_{2}x_{3 }^{2}x_{4}\cdot\Phi_{2}(x_{1}x_{4}/x_{2}x_{3})\cdot\Phi_{2}(x_{1}x_{2}/x_{3}x_ {4}),\]
and likewise the proof for \(k=4\) is based on the identity \(F-G=4H\) where
\[F\;=\;(x_{1}+x_{3})^{2}(x_{2}+x_{4})^{2}=x_{3}^{2}x_{4}^{2}\cdot \Phi_{2}(x_{1}/x_{3})^{2}\Phi_{2}(x_{2}/x_{4})^{2},\] \[G\;=\;(x_{1}-x_{3})^{2}(x_{2}-x_{4})^{2}=x_{3}^{2}x_{4}^{2}\cdot \Phi_{1}(x_{1}/x_{3})^{2}\Phi_{1}(x_{2}/x_{4})^{2},\] \[H\;=\;(x_{1}x_{2}+x_{3}x_{4})(x_{1}x_{4}+x_{2}x_{3})=x_{2}x_{3}^{2 }x_{4}\cdot\Phi_{2}(x_{1}x_{2}/x_{3}x_{4})\Phi_{2}(x_{1}x_{4}/x_{2}x_{3}).\]
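The identities \(F-G=2H\) and \(F-G=4H\) underlying the cases \(k=2\) and \(k=4\) can be checked in the same mechanical way (again an illustrative SymPy check, not part of the proof):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')

# case k = 2:  F - G = 2*H
F2 = (x1**2 + x1*x3 + x3**2) * (x2**2 + x2*x4 + x4**2)
G2 = (x1**2 - x1*x3 + x3**2) * (x2**2 - x2*x4 + x4**2)
H2 = (x1*x4 + x2*x3) * (x1*x2 + x3*x4)
assert sp.expand(F2 - G2 - 2*H2) == 0

# case k = 4:  F - G = 4*H
F4 = (x1 + x3)**2 * (x2 + x4)**2
G4 = (x1 - x3)**2 * (x2 - x4)**2
H4 = (x1*x2 + x3*x4) * (x1*x4 + x2*x3)
assert sp.expand(F4 - G4 - 4*H4) == 0
```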
**Remark**.: It is appropriate to remark on how the identities in the above proof were found. Write
\[\Psi_{m}(X,Y)=Y^{\varphi(m)}\Phi_{m}(X/Y)\]
for the homogenization of the \(m\)-th cyclotomic polynomial. Now consider
\[f(x_{1},x_{2},x_{3},x_{4})=\Psi_{m}(u,v)\]
where \(u\), \(v\) are monomials in variables \(x_{1},x_{2},x_{3},x_{4}\). Let \(\ell\) be a prime. We see that evaluating any such \(f\) at \((\zeta^{\alpha},\zeta^{\beta},\zeta^{\gamma},\zeta^{\delta})\) gives an element of \(V_{n}\) (provided that it does not vanish). We considered products of such \(f\) of total degree up to \(20\) and picked out ones that are invariant under the \(4\)-cycle \((x_{1},x_{2},x_{3},x_{4})\), and searched for ternary relations between them. This yielded the identities used in the above proof.
Proof of Theorems 1 and 2 for \(\ell=5\).: Theorems 1 and 2 for \(\ell=5\) follow immediately from Lemma 18.
## 6. The \(S\)-unit equation over \(\mathbb{Q}_{\infty,7}\)
**Lemma 19**.: _Let \(\upsilon_{7}\) be the unique prime of \(\mathbb{Q}_{\infty,7}\) above \(7\), and write \(S=\{\upsilon_{7}\}\). Then \((\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}(\mathbb{Q}_{\infty,7},S))\) is infinite._
Proof.: In view of the proof of Lemma 18, it would be natural to seek polynomials \(F\), \(G\), \(H\) in variables \(x_{1},\ldots,x_{6}\) satisfying the following properties
* \(F\pm G=H\);
* \(F\), \(G\), \(H\) are invariant under the \(6\)-cycle \((x_{1},x_{2},\ldots,x_{6})\);
* each is a product of polynomials \[f(x_{1},x_{2},\ldots,x_{6})=\Psi_{m}(u,v)\] with \(u\), \(v\) monomials in \(x_{1},\ldots,x_{6}\).
Unfortunately, an extensive search has failed to produce any such triple of polynomials. We therefore need to proceed a little differently.
Let \(a\in\mathbb{Z}_{7}\) be the element satisfying
\[a^{2}+a+1=0,\qquad a\equiv 2\pmod{7};\]
such an element exists and is unique by Hensel's Lemma. Let \(\sigma\), \(c:\Omega_{\infty,7}\to\Omega_{\infty,7}\) be the field automorphisms satisfying
\[\sigma(\zeta_{7^{n}})\;=\;\zeta_{7^{n}}^{a},\qquad c\big{(}\zeta_{7^{n}}\big{)} \;=\;\zeta_{7^{n}}^{-1}\]
for \(n\geq 1\). Then \(\mathbb{Q}_{\infty,7}\) is the field fixed by the subgroup of \(\mathrm{Gal}(\Omega_{\infty,7}/\mathbb{Q})\) generated by \(\sigma\) and \(c\). We work with polynomials in variables \(x_{1}\), \(x_{2}\), \(x_{3}\). Let
\[F =\,(x_{1}x_{2}^{2}+x_{3}^{3})(x_{2}x_{3}^{2}+x_{1}^{3})(x_{3}x_{1 }^{2}+x_{2}^{3})\] \[G =\,(x_{1}-x_{2})(x_{2}-x_{3})(x_{3}-x_{1})(x_{1}x_{2}-x_{3}^{2})( x_{2}x_{3}-x_{1}^{2})(x_{3}x_{1}-x_{2}^{2})\] \[H =\,(x_{1}^{2}x_{2}+x_{3}^{3})(x_{2}^{2}x_{3}+x_{1}^{3})(x_{3}^{2} x_{1}+x_{2}^{3}).\]
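The identity \(F-G=H\) stated in the next sentence can again be verified symbolically (an illustrative SymPy check only):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
F = (x1*x2**2 + x3**3) * (x2*x3**2 + x1**3) * (x3*x1**2 + x2**3)
G = (x1 - x2) * (x2 - x3) * (x3 - x1) * (x1*x2 - x3**2) * (x2*x3 - x1**2) * (x3*x1 - x2**2)
H = (x1**2*x2 + x3**3) * (x2**2*x3 + x1**3) * (x3**2*x1 + x2**3)
assert sp.expand(F - G - H) == 0
```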
These satisfy the identity \(F-G=H\). Moreover, they are invariant under the \(3\)-cycle \((x_{1},x_{2},x_{3})\) and all the factors are of the form \(\Psi_{m}(u,v)\) where \(m=1\) or \(2\), and where \(u\), \(v\) are suitable monomials in \(x_{1}\), \(x_{2}\), \(x_{3}\). Evaluating any of \(F\), \(G\), \(H\) at \((\zeta,\zeta^{a},\zeta^{a^{2}})\) yields an \(S\)-unit belonging to \(\Omega_{n,7}^{(\sigma)}\). Now we let
\[F^{\prime}=\frac{F(x_{1}^{2},x_{2}^{2},x_{3}^{2})}{x_{1}^{6}x_{2}^{6}x_{3}^{6} },\qquad G^{\prime}=\frac{G(x_{1}^{2},x_{2}^{2},x_{3}^{2})}{x_{1}^{6}x_{2}^{6}x _{3}^{6}},\qquad H^{\prime}=\frac{H(x_{1}^{2},x_{2}^{2},x_{3}^{2})}{x_{1}^{6}x _{2}^{6}x_{3}^{6}}.\]
Observe that the rational functions \(F^{\prime}\), \(G^{\prime}\), \(H^{\prime}\) satisfy \(F^{\prime}-G^{\prime}=H^{\prime}\) and are moreover invariant under the \(3\)-cycle \((x_{1},x_{2},x_{3})\). Moreover, \(F^{\prime}\), \(G^{\prime}\), \(H^{\prime}\) evaluated at \((\zeta,\zeta^{a},\zeta^{a^{2}})\) yield \(S\)-units belonging to \(\Omega_{n,7}^{(\sigma)}\). We need to check that these in fact belong to \(\mathbb{Q}_{n-1,7}=\Omega_{n,7}^{(\sigma,c)}\) and so we need to check that these expressions are invariant under \(c\). This follows immediately on observing that \(F^{\prime}\), \(G^{\prime}\), \(H^{\prime}\) may be rewritten as
\[F^{\prime} =\,\left(\frac{x_{1}x_{2}^{2}}{x_{3}^{3}}+\frac{x_{3}^{3}}{x_{1}x_{2}^{2}}\right)\left(\frac{x_{2}x_{3}^{2}}{x_{1}^{3}}+\frac{x_{1}^{3}}{x_{2}x_{3}^{2}}\right)\left(\frac{x_{3}x_{1}^{2}}{x_{2}^{3}}+\frac{x_{2}^{3}}{x_{3}x_{1}^{2}}\right)\] \[G^{\prime} =\,\left(\frac{x_{1}}{x_{2}}-\frac{x_{2}}{x_{1}}\right)\left(\frac{x_{2}}{x_{3}}-\frac{x_{3}}{x_{2}}\right)\left(\frac{x_{3}}{x_{1}}-\frac{x_{1}}{x_{3}}\right)\left(\frac{x_{1}x_{2}}{x_{3}^{2}}-\frac{x_{3}^{2}}{x_{1}x_{2}}\right)\left(\frac{x_{2}x_{3}}{x_{1}^{2}}-\frac{x_{1}^{2}}{x_{2}x_{3}}\right)\left(\frac{x_{3}x_{1}}{x_{2}^{2}}-\frac{x_{2}^{2}}{x_{3}x_{1}}\right)\] \[H^{\prime} =\,\left(\frac{x_{1}^{2}x_{2}}{x_{3}^{3}}+\frac{x_{3}^{3}}{x_{1}^{2}x_{2}}\right)\left(\frac{x_{2}^{2}x_{3}}{x_{1}^{3}}+\frac{x_{1}^{3}}{x_{2}^{2}x_{3}}\right)\left(\frac{x_{3}^{2}x_{1}}{x_{2}^{3}}+\frac{x_{2}^{3}}{x_{3}^{2}x_{1}}\right).\]
Thus \(F^{\prime}\), \(G^{\prime}\), \(H^{\prime}\) evaluated at \((\zeta,\zeta^{a},\zeta^{a^{2}})\) yield elements of \(\mathcal{O}(\mathbb{Q}_{\infty,7},S)^{\times}\). We write
\[\varepsilon_{n}=\frac{F^{\prime}(\zeta,\zeta^{a},\zeta^{a^{2}})}{H^{\prime}( \zeta,\zeta^{a},\zeta^{a^{2}})},\qquad\delta_{n}=-\frac{G^{\prime}(\zeta,\zeta ^{a},\zeta^{a^{2}})}{H^{\prime}(\zeta,\zeta^{a},\zeta^{a^{2}})}.\]
Then \(\varepsilon_{n}\), \(\delta_{n}\) belong to \(\mathcal{O}(\mathbb{Q}_{\infty,7},S)^{\times}\) and satisfy \(\varepsilon_{n}+\delta_{n}=1\). In fact it is straightforward to check that \(\varepsilon_{n}\notin(\pm\zeta_{7^{n}},V_{n-1})\), from which it follows that \(\varepsilon_{n}\neq\varepsilon_{m}\) for \(n>m\). The details are similar to those of the proof of Lemma 18 and we omit them.
## 7. Isogeny classes of elliptic curves over \(\mathbb{Q}_{\infty,\ell}\)
The purpose of this section is to prove Theorem 4. Since isogenous elliptic curves share the same set of bad primes, the corresponding theorem over number fields is an immediate consequence of Shafarevich's theorem. However, as we intend to show in the following section, Shafarevich's theorem does not generalize to elliptic curves over \(\mathbb{Q}_{\infty,\ell}\). We shall instead rely on a theorem of Kato to control \(\mathbb{Q}_{\infty,\ell}\)-points on certain modular Jacobians.
Our first lemma shows that there are only finitely many primes that can divide the degree of a cyclic isogeny of \(E\).
**Lemma 20**.: _Let \(\ell\) be a prime and let \(E/\mathbb{Q}_{\infty,\ell}\) be an elliptic curve without potential complex multiplication. Then there is a constant \(B\), depending on \(E\), such that for primes \(p\geq B\), the elliptic curve \(E\) has no \(p\)-isogenies defined over \(\mathbb{Q}_{\infty,\ell}\)._
Proof.: Let \(n\) be the least positive integer such that \(E\) admits a model defined over \(\mathbb{Q}_{n,\ell}\). By a famous theorem of Serre [14], there is a constant \(B\), depending on \(E\), such that for \(p\geq B\) the mod \(p\) representation
\[\overline{\rho}_{E,p}\,:\,\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}_{n, \ell})\to\mathrm{GL}_{2}(\mathbb{F}_{p})\]
is surjective. We may suppose that \(B\geq 5\). Thus, for \(p\geq B\), the Galois group \(\mathrm{Gal}(\mathbb{Q}_{n,\ell}(E[p])/\mathbb{Q}_{n,\ell})\) is isomorphic to \(\mathrm{GL}_{2}(\mathbb{F}_{p})\), which is non-solvable. We will show that \(E\) has no \(p\)-isogeny defined over \(\mathbb{Q}_{\infty,\ell}\). Suppose otherwise. Then such an isogeny is in fact defined over \(\mathbb{Q}_{m,\ell}\) for some \(m\geq n\). It follows that the extension \(\mathbb{Q}_{m,\ell}(E[p])/\mathbb{Q}_{m,\ell}\) has Galois group isomorphic to a subgroup of a Borel subgroup of \(\mathrm{GL}_{2}(\mathbb{F}_{p})\), which is solvable. As the extension \(\mathbb{Q}_{m,\ell}/\mathbb{Q}_{n,\ell}\) is cyclic, we conclude that \(\mathbb{Q}_{m,\ell}(E[p])/\mathbb{Q}_{n,\ell}\) is solvable. However, this contains the non-solvable subextension \(\mathbb{Q}_{n,\ell}(E[p])/\mathbb{Q}_{n,\ell}\), giving a contradiction.
We shall make use of the following theorem of Kato [9, Theorem 14.4] building on work of Rohrlich [11].
**Theorem 21** (Kato).: _Let \(\ell\) be a prime. Let \(A\) be an abelian variety defined over \(\mathbb{Q}\) and admitting a surjective map \(J_{1}(N)\to A\) for some \(N\geq 1\). Then \(A(\mathbb{Q}_{\infty,\ell})\) is finitely generated._
**Lemma 22**.: _Let \(p\), \(\ell\) be primes. Let \(E\) be an elliptic curve defined over \(\mathbb{Q}_{\infty,\ell}\) without potential complex multiplication. Then, for \(m\) sufficiently large, \(E\) has no \(p^{m}\)-isogenies defined over \(\mathbb{Q}_{\infty,\ell}\)._
Proof.: Let \(r\) be the least positive integer such that the modular curve \(X=X_{0}(p^{r})\) has genus at least \(2\), and write \(J=J_{0}(p^{r})\) for the corresponding modular Jacobian. It follows from Kato's theorem that \(J(\mathbb{Q}_{\infty,\ell})\) is finitely generated, and therefore that \(J(\mathbb{Q}_{\infty,\ell})=J(\mathbb{Q}_{n,\ell})\) for some \(n\geq 1\). Consider the Abel-Jacobi map
\[X\hookrightarrow J,\qquad P\mapsto[P-\infty]\]
where \(\infty\in X(\mathbb{Q})\) denotes the infinity cusp. It follows from this embedding that \(X(\mathbb{Q}_{\infty,\ell})=X(\mathbb{Q}_{n,\ell})\). By Faltings' theorem, this set is finite.
Let \(k=\#X(\mathbb{Q}_{\infty,\ell})\) and let \(s=kr\). To prove the lemma we in fact show that \(E\) has no cyclic isogenies of degree \(p^{s}\) defined over \(\mathbb{Q}_{\infty,\ell}\). Suppose otherwise, and let \(\psi:E\to E^{\prime}\) be a cyclic isogeny of degree \(p^{s}\) defined over \(\mathbb{Q}_{\infty,\ell}\). Then, we may factor \(\psi\) into a sequence of cyclic isogenies defined over \(\mathbb{Q}_{\infty,\ell}\)
\[E=E_{0}\ \stackrel{{\psi_{1}}}{{\longrightarrow}}\ E_{1}\ \stackrel{{\psi_{2}}}{{\longrightarrow}}\ E_{2}\,...\stackrel{{ \psi_{k}}}{{\longrightarrow}}\ E_{k}=E^{\prime}\]
where each \(\psi_{i}\) is cyclic of degree \(p^{r}\). Note that \(E_{i}\) and \(E_{j}\) are non-isomorphic over \(\overline{\mathbb{Q}}\) for \(i\neq j\); indeed they are related by a cyclic isogeny and \(E\) does not have potential complex multiplication. Thus the elliptic curves \(E_{0},E_{1},\ldots,E_{k}\) support \(k+1\) distinct \(\mathbb{Q}_{\infty,\ell}\)-points on \(X=X_{0}(p^{r})\). This contradicts the fact that \(\#X(\mathbb{Q}_{\infty,\ell})=k\).
**Remark**.: A famous theorem of Serre [13, Section 2.1] asserts that the \(p\)-adic Tate module of a non-CM elliptic curve defined over a number field is irreducible. It is in fact possible to deduce Lemma 22 from Serre's theorem for \(\ell\neq p\), but we have been unable to do this for \(\ell=p\).
Proof of Theorem 4.: Let \(E^{\prime}\) belong to the \(\mathbb{Q}_{\infty,\ell}\)-isogeny class of \(E\). Let \(\psi:E\to E^{\prime}\) be an isogeny defined over \(\mathbb{Q}_{\infty,\ell}\). This has kernel of the form \(\mathbb{Z}/a\times\mathbb{Z}/ab\) where \(a\), \(b\) are positive integers, and so it can be factored into a composition
\[E\to E/E[a]\cong E\to E^{\prime}\]
where the final morphism is cyclic of degree \(b\). Thus to prove the proposition, it is enough to show that \(E\) has finitely many cyclic isogenies defined over \(\mathbb{Q}_{\infty,\ell}\). The degree of any such isogeny is divisible only by primes \(p<B\), where \(B\) is as in Lemma 20. Also, for any \(p<B\), we know that the exponent of \(p\) in the degree of a cyclic isogeny \(E\to E^{\prime}\) is bounded, by Lemma 22. Thus there are finitely many cyclic isogenies of \(E\) defined over \(\mathbb{Q}_{\infty,\ell}\).
## 8. From \(S\)-unit equations to elliptic curves
The aim of this section is to prove Theorem 3. We start by recalling a few facts about Legendre elliptic curves (Proposition III.1.7 of [17] and its proof). Let \(K\) be a field of characteristic \(\neq 2\) and let \(\lambda\in(\mathbb{P}^{1}-\{0,1,\infty\})(K)\). Associated to \(\lambda\) is the Legendre elliptic curve
\[E_{\lambda}\::\:Y^{2}=X(X-1)(X-\lambda).\]
This model respectively has discriminant and \(j\)-invariant
\[\Delta=16\lambda^{2}(1-\lambda)^{2},\qquad j=\frac{256(\lambda^{2}-\lambda+1)^{3}}{\lambda^{2}(1-\lambda)^{2}}. \tag{21}\]
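The formulae (21) follow from the standard Weierstrass quantities; the following SymPy computation (illustrative only, with variable names chosen for readability) reproduces them:

```python
import sympy as sp

lam = sp.symbols('lambda')
# y^2 = x(x-1)(x-lambda) = x^3 + a2*x^2 + a4*x + a6, with a1 = a3 = 0
a2, a4, a6 = -(1 + lam), lam, sp.Integer(0)
b2, b4, b6 = 4*a2, 2*a4, 4*a6
b8 = 4*a2*a6 - a4**2
c4 = b2**2 - 24*b4
Delta = -b2**2*b8 - 8*b4**3 - 27*b6**2 + 9*b2*b4*b6
print(sp.factor(Delta))          # 16*lambda**2*(lambda - 1)**2
print(sp.factor(c4**3 / Delta))  # 256*(lambda**2 - lambda + 1)**3/(lambda**2*(lambda - 1)**2)
```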
Moreover, for \(\lambda\), \(\mu\in(\mathbb{P}^{1}-\{0,1,\infty\})(K)\), the Legendre elliptic curves \(E_{\lambda}\) and \(E_{\mu}\) are isomorphic over \(K\) (or over \(\overline{K}\)) if and only if
\[\mu\:\in\:\left\{\lambda,\,\frac{1}{\lambda},\,1-\lambda,\,\frac{1}{1-\lambda },\,\frac{\lambda}{\lambda-1},\,\frac{\lambda-1}{\lambda}\right\}.\]
Now let \(K\) be a number field and \(S\) a finite set of non-archimedean places. We let \(S^{\prime}\) be the set of non-archimedean places which are either in \(S\) or above \(2\). We let \(\lambda\in(\mathbb{P}^{1}-\{0,1,\infty\})(\mathcal{O}(K,S))\). Then \(\lambda\), \(1-\lambda\in\mathcal{O}(K,S)^{\times}\). It follows from the expression for the discriminant that \(E_{\lambda}\) has good reduction away from \(S^{\prime}\).
**Proof of Theorem 3.** Let \(\ell=2\), \(3\), \(5\) or \(7\). Let \(S\) be given by (1) and let \(S^{\prime}=S\cup\{\upsilon_{2}\}\) as in the statement of Theorem 3. In proving Theorem 1 we constructed, for each positive integer \(n\), elements \(\varepsilon_{n}\), \(\delta_{n}=1-\varepsilon_{n}\), belonging to \(\mathbb{Q}_{\infty,\ell}\cap V_{n}\subseteq\mathcal{O}(\mathbb{Q}_{\infty,\ell},S)^{\times}\), and moreover verified, for \(n\geq 2\), that \(\varepsilon_{n}\notin\langle\pm\zeta_{\ell^{n}},V_{n-1}\rangle\). We let
\[E_{n}\::\:Y^{2}=X(X-1)(X-\varepsilon_{n}).\]
Then \(E_{n}\) is defined over \(\mathbb{Q}_{\infty,\ell}\) and has good reduction away from \(S^{\prime}\). We claim, for \(n>m\), that \(E_{n}\) and \(E_{m}\) are not isomorphic, even over \(\overline{\mathbb{Q}}\). To see this, suppose \(E_{n}\) and \(E_{m}\) are isomorphic. Then \(\varepsilon_{n}\) equals one of \(\varepsilon_{m}^{\pm 1}\), \(\delta_{m}^{\pm 1}\), \((-\varepsilon_{m}\delta_{m}^{-1})^{\pm 1}\). This gives a contradiction as all of these belong to \(\langle\pm\zeta_{\ell^{n}},V_{n-1}\rangle\). This proves the claim.
It remains to show that the \(E_{n}\) form infinitely many isogeny classes over \(\mathbb{Q}_{\infty,\ell}\). However, this immediately follows from Theorem 4 and the following lemma.
**Lemma 23**.: _For \(n\) sufficiently large, \(E_{n}\) does not have potential complex multiplication._
Proof.: Suppose \(E_{n}\) has potential complex multiplication by an order \(R\) in an imaginary quadratic field \(K\). Write \(j=j(E_{n})\). By standard CM theory [16, Theorem 5.7], we know that \(\operatorname{Gal}(K(j)/K)\cong\operatorname{Pic}(R)\) and \([\mathbb{Q}(j):\mathbb{Q}]=[K(j):K]\). Since in our case \(\mathbb{Q}(j)/\mathbb{Q}\) is Galois, \(\operatorname{Gal}(\mathbb{Q}(j)/\mathbb{Q})\cong\operatorname{Gal}(K(j)/K) \cong\operatorname{Pic}(R)\). However, \(\mathbb{Q}(j)\subset\mathbb{Q}_{\infty,\ell}\) is totally real. It follows [16, page 124] that \(\operatorname{Pic}(R)\) is an elementary abelian \(2\)-group. Since \(\mathbb{Q}(j)\subset\mathbb{Q}_{\infty,\ell}\), the Galois group of \(\mathbb{Q}(j)/\mathbb{Q}\) is cyclic of order \(\ell^{n}\) for some \(n\). Thus, \(j\in\mathbb{Q}\) if \(\ell\neq 2\), and \(j\in\mathbb{Q}_{1,2}=\mathbb{Q}(\sqrt{2})\) if \(\ell=2\). However, from the expression for \(j\) in (21) we know that \([\mathbb{Q}(\varepsilon_{n}):\mathbb{Q}(j)]\leq 6\). Thus \(\varepsilon_{n}\) belongs to a subfield of \(\mathbb{Q}_{\infty,\ell}\) of degree at most \(12\). The lemma follows since, by Siegel's theorem, the \(S\)-unit equation has only finitely many solutions in any number field.
## 9. Hyperelliptic curves over \(\mathbb{Q}_{\infty,\ell}\) with few bad primes
Let \(\ell\) be an odd prime. Let \(g\geq 2\) be an integer satisfying
\[\begin{cases}g\equiv(\ell-3)/4\;\;\text{or}\;-1\pmod{(\ell-1)/2}&\text{ if }\ell\equiv 3\pmod{4}\\ g\equiv-1\pmod{(\ell-1)/4}&\text{ if }\ell\equiv 1\pmod{4}.\end{cases} \tag{22}\]
Then there is a positive integer \(k\) such that
\[k\cdot\left(\frac{\ell-1}{2}\right)\;=\;\begin{cases}2g+1\text{ or }2g+2&\text{ if } \ell\equiv 3\pmod{4}\\ 2g+2&\text{ if }\ell\equiv 1\pmod{4}.\end{cases} \tag{23}\]
Let \(n\geq 2\) be a positive integer satisfying
\[\ell^{n-1}\;\geq\;k. \tag{24}\]
In this section we construct a hyperelliptic curve \(D_{n}\) of genus \(g\) defined over \(\mathbb{Q}_{n-1,\ell}\) with good reduction away from the primes above \(2\) and \(\ell\).
Write
\[\mathcal{Z}_{n}=\{\zeta\in\Omega_{n,\ell}\quad:\quad\zeta^{\ell^{n}}=1,\quad \zeta^{\ell^{i}}\neq 1\text{ if }i<n\}\]
for the set of primitive \(\ell^{n}\)-th roots of \(1\). Write
\[\mathcal{Z}_{n}^{+}\;=\;\{\zeta+\zeta^{-1}\;:\;\zeta\in\mathcal{Z}_{n}\}\; \subset\;\Omega_{n,\ell}^{+}.\]
We note that any element of \(\mathcal{Z}_{n}^{+}\) generates \(\Omega_{n,\ell}^{+}\).
**Lemma 24**.: \(\#\mathcal{Z}_{n}^{+}=\ell^{n-1}(\ell-1)/2\)_._
Proof.: We note that \(\#\mathcal{Z}_{n}=\varphi(\ell^{n})=\ell^{n-1}(\ell-1)\). Suppose \(\alpha\), \(\beta\in\mathcal{Z}_{n}\). Then
\[(\alpha+\alpha^{-1})-(\beta+\beta^{-1})\;=\;\alpha^{-1}\cdot(1-\alpha\beta) \cdot(1-\alpha\beta^{-1}). \tag{25}\]
Thus \(\alpha+\alpha^{-1}=\beta+\beta^{-1}\) if and only if \(\alpha=\beta\) or \(\alpha=\beta^{-1}\). The lemma follows.
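Identity (25), which is used repeatedly in what follows, is immediate to verify; for instance (an illustrative SymPy check):

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta', nonzero=True)
lhs = (alpha + 1/alpha) - (beta + 1/beta)
rhs = (1/alpha) * (1 - alpha*beta) * (1 - alpha/beta)
assert sp.simplify(lhs - rhs) == 0   # identity (25)
```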
Write
\[G_{n}=\operatorname{Gal}(\Omega_{n,\ell}^{+}/\mathbb{Q}_{n-1,\ell}),\qquad H_ {n}=\operatorname{Gal}(\Omega_{n,\ell}^{+}/\Omega_{n-1,\ell}^{+}).\]
We note that these are both cyclic subgroups of \(\operatorname{Gal}(\Omega_{n,\ell}^{+}/\mathbb{Q})\) having orders
\[\#G_{n}=(\ell-1)/2,\qquad\#H_{n}=\ell.\]
**Lemma 25**.: _Fix \(\zeta\in\mathcal{Z}_{n}\). Let_
\[\eta_{i}\;=\;\zeta^{1+\ell^{n-1}(i-1)}+\zeta^{-1-\ell^{n-1}(i-1)},\qquad 1 \leq i\leq\ell. \tag{26}\]
_Then \(\eta_{1},\ldots,\eta_{\ell}\in\mathcal{Z}_{n}^{+}\) form a single orbit under the action of \(H_{n}\), but have pairwise disjoint orbits under the action of \(G_{n}\)._
Proof.: Let \(\kappa\in\operatorname{Gal}(\Omega_{n,\ell}/\mathbb{Q})\) be given by \(\kappa(\zeta)=\zeta^{1+\ell^{n-1}}\). We note that \(\kappa\) has order \(\ell\) and fixes \(\Omega_{n-1,\ell}\). We denote the restriction of \(\kappa\) to \(\Omega_{n,\ell}^{+}\) by \(\tau\); this is a cyclic generator of \(H_{n}\). Note that
\[\eta_{i}=\tau^{i-1}(\zeta+\zeta^{-1}),\qquad 1\leq i\leq\ell.\]
Let \(\sigma_{1}\), \(\sigma_{2}\in G_{n}\). Let \(1\leq i<j\leq\ell\) and suppose \(\sigma_{1}(\eta_{i})=\sigma_{2}(\eta_{j})\). Thus \(\sigma_{1}\tau^{i-1}(\eta_{1})=\sigma_{2}\tau^{j-1}(\eta_{1})\), so \(\tau^{1-j}\sigma_{2}^{-1}\sigma_{1}\tau^{i-1}\) fixes \(\eta_{1}\). As \(\eta_{1}\) generates \(\Omega_{n,\ell}^{+}\), we have \(\tau^{1-j}\sigma_{2}^{-1}\sigma_{1}\tau^{i-1}=1\) is the identity element in \(\operatorname{Gal}(\Omega_{n,\ell}^{+}/\mathbb{Q})\). However, \(\operatorname{Gal}(\Omega_{n,\ell}^{+}/\mathbb{Q})\) is abelian, so
\[\tau^{i-j}=\sigma_{1}^{-1}\sigma_{2}\in G_{n}\cap H_{n}=\{1\}.\]
Since \(1\leq i\leq j\leq\ell\) and \(\tau\) has order \(\ell\) we have \(i=j\).
The Galois group \(G_{n}\) acts faithfully on \(\mathcal{Z}_{n}^{+}\). This action has \(\ell^{n-1}\) orbits. Assumption (24) ensures that the number of orbits is at least \(k\). If \(k>\ell\), then we **extend** the list \(\eta_{1},\dots,\eta_{\ell}\in\mathcal{Z}_{n}^{+}\) to \(\eta_{1},\dots,\eta_{k}\in\mathcal{Z}_{n}^{+}\), so that the \(\eta_{i}\) continue to have disjoint orbits under the action of \(G_{n}\); if \(\ell=3\) the choice of \(\eta_{4}\) will be important later, and we choose \(\eta_{4}=\zeta^{2}+\zeta^{-2}\). Consider the curve
\[D_{n}\,:\,Y^{2}=\prod_{j=1}^{k}\prod_{\sigma\in G_{n}}(X-\eta_{j}^{\sigma}). \tag{27}\]
**Lemma 26**.: _The curve \(D_{n}\) is hyperelliptic of genus \(g\), is defined over \(\mathbb{Q}_{n-1,\ell}\), and has good reduction away from the primes above \(2\) and \(\ell\)._
Proof.: Our assumption on the orbits ensures that the polynomial on the right-hand side of (27) is separable. By (23), the degree of the polynomial is either \(2g+1\) or \(2g+2\). Thus \(D_{n}\) is a hyperelliptic curve of genus \(g\). A priori, \(D_{n}\) is defined over \(\Omega_{n,\ell}^{+}\). However, the roots of the hyperelliptic polynomial are permuted by the action of \(G_{n}=\operatorname{Gal}(\Omega_{n,\ell}^{+}/\mathbb{Q}_{n-1,\ell})\) and so the polynomial belongs to \(\mathbb{Q}_{n-1,\ell}[X]\). Hence \(D_{n}\) is defined over \(\mathbb{Q}_{n-1,\ell}\).
Let \(u_{1},\dots,u_{d}\) be the roots of the hyperelliptic polynomial. Then the discriminant of hyperelliptic polynomial is
\[\prod_{1\leq i<j\leq d}(u_{i}-u_{j})^{2}.\]
However, \(u_{i}\), \(u_{j}\) are distinct elements of \(\mathcal{Z}_{n}^{+}\). Thus there are \(\alpha\), \(\beta\in\mathcal{Z}_{n}\) with \(\alpha\neq\beta\), \(\beta^{-1}\) such that \(u_{i}=\alpha+\alpha^{-1}\), \(u_{j}=\beta+\beta^{-1}\). From the identity (25),
\[u_{i}-u_{j}\,=\,\alpha^{-1}(1-\alpha\beta^{-1})(1-\alpha\beta).\]
Since \(\alpha\beta\) and \(\alpha\beta^{-1}\) are non-trivial \(\ell\)-power roots of \(1\), we see that \(u_{i}-u_{j}\) is a \(\{v_{\ell}\}\)-unit, and hence the discriminant of the hyperelliptic polynomial of \(D_{n}\) is a \(\{v_{\ell}\}\)-unit.
Given four pairwise distinct elements \(z_{1}\), \(z_{2}\), \(z_{3}\), \(z_{4}\) of a field \(K\), we shall employ the notation \((z_{1},z_{2}\,;\,z_{3},z_{4})\) to denote the **cross ratio**
\[(z_{1},z_{2}\,;\,z_{3},z_{4})\,=\,\frac{(z_{1}-z_{3})(z_{2}-z_{4})}{(z_{1}-z_{4 })(z_{2}-z_{3})}.\]
We extend the cross ratio to four distinct elements \(z_{1},z_{2},z_{3},z_{4}\) of \(\mathbb{P}^{1}(K)\) in the usual way. We let \(\operatorname{GL}_{2}(K)\) act on \(\mathbb{P}^{1}(K)\) via fractional linear transformations
\[\gamma(z)=\frac{az+b}{cz+d},\qquad\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}.\]
It is well-known and easy to check that these fractional linear transformations leave the cross ratio unchanged:
\[\left(\gamma(z_{1}),\gamma(z_{2})\,;\,\gamma(z_{3}),\gamma(z_{4})\right)\,=\,(z_{ 1},z_{2}\,;\,z_{3},z_{4}).\]
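This invariance is itself a routine rational-function identity; the following SymPy snippet (illustrative only) confirms it:

```python
import sympy as sp

a, b, c, d, z1, z2, z3, z4 = sp.symbols('a b c d z1 z2 z3 z4')

def cross_ratio(w1, w2, w3, w4):
    return (w1 - w3) * (w2 - w4) / ((w1 - w4) * (w2 - w3))

def gamma(z):
    # fractional linear transformation associated to the matrix (a b; c d)
    return (a*z + b) / (c*z + d)

diff = cross_ratio(gamma(z1), gamma(z2), gamma(z3), gamma(z4)) - cross_ratio(z1, z2, z3, z4)
assert sp.simplify(diff) == 0
```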
**Lemma 27**.: _Let \(\overline{K}\) be an algebraically closed field of characteristic \(0\). Let_
\[D\,:\,Y^{2}=\prod_{i=1}^{d}(X-a_{i}),\qquad D^{\prime}\,:\,Y^{2}=\prod_{i=1}^{ d}(X-b_{i}),\]
_be genus \(g\) curves defined over \(\overline{K}\) where the polynomials on the right are separable. If \(D\), \(D^{\prime}\) are isomorphic then there is some permutation \(\mu\in S_{d}\) such that for all quadruples of pairwise distinct indices \(1\leq r,s,t,u\leq d\)_
\[(a_{r},a_{s}\,;\,a_{t},a_{u})\,=\,(b_{\mu(r)},b_{\mu(s)}\,;\,b_{\mu(t)},b_{\mu (u)}).\]
Proof.: We shall make use of the following standard description (e.g. [2, Proposition 6.11]) of isomorphisms of hyperelliptic curves: every isomorphism \(\pi\,:\,D\to D^{\prime}\) is of the form
\[\pi(X,Y)\,=\,\left(\frac{aX+b}{cX+d},\frac{eY}{(cX+d)^{g+1}}\right)\]
for some
\[\gamma=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{GL}_{2}(\overline{K}),\qquad e\in\overline{K}^{\times}.\]
Observe that \(\pi(a_{i},0)\) has \(Y\)-coordinate \(0\); thus
\[\{\gamma(a_{1}),\ldots,\gamma(a_{d})\}\,=\,\{b_{1},\ldots,b_{d}\}.\]
Hence there is a permutation \(\mu\in S_{d}\) such that \(\gamma(a_{i})=b_{\mu(i)}\). The lemma follows from the invariance of the cross ratio under the action of \(\operatorname{GL}_{2}(\overline{K})\).
**Lemma 28**.: _Let \(\ell\geq 11\) be prime. Then there is some \(a\in\mathbb{Z}_{\ell}^{\times}\) of order \(\ell-1\) such that_
\[1+a^{2}\,\not\equiv\,0,\,\pm(1-a^{2}), \pm(a+a^{3}),\,\pm(a-a^{3}),\] \[\pm(1+a^{3}),\,\pm(1-a^{3}),\,\pm(a+a^{2}),\,\pm(a-a^{2})\pmod{ \ell}. \tag{28}\]
Proof.: Making use of the fact that a polynomial of degree \(n\) has at most \(n\) roots, we see that the number of \(a\in\mathbb{F}_{\ell}\) that **do not satisfy** (28) is (very crudely) bounded by \(37\). An element \(a\in\mathbb{Z}_{\ell}^{\times}\) of order \(\ell-1\) is the unique Hensel lift of an element \(a\in\mathbb{F}_{\ell}^{\times}\) of order \(\ell-1\). There are precisely \(\varphi(\ell-1)\) elements of order \(\ell-1\) in \(\mathbb{F}_{\ell}^{\times}\). A theorem of Shapiro [15, page 23], asserts that \(\varphi(n)>n^{\log 2/\log 3}\) for \(n\geq 30\). We note that if \(\ell\geq 317\) then \(\varphi(\ell-1)\geq 316^{\log 2/\log 3}\approx 37.8\), and so the lemma holds for \(\ell\geq 317\). For the range \(11\leq\ell\leq 317\) we checked the lemma by brute force computer enumeration.
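The brute-force verification mentioned at the end of the proof can be reproduced along the following lines (an illustrative Python/SymPy script; it simply searches, for every prime in the stated range, for a primitive root \(a\) satisfying (28)):

```python
from sympy import primerange
from sympy.ntheory import n_order

def satisfies_28(a, l):
    lhs = (1 + a*a) % l
    forbidden = {0}
    for v in (1 - a**2, a + a**3, a - a**3, 1 + a**3, 1 - a**3, a + a**2, a - a**2):
        forbidden.update({v % l, (-v) % l})
    return lhs not in forbidden

for l in primerange(11, 318):
    ok = any(n_order(a, l) == l - 1 and satisfies_28(a, l) for a in range(2, l))
    assert ok, f"no suitable a found for l = {l}"
print("checked all primes 11 <= l <= 317")
```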
**Lemma 29**.: _Let \(n>m\) be sufficiently large. Then \(D_{n}\) and \(D_{m}\) are non-isomorphic, even over \(\overline{\mathbb{Q}}\)._
Proof.: Note that all roots of the hyperelliptic polynomial for \(D_{n}\) in (27) belong to \(\mathcal{Z}_{n}^{+}\). It follows from (25) that the cross ratio of any four of them belongs to \(V_{n}\). Suppose \(D_{n}\) and \(D_{m}\) are isomorphic. Let \(u_{1},u_{2},u_{3},u_{4}\) be any distinct roots of the hyperelliptic polynomial for \(D_{n}\) given in (27). Then, by Lemma 27,
\[(u_{1},u_{2}\,;\,u_{3},u_{4})\,\in V_{m}\subseteq V_{n-1}.\]
We shall obtain a contradiction through a careful choice of the four roots \(u_{1},\ldots,u_{4}\).
We first suppose that \(k\geq 2\) and \(\ell\geq 5\). Let \(\zeta=\zeta_{\ell^{n}}\) and \(b=1+\ell^{n-1}\). Then, by Lemma 25, \(\eta_{1}=\zeta+\zeta^{-1}\) and \(\eta_{2}=\zeta^{b}+\zeta^{-b}\). Let \(a\in\mathbb{Z}_{\ell}^{\times}\) have order \(\ell-1\). Let \(\kappa\in\operatorname{Gal}(\Omega_{n,\ell}/\mathbb{Q}_{n-1,\ell})\) be given by \(\kappa(\zeta)=\zeta^{a}\). Then \(\kappa\) is a cyclic generator for \(\operatorname{Gal}(\Omega_{n,\ell}/\mathbb{Q}_{n-1,\ell})\). We shall denote the restriction of \(\kappa\) to \(\Omega_{n,\ell}^{+}\) by \(\mu\). Then \(\mu\) is a cyclic generator for \(G_{n}=\operatorname{Gal}(\Omega_{n,\ell}^{+}/\mathbb{Q}_{n-1,\ell})\) having order \((\ell-1)/2\). We shall take
\[u_{1} =\eta_{1}=\zeta+\zeta^{-1}, u_{2} =\mu(\eta_{1})=\zeta^{a}+\zeta^{-a},\] \[u_{3} =\eta_{2}=\zeta^{b}+\zeta^{-b}, u_{4} =\mu(\eta_{2})=\zeta^{ab}+\zeta^{-ab}.\]
We compute the cross ratio with the help of identity (25), finding
\[(u_{1},u_{2}\,;\,u_{3},u_{4})\;=\;\frac{(1-\zeta^{1+b})(1-\zeta^{1-b})(1- \zeta^{a+ab})(1-\zeta^{a-ab})}{(1-\zeta^{1+ab})(1-\zeta^{1-ab})(1-\zeta^{a+b}) (1-\zeta^{a-b})}.\]
As \(b\equiv 1\pmod{\ell}\), and clearly \(a\not\equiv\pm 1\pmod{\ell}\), it is easy to check that \(1+b\) is the only one out of the eight exponents of \(\zeta\) above that is \(\pm 2\pmod{\ell}\). Therefore by Lemma 10, the cross ratio is not an element of \((\pm\zeta_{\ell^{n}},V_{n-1})\) for \(n\) sufficiently large, giving a contradiction for the case \(k\geq 2\) and \(\ell\geq 5\).
Next we suppose that \(k=1\). It follows from (23) that \(\ell\geq 11\). We choose \(a\in\mathbb{Z}_{\ell}^{\times}\) as in Lemma 28, and, as above, take \(\mu\) to be the corresponding generator of \(G_{n}\) of order \((\ell-1)/2\geq 5\). We take
\[u_{i}=\mu^{i-1}(\eta_{1})=\zeta^{a^{i-1}}+\zeta^{-a^{i-1}},\qquad 1\leq i\leq 4;\]
observe that these are four roots of the hyperelliptic polynomial of \(D_{n}\) given in (27). The assumption that \(\ell\geq 11\) ensures that \(a\) has order \(\geq 10\) and so \(u_{1},u_{2},u_{3},u_{4}\) are indeed pairwise distinct. We compute the cross ratio with the help of identity (25), finding
\[(u_{1},u_{2}\,;\,u_{3},u_{4})\;=\;\frac{(1-\zeta^{1+a^{2}})(1-\zeta^{1-a^{2}}) (1-\zeta^{a+a^{3}})(1-\zeta^{a-a^{3}})}{(1-\zeta^{1+a^{3}})(1-\zeta^{1-a^{3}} )(1-\zeta^{a+a^{2}})(1-\zeta^{a-a^{2}})}.\]
Using Lemma 9 and our choice of \(a\) given by Lemma 28 we conclude that this cross ratio does not belong to \((\pm\zeta_{\ell^{n}},V_{n-1})\) for \(n\) sufficiently large. This gives a contradiction for the case \(k=1\).
Finally, we consider \(\ell=3\). It follows from (23) that \(k\geq 5\). Recall our choices of \(\eta_{1}\), \(\eta_{2}\), \(\eta_{3}\) in Lemma 25, and our choice of \(\eta_{4}=\zeta^{2}+\zeta^{-2}\) in the particular case \(\ell=3\). We choose the four roots \(u_{i}=\eta_{i}\) for \(i=1,\ldots,4\), and obtain,
\[(u_{1},u_{2}\,;\,u_{3},u_{4})\;=\;\frac{(1-\zeta^{2+2\times 3^{n-1}})(1-\zeta^{ -2\times 3^{n-1}})(1-\zeta^{3+3^{n-1}})(1-\zeta^{-1+3^{n-1}})}{(1-\zeta^{3})(1- \zeta^{-1})(1-\zeta^{2})(1-\zeta^{-3^{n-1}})}.\]
As before, with the help of Lemma 10, we easily verify that the cross ratio is not an element of \((\pm\zeta_{\ell^{n}},V_{n-1})\) for \(n\) sufficiently large. This completes the proof.
**Proof of Theorem 5.** If \(\ell=3\) or \(5\) then (22) does not impose any restriction on the genus. Therefore we obtain, as above, for every genus \(g\geq 2\), infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of genus \(g\) hyperelliptic curves, defined over \(\mathbb{Q}_{\infty,\ell}\), with good reduction away from \(\{v_{2},v_{\ell}\}\).
It remains to deal with \(\ell=7\), \(11\) and \(13\). Here, (22) imposes the restriction
\[g\equiv\begin{cases}1\text{ or }2\bmod 3&\text{if }\ell=7\\ 2\text{ or }4\bmod 5&\text{if }\ell=11\\ 2\bmod 3&\text{if }\ell=13.\end{cases}\]
We very briefly sketch how to remove the restriction. Instead of \(D_{n}\) defined as in (27), we consider the more general
\[D_{n}\,:\,Y^{2}=h(X)\cdot\prod_{j=1}^{k}\prod_{\sigma\in G_{n}}(X-\eta_{j}^{ \sigma})\]
where
* \(h\) is a monic divisor of \(X(X-1)(X+1)\);
* \(k\) and \(h\) are chosen to obtain the desired genus;
* \(\eta_{j}\in\mathcal{Z}_{n}^{+}\) are chosen as before.
These \(D_{n}\) are clearly defined over \(\mathbb{Q}_{n-1,\ell}\). To check that they have good reduction away from \(S^{\prime}=\{\upsilon_{2},\upsilon_{\ell}\}\), we need to verify that the difference of any two distinct roots \(u\), \(v\) of the hyperelliptic polynomial belongs to \(\mathcal{O}(\Omega_{n},S^{\prime})^{\times}\). The proof of Lemma 26 shows this if \(u\), \(v\in\mathcal{Z}_{n}^{+}\). For the remaining possible differences it is enough to note that
\[\alpha+\alpha^{-1}=\alpha^{-1}\Phi_{4}(\alpha),\qquad\alpha+\alpha^{-1}+1= \alpha^{-1}\Phi_{3}(\alpha),\qquad\alpha+\alpha^{-1}-1=\alpha^{-1}\Phi_{6}(\alpha)\]
which are all units by Lemma 12. We omit the remaining details.
## 10. Isogeny classes of hyperelliptic curves over \(\mathbb{Q}_{\infty,\ell}\)
A beautiful theorem of Kummer asserts that the index of the cyclotomic units \(C_{n}\) in the full unit group \(\mathcal{O}(\Omega_{n,\ell})^{\times}\) equals the class number \(h_{n}^{+}\) of \(\Omega_{n,\ell}^{+}\). In this section, with the help of Kummer's theorem, we prove for certain primes \(\ell\) the existence of infinitely many isogeny classes of hyperelliptic Jacobians over \(\mathbb{Q}_{\infty,\ell}\) with good reduction away from \(\ell\). We first prove a few elementary lemmas.
**Lemma 30**.: _Let \(K\) be a field of characteristic not \(2\), and let \(L=K(\sqrt{\alpha_{1}},\ldots,\sqrt{\alpha_{r}})\) where \(\alpha_{i}\in K^{\times}\). Then for any \(x\in K\) such that \(\sqrt{x}\in L\), we have_
\[x\,=\,\alpha_{1}^{e_{1}}\cdots\alpha_{r}^{e_{r}}q^{2}\]
_for some integers \(e_{i}\in\mathbb{Z}\) and \(q\in K\)._
Proof.: Let \(M\) be a field of characteristic not \(2\), and let \(d\in M\) be a non-square. Let \(x\in M\) and suppose \(\sqrt{x}\in M(\sqrt{d})\). Then \(\sqrt{x}=y+z\sqrt{d}\) for some \(y\), \(z\in M\). Squaring, we deduce that \(yz=0\). Thus \(x=y^{2}\) or \(x=dz^{2}\).
We now prove the lemma by induction on \(r\). The above establishes the case \(r=1\). Let \(r\geq 2\), and let \(x\in K\) satisfy \(\sqrt{x}\in L\). Letting \(M=K(\sqrt{\alpha_{1}},\ldots,\sqrt{\alpha_{r-1}})\) we see that \(x\in M\) and \(\sqrt{x}\in M(\sqrt{\alpha_{r}})\). Thus, by the above, \(\sqrt{x}\in M\) or \(\sqrt{x\alpha_{r}}\in M\). In other words,
\[\sqrt{x\cdot\alpha_{r}^{e}}\,\in\,M=K(\sqrt{\alpha_{1}},\ldots,\sqrt{\alpha_{ r-1}})\]
for some \(e\in\{0,1\}\). By the inductive hypothesis, there are \(e_{1},\ldots,e_{r-1}\in\mathbb{Z}\) and \(q\in K\) such that
\[x\cdot\alpha_{r}^{e}\,=\,\alpha_{1}^{e_{1}}\cdots\alpha_{r-1}^{e_{r-1}}q^{2}.\]
The proof is complete on taking \(e_{r}=-e\)
**Lemma 31**.: _Let \(\ell\) be an odd prime. Let \(q\in\Omega_{\infty,\ell}\) satisfy \(q^{2}\in V_{n}\). If the class number \(h_{n}^{+}\) of \(\Omega_{n,\ell}^{+}\) is odd, then \(q\in V_{n}\)._
Proof.: Let \(q\in\Omega_{\infty,\ell}\) satisfy \(q^{2}\in V_{n}\subset\Omega_{n,\ell}\). As the extension \(\Omega_{\infty,\ell}/\Omega_{n,\ell}\) is pro-\(\ell\), we conclude that \(q\in\Omega_{n,\ell}\). However, \(V_{n}\subseteq\mathcal{O}(\Omega_{n,\ell},\{v_{\ell}\})^{\times}\), where, as usual, \(v_{\ell}\) denotes the prime above \(\ell\). Thus \(q\in\mathcal{O}(\Omega_{n,\ell},\{v_{\ell}\})^{\times}\). We claim that
\[[\mathcal{O}(\Omega_{n,\ell},\{v_{\ell}\})^{\times}:V_{n}]\,=\,h_{n}^{+}.\]
The lemma follows immediately from the claim. To prove the claim, consider the commutative diagram with exact rows
\[\begin{array}{ccccccccc}1&\longrightarrow&C_{n}&\longrightarrow&V_{n}&\stackrel{\kappa}{\longrightarrow}&\mathbb{Z}&\longrightarrow&0\\ &&\downarrow&&\downarrow&&\parallel&&\\ 1&\longrightarrow&\mathcal{O}(\Omega_{n,\ell})^{\times}&\longrightarrow&\mathcal{O}(\Omega_{n,\ell},\{v_{\ell}\})^{\times}&\stackrel{\kappa}{\longrightarrow}&\mathbb{Z}&\longrightarrow&0\end{array}\]
where \(\kappa(\alpha)=\operatorname{ord}_{(1-\zeta)}(\alpha)\) and the vertical maps are the natural inclusions. By the snake lemma,
\[\mathcal{O}(\Omega_{n,\ell},\{v_{\ell}\})^{\times}/V_{n}\,\cong\,\mathcal{O}( \Omega_{n,\ell})^{\times}/C_{n}.\]
Write \(C_{n}^{\ast}=C_{n}\cap\Omega_{n,\ell}^{\ast}\). The aforementioned theorem of Kummer asserts that
\[[\mathcal{O}(\Omega_{n,\ell})^{\times}:C_{n}]\,=\,[\mathcal{O}(\Omega_{n, \ell}^{+})^{\times}:C_{n}^{+}]\,=\,h_{n}^{+};\]
see, for example, [22, Exercise 8.5] for the first equality, and [22, Theorem 8.2] for the second. This proves the claim.
**Lemma 32**.: _Let \(K\) be a field of characteristic \(\neq 2\). Let \(f\in K[X]\) be a monic separable polynomial of odd degree \(d\geq 5\). Write \(f=\prod_{i=1}^{d}(X-\alpha_{i})\) with \(\alpha_{i}\in\overline{K}\). Let \(C/K\) be a hyperelliptic curve given by \(Y^{2}=f(X)\) with Jacobian \(J\). Then_
\[K(J[2])=K(\alpha_{1},\dots,\alpha_{d}),\qquad K(J[4])=K(J[2])\,\Big{(}\big{\{} \sqrt{\alpha_{i}-\alpha_{j}}\big{\}}_{1\leq i,j\leq d}\Big{)}.\]
Proof.: Write \(\infty\) for the point at infinity on the given model for \(C\). The expression given for \(K(J[2])\) is well-known; it may be seen by observing (see, for example, [12]) that the classes of the degree \(0\) divisors \([(\alpha_{i},0)-\infty]\) with \(i=1,\dots,d\) generate \(J[2]\).
Yelton [23, Theorem 1.2.2] gives a high-powered proof of the given expression for \(K(J[4])\). For the convenience of the reader we give a more elementary argument. Let \(L=K(J[2])\). The theory of \(2\)-descent on hyperelliptic Jacobians furnishes, for any field \(M\supseteq L\), an injective homomorphism [12], [19]
\[J(M)/2J(M)\,\hookrightarrow\,\prod_{i=1}^{d}M^{\ast}/(M^{\ast})^{2}\]
known as the \(X-\Theta\)-map. This in particular sends the \(2\)-torsion point \([(\alpha_{i},0)-\infty]\) to
\[\left((\alpha_{i}-\alpha_{1})\,,\dots\,,\,(\alpha_{i}-\alpha_{i-1})\,,\prod_{ j\neq i}(\alpha_{i}-\alpha_{j})\,,\,(\alpha_{i}-\alpha_{i+1})\,,\,\dots\,,\,(\alpha_{i}- \alpha_{d})\right).\]
The field \(K(J[4])\) is the smallest extension \(M\) of \(L\) such that all the images of the \(2\)-torsion generators \([(\alpha_{i},0)-\infty]\) are trivial in \(\prod_{i=1}^{d}M^{\ast}/(M^{\ast})^{2}\). This is plainly the extension
\[M=L\,\Big{(}\big{\{}\sqrt{\alpha_{i}-\alpha_{j}}\big{\}}_{1\leq i,j\leq d} \Big{)}.\]
**Lemma 33**.: _Let \(p\) be a prime for which \(2\) is a primitive root (i.e. \(2\) is a generator for \(\mathbb{F}_{p}^{\times}\)). Let \(G\) be a cyclic group of order \(p\), and let \(V\) be an \(\mathbb{F}_{2}[G]\)-module with \(\dim_{\mathbb{F}_{2}}(V)=p-1\). Suppose that the action of \(G\) on \(V-\{0\}\) is free. Then \(V\) is irreducible._
Proof.: Let \(W\) be a \(\mathbb{F}_{2}[G]\)-submodule of \(V\), and write \(d=\dim_{\mathbb{F}_{2}}(W)\). Since the action of \(G\) on \(V-\{0\}\) is free, the set \(W-\{0\}\) consists of \(G\)-orbits, all having size \(p\). However, \(\#(W-\{0\})=2^{d}-1\), and so \(p\mid(2^{d}-1)\). By assumption, \(2\) is a primitive root modulo \(p\), therefore \((p-1)\mid d\). Since \(W\) is an \(\mathbb{F}_{2}\)-subspace of \(V\) which has dimension \(p-1\), we see that \(W=0\) or \(W=V\).
**Lemma 34**.: _Let \(\ell=2p+1\), where \(\ell\) and \(p\) are odd primes. Suppose \(2\) is a primitive root modulo \(p\). Let \(g=(\ell-3)/4\). Let \(n\geq 2\) and let \(D_{n}/\mathbb{Q}_{n-1,\ell}\) be the hyperelliptic curve defined in Section 9. Let \(A/\mathbb{Q}_{\infty,\ell}\) be an abelian variety and let \(\phi:J(D_{n})\to A\) be an isogeny defined over \(\mathbb{Q}_{\infty,\ell}\). Then \(\phi=2^{r}\phi_{\rm odd}\) where \(\phi_{\rm odd}:J(D_{n})\to A\) is an isogeny of odd degree._
We remark that if \(\ell\) and \(p\) are primes with \(\ell=2p+1\) then \(p\) is called a Sophie Germain prime, and \(\ell\) is called a safe prime.
Proof of Lemma 34.: Note that, in the notation of Section 9, \(k=1\), the hyperelliptic polynomial for \(D_{n}\) has odd degree \(2g+1=(\ell-1)/2=p\), and its roots form a single orbit under the action of \(G_{n}={\rm Gal}(\Omega_{n}^{+}/\mathbb{Q}_{n-1,\ell})\):
\[D_{n}\,:\,y^{2}\,=\,\prod_{\sigma\in G_{n}}(X-\eta_{1}^{\sigma}),\qquad\eta_{ 1}=\zeta_{\ell^{n}}+\zeta_{\ell^{n}}^{-1}.\]
In particular, the hyperelliptic polynomial is irreducible over \(\mathbb{Q}_{\infty,\ell}\). It follows from this (e.g. [19, Lemma 4.3]) that \(J(\mathbb{Q}_{\infty,\ell})[2]=0\), where \(J\) denotes \(J(D_{n})\) for convenience. We note, by Lemma 32, that \(\mathbb{Q}_{\infty,\ell}(J[2])=\mathbb{Q}_{\infty,\ell}(\eta_{1})=\Omega_{\infty,\ell}^{+}\). We consider the action of \(G_{\infty}\coloneqq{\rm Gal}(\Omega_{\infty,\ell}^{+}/\mathbb{Q}_{\infty,\ell})\) on \(J[2]\). The group \(G_{\infty}\) is cyclic of order \((\ell-1)/2=p\). Any element fixed by this action belongs to \(J(\mathbb{Q}_{\infty,\ell})[2]=0\). Thus \(G_{\infty}\) acts freely on \(V-\{0\}\), where \(V=J[2]\) is an \(\mathbb{F}_{2}\)-vector space of dimension \(2g=p-1\). Since \(2\) is a primitive root modulo \(p\), Lemma 33 shows that \(J[2]\) is irreducible as an \(\mathbb{F}_{2}[G_{\infty}]\)-module. Now \(\ker(\phi)\cap J[2]\) is an \(\mathbb{F}_{2}[G_{\infty}]\)-submodule of \(J[2]\), and is therefore either \(0\) or all of \(J[2]\). In the latter case \(\phi\) factors as \(\phi^{\prime}\circ[2]\) for some isogeny \(\phi^{\prime}:J\to A\) defined over \(\mathbb{Q}_{\infty,\ell}\). Repeating this argument we may write \(\phi=2^{r}\phi_{\rm odd}\) with \(\ker(\phi_{\rm odd})\cap J[2]=0\); then \(\ker(\phi_{\rm odd})\) contains no element of order \(2\), and so \(\phi_{\rm odd}\) has odd degree.

**Lemma 35**.: _Let \(\ell=2p+1\), where \(\ell\) and \(p\) are odd primes, and suppose that \(2\) is a primitive root modulo \(p\). Suppose moreover that the class number \(h_{n}^{+}\) of \(\Omega_{n,\ell}^{+}\) is odd for all \(n\). Let \(g=(\ell-3)/4\) and let \(D_{n}/\mathbb{Q}_{n-1,\ell}\) be the hyperelliptic curves defined in Section 9. Then, for \(n>m\) sufficiently large, the Jacobians \(J(D_{n})\) and \(J(D_{m})\) are not isogenous over \(\mathbb{Q}_{\infty,\ell}\)._
Proof of Lemma 35.: Write \(J_{n}\) for \(J(D_{n})\). Suppose there is an isogeny \(\phi:J_{n}\to J_{m}\) defined over \(\mathbb{Q}_{\infty,\ell}\). By Lemma 34 we may suppose that \(\phi\) has odd degree, and so \(\ker(\phi)\cap J_{n}[4]=0\). Thus \(\phi\) restricted to \(J_{n}[4]\) induces an isomorphism of \(\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}_{\infty,\ell})\)-modules \(J_{n}[4]\simeq J_{m}[4]\). In particular, \(\mathbb{Q}_{\infty,\ell}(J_{n}[4])=\mathbb{Q}_{\infty,\ell}(J_{m}[4])\). As in the proof of Lemma 34 we have \(\mathbb{Q}_{\infty,\ell}(J_{n}[2])=\mathbb{Q}_{\infty,\ell}(J_{m}[2])=\Omega_{ \infty,\ell}^{+}\). Thus, by Lemma 32, the equality \(\mathbb{Q}_{\infty,\ell}(J_{n}[4])=\mathbb{Q}_{\infty,\ell}(J_{m}[4])\) may be rewritten as
\[\Omega_{\infty,\ell}^{+}\Big{(}\big{\{}\sqrt{\vartheta_{n,i}-\vartheta_{n,j} }\big{\}}_{1\leq i,j\leq(\ell-1)/2}\Big{)}\;=\;\Omega_{\infty,\ell}^{+}\Big{(} \big{\{}\sqrt{\vartheta_{m,i}-\vartheta_{m,j}}\big{\}}_{1\leq i,j\leq(\ell-1) /2}\Big{)}\]
where \(\vartheta_{r,i}:=\mu_{r}^{i-1}(\zeta_{\ell^{r}}+\zeta_{\ell^{r}}^{-1})\) and \(\mu_{r}\) is a cyclic generator of \(G_{r}\). This, in particular, implies that
\[\sqrt{\vartheta_{n,2}-\vartheta_{n,1}}\;\in\;\Omega_{\infty,\ell}^{+}\Big{(} \big{\{}\sqrt{\vartheta_{m,i}-\vartheta_{m,j}}\big{\}}_{1\leq i,j\leq(\ell-1) /2}\Big{)}\]
We apply Lemma 30 to obtain
\[\vartheta_{n,2}-\vartheta_{n,1}=\pm\prod_{1\leq i<j\leq\frac{\ell-1}{2}}( \vartheta_{m,i}-\vartheta_{m,j})^{e_{i,j}}\cdot q^{2}\]
for some integers \(e_{i,j}\in\mathbb{Z}\) and \(q\in\Omega_{\infty,\ell}^{+}\). By Lemma 31, we have \(q\in V_{n}\). The generator \(\mu_{n}\) of \(G_{n}\) is given by \(\mu_{n}(\zeta_{\ell^{n}}+\zeta_{\ell^{n}}^{-1})=\zeta_{\ell^{n}}^{a}+\zeta_{ \ell^{n}}^{-a}\) where \(a\in\mathbb{Z}_{\ell}^{\times}\) has order \((\ell-1)\). Note
\[\vartheta_{n,2}-\vartheta_{n,1}\;=\;\zeta_{\ell^{n}}^{a}+\zeta_{\ell^{n}}^{-a} -\zeta_{\ell^{n}}-\zeta_{\ell^{n}}^{-1}\;=\;\zeta_{\ell^{n}}^{-a}(1-\zeta_{ \ell^{n}}^{a+1})(1-\zeta_{\ell^{n}}^{a-1}).\]
Thus,
\[(1-\zeta_{\ell^{n}}^{a+1})(1-\zeta_{\ell^{n}}^{a-1})\;\in\;\langle\pm\zeta_{ \ell^{n}},V_{m},V_{n}^{2}\rangle.\]
However, \((a+1)\not\equiv(a-1)\pmod{\ell}\). Now Corollary 11 gives a contradiction.
**Proof of Theorem 6.** Let \(\ell\geq 11\). Let
\[g\;=\;\big{\lfloor}(\ell-3)/4\big{\rfloor}\;=\;\begin{cases}(\ell-3)/4&\ell \equiv 3\pmod{4}\\ (\ell-5)/4&\ell\equiv 1\pmod{4}.\end{cases}\]
Thus \(g\) satisfies (22). Let \(D_{n}\) be as in Section 9. By Lemma 26, the hyperelliptic curve \(D_{n}/\mathbb{Q}_{n-1,\ell}\) has genus \(g\), and good reduction away from \(\{v_{2},v_{\ell}\}\). Moreover, by Lemma 29, the curves \(D_{n}\) and \(D_{m}\) are non-isomorphic, even over \(\overline{\mathbb{Q}}\), for \(n>m\) sufficiently large.
Now suppose
* \(\ell=2p+1\) where \(p\) is also an odd prime;
* \(2\) is a primitive root modulo \(p\).
It then follows from Lemma 35 that \(J(D_{n})\) and \(J(D_{m})\) are non-isogenous over \(\mathbb{Q}_{\infty,\ell}\) provided \(h_{n}^{+}\) is odd for all \(n\), where \(h_{n}^{+}\) denotes the class number of \(\Omega_{n,\ell}^{+}\). Write \(h_{n}\) for the class number of \(\Omega_{n,\ell}\). It is known thanks to the work of Estes [4] that \(h_{1}\) is odd for all primes \(\ell\) satisfying (i) and (ii) (a simplified proof of this result is given by Stevenhagen [18, Corollary 2.3]). Moreover, Ichimura and Nakajima [8] show, for primes \(\ell\leq 509\), that the ratio \(h_{n}/h_{1}\) is odd for all \(n\). The primes \(11\leq\ell\leq 509\) satisfying both (i) and (ii) are \(11,\,23,\,59,\,107,\,167,\,263,\,347,\,359\). Thus for these primes \(h_{n}\) is odd for all \(n\). As \(h_{n}^{+}\mid h_{n}\) (see for example [22, Theorem 4.10]), we know for these primes that \(h_{n}^{+}\) is odd for all \(n\). This completes the proof.
**Remark.**
* A key step in our proof of Theorem 6 is showing that \(J(D_{n})[2]\) is irreducible as an \(\mathbb{F}_{2}[G_{\infty}]\)-module whenever \(\ell=2p+1\) where \(p\) is a prime having \(2\) as a primitive root. It can be shown for all other \(\ell\) that the \(\mathbb{F}_{2}[G_{\infty}]\)-module \(J(D_{n})[2]\) is in fact reducible.
* Another key step is the argument in the proof of Lemma 35 showing that for \(n>m\) sufficiently large, the Jacobians \(J(D_{n})\) and \(J(D_{m})\) are not related via odd degree isogenies defined over \(\mathbb{Q}_{\infty,\ell}\). This step can be made to work, with very minor modifications to the argument, for all \(\ell\geq 11\), and all choices of genus \(g\) given in (22).
|
2310.02705 | B-physics from Lattice Gauge Theory | We discuss the main issues in dealing with heavy quarks on the lattice and
shortly present the different approaches used. We discuss a selection of
computations covering first the b-quark mass and the B(s) meson decay constants
as the consolidated results (neglecting isospin breaking corrections). In the
second part we consider recent calculations of form factors for tree-level
semileptonic decays with emphasis on the tensions between the results produced
by different collaborations. We propose benchmark quantities and tests suited
to investigate the origin of such tensions. Finally, we review computations of
the bag parameters parameterising neutral meson mixing and provide an overview
on a few recent developments in the field. | J. Tobias Tsang, Michele Della Morte | 2023-10-04T10:18:00Z | http://arxiv.org/abs/2310.02705v2 | # \(B\)-physics from Lattice Gauge Theory
###### Abstract
We discuss the main issues in dealing with heavy quarks on the lattice and shortly present the different approaches used. We discuss a selection of computations covering first the \(b\)-quark mass and the \(B_{(s)}\) meson decay constants as the consolidated results (neglecting isospin breaking corrections). In the second part we consider recent calculations of form factors for tree-level semileptonic decays with emphasis on the tensions between the results produced by different collaborations. We propose benchmark quantities and tests suited to investigate the origin of such tensions. Finally, we review computations of the bag parameters parameterising neutral meson mixing and provide an overview on a few recent developments in the field.
**Keywords: Lattice Gauge Theory, \(B\)-physics**
## 1 Introduction
The large \(b\)-quark mass and the comparatively long lifetime of \(B_{(s)}\) mesons allow for a plethora of experimental observables which can be measured precisely (see for example Refs. [1] and [2] for a discussion of the opportunities at LHC and Belle II, respectively). If precision Standard Model (SM) predictions are available, these can be used to test the SM as well as to search for and constrain New Physics (NP) beyond the Standard Model (BSM). In addition to direct comparisons between experiment and theory, the self-consistency of the SM can be tested by over-constraining the CKM matrix [3, 4] and testing its unitarity.
Lattice QCD is the only known tool to provide non-perturbative, _ab initio_, systematically improvable precision predictions. In lattice QCD a finite Euclidean space-time volume (\(L\)) is discretised (with lattice spacing \(a\)) and a representative _ensemble_ of the gauge field configurations is sampled via Monte Carlo methods. On these configurations, correlation functions are computed from which hadronic masses and matrix elements can be extracted. This is repeated for multiple choices of the simulation parameters (\(a\), \(L\) and the quark masses \(am_{q}\)) and to make contact with experiment the observables are inter/extrapolated to the physical world (\(a\to 0,L\to\infty,am_{q}\to am_{q}^{\rm phys}\)). It is noteworthy, that since lattice QCD simulations take bare quark masses as inputs, predictions at unphysical quark masses can be made, which can be used to test effective field theories (EFTs) such as Chiral Perturbation Theory (\(\chi\)PT) or Heavy Quark Effective Theory (HQET). Modern simulations include contributions from sea effects of
two degenerate light quarks and the strange quark (\(N_{f}=2+1\)) and the charm quark (\(N_{f}=2+1+1\)).
In recent years lattice QCD calculations of hadronic observables containing a \(b\) quark have made huge advances with precision predictions with all systematic uncertainties under control for several quantities.1 Here, we comment on some particular challenges when simulating heavy quarks (Sec. 2) before summarising the status of computations of the \(b\)-quark mass and leptonic decay constants (Sec. 3), semileptonic decay form factors (Sec. 4) and neutral meson mixing (Sec. 5). In Sec. 6 we comment on recent, more exploratory developments before concluding in Sec 7.
Footnote 1: For a recent lattice review we refer to the latest plenary proceedings at a Lattice Conference [5].
## 2 Lattice challenges for heavy quarks
Including heavy quarks, such as the \(b\) quark, in lattice simulations of QCD poses a multi-scale problem and currently all approaches still rely on the use (even though possibly at different stages) of EFTs, typically HQET [6, 7] or Non-Relativistic QCD (NRQCD) [8]. The infrared scale is set by the lattice extent \(L\) and the dynamics of the light degrees of freedom, such as the pions, should not be distorted by the finite size of the lattice. The standard requirement is to have \(m_{\pi}L>4\), with \(m_{\pi}\) the pion mass. The ultraviolet scale is instead set by the lattice spacing \(a\), the finest resolution in the system. In order to properly resolve the propagation of the heavy degrees of freedom, such as the \(b\) quark, the mass \(m_{b}\) should be far from the cutoff \(1/a\) or in other words \(am_{b}\) should be smaller than one. Substituting the physical value for \(m_{b}\), one concludes that lattices with \(L/a\) significantly larger than 100 are needed to keep both finite size and discretisation (or cutoff) effects under control. While such simulations are becoming feasible with current machines and algorithms, in order to perform a controlled continuum extrapolation (\(a\to 0\)) at the \(b\)-quark mass one will need simulations at even finer resolutions. This might require new algorithms in order to ensure an ergodic sampling of the configuration space (see Ref. [9]).
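To make the estimate explicit (with illustrative round numbers, \(m_{\pi}\simeq 135\) MeV, \(m_{b}\simeq 4.2\) GeV and \(\hbar c\simeq 0.197\) GeV fm):

```python
# rough estimate of the lattice resolution needed for a relativistic b quark
hbarc = 0.1973   # GeV * fm
m_pi = 0.135     # GeV
m_b = 4.2        # GeV (illustrative value for the b-quark mass)

L_min = 4 * hbarc / m_pi   # finite-volume condition m_pi * L > 4
a_max = hbarc / m_b        # discretisation condition a * m_b < 1
print(f"L > {L_min:.2f} fm,  a < {a_max:.3f} fm,  L/a > {L_min/a_max:.0f}")
# prints roughly: L > 5.85 fm,  a < 0.047 fm,  L/a > 124
```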
In this context EFTs are used essentially in two, non-exclusive, ways. Either they are implemented directly on the lattice by (formally) expanding physical quantities in inverse powers of the scale associated to the heavy degrees of freedom (e.g., the \(b\)-quark mass), or they are used to extrapolate results obtained for heavier-than-charm-quark masses to the \(b\)-quark mass. In a hard-cutoff regularisation, such as the lattice regularisation, the first approach often produces power-divergences (divergences in inverse powers of the lattice spacing), due to the mixing between operators of different dimensions. Those indeed naturally appear in an expansion in terms of couplings of negative dimension such as \(1/m_{b}\). As pointed out in Refs. [10, 11], these power divergences need to be removed non-perturbatively if one wants to perform a continuum limit extrapolation.
In practice one considers two EFTs, one being HQET (or NRQCD) and the other the Symanzik Effective Theory (SyEFT) [12, 13], therefore implementing the O(\(a\)) improvement programme to systematically reduce cutoff effects. In SyEFT higher-dimensional operators are introduced depending on the symmetries of the lattice action. Dimensions are compensated by powers of the lattice spacing and the coefficients are tuned in order to remove the leading lattice artifacts. Roughly speaking, different approaches implement the two EFTs in different order.
In O(\(a\))-improved HQET, one considers the static Lagrangian density
\[\mathcal{L}_{stat}(x)\!=\!\overline{\psi}_{h}(x)D_{0}\psi_{h}(x)\,;\;\;\; \frac{1+\gamma_{0}}{2}\psi_{h}\!=\!\psi_{h} \tag{1}\]
for the heavy quarks. This locally preserves the heavy flavour number and is symmetric under continuous SU(2) rotations in Dirac space
\[\psi_{h}\to V(\vec{\phi})\psi_{h}\;,\quad\mbox{with}\quad V(\vec{\phi})=e^{ \phi_{i}\epsilon_{ijk}\sigma_{jk}}\;, \tag{2}\]
where \(\vec{\phi}\) is the transformation parameter. Such symmetries are only realised in the infinite mass limit; they are not symmetries of QCD and are broken by \(1/m_{b}\) corrections. At the static order a term \(\delta m\,\overline{\psi}(x)\psi(x)\) can be added to the Lagrangian with \(\delta m\) being a power divergent (as \(1/a\)) parameter producing a shift in the energy levels of heavy-light systems. This is the first occurrence of the power divergences mentioned above. The subtraction of this divergence can be performed through a non-perturbative matching between HQET and
QCD, as discussed in Ref. [14] for the example of the computation of the \(b\)-quark mass.
Higher orders in \(1/m_{b}\) appear with operators of dimension 5 and larger. Including those in the Lagrangian would produce a non-renormalisable theory and therefore such corrections are treated order-by-order as space-time volume insertions in correlation functions computed in the static theory (see Ref. [15] for a general discussion, applied to the calculation of the \(b\)-quark mass including \(1/m_{b}\) corrections). Similarly, when considering matrix elements, local operators have a static expression and higher order (and higher dimensional) corrections that appear with appropriate powers of \(1/m_{b}\). Those have to be treated together with the higher order terms in the Lagrangian and a framework for a complete and non-perturbative matching of the action and the vector and axial heavy-light currents between QCD and HQET has been put forward in Ref. [16].
The Symanzik improvement programme is implemented in a straightforward way for the action and operators. As only the static action is directly simulated (higher orders being treated as insertions) the symmetries of the static theory can be used in classifying the mixing between operators of different dimensions for the O(\(a\))-improvement.
Concerning the quantities reviewed here, results from non-perturbative HQET are available for the \(B_{s}\to K\ell\nu\) form factors [17; 18], the \(b\)-quark mass [15; 19], \(B_{(s)}\)-meson decay constants [11; 20; 21] and mixing parameters [22; 23]. Since most recent results use different formulations for the \(b\) quark, we will not further discuss this approach here.
By interchanging the order in which the Symanzik expansion and the Heavy Quark expansion are performed, one is led to the relativistic heavy-quark formulation such as the one introduced in Ref. [24], which is now known as the _Fermilab action_. The main property is that the coefficients of the higher dimensional operators appearing through the Symanzik improvement programme are allowed to depend explicitly on the heavy-quark mass \(m_{h}\). In this way the relativistic heavy-quark actions, at fixed lattice spacing, interpolate between the massless limit and the infinite-mass (static) limit. In particular, this implies that for \(am_{h}\gg 1\) one expects to recover the power-like divergences of lattice HQET. The starting point is the anisotropic clover action density
\[\mathcal{L}_{Fermilab}(x)=a^{4}\overline{\psi}(x)\left(m_{0}+\gamma_{0}D_{0}+\zeta\vec{\gamma}\cdot\vec{D}-\frac{a}{2}D_{0}^{2}-\frac{a}{2}\zeta\vec{D}^{2}+\frac{ia}{4}c_{SW}\sigma_{\mu\nu}F_{\mu\nu}\right)\psi(x)\;, \tag{3}\]
where the anisotropy parameter \(\zeta\), the clover coefficient \(c_{SW}\) and the mass parameter \(m_{0}\) are tuned to reproduce experimental values for \(B\)-mesons spectral quantities such as the dispersion relation and the vector-pseudoscalar splitting [24; 25; 26]. The heavy-quark symmetries emerge naturally in the action in eq. (3) and therefore HQET can be used to model and estimate cutoff effects [27; 28; 29]. A variant of the Fermilab action, where the parameters \(m_{0}a\), \(\zeta\) and \(c_{P}\) (a generalised version of \(c_{SW}\)) are tuned non-perturbatively [30], is known as the _Relativistic Heavy Quark_ (RHQ) [26; 31] action.
Other effective approaches have been used to treat heavy quarks on the lattice. NRQCD is formally an expansion of QCD in powers of the heavy-quark velocity \(v\) and a lattice version has been introduced in Refs. [32; 33]. Leading operators as well as operators suppressed by O(\(v^{2}\)) are included in typical applications together with operators needed for O(\(a^{2}\))-improvement. The expression for the lattice action is rather lengthy and can be found in Ref. [33]. Operators up to dimension 7 appear at the order mentioned above. Dimensions are compensated by powers of the heavy-quark mass and the coefficients are determined through matching with QCD, typically performed at tree-level or at most at one-loop.2 NRQCD is hence clearly non-renormalisable by power counting and the continuum limit at fixed heavy-quark mass does not exist. The accuracy that can be reached depends on the existence of a window in the lattice spacing where both cut-off effects (positive powers of \(a\)) and power-like divergences (negative powers of \(a\)) are under control. In actual computations the precision is at the percent level and the approach is becoming less and less used, since it would be computationally quite demanding to improve it (by including operators of even higher dimension and performing the matching at high loop orders).
Footnote 2: Recently, a calculation with a non-perturbative tuning of the NRQCD action parameters has become available [34], however this is not the standard in the existing literature.
Finally, in the most recent applications, heavy quarks are discretised using regularisations originally introduced for the light flavours. This is often referred to as _fully relativistic formulations_ and is possible because of the very fine lattice spacings that can be reached in modern simulations (around 0.04 fm) and because of the use of highly improved actions, where mass-dependent cutoff effects start at O(\((am_{h})^{2}\)). Since the \(b\)-quark mass typically only satisfies \(am_{b}\lesssim 1\) for the finest lattice spacing, HQET is still used for the simultaneous continuum and heavy-mass extrapolations.
The HPQCD Collaboration has introduced the use of highly improved staggered quarks (HISQ action) [35] at very fine lattice spacings and for \(b\) quarks in [36]. The first study concerned the \(B_{s}\)-meson decay constant but by now the method has been applied to a variety of quantities that will be discussed in the following.
In a less direct approach, the ETM Collaboration has been using automatically O(\(a\))-improved twisted mass fermions with masses in the heavier-than-charm region, together with results in the static approximation to interpolate to the \(b\)-quark mass. The main implementation is through the ratio method [37], where suitable ratios of heavy-light quantities, for different heavy-quark mass, are constructed such that they possess a well defined static limit (typically equal to 1). \(B\)-physics quantities are then obtained as an interpolation (rather than an extrapolation) between the static value and the results of the simulations around the charm.
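The logic of the ratio method can be illustrated with the toy sketch below. The observable, the step \(\lambda\) between successive heavy-quark masses, the toy \(1/m_{h}\) dependence and the fit form are all illustrative assumptions and do not reproduce the actual ETM analysis, in which perturbative matching factors are also divided out.

```python
import numpy as np

# Toy model of a heavy-light observable Phi(m_h), which approaches a
# constant in the static limit up to 1/m_h corrections.
def phi_toy(m_h, phi_static=0.55, c1=-0.6, c2=0.3):
    return phi_static * (1.0 + c1 / m_h + c2 / m_h**2)

lam = 1.18                     # step between successive heavy-quark masses (assumption)
m_c = 1.3                      # "charm-like" starting mass in GeV (illustrative)
m_b = 4.2                      # target b-quark mass in GeV (illustrative)
K = int(np.round(np.log(m_b / m_c) / np.log(lam)))   # number of steps to reach m_b

# "Simulated" ratios y(m) = Phi(m) / Phi(m/lam) for masses around the charm,
# where lattice data would actually be generated.
masses = m_c * lam ** np.arange(1, 6)
ratios = phi_toy(masses) / phi_toy(masses / lam)

# Fit the ratios with a form constrained to the exactly known static limit y -> 1.
A = np.column_stack([1.0 / masses, 1.0 / masses**2])
coef, *_ = np.linalg.lstsq(A, ratios - 1.0, rcond=None)
y_fit = lambda m: 1.0 + coef[0] / m + coef[1] / m**2

# Chain the fitted ratios to climb from the charm region to the b-quark mass.
phi_b = phi_toy(m_c) * np.prod([y_fit(m_c * lam**k) for k in range(1, K + 1)])
print(f"interpolated Phi(m_b) = {phi_b:.4f}, exact toy value = {phi_toy(m_c * lam**K):.4f}")
```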
Results from the use of relativistic heavy-quark actions or fully relativistic actions represent the majority of recent results, including those that we are going to review here in the remaining part of this contribution.
## 3 The \(b\)-quark mass and leptonic decay constants
### \(b\)-quark mass
The \(b\)-quark mass is a parameter of QCD and hence, from the theoretical point of view, knowing its value is of fundamental importance. Lattice determinations of the \(b\)-quark mass are three times more precise than continuum ones [38] and they drive the accuracy on the parameter. In the latest edition of the FLAG review [39] the values
\[\overline{m}_{b}(\overline{m}_{b})=4.203(11)\,\mathrm{GeV}\;, \tag{4}\]
for the mass in the \(\overline{\mathrm{MS}}\) scheme (illustrated as the magenta band in Fig. 1), and
\[M_{b}^{\mathrm{RGI}}=6.934(58)\,\mathrm{GeV}\;, \tag{5}\]
for the Renormalisation Group Invariant (RGI) mass, are quoted using results with \(2+1+1\) dynamical flavours. As a remark, in the RGI case the error (which is around one percent) is dominated by the RG-evolution required in the definition and therefore by the uncertainty on the \(\Lambda\)-parameter of QCD.
Figure 1: Summary of recent results for the \(b\)-quark mass in the \(\overline{\mathrm{MS}}\) scheme for calculations including the dynamical effects of \(2+1+1\) flavours. Individual results shown are HPQCD14A [40], HPQCD14B [41], ETM16 [42], Gambino17 [43], F/M/T18 [44], and HPQCD21 [45]. The magenta band corresponds to the recommended value from the FLAG collaboration [39] based on these results.

The most recent computations of the \(b\)-quark mass entering the FLAG average for \(N_{f}=2+1+1\) are shown in Fig. 1. These results use the ratio method described above [42, 43], tune the heavy-quark mass, possibly in some conveniently defined intermediate scheme, in order to reproduce (or extrapolate to) the \(B_{(s)}\)-meson mass [44], or extract the \(b\)-quark mass from the analysis of moments of heavy current-current correlation functions [40, 41, 45]. The latter method is rather novel and has already produced some of the most precise results and hence deserves a more detailed discussion. The idea is first introduced in Ref. [40] and consists of computing moments

\[G_{n}=\sum_{t}(t/a)^{n}G(t)\;, \tag{6}\]
of the zero-momentum two-point function \(G(t)\) of the heavy pseudoscalar density \(j_{5}=am_{0h}\overline{\psi}_{h}\gamma_{5}\psi_{h}\), where \(am_{0h}\) is the bare heavy-quark mass. More precisely, in order to reduce discretisation errors, one computes the ratios \(\tilde{R}_{n}\) defined as
\[\tilde{R}_{n}=\begin{cases}G_{4}/G_{4}^{(0)}&\text{for}\;n=4\;,\\ \frac{1}{m_{0h}}\left(G_{n}/G_{n}^{(0)}\right)^{1/(n-4)}&\text{for}\;n\geq 6 \;,\end{cases} \tag{7}\]
with \(G_{n}^{(0)}\) being the lowest perturbative order in the expansion of the correlation function. The key observation is that for low values of \(n\) the moments are short-distance dominated, and they become more and more perturbative as the quark mass is increased. By matching the lattice results to the continuum perturbative prediction, one can simultaneously extract the heavy-quark mass in the \(\overline{\rm MS}\) scheme and the strong coupling \(\alpha_{s}\). The coefficients in the expansion of \(\tilde{R}_{n}\), for low \(n\), are known through order \(\alpha_{s}^{3}\)[46, 47, 48, 49, 50], and the method can therefore provide rather accurate estimates of the strong coupling and the \(b\)-quark mass.
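As an illustration of the time-moment construction, the short sketch below evaluates Eq. (6) and Eq. (7) on a toy Euclidean correlator; the correlator model, the bare-mass value and the placeholder perturbative moments \(G_{n}^{(0)}\) are all invented for illustration.

```python
import numpy as np

T, a = 96, 1.0                 # time extent and lattice spacing in lattice units
t = np.arange(T)

# Toy pseudoscalar correlator: ground state plus one excited state,
# symmetrised to mimic (anti)periodic boundary conditions.
G = 1.2 * np.exp(-0.55 * t) + 0.4 * np.exp(-0.95 * t)
G += 1.2 * np.exp(-0.55 * (T - t)) + 0.4 * np.exp(-0.95 * (T - t))

def moment(G, n, a=1.0):
    """Time moment G_n = sum_t (t/a)^n G(t), cf. Eq. (6)."""
    t = np.arange(len(G))
    return np.sum((t / a) ** n * G)

am0h = 0.6                      # bare heavy-quark mass in lattice units (illustrative)
G0 = {4: 1.0, 6: 1.0, 8: 1.0}   # placeholders for the perturbative moments G_n^(0)

# Reduced moments of Eq. (7): R_4 and R_n for n >= 6.
R = {4: moment(G, 4) / G0[4]}
for n in (6, 8):
    R[n] = (moment(G, n) / G0[n]) ** (1.0 / (n - 4)) / am0h

for n, val in R.items():
    print(f"R_{n} = {val:.3f}")
```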
From the phenomenological point of view, a precise knowledge of the \(b\)-quark mass is required for testing the properties of the SM-like Higgs particle. Its dominant decay mode is \(b\overline{b}\), with a branching ratio of about 60%. The theoretical prediction for this branching ratio depends quadratically on the \(b\)-quark mass, which therefore contributes about 2% to the total uncertainty. The signal strengths (ratios of experimental to theoretical estimates) based on CMS and ATLAS data [51, 52, 53] are consistent with the SM with an error of about 10%. The PDG [38] averages the results to a signal strength of 0.99(12), the error being currently dominated by the experimental uncertainties. Summary plots for the different production channels, taken from Refs. [51, 52], are shown in Fig. 2.
### Decay constants
Within the Weak Effective Hamiltonian framework, and neglecting electromagnetic interactions, the decay rate for the tree-level process \(B^{+}\to\ell^{+}\nu_{\ell}\) can be expressed as
\[\Gamma(B\to\ell\nu)=\frac{m_{B}}{8\pi}G_{F}^{2}f_{B}^{2}|V_{ub}|^{2}m_{\ell}^{2}\bigg{(}1-\frac{m_{\ell}^{2}}{m_{B}^{2}}\bigg{)}^{2}\,. \tag{8}\]
Similarly, the decay rate for the loop-mediated process \(B_{q}^{0}\to\ell^{+}\ell^{-}\) (\(q=d,s\)) depends on the product \(|V_{tb}^{*}V_{tq}|^{2}\) and the decay constant \(f_{B_{q}}^{2}\). These hadronic parameters are defined by the QCD matrix element (\(q=u,d,s,c\))
\[\langle 0|A_{bq}^{\mu}|B_{q}(p)\rangle=if_{B_{q}}p^{\mu}\;\;\text{and}\;A_{bq}^{ \mu}=\overline{b}\gamma_{\mu}\gamma_{5}q\;, \tag{9}\]
where the left-hand side of the first equation is exactly what is computed on the lattice. By combining experimental measurements with results for the decay constants one can therefore obtain exclusive determinations of CKM matrix elements such as \(|V_{ub}|\). The measured channel is \(B^{-}\to\tau^{-}\overline{\nu}\), with uncertainties around 20% on the branching ratio from the two experiments Belle and BaBar [54, 55]; these results show a tension slightly below two combined standard deviations. The averages of lattice results for the decay constants have an uncertainty below one percent (for \(N_{f}=2+1+1\)) [39], hence the determinations of CKM matrix elements from leptonic decays are currently dominated by the experimental error. In that sense \(B\)-meson decay constants have become a benchmark computation for new methods dealing with heavy quarks on the lattice. A much more precise exclusive estimate of \(|V_{ub}|\) is extracted from the semileptonic channel \(B\to\pi\ell\nu\), whose relevant form factors are discussed in the next Section. There, theoretical and experimental errors are of comparable size.
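To illustrate how Eq. (8) is used in practice, the small sketch below inverts it to extract \(|V_{ub}|\) from an assumed branching ratio for \(B^{-}\to\tau^{-}\overline{\nu}\); all numerical inputs are illustrative round numbers rather than current world averages.

```python
import math

# Illustrative inputs (round numbers, not current world averages).
GF    = 1.1664e-5     # Fermi constant in GeV^-2
m_B   = 5.279         # GeV
m_tau = 1.777         # GeV
f_B   = 0.190         # GeV, B-meson decay constant from the lattice (illustrative)
tau_B = 1.638e-12     # s, B^- lifetime
hbar  = 6.582e-25     # GeV*s
BR    = 1.0e-4        # assumed branching ratio for B -> tau nu (illustrative)

# Invert Eq. (8):
# BR = tau_B/hbar * GF^2/(8 pi) * f_B^2 |V_ub|^2 m_B m_tau^2 (1 - m_tau^2/m_B^2)^2
phase_space = (1.0 - m_tau**2 / m_B**2) ** 2
gamma_over_Vub2 = GF**2 / (8.0 * math.pi) * f_B**2 * m_B * m_tau**2 * phase_space
Vub = math.sqrt(BR * hbar / (tau_B * gamma_over_Vub2))
print(f"|V_ub| = {Vub:.2e}")
```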
QED and radiative corrections to leptonic decays of \(B\) mesons are of interest as they may be enhanced. Refs. [56, 57] discuss terms of \({\rm O}(m_{b}/\Lambda)\) and logarithmic enhancements, for \(B_{(s)}\to\mu^{+}\mu^{-}\). In general one expects large collinear logarithms of the form \(\log\left(m_{b}/m_{\ell}\right)\) because of the different scales involved. It would obviously be very important to confirm such effects in a fully non-perturbative lattice computation. The situation is not as advanced as for pions and kaons where
the strategy put forward in Ref. [58] could be used. This method relies on the use of a point-like approximation in part of the computation (the real photon emission), which is equivalent to neglecting structure-dependent contributions. Such contributions are large for heavy mesons, as shown in Refs. [59, 60, 61], where radiative decays of mesons have been studied on the lattice. The effect should be even more pronounced for \(B\) mesons, in which case one expects a faster deterioration of the noise-to-signal ratio for the relevant correlation functions [61] in Euclidean time.

Figure 2: Signal strengths for \(H\to b\overline{b}\) from ATLAS (left) and CMS (right). Figure from Refs. [51, 52].
## 4 Exclusive semileptonic decays
Differential decay rates for semileptonic decays of a heavy hadron into a lighter hadron are commonly parameterised as a sum of products of non-perturbative form factors and known kinematic factors, schematically

\[\frac{d\Gamma}{dq^{2}}\propto\sum_{i}k_{i}(q^{2})\,|f_{i}(q^{2})|^{2}\;, \tag{10}\]

where \(q^{2}\) is the momentum transfer onto the lepton pair. Depending on whether the process is tree-level or loop-induced, whether the final state is a pseudoscalar (PS) or a vector (V) state, and whether it is a baryonic or a mesonic decay, the number of form factors that are summed over differs. For the simplest case of a mesonic tree-level PS \(\to\) PS transition (e.g. \(B\to\pi\ell\nu\)) there are two form factors (\(f_{+}\), \(f_{0}\)); for a loop-level induced PS \(\to\) PS transition (e.g. \(B\to K\ell^{+}\ell^{-}\)) there are three form factors (\(f_{+}\), \(f_{0}\), \(f_{T}\)). For tree-level induced PS \(\to\) V decays (e.g. \(B\to D^{*}\ell\nu\)) there are four and for loop-level induced PS \(\to\) V decays (e.g. \(B\to K^{*}\ell^{+}\ell^{-}\)) there are seven. Depending on the decay, there are also kinematic constraints, relating form factors at particular values of \(q^{2}\) to each other.
### Challenges for form factors
The data analysis relating simulated correlation functions to physical form factors is lengthy and challenging. Whilst it is in principle understood how to perform the simulations required for the prediction of semileptonic form factors, it is costly to generate data sets which enable data-driven control (guided by theory) over the required extrapolations listed in the following:
_Excited states:_ On the lattice, form factors can be related to Euclidean matrix elements of some current between the initial and final state and are extracted as the ground-state matrix element of a three-point function. Accurately isolating the correct matrix element is a trade-off between sufficiently large source-sink separations, allowing the excited-state contributions to be exponentially suppressed, and small enough separations to maintain good control over statistical uncertainties. Based on chiral perturbation theory (\(\chi\)PT), Ref. [62] recently advocated that the first excited-state contribution to this type of three-point function stems from low-lying multi-hadron states and might therefore be hard to disentangle from the desired signal. As an example, JLQCD in their computation of form factors for semileptonic \(B\to D^{*}\ell\nu\) transitions [63], which we discuss in detail later, shows results for the three-point functions for different source-sink separations, concluding that results stabilise for a separation of about 1.5 fm.
_(Heavy-quark)-chiral-continuum extrapolation:_
Typical calculations of these form factors either take place at the physical \(b\)-quark mass using an effective-action-inspired discretisation, or at lighter-than-physical heavy-quark masses. All of the results currently available in the literature utilise ensembles with heavier-than-physical light-quark masses, so that a chiral extrapolation is required in addition to the continuum limit. These extrapolations are typically guided by heavy meson \(\chi\)PT (HM\(\chi\)PT) and therefore rely on inputs stemming from the physical world (at physical quark masses) and on low energy constants (LECs), which have to be determined. This makes controlled predictions very challenging when simulations take place at heavier-than-physical light-quark masses and lighter-than-physical heavy-quark masses.
_Kinematic coverage:_ Accessible Fourier momenta are of the form \(\vec{p}=\frac{2\pi}{L}\vec{n}\), where \(\vec{n}\) is a vector of integers, and \(a|\vec{p}|\) is required to remain small
in order to control discretisation effects. Due to the heavy \(b\)-quark mass, it is not typically possible to cover the kinematically allowed range \(q^{2}\in[0,(m_{H}-m_{h})^{2}\equiv q_{\rm max}^{2}]\) whilst maintaining control over discretisation effects. As a consequence, current calculations only cover a portion \([q_{\rm min,dat}^{2},q_{\rm max}^{2}]\) of the kinematic range at physical kinematics.4 This necessitates extrapolations of the lattice form factors over the full kinematical range.
Footnote 4: At unphysical kinematics, typically for \(m_{h}\ll m_{b}\), some calculations cover the kinematic range corresponding to the choice of simulated masses.
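The coverage issue can be made concrete with the sketch below for a \(B\to\pi\)-like decay: it lists the \(q^{2}\) values reachable with the lowest few Fourier momenta in a typical box and compares them with \(q^{2}_{\rm max}\). The box size, lattice spacing and the cut on \(a|\vec{p}|\) are illustrative choices.

```python
import numpy as np

hbar_c = 0.1973            # GeV*fm
L_fm, a_fm = 5.5, 0.09     # box size and lattice spacing (illustrative)
m_B, m_pi = 5.280, 0.140   # GeV

q2_max = (m_B - m_pi) ** 2
p_unit = 2.0 * np.pi / (L_fm / hbar_c)    # unit of Fourier momenta in GeV

print(f"q^2_max = {q2_max:.1f} GeV^2")
for n2 in range(0, 10):                   # |n|^2 of the integer momentum vector
    p = p_unit * np.sqrt(n2)
    if p * a_fm / hbar_c > 0.5:           # rough cut a|p| < 0.5 to tame cutoff effects
        break
    E_pi = np.sqrt(m_pi**2 + p**2)        # final-state pion energy (B at rest)
    q2 = (m_B - E_pi) ** 2 - p**2         # momentum transfer to the lepton pair
    print(f"|n|^2 = {n2}:  |p| = {p:.2f} GeV,  q^2 = {q2:.1f} GeV^2")
```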
### \(z\)-expansions and unitarity constraints
The extrapolations over the full kinematic range are typically carried out as a model-independent \(z\)_-expansion_, a conformal mapping based on Ref. [64] (or variants thereof [65, 66]). After removing terms corresponding to sub-threshold poles (\(P_{X}\)) and an appropriate normalisation (\(\phi_{X}(q^{2})\)), the form factor is expressed as a polynomial in the conformal variable \(z\),
\[P_{X}(q^{2})\phi_{X}(q^{2})f_{X}(q^{2})=\sum_{n=0}^{\infty}a_{n}z^{n}. \tag{11}\]
Since the conformal variable satisfies \(|z|<1\), this is a convergent series that, for a given truncation, can then be fitted to available lattice data. Based on dispersion relations, a unitarity bound \(\sum|a_{n}|^{2}\leq 1\) can be derived.5 Whether the unitarity bound is satisfied can either be checked _a posteriori_ or the constraint can be imposed directly in the fit, either via the dispersive matrix method [69] or via a recently proposed Bayesian Inference framework [68]. The latter allows to trade truncation effects for statistical noise encoding our ignorance of the neglected higher order terms, which are however regulated by the unitarity constraint. In general, multiple decay channels with the same quark-level transition can be incorporated into the same unitarity constraint, strengthening the constraints on \(a_{n}\) coefficients. As an example, since the current in \(B\to\pi\ell\nu\) and in \(B_{s}\to K\ell\nu\) transitions is the same, the unitarity constraint involves a combination of the corresponding form factors, which should therefore be fitted simultaneously. The unitarity bound will be closer to saturation (compared to imposing the constraint on the individual decay channels), resulting in less freedom and hence tighter constraints on the higher order \(z\)-expansion coefficients. This can be systematically improved by including further decays of the same current (e.g. \(\Lambda_{b}\to p\ell\nu\), \(B_{s}\to K^{*}\ell\nu\),...).
Footnote 5: Depending on the pole structure of the decay, the exact form of this bound can vary [67, 68].
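The bare bones of such a fit are sketched below: the mapping from \(q^{2}\) to the conformal variable, a weighted fit of a truncated polynomial to synthetic values of the combination \(P\phi f\), and an a posteriori check of the unitarity sum. The threshold, the choice of \(t_{0}\), the trivial outer function and the synthetic data are all illustrative simplifications.

```python
import numpy as np

m_B, m_pi = 5.280, 0.140
t_plus = (m_B + m_pi) ** 2                 # pair-production threshold
t_0 = 0.65 * t_plus                        # free parameter of the mapping (assumption)

def z(q2):
    """Conformal mapping q^2 -> z, with |z| < 1 below threshold."""
    a = np.sqrt(t_plus - q2)
    b = np.sqrt(t_plus - t_0)
    return (a - b) / (a + b)

# Synthetic "lattice" values of P(q2)*phi(q2)*f(q2) at a few reference q^2 points;
# here the pole factor and outer function are simply set to 1.
q2_ref = np.array([19.0, 22.6, 26.4])
f_ref = np.array([0.0121, 0.0102, 0.0078])
sigma = np.array([0.0003, 0.0004, 0.0008])

# Truncated expansion sum_n a_n z^n, fitted by weighted least squares.
N = 3
Z = np.vander(z(q2_ref), N, increasing=True)   # columns 1, z, z^2
W = np.diag(1.0 / sigma**2)
a_n = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ f_ref)

print("coefficients a_n:", np.round(a_n, 4))
print("unitarity sum |a_n|^2 =", float(np.sum(a_n**2)), "(should be <= 1)")
```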
### Analysis strategies
Most collaborations perform their analyses in a multi-step procedure, consisting of
1. The extraction of the desired masses and hadronic matrix elements from the correlation function data.
2. The extrapolation to zero lattice spacing and physical quark masses and a continuous description of the momentum transfer in the kinematic range covered by the data. The fit is then evaluated at a number of _reference \(q^{2}\)-values_ and a full error budget including the correlations between the form factors at the reference values is assembled.
3. An extrapolation over the full kinematic range, typically using a model-independent \(z\)-extrapolation.
Some collaborations have advocated combining the second and the third step into the _modified \(z\)-expansion_[70] in which the coefficients of the \(z\)-expansion are changed into functions of the lattice spacing and the quark masses. However, since this is no longer based on an underlying effective field theory, this approach might introduce model dependences, which are hard to quantify with current data.
In the following, we restrict the discussion to form factor calculations with results of multiple collaborations available and cases where tensions have been observed. In particular we will address \(B_{(s)}\to\) PS and \(B_{(s)}\to\) V tree-level decays that can be used for the determination of CKM matrix elements and which tend to be most mature.
### Tree-level \(B_{(s)}\to\) PS decays
Results for \(B\to\pi\ell\nu\) form factors covering part of the kinematic range exist from JLQCD [71] (JLQCD22), RBC/UKQCD [72] (RBCUKQCD15)
and Fermilab/MILC [73] (FNALMILC15).6 For \(B_{s}\to K\ell\nu\), results from RBC/UKQCD [76] (superseding Ref. [72]), FNAL/MILC [77], and HPQCD [78] exist. Ref. [71] features domain wall fermions for all quarks and extrapolates from \(m_{c}\leq m_{h}\lesssim 2.44m_{c}\) to the physical \(b\)-quark mass. All other results utilise effective action approaches for the \(b\) quark. Figure 3 summarises the status for \(B\to\pi\), in the range where lattice data typically exist. One can clearly see a tension near \(q_{\rm max}^{2}\) (the vertical black dashed line) for the \(f_{0}\) form factor. This tension is more severe than is apparent from a plot, since the reference values underlying the individual datasets are highly correlated. When attempting to jointly fit synthetic data points for these results, the FLAG collaboration reports \(\chi^{2}/{\rm dof}=43.6/12\)[39], necessitating an error inflation by the PDG scale factor \(\sqrt{\chi^{2}/{\rm dof}}\). The situation is similar for \(B_{s}\to K\ell\nu\) as is demonstrated in Ref. [68]7, where the world data cannot be jointly fitted with an acceptable \(p\)-value and \(\chi^{2}/{\rm dof}=3.89\) is the best value that can be achieved. A possible explanation for some of the discrepancies between results, stemming from the assignment of external parameters to the pole structure imposed in the HM\(\chi\)PT has been put forward in Ref. [68], but further independent computations are needed to confirm this.
Footnote 6: The result by HPQCD [75] uses gauge field configurations which are also part of FNAL/MILC15 calculation but span a smaller range in the key parameters such as the lattice spacing and pion masses.
Footnote 7: The FLAG report does not yet include the recent RBC/UKQCD23 [76] result.
For the \(b\to c\) transitions \(B_{(s)}\to D_{(s)}\) there are fewer independent results. \(B\to D\) form factors have been published by HPQCD [79] and FNAL/MILC [80]. The calculations based on two and four lattice spacings respectively are compatible. The FNAL/MILC result dominates the combined fit, but due to overlapping gauge field configurations (the \(N_{f}=2+1\) asqtad ensembles) between the two computations these results are expected to be statistically correlated.
Two results by the HPQCD collaboration [81; 82] exist for \(B_{s}\to D_{s}\) form factors. The former is based on the \(N_{f}=2+1\) asqtad ensembles with an NRQCD \(b\) quark and a HISQ \(c\) quark, the latter on the \(N_{f}=2+1+1\) HISQ ensembles with HISQ for all quark flavours. Whilst the latter result covers the entire kinematic range at unphysical kinematics, this is not the case at the physical value of the \(b\)-quark mass. To predict form factors at physical masses, an extrapolation of the form factors in the heavy-quark mass is required. This is complicated since the momentum transfer \(q^{2}\) is also a function of the heavy-quark mass. Ref. [82] uses a modified \(z\)-expansion to simultaneously perform the extrapolations to physical light and heavy-quark masses, zero lattice spacing and a continuous description of the momentum transfer \(q^{2}\) in the physical range.
Additional independent results for all of the above would be very desirable in order to address tensions (where present) and to test the choices
that have been made in the literature. Computations are ongoing by several collaborations, in particular by the RBC/UKQCD [83, 84], the JLQCD collaboration [85] and the FNAL/MILC collaboration [86].

Figure 3: Comparison plot of recent results for \(B\to\pi\) form factors from Refs. [71; 72; 73]. The individual data point HPQCD16 [74] stems from an \(N_{f}=2+1+1\) calculation, but only provides \(f_{0}(q_{\rm max}^{2})\) as a result.
### \(B_{(s)}\to D_{(s)}^{*}\) decays
In recent years the process \(B\to D^{*}\ell\nu\) (as well as \(B\to D\ell\nu\), covered in the previous section) has received much attention, due to long-standing tensions between experiment and theory concerning these decays [87]. In 2021 FNAL/MILC [88] produced the first comprehensive calculation of the four form factors involved away from the zero-recoil point. In 2023 two additional results appeared (HPQCD [89] and JLQCD [63]), which, at the time of writing this report, are only available as preprints. Due to the interest in these very recent results, we compare the three calculations in detail in the following.
_Lattice set-ups:_ The FNAL/MILC [88] work uses \(15\;N_{f}=2+1\) ensembles with the asqtad fermion action with five lattice spacings \(a\in[0.045,0.15]\,\mathrm{fm}\) and a range of (root-mean-square) pion masses8 down to \(m_{\pi}^{RMS}\sim 250\,\mathrm{MeV}\). Both \(b\) and \(c\) quarks are simulated with the Fermilab action. The HPQCD [89] computation uses five \(N_{f}=2+1+1\) ensembles with the HISQ action for all sea and valence quarks, including three lattice spacings \(a\in[0.044,0.090]\,\mathrm{fm}\) and two ensembles with physical Goldstone-pion masses. The JLQCD [63] calculation uses nine \(N_{f}=2+1\) ensembles with the domain wall action for all quarks and three lattice spacings \(a\in[0.04,0.08]\,\mathrm{fm}\). Simulations take place in the range \(226\,\mathrm{MeV}\lesssim m_{\pi}\lesssim 500\,\mathrm{MeV}\). HPQCD and JLQCD simulate a range of heavy-quark masses below the physical \(b\)-quark mass with the constraint \(am_{q}\leq 0.7\) and \(0.8\), respectively, and therefore require an extrapolation in the heavy-quark mass.
Footnote 8: For staggered quarks several light states have the quantum numbers of the pion. Their masses differ by _taste-breaking_\(O(a^{2})\) effects. The lightest state is called Goldstone-pion.
Heavy-to-heavy transitions are often described as a function of the kinematic variable \(w=v_{B}\cdot v_{D^{*}}\). Since all the mentioned works take place in the \(B\)-meson rest-frame this reduces to \(w=E_{D^{*}}/M_{D^{*}}\). The range of this parameter approximately corresponding to the semileptonic range is \(w\in[1,1.5]\). The FNAL/MILC and JLQCD results cover the range from \(w=1\) to \(w\sim 1.175\) and \(1.1\), respectively, on all their ensembles. The HPQCD result covers the range from \(w=1\) to \(1.05\), \(1.20\), and \(1.39\) on their \(a\approx 0.09,0.06\), and \(0.044\,\mathrm{fm}\) ensembles, respectively.
_Lattice to continuum:_ Each of the three works perform a fit to simultaneously extrapolate to the continuum and to physical quark masses and to obtain a continuous description of the form factors as a function of \(w\).
All three collaborations use \(\chi\)PT (or staggered variants of this) to extrapolate to the physical pion mass and describe the kinematic behaviour as an expansion in \((w-1)^{k}\) up to quadratic (JLQCD, FNAL/MILC) or cubic (HPQCD) order.
FNAL/MILC includes discretisation effects of order \(\alpha_{s}a\Lambda\), \((a\Lambda)^{2}\) and \((a\Lambda)^{3}\). JLQCD accounts for terms \((a\Lambda)^{2}\) and \((am_{q})^{2}\) since due to the use of domain wall fermions discretisation effects from odd powers of the lattice spacing are absent (up to terms proportional to the residual mass which are negligible). JLQCD's heavy quark extrapolation is performed by first dividing out the matching factor between HQET and QCD and then parameterising this result in powers of \(1/m_{h}\) up to order 1. HPQCD parameterises discretisation effects and the heavy-quark mass dependence as products of \((am_{c})^{2i}(am_{h})^{2j}[(\Lambda/M_{H_{s}})^{k}-(\Lambda/M_{B_{s}})^{k}]\) for \(i,j,k=0,1,2,3\). One such term is present for each order of \((w-1)^{n}\) for \(n=0,1,2,3\).
Per form factor, the resulting fits typically have 8 parameters for JLQCD, 10 parameters for FNAL/MILC and more than 250 parameters for HPQCD. JLQCD performs the fits as frequentist \(\chi^{2}\)-minimisations, whilst FNAL/MILC and HPQCD use a Bayesian framework with Gaussian priors for their fit parameters.
One caveat concerning the HPQCD computation [89] is that for each form factor the \(B\to D^{*}\) and \(B_{s}\to D_{s}^{*}\) data are fitted jointly, imposing that all the fit parameters are the same. The only freedom for the fit to distinguish between \(B\to D^{*}\) and \(B_{s}\to D_{s}^{*}\) is by some of these parameters multiplying an SU(3) breaking term. Since Ref. [89] does not present fits to the individual channels it is not possible to quantify the impact of this choice. Ref. [89] however compares the result obtained for \(B_{s}\to D_{s}^{*}\) from the simultaneous fit to \(B\to D^{*}\) and \(B_{s}\to D_{s}^{*}\)[89] to the previous computation
of only \(B_{s}\to D_{s}^{*}\) form factors [90] in the helicity basis, which will be described below.
\(z\)_-expansions:_ Fermilab/MILC and JLQCD perform a two-step analysis, evaluating their chiral-continuum-heavy-quark extrapolation at three values of \(w\) in the range of data where the simulations took place (\(w_{\rm ref}^{\rm FNAL/MILC}\in\{1.03,1.10,1.17\}\), \(w_{\rm ref}^{\rm JLQCD}\in\{1.025,1.060,1.100\}\)). They each assemble a full error budget for all four form factors at these data points and quantify all correlations. These reference values serve as inputs for a subsequent \(z\)-expansion, which extrapolates the form factors over the full kinematic range.
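A compact illustration of how such correlated reference values can be assembled is given below: starting from (invented) best-fit parameters and their covariance, linear error propagation through the Jacobian yields the form factor at the reference \(w\) values together with its full covariance matrix, which would then be passed on to the \(z\)-expansion. All numbers are illustrative.

```python
import numpy as np

# Invented result of a chiral-continuum-kinematic fit:
# f(w) = p0 * (1 + p1*(w-1) + p2*(w-1)^2), with parameter covariance cov_p.
p = np.array([0.905, -0.60, 0.10])
cov_p = np.array([[ 1.5e-4, -4.0e-4,  2.0e-4],
                  [-4.0e-4,  6.0e-3, -3.0e-3],
                  [ 2.0e-4, -3.0e-3,  9.0e-3]])

def f_model(p, w):
    dw = w - 1.0
    return p[0] * (1.0 + p[1] * dw + p[2] * dw**2)

w_ref = np.array([1.03, 1.10, 1.17])

# Jacobian d f(w_ref) / d p for linear error propagation.
dw = w_ref - 1.0
J = np.column_stack([1.0 + p[1] * dw + p[2] * dw**2,   # df/dp0
                     p[0] * dw,                        # df/dp1
                     p[0] * dw**2])                    # df/dp2

f_ref = f_model(p, w_ref)
cov_ref = J @ cov_p @ J.T        # full covariance of the reference values

print("reference values:", np.round(f_ref, 4))
corr = cov_ref / np.sqrt(np.outer(np.diag(cov_ref), np.diag(cov_ref)))
print("correlation matrix:\n", np.round(corr, 3))
```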
HPQCD provides results for a \(z\) expansion, but does not clarify what input data was used for this \(z\)-expansion. It is unclear whether the fit was performed to reference \(w\)-values, and if so, how they were chosen, how the full error budget for these reference values is computed and what the correlations between these values are.
Since all three collaborations use the same conventions for the \(z\)-expansion, the results can be directly compared and the coefficients from the three collaborations are listed in Table 1 and displayed in Figure 4.9 There are clearly several tensions between the results for the lower order coefficients which will need to be understood.
Footnote 9: One minor difference is however, that JLQCD enforces the kinematic constraint relating \({\cal F}_{1}\) and \({\cal F}_{2}\) at \(w_{\rm max}\), whilst the other two collaborations check it _a posteriori_.
\(B_{s}\to D_{s}^{*}\) _form factor results by HPQCD:_ We now turn our attention to the comparison of the results for the \(B_{s}\to D_{s}^{*}\) form factors from the two HPQCD calculations from 2023 [89] and 2021 [90].
The only difference in the ensembles underlying this dataset is one additional ensemble in [89] at the intermediate lattice spacing with physical pion mass providing additional data in the range \(w\in[1.0,1.2]\). Since there are no valence light quarks in the decay \(B_{s}\to D_{s}^{*}\) this is not expected to significantly alter the result. Furthermore, stemming from largely the same ensembles, the two results are highly statistically correlated, so that any discrepancies between the results are expected to stem from systematic uncertainties.
The form factors \(f\) and \(g\) largely remain within one standard deviation, but with a reduction of the quoted uncertainties by a factor of approximately 3.5. A similar reduction of uncertainties is quoted for the form factor \({\cal F}_{1}\). Whilst this form factor agrees with the previous result near \(w\sim 1\), the slope with \(w\) is very different and in the intermediate \(w\) region, there is an approximately \(2\sigma\) tension between the two results. For the final form factor \({\cal F}_{2}\) the error reduction is less significant, but the slope near \(w\approx 1\) is different between the two calculations.
The authors of Ref. [89] explain the observed differences with the change in analysis strategy from an expansion in powers of the conformal variable \(z\) to an expansion in the variable \((w-1)\) and by the joint fitting of \(B\to D^{*}\) and \(B_{s}\to D_{s}^{*}\) data. Since the joint fits in Ref. [89] are dominated by the statistically more precise \(B_{s}\to D_{s}^{*}\) data, the latter seems unlikely to explain the differences. According to Ref. [89], the main contribution to the total uncertainty is statistical, with systematic uncertainties accounting for at most 15%, 10%, 25%, and 20% of the variance of \(f\), \(g\), \({\cal F}_{1}\), and \({\cal F}_{2}\), respectively. Further investigations are required to understand the differences between Refs. [89] and [90]. It would be interesting if correlated differences between the two results could be produced.
### Desirable benchmark quantities, comparisons and checks
The extrapolations required to predict form factors that can be related to the physical world from the underlying lattice data points are more complex than those for quantities that do not depend
on a kinematic variable. This is mainly due to the fact that the kinematically allowed range (\(q_{\rm max}^{2}\) or equivalently \(w_{\rm max}\)) changes as a function of the mass of the initial and the final states. This is further aggravated by the increase of statistical noise as the heavy-quark mass becomes heavier and as larger momenta are induced. In particular when simulating multiple heavy-quark masses, this often leads to a comparably small number of precise data points (at small \(m_{h}\), small \(\vec{p}\)) and a lot of data points with sometimes orders of magnitude larger uncertainties (large \(m_{h}\), large \(\vec{p}\)). These in turn need to be described by (often complicated) multidimensional fits. As a result a small portion of data points drive the fit, typically those furthest away from the desired physical parameters. The fit functions that are employed often have many parameters. In order to assess the weight of the different data in the fit and to determine the relevant fit parameters, it could help to \(i\)) fit only the less precise data, to see what their effect is, and/or \(ii\)) start by only including the most precise data (and therefore using simple fit ansätze) and then adding less precise data until results stabilise. This would provide an assessment of which portions of the covered parameter space constrain the fit. On a related note we remark that whilst data points with very large uncertainties (relative to the majority of the data) do not significantly contribute to the \(\chi^{2}\), they alter the interpretation of how good a particular fit result is, by reducing the \(\chi^{2}\)/dof without adding much information. In the extreme case where the relative uncertainty of data points differs by orders of magnitude, it might be worthwhile to develop a notion of _effective degrees of freedom_.

Figure 4: Comparison of the \(z\)-expansion coefficients for the \(B\to D^{*}\) form factors \(g\), \(f\), \({\cal F}_{1}\) and \({\cal F}_{2}\) obtained by FNAL/MILC (blue circles), HPQCD (red triangles), and JLQCD (green squares).
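Suggestion \(ii\)) above is easy to automate; the sketch below orders an invented data set by precision, enlarges the fitted subset step by step and monitors the stability of the fit parameters and of \(\chi^{2}/\)dof.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented data set with uncertainties spanning two orders of magnitude,
# mimicking precise low-momentum points and noisy high-momentum ones.
x_true = np.linspace(0.0, 1.0, 20)
sigma = 0.01 * 10 ** (2.0 * x_true)            # precision degrades with x
y = 1.0 + 0.5 * x_true + sigma * rng.standard_normal(len(x_true))

def weighted_fit(x, y, s):
    """Weighted linear fit y = a + b*x; returns parameters and chi^2/dof."""
    W = 1.0 / s**2
    A = np.column_stack([np.ones_like(x), x])
    p = np.linalg.solve(A.T @ (W[:, None] * A), A.T @ (W * y))
    chi2 = np.sum(W * (y - A @ p) ** 2)
    return p, chi2 / max(len(x) - 2, 1)

# Add data in order of increasing uncertainty and monitor fit stability.
order = np.argsort(sigma)
for n in range(4, len(x_true) + 1, 4):
    idx = order[:n]
    p, chi2dof = weighted_fit(x_true[idx], y[idx], sigma[idx])
    print(f"n = {n:2d}  slope = {p[1]:+.3f}  chi2/dof = {chi2dof:.2f}")
```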
Given the tensions in several form factor computations, it would be desirable to devise some easier-to-compute benchmark quantities that all collaborations could provide, somewhat analogous to the window quantities in the hadronic vacuum polarisation contribution to the anomalous magnetic moment of the muon [91]. These should be designed to allow for comparisons between different computations with reduced sources of systematic uncertainties. Particularly well suited are quantities which disentangle the effects of the different dimensions of the fit, such as the chiral, continuum, kinematic and heavy quark extrapolations. We conclude this section with the incomplete list of suggested benchmark quantities below:
* A full error budget for the form factors \(f_{0}(q_{\rm max}^{2})\) for PS \(\to\) PS transitions and \(f(w=1)\) for PS \(\to\) V transitions solely based on the zero momentum data points. Since all hadrons are at rest, these can be obtained from the most precise data points and no interpolation in the final state energy is required.
* Generalising the previous point: the form factor at kinematic reference points. Whilst several collaborations tend to provide these as input for subsequent kinematic extrapolations, they are typically obtained from a joint fit of all simulated momentum values. Performing such checks only from data in the vicinity of the kinematic reference point would allow more direct comparisons which do not rely on the broader choice of kinematic description of the data. For example, differences between choosing a modified \(z\) expansion, an expansion in \((w-1)\) or an expansion in the final state energy could be assessed. It would also shed light on how much of the information content stems from simulated data near the kinematic reference point as opposed to potentially more precise data which is kinematically far away.
* For simulations which take place at lighter-than-physical heavy-quark masses, it would be valuable to perform the chiral-continuum-kinematic extrapolation at fixed (unphysical) heavy-quark mass. If data was generated with this in mind, one could simulate data at, for example, half the \(b\)-quark mass and make predictions of the form factors at this choice of kinematics. This would remove the heavy-quark extrapolation dimension from the fit and hence disentangle the two most complicated extrapolations: the heavy-quark mass extrapolation and the continuum limit. Furthermore, since the simulated heavy-quark mass in lattice units is smaller, better control over the continuum limit would be expected. These results could be compared between different collaborations.
* Assessment of the contamination of ground state matrix elements and masses by excited states is a recurring feature of lattice QCD calculations. The excited state energies obtained from correlation function fits are physical quantities and should (up to discretisation effects) agree for different fermion formulations at equivalent quark masses. If this data was included in publications, it would help to build confidence in the absence of excited state contamination or to understand their nature.
* In connection with the discussion on the varying size of statistical uncertainties, it would be valuable if the statistical correlations between all data points stemming from the same ensemble were given. Since this is a vital ingredient of the fit extrapolating to physical parameters, this information is required in order to reproduce results. Furthermore, these correlations should be universal (up to discretisation effects), a fact that could be checked if these results were provided by all collaborations.

\begin{table}
\begin{tabular}{c c c c} \hline \hline Coeffs & HPQCD & JLQCD & FNAL/MILC \\ \hline \(a_{0}^{g}\) & \(0.0312(15)\) & \(0.0291(18)\) & \(0.0330(12)\) \\ \(a_{1}^{g}\) & \(-0.088(52)\) & \(-0.045(35)\) & \(-0.156(55)\) \\ \(a_{2}^{g}\) & \(-0.07(95)\) & \(-1.0(1.7)\) & \(-0.12(98)\) \\ \hline \(a_{0}^{f}\) & \(0.01212(14)\) & \(0.01198(19)\) & \(0.01229(23)\) \\ \(a_{1}^{f}\) & \(-0.003(19)\) & \(0.018(11)\) & \(-0.003(12)\) \\ \(a_{2}^{f}\) & \(-0.10(63)\) & \(-0.10(45)\) & \(0.07(53)\) \\ \hline \(a_{0}^{{\cal F}_{1}}\) & \(0.002032(24)\) & \(0.002006(31)\) & \(0.002059(38)\) \\ \(a_{1}^{{\cal F}_{1}}\) & \(-0.0102(43)\) & \(0.0013(41)\) & \(-0.0058(25)\) \\ \(a_{2}^{{\cal F}_{1}}\) & \(-0.048(96)\) & \(-0.03(21)\) & \(-0.013(91)\) \\ \hline \(a_{0}^{{\cal F}_{2}}\) & \(0.0421(26)\) & \(0.0484(16)\) & \(0.0509(15)\) \\ \(a_{1}^{{\cal F}_{2}}\) & \(-0.257(95)\) & \(-0.059(87)\) & \(-0.328(67)\) \\ \(a_{2}^{{\cal F}_{2}}\) & \(0.05(98)\) & \(-0.9(1.1)\) & \(-0.02(96)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: \(z\)-expansion coefficients quoted by the three collaborations.
## 5 Neutral meson mixing
Neutral mesons such as the \(B_{q}^{0}\) (\(q=d,s\)) mix with their antiparticles \(\bar{B}_{q}^{0}\) through box diagrams such as the one displayed in Figure 5. Due to this mixing, the mass and flavour eigenstates of the \(B_{q}-\bar{B}_{q}\) system do not coincide, resulting in experimentally measurable mass and width differences. Since the box diagrams are top-quark and therefore short distance dominated, the relevant non-perturbative matrix elements are calculable on the lattice. An operator product expansion of the box diagrams above yields 5 independent parity-even dimension-6 operators, whose matrix elements can be computed in LQCD.
For historical reasons, the matrix elements under consideration are typically cast into the form of _bag parameters_\({\cal B}_{B_{q}}^{(i)}\) where \(i=1,...,5\) and \(q=d,s\), quantifying the departure of the matrix element from the vacuum saturation approximation (VSA). These are defined as
\[{\cal B}_{B_{q}}^{(i)}(\mu)=\frac{\left\langle B_{q}\right|{\cal O}_{i}(\mu) \left|\bar{B}_{q}\right\rangle}{f_{B_{q}}^{2}M_{B_{q}}^{2}\eta_{q}^{(i)}(\mu)}\,, \tag{12}\]
where the \(\eta_{q}^{(i)}\) ensure that the expression is unity in the VSA. For phenomenological applications, typically the quantities \(f_{B_{q}}\sqrt{{\cal B}_{B_{q}}^{(i)}}\) are of relevance, but depending on the application, the bag parameters or the matrix elements themselves are also of interest.
In the SM, the experimentally measured mass difference \(\Delta m_{q}\) is related to \(f_{B_{q}}^{2}{\cal B}_{B_{q}}^{(1)}\) by known multiplicative factors and the product of CKM matrix elements \(V_{tq}V_{tb}^{*}\) (\(q=d,s\)), and hence precise knowledge of the non-perturbative inputs enables the determination of CKM matrix element containing the top quark and thereby contribute to tests of CKM unitarity. Since the non-perturbative matrix elements are independent of the UV-properties of the theory under consideration they can also be used to probe BSM models. Finally, several theoretical uncertainties cancel in the SU(3)-breaking ratios \({\cal B}_{B_{s}}^{(1)}/{\cal B}_{B_{d}}^{(1)}\) and \(\xi\)
\[\xi\equiv\frac{f_{B_{s}}\sqrt{{\cal B}_{B_{s}}^{(1)}}}{f_{B_{d}}\sqrt{{\cal B}_{B_{d}}^{(1)}}}\propto\sqrt{\frac{\Delta m_{s}}{\Delta m_{d}}}\left|\frac{V_{td}}{V_{ts}}\right|\,, \tag{13}\]
so that non-perturbative determinations of this quantity provide more stringent bounds on \(|V_{td}/V_{ts}|\).
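As a simple illustration of how Eq. (13) is used, the arithmetic below combines an assumed lattice value of \(\xi\) with illustrative values of the measured mass differences and of the meson masses to obtain \(|V_{td}/V_{ts}|\); all inputs are round numbers for illustration only.

```python
import math

# Illustrative inputs (round numbers, not current world averages).
xi = 1.20                # SU(3)-breaking ratio from the lattice (assumed)
dm_d = 0.51              # ps^-1, measured B_d mass difference
dm_s = 17.8              # ps^-1, measured B_s mass difference
M_Bs_over_M_Bd = 5366.9 / 5279.7   # ratio of meson masses

# Delta m_s / Delta m_d = (M_Bs/M_Bd) * xi^2 * |V_ts/V_td|^2
# (short-distance factors cancel in the ratio), hence:
Vtd_over_Vts = xi * math.sqrt(dm_d / dm_s * M_Bs_over_M_Bd)
print(f"|V_td / V_ts| = {Vtd_over_Vts:.3f}")
```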
Full results for all 5 operators (and in both systems) are available from ETM [92] (\(N_{f}=2\)), FNAL/MILC [93] (\(N_{f}=2+1\)) and HPQCD [94]
(\(N_{f}=2+1+1\)). To date, the RBC/UKQCD collaboration computed the ratios \(f_{B_{s}}/f_{B}\), \({\cal B}^{(1)}_{B_{s}}/{\cal B}^{(1)}_{B_{d}}\) and \(\xi\) from \(N_{f}=2+1\) QCD [95]. One difference between these computations is which quantities are directly computed and which ones are inferred using available results for \(f_{B_{q}}\) from the literature. For maximally exploitable phenomenological applications of these results, it would be desirable to have access to \({\cal B}^{(i)}_{B_{q}}\) and \(f_{B_{q}}\sqrt{{\cal B}^{(i)}_{B_{q}}}\) from the same computation, in order to be able to quantify all correlations between these observables.

Figure 5: Example box diagram mediating neutral meson mixing.
Table 2 summarises some of the key properties of these results, which are highly complementary: In addition to differences in the choice of sea, valence light, and valence heavy quark actions, the range of lattice spacings and simulated pion masses differs significantly. When chiral symmetry is maintained, the five bag parameters mix in a continuum-like fashion under renormalisation, so the renormalisation pattern is block diagonal. In the formulations used by the ETM and the RBC/UKQCD collaborations, this property is preserved in the lattice simulation and hence simplifies the renormalisation procedure. The treatment of the heavy-quark also differs, with FNAL/MILC and HPQCD employing an effective action approach and ETM and RBC/UKQCD using a fully relativistic set-up.
Observables are often quoted in the Renormalisation Group Invariant (RGI) scheme. Results for the phenomenologically relevant quantities \(\hat{\cal B}^{(1)}_{B_{q}}\), \(f_{B_{q}}\sqrt{\hat{\cal B}^{(1)}_{B_{q}}}\) for \(q=d,s\) and their SU(3)-breaking ratios are compared in Fig. 6. In addition to the works already described, this also includes HPQCD09 [96] and RBC/UKQCD14 [97]. Results for the remaining 4 bag parameters in the \(\overline{\rm MS}\) scheme at \(\mu=m_{b}\) are shown in Figure 7.10
Footnote 10: The results from ETM for \({\cal B}^{(4)}_{B_{q}}\) and \({\cal B}^{(5)}_{B_{q}}\) have been recast into the same form as the results presented by HPQCD and FNAL/MILC, to guarantee that they are unity in the VSA. The required input for \(M_{B_{s}}\), \(M_{B_{d}}\), \(\overline{m}_{b}(m_{b})\), \(\overline{m}_{d}(m_{b})\) and \(\overline{m}_{s}(m_{b})\) were taken from Ref. [93].
\begin{table}
\begin{tabular}{l|c c c c} Work & ETM [92] & FNAL/MILC [93] & HPQCD [94] & RBC/UKQCD [95] \\ \hline \(N_{f}\) & 2 & 2+1 & 2+1+1 & 2+1 \\ sea action & Wtm & asqtad & HISQ & DWF \\ valence light action & OS & asqtad & HISQ & DWF \\ valence heavy action & OS & Fermilab & NRQCD & DWF \\ physical heavy quark & ratio method & tuned & tuned & extrap. in \(1/M_{H_{s}}\) \\ directly computed & \(f\), \({\cal B}\) & \(f\sqrt{\cal B}\) & \({\cal B}\) & \(\xi\), \({\cal B}_{s}/{\cal B}_{d}\) \\ \(m_{\pi}^{\rm simulated}\) [MeV] & [280,500] & [257,670]\({}^{\dagger}\) & [241,311]\({}^{\dagger}\) & [139,430] \\ range of \(a\) [fm] & [\(0.052,0.098\)] & [0.045,0.12] & [0.088,0.147] & [0.073,0.114] \\ number of \(a\) & 4 & 4 & 3 & 3 \\ renormalisation & RI-MOM & 1-loop PT & 1-loop PT & RI-SMOM \\ continuum like mixing? & Yes & No & No & Yes \\ \end{tabular}
\end{table}
Table 2: Some key properties of the results discussed in the text. The abbreviations for the various fermion actions stand for Wilson twisted mass (Wtm), \(a^{2}\)-tadpole improved staggered quarks (asqtad), highly improved staggered quarks (HISQ), domain wall fermions (DWF) and Osterwalder-Seiler (OS). \({}^{\dagger}\)The root-mean-square pion mass is listed. The corresponding Goldstone-pion mass ranges are [177, 555] MeV for FNAL/MILC and [131, 313] MeV for HPQCD.
Figure 6: Summary of the first bag parameter and \(f\sqrt{\cal B}\) in the RGI scheme. The magenta bands correspond to the values of a weighted average of \(N_{f}>2\) results with its value given in the top right corner of each panel.
The red left-facing triangles representing the ETM \(N_{f}=2\) result are shown with open symbols, since the calculation does not quantify any uncertainty related to the missing sea strange quark, which in many observables has been shown to be significant. Since missing sea-charm effects are assumed to be negligible at this level of precision, \(N_{f}=2+1\) and \(N_{f}=2+1+1\) results can be directly compared and the magenta bands correspond to weighted averages of these. In cases where \(\chi^{2}/\text{dof}\geq 1.25\), the dashed magenta lines indicate the uncertainties obtained after the application of the PDG scale factor \(\sqrt{\chi^{2}/\text{dof}}\). The results from FNAL/MILC and HPQCD dominate the average, but show some tension. The \(\chi^{2}/\text{dof}\) for the weighted average of just these two results is 1.2 and 2.4 for \(\hat{\mathcal{B}}^{(1)}_{B_{d}}\) and \(f_{B_{d}}(\hat{\mathcal{B}}^{(1)}_{B_{d}})^{1/2}\), respectively, and larger than 3 for both quantities in the \(B_{s}\) case. It is noteworthy that these tensions disappear when considering SU(3)-breaking ratios, since the effect is correlated between the \(B_{d}\) and the \(B_{s}\) system. The weighted averages of the observables shown in the right-hand column of Fig. 6 have a precision of 2.1%, 2.0% and 0.8%, respectively. We note, however, that if the average was only taken between the FNAL/MILC and the HPQCD result, the uncertainties for the first two quantities would be 3.5% and 3.3% due to the required rescaling of the uncertainties. The uncertainty on the CKM matrix elements \(|V_{td}|\) and \(|V_{ts}|\) and their ratio, which can be extracted by combining these observables with the experimental measurements of \(\Delta m_{d}\) and \(\Delta m_{s}\), is clearly limited by the theoretical uncertainty, due to the per-mille-level precision of the experimental inputs. This necessitates further theory improvements.
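The averaging procedure referred to above is easy to state explicitly; the short sketch below forms an uncorrelated weighted average of two invented results and inflates the uncertainty by the PDG scale factor \(\sqrt{\chi^{2}/\mathrm{dof}}\) when the fit quality is poor.

```python
import numpy as np

def weighted_average(values, errors):
    """Uncorrelated weighted average with PDG scale-factor inflation."""
    values, errors = np.asarray(values), np.asarray(errors)
    w = 1.0 / errors**2
    mean = np.sum(w * values) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    chi2_dof = np.sum(w * (values - mean) ** 2) / (len(values) - 1)
    scale = max(1.0, np.sqrt(chi2_dof))
    return mean, err, err * scale, chi2_dof

# Two invented, slightly inconsistent results (in GeV) for f_B * sqrt(B).
mean, err, err_scaled, chi2_dof = weighted_average([0.225, 0.210], [0.005, 0.006])
print(f"average = {mean:.4f} +/- {err:.4f}  (chi2/dof = {chi2_dof:.1f})")
print(f"after PDG scale factor: +/- {err_scaled:.4f}")
```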
Turning our attention to the remaining bag parameters (see Figure 7) there is agreement between most results, but for \(\mathcal{B}^{(4)}_{B_{q}}\) and \(\mathcal{B}^{(5)}_{B_{q}}\) we notice a slight tension between the ETM and the other two results, whilst for \(\mathcal{B}^{(3)}_{B_{q}}\) we notice a tension between the FNAL/MILC and the HPQCD results. The former is very similar to what is observed in neutral kaon mixing [98] and might be related to the choice of RI/MOM instead of RI/SMOM. Clearly further independent calculations are desirable to address and resolve these tensions.
A joint effort between the RBC/UKQCD and the JLQCD collaborations [99, 100] aims to extend Ref. [95] to predictions of the full set of operators in a fully relativistic set-up. This is achieved by complementing the existing data set described in the last column of Table 2 with JLQCD domain wall fermion ensembles, albeit with slightly different action parameters, with three lattice spacings in the range \(a\in[0.04,0.08]\,\text{fm}\). This allows simulations at heavy-quark masses close to the physical \(b\)-quark mass, controlling the remaining extrapolation to its physical value. The chiral extrapolation is controlled by the inclusion of the physical pion mass ensembles of the RBC/UKQCD collaboration. The use of chirally symmetric domain wall fermions throughout guarantees a fully non-perturbative renormalisation procedure with a continuum-like mixing pattern, addressing one of the main systematic uncertainties present in current calculations.
## 6 Recent developments
Many (in fact all) of the PS \(\to\) V transitions described in the Section on semileptonic form factors are decays to multiple (two) hadrons in the final state. The \(D^{*}\), for example, decays strongly into \(D\pi\). Such effects are typically included in the form factor chiral extrapolations through \(\chi\)PT or HM\(\chi\)PT [101]. A formalism to treat such processes on the lattice was, however, introduced several years ago by Lellouch and Lüscher in Ref. [102] and extended to the cases discussed here in Ref. [103]. First numerical applications [104] have
appeared only recently for the \(B\to\rho(\pi\pi)\ell\overline{\nu}\) process, which provides an alternative exclusive channel for the extraction of \(V_{ub}\). Further results, also for other channels, can reasonably be expected in the near future.

Figure 7: Results for the remaining four bag parameters, using results from Refs. [92], [93] and [94]. Results are quoted in \(\overline{\text{MS}}(m_{b})\) and using the BBGLN scheme. Magenta bands are the weighted averages between \(N_{f}=2+1\) and \(N_{f}=2+1+1\). In cases where \(\sqrt{\chi^{2}/\text{dof}}>1.25\) the dashed magenta lines indicate the uncertainty including the PDG scale factor of \(\sqrt{\chi^{2}/\text{dof}}\).
The formalism relies on the relation between two-particle energy levels in a finite volume and the infinite-volume scattering lengths [105]. Since energy levels are unaffected by the continuation to Euclidean time, this allows one to compute properties of scattering processes on the lattice. In a second step a perturbation is introduced, inducing the transition between a single-particle and a two-particle state (\(K\to\pi\pi\) is the process considered in Ref. [102]). By computing (to leading order) the effect of such a perturbation on the energy levels and the scattering, one obtains a proportionality relation between the finite-volume matrix element and the (say, \(K\to\pi\pi\)) decay amplitude in infinite volume. The proportionality factor, known as the "Lellouch-Lüscher" factor, is a function of the momenta of the particles and the derivative of the scattering phases with respect to them. In Ref. [103] a further step is taken by extending the approach to the case of currents (the perturbations above) inserting energy, momentum and angular momentum for systems with an arbitrary number of mixed two-particle states. In this case the application in mind is the \(B\to K^{*}(K\pi)\ell^{+}\ell^{-}\) transition.
Another topic in which considerable progress has been made is the study of inclusive decays on the lattice, with the aim of shedding some light on the tensions between inclusive and exclusive determinations of CKM matrix elements such as \(V_{cb}\) and \(V_{ub}\). In a series of papers [106, 107, 108, 109] an approach has been devised, taking the process \(B_{(s)}\to X_{c}\ell\nu\) as a prototype. The central quantity is the hadronic tensor
\[\begin{split} W_{\mu\nu}(p_{B},q)=\sum_{X}(2\pi)^{3}\delta(p_{B} -q-p_{X})\times\\ \langle B(p_{B})|J_{\mu}^{\dagger}|X\rangle\langle X|J_{\nu}|B(p_ {B})\rangle\,\end{split} \tag{14}\]
where \(J_{\mu}\) is the weak current inducing the \(b\to c\) transition, \(q\) is the lepton-pair momentum and the sum over the charmed final states \(X\) includes a spatial-momentum integral (in \(\vec{p}_{X}\)). At fixed \(p_{B}\), for example by choosing the rest-frame of the \(B\) meson, the tensor above is a function of the spatial components \(\vec{q}\) and of \(\omega=M_{B}-q_{0}\). What can be computed on the lattice is essentially the Laplace transform of this tensor at fixed \(\vec{q}\),
\[C_{\mu\nu}(\vec{q},t)=\int_{0}^{\infty}d\omega W_{\mu\nu}(\vec{q},\omega)e^{- \omega t}. \tag{15}\]
Since \(C_{\mu\nu}\) in finite volume is given by a sum of exponentially falling (in time) functions, trying to invert the relation above for arbitrary values of \(\omega\) is an ill-posed problem. For \(\omega\) smaller than the energy of the lowest state \(X\) the relation can, however, be inverted. In Ref. [106] the tensor \(W_{\mu\nu}(\vec{q},\omega)\) is computed on the lattice for that particular unphysical kinematic choice (\(\omega<E_{X}^{\rm min}\)), where the final hadronic state cannot go on shell, and is then related to derivatives of the tensor in the physical region through a dispersion integral. The approach is very similar to the derivation of the moment sum rules in the case of Deep Inelastic Scattering.
Improving on this first approach a completely new method has been introduced in Ref. [107], based on the observation that in order to compute the inclusive decay rate what is needed is not the hadronic tensor but rather a smeared version of it with functions resulting from the leptonic tensor. The building blocks are the quantities
\[\bar{X}^{(l)}(|\vec{q}|^{2})=\int_{0}^{\infty}d\omega W_{\mu\nu}(\vec{q}, \omega)K^{\mu\nu,(l)}(\vec{q},\omega)\, \tag{16}\]
where the functions \(K^{\mu\nu,(l)}(\vec{q},\omega)\) are known and defined in Ref. [107]. If one can find good polynomial approximations of the \(K\)-functions in powers of \(e^{-a\omega}\), then the \(\bar{X}^{(l)}\) can be obtained by taking suitable combinations of the correlator \(C_{\mu\nu}\) at different times. Obviously, the larger the times at which the correlation function can be computed with controlled errors, the higher the degree of the polynomial approximation one can use and the better the accuracy in \(\bar{X}^{(l)}(|\vec{q}|^{2})\), and eventually in the inclusive decay rate. In Ref. [107] Chebyshev polynomials have been studied, while in Refs. [108, 109], in an attempt to optimise the approximation by balancing truncation errors and statistical noise from the correlator \(C_{\mu\nu}\), the Backus-Gilbert method has also been considered. All such studies are still exploratory; however, the results are very encouraging.
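To make the strategy of Eq. (16) concrete, the short numerical sketch below illustrates the underlying idea in a toy setting; it is an illustration, not the implementation of Refs. [107, 108, 109], and the spectral density, kernel and lattice spacing are invented for this purpose. If the kernel is approximated by a polynomial in \(e^{-a\omega}\), the smeared quantity follows from the correlator of Eq. (15) evaluated at the discrete times \(t=na\).

```python
# A toy numerical sketch of the idea behind Eq. (16) (an illustration, not the
# implementation of Refs. [107, 108, 109]): approximate a smooth kernel
# K(omega) by a polynomial in x = exp(-a*omega); the smeared integral of the
# spectral density then reduces to a linear combination of the Euclidean
# correlator C(t) at the discrete times t = n*a. The spectrum, kernel and
# lattice spacing below are invented purely for illustration.
import numpy as np
from numpy.polynomial import chebyshev as cheb

a = 1.0                                   # "lattice spacing" in toy units
omega_max = 2.5                           # range of omega over which K is fitted
omegas = np.array([0.7, 1.2, 1.9])        # toy spectrum: three states
weights = np.array([1.0, 0.5, 0.2])       # toy spectral weights

def C(t):
    """Toy Euclidean correlator, cf. Eq. (15): sum_k w_k exp(-omega_k * t)."""
    return np.sum(weights * np.exp(-omegas * t))

def K(omega):
    """A smooth stand-in for the smearing kernels K^{mu nu,(l)} of Eq. (16)."""
    return np.exp(-0.5 * ((omega - 1.2) / 0.6) ** 2)

# Fit K as a degree-N polynomial in x = exp(-a*omega); Chebyshev polynomials
# are used for the fit (as in Ref. [107]) for numerical stability, then the
# result is converted to monomial coefficients c_n of x^n.
N = 20
x_grid = np.linspace(np.exp(-a * omega_max), 1.0, 400)
c_cheb = cheb.chebfit(x_grid, K(-np.log(x_grid) / a), N)
c_mono = cheb.cheb2poly(c_cheb)

# Smeared quantity reconstructed from correlator values only, cf. Eq. (16)
from_correlator = sum(c_n * C(n * a) for n, c_n in enumerate(c_mono))
# Direct evaluation of the same smeared quantity from the toy spectrum
direct = np.sum(weights * K(omegas))
print(from_correlator, direct)   # should agree closely for this smooth toy kernel
```

In a realistic calculation the achievable polynomial degree, and hence the accuracy of the kernel approximation, is limited by the largest Euclidean time at which \(C_{\mu\nu}\) can be computed with controlled statistical errors, which is precisely the trade-off studied in Refs. [107, 108, 109].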
## 7 Conclusions and outlook
The combination of Lattice QCD and \(B\)-physics constitutes an exciting and active field. In this review, we have commented on the challenges faced when making predictions for \(B\)-physics observables from lattice QCD calculations. We have reviewed the recent literature for the determination of the \(b\)-quark mass, leptonic decay constants, several semileptonic decay channels and neutral meson mixing parameters. To address some of the observed tensions, we have put forward a number of suggestions for benchmark quantities, which will help to understand systematic effects associated with complicated multi-dimensional fits.
Some quantities (\(b\)-quark mass, decay constants) are in a mature state, with many results using different methodologies agreeing, producing comparable uncertainties, and the corresponding tests of the SM are limited by the experimental precision. The more complicated calculation of semileptonic form factors has made tremendous progress, but due to the wealth of decay channels and the increased computational and data-analysis complexity, fewer independent results exist for the individual observables. Several of the results in the literature have not yet reached a community consensus and currently display some level of tension. However, there are major ongoing efforts by several groups to consolidate these calculations. We hope that the benchmark quantities we suggest in Sec. 4.6 will be adopted by the community and will aid in shedding light on current tensions between different lattice computations. The situation for neutral meson mixing parameters is similar, with only a small number of results currently available.
Finally, we comment on recent progress in calculations of semileptonic decays into two final-state hadrons and inclusive semileptonic decays. Whilst these calculations are currently exploratory, they are very promising and we expect them to mature into further ab-initio tests of the SM.
It is worth emphasising the increase in complexity of the quantities presented here. Masses and decay constants can be extracted from two-point functions, form factors require three-point functions, while for two-hadron decays and inclusive transition rates four-point functions are needed. The statistical noise in \(n\)-point functions unavoidably grows with \(n\) and, in addition, the combinatorics in the number of Wick contractions becomes more complex. It is the remarkable combination of advances in numerical methods and computer power with smart field-theoretical ideas that has allowed us to tackle new challenges in \(B\)-physics on the lattice which seemed untreatable only a few years ago.
## Acknowledgements
We thank Daniel Wyler for inviting us to contribute to the "European Physical Journal Special Topics on B-physics" volume. We are grateful to Marzia Bordone and Shoji Hashimoto for comments on the manuscript.
Data Availability Statement: No Data associated in the manuscript.
|
2305.17800 | Magnetic domain walls : Types, processes and applications | Domain walls (DWs) in magnetic nanowires are promising candidates for a
variety of applications including Boolean/unconventional logic, memories,
in-memory computing as well as magnetic sensors and biomagnetic
implementations. They show rich physical behaviour and are controllable using a
number of methods including magnetic fields, charge and spin currents and
spin-orbit torques. In this review, we detail types of domain walls in
ferromagnetic nanowires and describe processes of manipulating their state. We
look at the state of the art of DW applications and give our take on their
current status, technological feasibility and challenges. | G. Venkat, D. A. Allwood, T. J. Hayward | 2023-05-28T19:04:13Z | http://arxiv.org/abs/2305.17800v1 | # Magnetic domain walls : Types, processes and applications
###### Abstract
Domain walls (DWs) in magnetic nanowires are promising candidates for a variety of applications including Boolean/unconventional logic, memories, in-memory computing as well as magnetic sensors and biomagnetic implementations. They show rich physical behaviour and are controllable using a number of methods including magnetic fields, charge and spin currents and spin-orbit torques. In this review, we detail types of domain walls in ferromagnetic nanowires and describe processes of manipulating their state. We look at the state of the art of DW applications and give our take on their current status, technological feasibility and challenges.
## I Introduction
In recent years, there has been an explosive growth in research into magnetic nanostructures for memories, sensors and computation. Advantages of magnetic systems include their inherent non-volatility, fast speed of operation, low power consumption, nonlinear responses, negligible current leakage, and compatibility with CMOS fabrication techniques [1]. This has led to magnetic random access memory (MRAM) products being available in the market as well as multiple demonstrations and proposals for Boolean and unconventional computing applications.
In this context, magnetic domain walls (DWs) in patterned magnetic nanowires have been proposed for many applications, which are introduced below. DWs are regions in a magnetic material between uniformly-magnetised domains through which magnetisation changes direction. DWs are typically small (with widths from a few nanometres to hundreds of nanometres), move relatively quickly (with reports of speeds of \(18\,\mathrm{km/s}\)) [2] and can be manipulated using a variety of stimuli, as we discuss later.
Magnetic DWs in nanowires have been developed as the information carriers for memory devices such as the switchable components of MRAM cells [3] and shift registers, which include the racetrack memory architecture among others [5; 6]. These memories offer high storage densities with nanowire feature sizes scaled down to \(20\,\mathrm{nm^{2}}\), low read/write energies, low leakage current and years of data retention [8] in devices without movable parts. The intense research activity over the years has now led to the prospect of commercialisation by exploiting CMOS fabrication and device/circuit level integration [9; 10].
DWs in nanowires have also been proposed for logic applications [11; 12]. Control of the motion and interaction of DWs at nanowire junctions has allowed logic gates [8], adders [13; 14] and shift registers [6] to be demonstrated. DWs also show complex behaviours when encountering energy barriers such as at nanowire junctions. These behaviours are often stochastic in nature and highly sensitive to thermal perturbations [15; 16], which was long regarded as a severe impediment to performing information processing tasks with magnetic DWs. However, there have been examples of non-Boolean logic devices that make use of DW stochasticity to realise, for example, neuromorphic computing [17] and stochastic computing [18], along with other devices such as a random number generator using a DW-based Galton board [19]. The stochastic nature of DW depinning from nanowire junctions has also been shown to lead to reliable and complex many-body ensemble behaviour in large arrays of overlapping nanorings and has been proposed as the basis for reservoir computing [20].
Beyond information technology applications, DWs in magnetic nanowires have been used to realise optically based [21] and multiturn [22] magnetic field sensors. The stray magnetic fields from magnetic DWs in nanowires can also be used to interact with [23] and position [24] ultracold atoms as well as control magnetic nanoparticles for biological sensing and cell positioning applications [25]. There have been proposals [26] and demonstrations [27; 28] of trapping superparamagnetic beads using DW motion, which can potentially be useful for manipulating biological cells and DNA.
The movement of DWs in magnetic nanowires has been utilised for the majority of proposals of DW related technological applications and the last two decades have seen a surge in the study and manipulation of DWs using a variety of stimuli such as magnetic fields, charge and spin-polarised currents. Novel spin-related phenomena such as spin-orbit and spin-transfer torques have been utilised for controlling DW positions and velocities in nanowires in devices that move the technology from lab demonstrations to being closer to commercial realisation.
This review discusses the various approaches of creating functionality from dynamics of DWs in both in-plane and out-of-plane magnetised nanowires, with a particular focus on the novel computing proposals that have emerged recently. We will also present fundamental aspects of the nature and control of DWs in nanowires to assist newcomers to the subject.
The outline of the review is:
* Section 2 details the types of magnetic DWs found in nanowires patterned from thin films. This section also discusses available stimuli to manipulate and control magnetic DWs in patterned nanowires, including magnetic fields, strain, voltage, and spin-polarised currents. We also discuss how various nanowire-based structures, including straight nanowires, nanorings, geometric defects, and nanowire junctions can be used to position DWs and tune their properties.
* Section 3 details DW processes in nanowires with particular attention to the stochastic features of these behaviours and the associated effects of thermal perturbations.
* Section 4 concentrates on applications proposed using DWs in magnetic nanowires, including: memories; Boolean, neuromorphic and reservoir computing; and non-computing applications, including field sensors, biological applications, and transporting ultra-cold atoms.
* Section 5 concludes with a summary and future outlook.
## II Magnetic domain walls
### Domain wall structure
Rectangular-cross-section magnetic nanowires typically have thickness from a few atoms to tens of nanometres, widths from 50 - 500 nm, and can extend in length to several tens or hundreds of micrometres. They are usually fabricated using standard lithography techniques such as electron beam lithography [29] or focussed ion beam milling [30] to pattern thin films into user-defined designs. We will be considering these rather than nanowires fabricated using electrodeposition techniques [31; 32] as more control on nanowire features is possible with lithography and the vast majority of applications proposed using DWs have considered lithographically patterned magnetic nanowires.
Patterned nanowires made of soft magnetic materials such as Ni\({}_{81}\)Fe\({}_{19}\) (Permalloy) have a magnetic structure dominated by shape anisotropy. The extended geometry of such a nanowire means magnetic domains are usually oriented in plane, along the wire length. This results in DWs that lie directly across the width of a nanowire to separate oppositely oriented magnetic domains. At their simplest, the possible configurations are referred to as 'head-to-head' (H2H) when the DW separates domains with magnetisation oriented towards the wall and 'tail-to-tail' (T2T) when the domains are oriented away from the wall (Figure 1 (a) and (b)). These DWs most often have in-plane magnetised (IM) configurations, due to the wires' width usually being greater than their thickness. The simplest H2H (and T2T) DW structure is the 'transverse' configuration [33], where moments in the centre of the wall are perpendicular to the nanowire edge (Figure 1 (i) and (ii)). This occurs in thinner, narrower nanowires and results in decreased exchange energy and a high magnetostatic energy. In thicker, wider soft ferromagnetic nanowires, DWs form 'vortex' configurations where the moments rotate \(360^{\circ}\) around a vortex core (Figure 1 (iii) and (iv)). This structure has a low magnetostatic energy, at the expense of an increased exchange energy. DW structures with opposite chirality, i.e. the winding direction of magnetisation through the DW, are degenerate in a perfectly straight, defect-free nanowire, but lead to different behaviours when encountering asymmetric wire features or transverse magnetic fields [34]. Asymmetric transverse DWs in soft ferromagnetic rectangular nanowires have also been predicted [35] and then imaged using Fresnel-mode electron beam imaging [36].
The in-plane DW textures shown in Figure 1 can also be described using a system of magnetic topological defects [38]. For example, transverse DWs can be described in terms of magnetic edge 'defect' states with fractional winding numbers while vortex DWs can be described in terms of vortex 'defect' states with integer winding numbers. This approach provides a simple nomenclature for capturing DW structure and chirality and can be useful to categorise the behaviour of DWs and understand their interactions both with each other and with geometric features in nanowires. DW chirality has also been used functionally to manipulate DW trajectory [39], measure DW "fidelity" (a minimum length scale over which structural changes of the DW occur) [40], and to form the basis of applications such as chirality-based DW memory cells [41] and logic gates [42].
There has also been much interest in nanowires with perpendicular magnetic anisotropy (PMA), which are out-of-plane magnetised (OOPM). DWs in OOPM nanowires are usually much narrower, with widths of a few nanometres, compared to those in in-plane-magnetised nanowires [43], which offers improved current-induced motion efficiency and higher data density [44]. There are usually two types of DWs in such structures, namely the Néel and Bloch types. These DWs usually have a Bloch structure in wider nanowires and transition to Néel DWs in narrower structures [45] (shown in Figure 1 (c) and (d)).
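As a rough indication of the length scales involved (using assumed, textbook-like parameter values rather than numbers taken from Refs. [43; 44; 45]), the width parameter of a Bloch or Néel wall set by the competition between exchange and effective perpendicular anisotropy is \(\Delta=\sqrt{A/K_{\mathrm{eff}}}\), whereas the extent of transverse and vortex walls in soft IM wires is instead governed largely by the wire geometry:

```python
# A rough order-of-magnitude estimate (assumed, textbook-style parameters, not
# values from the cited works) of why DWs in OOPM nanowires are so narrow.
import math

A = 1.0e-11        # exchange stiffness (J/m), typical order for Co-based multilayers
K_eff = 3.0e5      # effective perpendicular anisotropy (J/m^3), assumed

Delta = math.sqrt(A / K_eff)                 # Bloch/Neel wall width parameter
print(f"OOPM wall width parameter: {Delta * 1e9:.1f} nm "
      f"(pi * Delta = {math.pi * Delta * 1e9:.1f} nm)")
# By contrast, transverse and vortex walls in soft IM wires extend over a
# length comparable to the wire width, i.e. tens to hundreds of nanometres.
```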
### Domain wall nucleation
Controlling the nucleation of DWs in nanowires is important for many experiments and applications in order to test the behaviour of magnetic features or to represent data.
The simplest approach involves fabricating a large nucleation pad, or 'injection' pad, at the end of a wire [50]. Injection pads are typically made with lateral dimensions from single to tens of micrometres, and with a variety of shapes, e.g. rectangular, circular, attached or elliptical (shown in Figure 2). The weaker shape anisotropy of the pad compared with the nanowire means that an externally-applied global magnetic field causes a DW to nucleate within the pad first, and then be injected into the nanowire at magnetic fields far lower than the nucleation field of a simple wire end. Injection pads often lead to a range of DW structures being introduced to an adjoining IM nanowire as a variety of domain configurations may exist in the larger pad [51; 52], although DW chirality can be controlled by careful design of an injection pad and placement of the attached nanowire [53].
DWs in IM nanowires can also be nucleated by using the Oersted field associated with currents driven through patterned current-carrying wires that cross over the magnetic nanowire [54; 55]. This approach produces consistent DW structures using local excitations without the need for a global magnetic field, while the pulsed nature of the current allows more precise timing of nucleation events to synchronise with a wider experiment [56].
Stein et al. [57] have shown that combining current pulses of different polarities and external magnetic fields leads to reproducible creation and annihilation of DWs in an IM nanowire (shown in Figure 3 (a)). Figure 3 (b) shows changes in measured resistance after applying multiple current pulses; the different resistance levels correspond to various types of DWs being formed in the nanowire. Specific choices of
the polarity and duration of the current pulses and polarity of the magnetic fields led to stochastic pinning of DWs in the nanowire region bounded by the current lines, as well as the injection of vortex and transverse walls into the nanowire. In the same year, Hayashi et al. [58] used short-lived current pulses (with \(\sim\) ns lifetimes) to decrease the magnetic fields required to nucleate DWs in IM nanowires. They found that the nucleation field was reduced by around half and that the distribution of switching fields also narrowed; the authors attributed this to the localised fields generated by the current pulses.
Figure 1: (a) and (b): DW configurations in the top view of soft ferromagnetic patterned nanowires, obtained from micromagnetic simulations performed using Mumax3 [37]. Colour maps the local magnetisation direction and the inset to each figure part shows a simple schematic of the DW configuration. (c) and (d) show schematics of Néel and Bloch type DWs in out-of-plane magnetised nanowires.
Injection pads [59] and the Oersted fields from current-carrying lines [60] have also been used to inject DWs in OOPM nanowires. Zhang et al. [61] used \(\mathbf{M}\)-shaped current lines for increased efficiency of DW injection in Co/Ni multilayer nanowires, modifying the shape of the current path and localising the Oersted field used to inject DWs. Other methods to inject DWs in OOPM nanowires include controlling the PMA by ion irradiation [62] and ion milling [63], which change the physical structure, and using Joule heating in particular regions of the nanowire via current injection [64].
### Domain wall motion
There are several approaches to achieve controlled DW motion in patterned magnetic nanowires. It is well known that, in general, the application of a magnetic field causes ferromagnetic domains parallel to the field to grow and oppositely-oriented domains to shrink, with the changes in domain size mediated by DW motion. This can be understood at a more microscopic level by examining the Landau-Lifshitz-Gilbert (LLG) equation of motion[65]
\[\frac{\mathrm{d}\mathbf{M}}{\mathrm{d}t}=\mu_{0}\gamma(\mathbf{M}\times\mathbf{H}_{ \mathrm{eff}})+\frac{\alpha}{|\mathbf{M}|}\left(\mathbf{M}\times\frac{\mathrm{ d}\mathbf{M}}{\mathrm{d}t}\right) \tag{1}\]
Here \(\mathbf{H}_{\mathrm{eff}}\) is the effective local magnetic field that acts on the magnetisation \(\mathbf{M}\) over the time interval \(\mathrm{d}t\). \(\gamma\) is the electron gyromagnetic ratio, \(\mu_{0}\) is the permeability of free space and \(\alpha\) is the Gilbert damping parameter. \(\mathbf{H}_{\mathrm{eff}}\) is calculated from the derivative of the system energy with respect to changes in \(\mathbf{M}\) [66], which includes any applied external field, \(\mathbf{H}_{\mathrm{app}}\), as well as other energy contributions, e.g. magnetostatic, magnetocrystalline, and exchange energy terms. Equation (1) is most accurate when solved at a scale up to the exchange length of a material, i.e. the length over which magnetisation remains approximately uniform.
The first term on the right-hand side of Equation (1) shows that the initial response of \(\mathbf{M}\) is to precess around \(\mathbf{H}_{\mathrm{eff}}\). In IM nanowires and in a 1-D approximation, this means that a magnetic field applied parallel to the long axis of the nanowire (the longitudinal field component) will cause DW magnetisation, which is orthogonal to the field, to start to precess out of plane. This, in turn, creates an out-of-plane demagnetisation field that acts upon the DW's remaining in-plane component of magnetisation and causes it to precess towards alignment with the applied field direction, expanding the magnetic domain in this process [67]. The second term describes the damping of the precessional motion, which occurs in all ferromagnetic materials and, ultimately, causes local magnetisation to align to the local \(\mathbf{H}_{\mathrm{eff}}\). Similar dynamics occur in OOPM materials with a magnetic field applied out of plane, where the PMA plays a significant role.
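As a concrete illustration of how Equation (1) produces damped precession, the minimal macrospin (single-moment) sketch below integrates the equation for a unit magnetisation vector in a static field. The field strength, damping constant and time span are illustrative assumptions, and the negative electron gyromagnetic ratio is used so that, with the sign convention of Equation (1), the moment relaxes towards the field.

```python
# A minimal macrospin sketch of Eq. (1): damped precession of a single moment
# about a static effective field, integrated numerically. Parameter values are
# illustrative assumptions, not taken from the cited works.
import numpy as np
from scipy.integrate import solve_ivp

gamma = -1.761e11                  # electron gyromagnetic ratio (rad s^-1 T^-1), negative
alpha = 0.05                       # Gilbert damping parameter (assumed)
B = np.array([0.0, 0.0, 0.1])      # mu_0 * H_eff, here 100 mT along z (tesla)

def llg(t, m):
    """Explicit (Landau-Lifshitz) form equivalent to Eq. (1) for unit |m|."""
    mxB = np.cross(m, B)
    return gamma / (1.0 + alpha**2) * (mxB + alpha * np.cross(m, mxB))

m0 = np.array([1.0, 0.0, 0.01])
m0 /= np.linalg.norm(m0)           # start almost perpendicular to the field

sol = solve_ivp(llg, (0.0, 10e-9), m0, max_step=1e-12)   # 10 ns of dynamics
m_end = sol.y[:, -1] / np.linalg.norm(sol.y[:, -1])
print("final m:", m_end)           # spirals in towards alignment with +z
```

In full micromagnetic solvers such as the Mumax3 package used for Figure 1, the same equation is solved on a grid of cells no larger than the exchange length, with \(\mathbf{H}_{\mathrm{eff}}\) recomputed at every step from the exchange, magnetostatic and anisotropy energy terms.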
The fields required to propagate DWs are usually low (\(1-20\,\mathrm{mT}\)) both for soft magnetic materials, such as Permalloy[68] and for out-of-plane manipulation in some OOPM materials
Figure 2: Some of the different nucleation pads used for injecting DWs into nanowires are (a) rectangular[46], (b) circular[47], (c) attached[48] and (d) elliptic[49] in shape.
like Co/Pt [69] although OOPM systems such as FePt/Pt require considerably higher fields [70]. The magnetic fields are usually generated by an external electromagnet, and the resulting high power consumption usually limits the use of field-driven DW motion to laboratory experiments. There has, therefore, been intense interest in developing alternative, low-power methods of controlling DW motion and position.
DW motion in magnetic nanowires can also be induced by passing electric current directly through the wires. This approach is popular due to it offering local addressability, power efficiency, and fast DW motion, and has resulted in a large number of proposals for DW-based MRAMs and a commercial memory device [71]. Current control of magnetisation in thin films has been reviewed extensively from theoretical [72] and experimental [73; 74] viewpoints. Briefly, the spin angular momentum carried by a spin-polarised charge current causes an adiabatic 'spin transfer' torque (STT) [75; 76; 77] to be applied to the DW, which causes DW displacement. STT can also have a non-adiabatic contribution of unclear origin, with contenders being the force on the conduction electrons due to the gradient in the _s-d_ exchange field, linear momentum transfer and spin-flip scattering [74]. STT is very relevant technologically as it can eliminate the need for magnetic fields and, therefore, for inefficient solenoids or electromagnets. STT has been proposed (in simulations) to nucleate DWs at specific locations in nanowires [78], and Phung et al. [79] showed that they could inject DWs in nanowires with both IM and OOPM regions (obtained using ion implantation) using STT, although the injection was stochastic in nature, which might not be suitable for Boolean applications. One challenge with STT-driven DW motion is that the position of the electrical contacts necessary for current-induced motion, of course, creates natural limits to the extent of DW motion in nanowires [80]. Furthermore, the high current density for DW motion (\(10^{10}-10^{12}\,\mathrm{A/m^{2}}\)) remains a technological challenge, both in terms of power efficiency and the resultant heating, which might damage the structure or change its behaviour. This is partly due to current-driven DW motion having an intrinsic pinning, even for an ideal wire, which leads to a threshold current both for IM [74] and OOPM systems [54]. The result is that sustained STT-driven DW motion usually requires an external magnetic field [81; 82; 83; 84]. Various proposals have been made for decreasing the current densities required for efficient control of DW motion, including changing the aspect ratio of nanowires [85], different geometries of the current injection [86], and OOPM materials [87; 88; 89]. Tailoring the PMA in OOPM systems has been predicted to lead to increased efficiency of STT-driven DW motion [90]. There have also been interesting reports of lower switching current densities in systems involving synthetic ferrimagnets [91] and antiferromagnets [92], due to the in-plane magnetic interactions in them leading to narrow DWs.
Recently, non-magnetic (NM)/ferromagnetic (FM) bilayers have opened up multiple schemes of controlling the magnetisation in the magnetic layer. The spin Hall effect (SHE) [94] describes the generation of a transverse spin current at the FM/NM interface due to spin-orbit coupling in the NM. The SHE can significantly affect the DW dynamics in nanowires [95] and can even reduce the current densities required for DW switching [96; 97]. The torque exerted by this transverse spin current on the magnetisation in the FM is known as spin-orbit torque (SOT), and this has been studied intensely in the last few years [93; 98]. Depending on the relative orientation of the injected current and the magnetisation in the FM, there can be two configurations of SOT for switching, as shown in Figure 4. The "type-x" configuration requires a magnetic field out of the plane of the nanowire to break the symmetry of the system for deterministic switching, which can be inconvenient for technological applications. SOT-induced schemes overcome limitations of STT with the added advantages of reduced power consumption and faster device operation. SOT-induced DW depinning has been predicted [99] and demonstrated [100] in Ta/CoFeB thin-film nanowires, making available an added method of controlling DWs [101]. While most such studies have been with OOPM systems, there have also been analytic predictions of SOT-driven DW motion in IM systems such as Pt/CoFeB/MgO [102]. Further novel phenomena such as the Rashba effect and the Dzyaloshinskii-Moriya interaction (DMI) can also be used to tune DW velocity [103; 104; 105] by controlling the DW energy using these effects. For instance, Dao et al. [106] have shown that DWs can be
Figure 3: A scanning electron microscopy image of the magnetic nanowire with a pair of vertically crossing current lines. (b) The variation of the nanowire resistance after one pulse (green) and after two pulses (red): Taken from [57].
controllably pushed into nanowires using the chiral interactions induced by DMI between in-plane and out-of-plane magnetised trilayers of \(\text{Pt}/\text{Co}/\text{AlO}_{\text{x}}\), in combination with SOTs induced using currents. The capability of SOT to nucleate DWs is found to be strongly chirality dependent and can require current densities lower than traditional current-induced DW injection methods. SOT offers a novel and useful means of controlling DWs using electric currents for device applications, although some challenges, such as efficient magnetisation switching without out-of-plane fields, have to be overcome.
DWs in nanowires can also be controlled using applied strain in magnetostrictive materials, in which strain mediates magnetisation changes via inverse magnetostriction, also called the 'Villari' effect [107]. Metallic magnetostrictive nanostructures are often coupled to underlying piezoelectric materials to allow strain in the piezoelectric to be created via electric potentials applied to patterned contacts before being transmitted to the magnetic element (Figure 5 (a)). Such systems are termed 'artificial multiferroics', due to their heterogeneous structure, and show a variety of magnetic and DW-driven phenomena [108]. Local variations in static strain across the length of a magnetostrictive nanowire create potential barriers or wells that can be used to addressably pin or even move DWs along the wire length [109; 110]. Lei et al. [111] have shown that by applying a static strain in piezoelectric/multilayer magnetic structures, the magnetic response can be reproducibly tuned (Figure 5 (b)). Other proposals for manipulating DWs using strain include applying dynamic strain profiles, and Dean et al. [112] proposed using standing surface acoustic waves (SAWs) along the length of an IM magnetic nanowire to create an array of DW pinning sites. Subsequently, Adhikari et al. [113] found that exciting DWs in an array of OOPM Co/Pt multilayer nanowires using SAWs (created using interdigital transducers and shown in Figure 6 (a)) resulted in an increase in the DW depinning probabilities by approximately a factor of 10 (Figure 6 (b)). Technological proposals based upon strain-induced control of DWs include memories [114] and manipulation of magnetic beads in a fluid [115].
Optical pump-probe techniques have also been used to move DWs in IM Co/Ni films [116] and open up possibilities of ultrafast control and writing of magnetisation. We therefore see that many varied and technologically relevant schemes are available for initiating and controlling DW states in both IM and OOPM nanowires.
### Positioning domain walls
Geometric modifications have also been used to create sites for pinning/depinning of DWs in nanowires by altering the magnetic energy landscape. In magnetic nanowires, local geometric modifications invariably cause changes to the magnetostatic energy of a DW, with subsequent rearrangements in the magnetic configuration of a DW at the modified site leading to changes in exchange energy [117]. These changes depend upon the structure and chirality of the DW, and the precise geometry of the modified region, which has given a lot of scope for investigation.
The nucleation and pinning of DWs in nanowires are dependent on the defects introduced during fabrication of the wire. Dutta et al. [119] found that edge roughness of the wire can trap DWs at length scales less than the resolution of the fabrication process both in IMA and PMA nanowires. They studied a system of nanowires in the regime where the DW width was smaller than the correlation length of the wire roughness (i.e. the length scale defining the transition between smooth and rough length scales of the nanowire) and found a discrete distribution of DW traps in the nanowires which limits the precision of the placement of a DW in the nanowire.
Himeno et al. [118] used artificial necks of varying width in IM NiFe/Cu/NiFe trilayer submicrometre wires to control the depinning fields of the DWs (Figure 7). Kläui et al. [120] also used notches in an IM curved ring nanowire and, by injecting currents between various contacts positioned between the notches, they found multiple transitions in the anisotropic magnetoresistance (AMR) response from the ring which corresponded to DW pinning at the notches. Such studies led to further investigations of notches of different shapes [117; 34], modifying the position of notches in the nanowire [121], and proposals of stepped nanowires for memory applications [122]. These demonstrated in various ways how the potential landscape and depinning fields of DWs could be modified.
Figure 4: The 'type-x' (a) and 'type-y' (b) configurations for SOT switching, where the injected current (black dashed arrow) and magnetisation (purple arrow) lie along and perpendicular to each other respectively. The splitting of electrons of different spin is shown as well. The 'type-x' configuration requires an out-of-plane field for breaking the symmetry and switching the magnetisation. Taken from [93].
Asymmetric geometric defects can be used to create additional levels of control in IM nanowires. DWs of opposite chirality generally interact differently with defects that are present on one side of a nanowire only, regardless of whether they are notches[123; 124], protuberances[125], or adjoining wires at 90\({}^{\circ}\) to the original wire[40]. This has been used as a means of selecting one chirality over another[126] and the basis of a logic system using vortex DWs[42; 127]. In all cases, if the geometric modification is repeated on the other side of the wire, the chirality-filtering is removed and DWs of the same structural type pass through with identical depinning fields[56]. Asymmetry along the wire direction of a notch or protuberance shape creates a different energy pathway for DWs travelling through the wire in opposite directions. This leads to depinning fields that depend on the direction a DW is travelling, in effect creating a 'DW diode' using both in-plane[128; 129] and OOPM materials[130].
External stimuli can be used to control DW pinning at geometric features. For example, a DW in an IM nanowire can be pinned through magnetostatic interactions with an adjacent and close-lying magnetic nanowire[131; 132; 133; 134; 135]. Also, an externally-applied transverse magnetic field can be used to tune the DW depinning probability function at notches[42], which gives a greater degree of control. Kim et al.[136] studied the depinning of DWs at notches of different gap sizes in an OOPM nanowire and demonstrated that the depinning fields and times could be controlled by changing the gap sizes. The Oersted field from current-carrying lines patterned across magnetic nanowires can also be used to pin DWs in an addressable manner[137; 60; 138], although the Oersted fields created using these lines are usually relatively low.
Precise understanding of energy pathways allows prediction of DWs experiencing potential wells or potential barriers. This can be tested experimentally by pinning a DW and then
Figure 5: A scanning electron microscopy image of the piezoelectric/multilayer device used for showing effects of strain on the magnetic response. (b) The variation of the giant magnetoresistance of the device for different voltages applied to the piezoelectric which leads to a modification of the strain on the nanowire. Taken from [111].
Figure 6: (a) A schematic of the setup used to modify DW pinning/depinning using SAWs, showing the magnetic stripes and interdigital transducers (IDTs). (b) The depinning probability as a function of increasing SAW voltage for three representative pinning sites. Taken from [113].
reversing the applied magnetic field. A potential barrier will lead to 'pinned' DWs moving back away from the defect site under low fields, whereas a DW pinned in a potential well will require a similarly large field to move away from the site in either direction [131]. However, DW depinning from geometrical modifications is complicated by thermally-activated stochastic processes [16], which we will explore more later.
DWs have also been positioned by modifying the global wire geometry. Magnetic fields can cause DW motion in nanowires whenever there is a field component parallel to the wire direction. Curved nanowires, therefore, provide a natural limit to the extent of DW motion and they have been used to assist in exploring the nature of various DW phenomena in nanowires using magnetic fields and currents in both IM and OOPM systems [139; 140; 141]. An in-plane rotating magnetic field causes DWs to propagate around curved wire regions in the direction of the same handedness as the field rotation, which has been exploited for clocking in logic [142] and sensor [142; 143] systems. The depinning fields in curved nanowires are sensitive to the degree of curvature of the nanowire, and Glathe et al. [143] suggested that DW pinning in a curved IM nanowire can be better controlled by designing corners as a polygonal line. Modifying wire geometry can also be used to control DW chirality, with vortex DWs in curved IM nanowires subject to an externally-applied magnetic field observed to have chirality that is highly dependent upon the wire width and magnetic field direction [144]. On the introduction of width gradients in half rings to form structures having either point or axis symmetry, the authors could generate vortex DWs with either the same or opposite chirality via different stochastic switching pathways. The stochasticity in these pathways could be controlled by modifying the width gradient and temperature, and the authors indicated that this could be useful for applications.
IM ferromagnetic rings offer an interesting combination of having magnetic properties similar to a straight nanowire locally but with a wider curved geometry that creates well-defined magnetic states and makes positioning DWs straightforward. The lowest energy magnetic configuration of a soft ferromagnetic ring is the 'vortex' state, in which magnetisation is oriented circumferentially, i.e. following the local wire direction around the ring, and no DWs are present (Figure 8 (a)). DWs can be introduced in pairs, with the so-called 'onion' state referring to a configuration with two DWs, one H2H and one T2T, on opposite sides of the ring (Figure 8 (b)). Other states are possible [145] but the vortex and onion configurations form the bulk of interest [146; 147]. Application of an in-plane magnetic field causes DWs to align on opposite sides of a ring in a radial direction parallel to the field direction [146]. This means in-plane rotating magnetic fields of sufficient strength can be used to cause the DW pairs of an onion state to propagate around the ring in the same direction as the field rotation [148]. Bisig et al. [149] reported that rotating fields can also be used to set the chirality of DWs with high fidelity in rings of varying diameters and widths, with thermal activation playing a role in the switching process.
The magnetic behaviour of ring arrays can be quite different from that of individual rings. For instance, the magnetostatic interactions within an array of close-lying rings can cause a significant difference in the switching fields of arrays with rings of varying widths and spacings [150]. In fact, the switching of Cobalt rings arranged in a chain transverse to the switching field always occurs in pairs [151]. These additional complexities in the behaviour of multiple elements can be important for using such systems for non-Boolean computing architectures, as shall be described later on.
We shall now discuss the behaviour of DWs at junctions of nanowires. Faulkner et al. [152] considered an IM three terminal nanowire junction of Permalloy for controlled DW injection (shown in Figure 9 (a)-(c)). They found that the fields required for switching the magnetisation of the output arm of the device was significantly lower when DWs were injected from either one or both input arms compared to when no DW was injected
Figure 7: (a) A schematic of a trilayer nanowire (in black); DWs in the nanowire were manipulated using Oersted fields created by passing currents through the Cu wires shown at its ends. (b) The variation of the depinning field of a DW from the notch with the neck width of the notch. A higher depinning field is observed for a notch with lower neck width. Taken from [118].
in either input arm (seen from the switching fields in the hysteresis loops in Figure 9 (d)-(f)). This was a consequence of a single DW (in the case of DW injection from one of the input arms) or linked DWs (when DWs are injected from both input arms) expanding in the output arm from the junction and causing the magnetisation to switch. Pushp et al. [39] subsequently considered an IM Y-shaped nanowire junction and could reliably inject DWs of a given chirality into each arm of the Y-shaped structure by varying the strength of the injection field. Such IM junction based nanowire devices were subsequently proposed for DW based Boolean logic operations and we shall describe these in Section IV B. DW behaviours at junctions have also been considered in OOPM systems and Kwon et al. [153] considered propagation of Néel DWs in a bifurcated nanowire of multilayer Ni/Co grown on Pt. The DW injection into each of the two arms occurred at different applied fields and this was attributed to the DMI preferring one tilt direction of the DW surface, leading to easier injection in one of the arms. We thus see that the behaviour of DWs at nanowire junctions offers additional opportunities for manipulating DW states for developing devices.
## III Domain wall processes in nanowires
DWs propagating in straight, patterned nanowires show a large array of dynamic behaviours over and above simple motion [154; 155]. These behaviours go beyond the stimulus-driven deterministic DW evolution considered in the previous section and are heavily affected by thermally driven phenomena in the nanowires. We introduce these additional behaviours in brief below.
We refer to the paper by Hayward et al. for most of the discussions in the next two paragraphs[16]. DW processes are stochastic and experimental observations can be classified into three categories[16]:
* DW propagation, even in defect-free nanowires, is not deterministic, even for magnetic fields greater than that required for DW motion
* DWs pinned at engineered pinning sites have rich, multimodal depinning-field distributions. In fact, it is common to observe the DW depinning probability increase sigmoidally with applied magnetic field (a minimal thermally-activated model reproducing this sigmoidal dependence is sketched just after this list). This was observed, for example, by Pi et al. [156], who studied DW pinning at a notch in an IM Permalloy nanowire (Figure 10 (a)) by AMR (Figure 10 (b), black line).
* DWs move non-deterministically past defect sites when magnetic fields are applied that would not be expected to be able to induce depinning.
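As referenced in the list above, the sigmoidal dependence of depinning probability on field is commonly rationalised using a thermally-activated (Arrhenius) escape model. The sketch below is a minimal version of such a model; the barrier law, attempt frequency and all parameter values are illustrative assumptions rather than values from Refs. [16] or [156].

```python
# A minimal thermally-activated depinning sketch: an applied field lowers the
# pinning barrier, and the probability of escaping within a dwell time follows
# from an Arrhenius rate. All parameters are illustrative assumptions.
import numpy as np

f0 = 1e9              # attempt frequency (Hz), assumed
E0_over_kT = 40.0     # zero-field barrier in units of k_B*T, assumed
H_dep = 10.0          # athermal (zero-temperature) depinning field (mT), assumed
t_wait = 1.0          # dwell time at each field (s)

def depinning_probability(H):
    """P(depinned within t_wait) for a barrier E(H) = E0 * (1 - H/H_dep)^(3/2)."""
    barrier = E0_over_kT * np.clip(1.0 - H / H_dep, 0.0, None) ** 1.5
    rate = f0 * np.exp(-barrier)            # Arrhenius escape rate
    return 1.0 - np.exp(-rate * t_wait)

for H in np.linspace(2.0, 5.0, 7):          # sweep around the transition (mT)
    print(f"H = {H:4.1f} mT   P_depin = {depinning_probability(H):.3f}")
```

Because the escape rate depends exponentially on the field-dependent barrier, the probability rises from near zero to near one over a narrow window of field (roughly 3 to 4 mT for these parameters), giving the sigmoidal curves seen experimentally.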
The simple picture of DW motion in a nanowire described in Section II.3 holds above a threshold field strength, required to overcome imperfections in a wire, and with constant mobility \(\mu_{\text{DW}}=\frac{v}{H}\), where \(v\) is DW velocity and \(H\) the driving magnetic field [157; 158]. This 'viscous' regime remains until the 'Walker breakdown' field, the value of which depends upon nanowire material, geometry, and DW structure. Above this field, DWs first enter a regime of oscillatory motion and negative differential mobility, followed by a turbulent regime that sees the DW mobility become positive once more [16] (refer to Figure 11). Walker breakdown processes see dramatic changes to the dynamic structure of DWs, accompanied by the near-periodic stalling of DW motion and transition of the DW configuration between different types and chiralities. In the nanowire configuration considered in [16], an oscillatory transition between vortex DW (VDW) and transverse DW (TDW) types of alternating chiralities occurs. This stochasticity of DW structure has been explained as thermally-driven fluctuations in magnetic-field-driven Walker breakdown pathways [16]. Furthermore, thermal perturbations affect these different regimes of DW dynamics quite significantly, especially above Walker breakdown, when
Figure 8: (a) ‘Onion’ and (b) ‘vortex’ magnetic states in a soft ferromagnetic nanowire ring. The DWs in the ‘onion’ state are highlighted by the circles and the colour wheel maps the colours to magnetisation orientation.
the intrinsic instabilities of the DWs are heavily influenced by relatively small perturbations [16].
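The qualitative features of these regimes can be reproduced with the widely used one-dimensional collective-coordinate ('\(q\)-\(\phi\)') reduction of Equation (1), in which the DW is described only by its position \(q\) and internal tilt angle \(\phi\). The sketch below integrates this toy model for several drive fields; the parameter values are assumptions of roughly Permalloy-like magnitude and are not taken from Ref. [16].

```python
# A minimal sketch of the one-dimensional (q, phi) model of field-driven DW
# motion, illustrating the viscous regime below the Walker field and the drop
# in time-averaged velocity above it. Parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

gamma0 = 2.21e5        # mu_0 * |gamma| (m A^-1 s^-1)
alpha = 0.02           # Gilbert damping (assumed)
Delta = 20e-9          # DW width parameter (m), assumed
H_K = 8.0e3            # transverse (shape) anisotropy field (A/m), assumed
H_W = 0.5 * alpha * H_K          # Walker field of this model

def rhs(t, y, H):
    q, phi = y
    phidot = gamma0 * (H - 0.5 * alpha * H_K * np.sin(2 * phi)) / (1 + alpha**2)
    qdot = gamma0 * Delta * (alpha * H + 0.5 * H_K * np.sin(2 * phi)) / (1 + alpha**2)
    return [qdot, phidot]

def mean_velocity(H, t_end=2e-6):
    sol = solve_ivp(rhs, (0.0, t_end), [0.0, 0.0], args=(H,), max_step=1e-10)
    return sol.y[0, -1] / t_end

print(f"Walker field of this model: {H_W:.0f} A/m")
for H in (0.25 * H_W, 0.5 * H_W, 0.9 * H_W, 1.5 * H_W, 3.0 * H_W):
    print(f"H = {H:6.1f} A/m   <v> = {mean_velocity(H):6.2f} m/s")
# Below H_W the mobility is ~ gamma0*Delta/alpha and <v> grows linearly with H;
# above H_W, phi precesses continuously and the average velocity collapses.
```

This reduced description captures the linear (viscous) regime and the collapse of the average velocity at the Walker field, but not the turbulent regime or the thermally driven structural fluctuations discussed above, which require full, finite-temperature micromagnetic treatments.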
Such DW processes are not restricted to magnetic field stimuli. Similar dynamics to the field-driven dynamics described above have been predicted and observed with current-driven DW motion in patterned nanowires. Duine et al. [159] used the stochastic version of the LLG equation, which has a stochastic field component to account for temperature-driven perturbations. The authors found that the variation of the DW velocity with applied current was linear even without any STT at non-zero temperatures due to these perturbations, indicating that they can play an important role. Subsequently, Hayashi et al. [83] found that current-driven DWs propagating through an IM Permalloy nanowire and pinned at a notch could have either transverse or vortex structures of different chiralities, despite identical experimental conditions and a single structure being used. They also found that each of the four DW states observed had substantially different depinning fields but similar depinning current densities, and concluded that with current-induced depinning, all the DW types may be converting to the same DW state. Higher current densities again result in DWs experiencing Walker breakdown and turbulent dynamics, although the Walker threshold current density can be found above the point where the DW velocity has ceased to respond linearly to current [154]. Hybrid approaches to driving DW motion have also been studied and have shown that application of a magnetic field lowers current driving thresholds [160; 73]. Furthermore, as seen in Section II.4, the patterned geometry of a magnetic nanowire plays an important role in determining DW dynamics. For example, modifying the curvature of nanowires has been predicted to modify the magnetic-field-driven DW velocities, precession, and
Figure 10: (a) The measurement setup for detecting the depinning of a DW in an IM notched nanowire. (b) The relative probability of a DW depinning from the notch (black), a vortex DW (red) and transverse DW (blue) being pinned at the notch. Taken from [156].
Figure 9: FIB images of typical three-terminal structures fabricated, with (a) zero, (b) one, and (c) two DWs being injected into the junction. Directly below each image is a hysteresis loop of the output arm corresponding to the type of structure shown showing the difference in fields at which the magnetisation switches. Taken from [152].
oscillation frequencies [161]. The DW velocity also shows a curvature-dependent oscillatory behaviour in curved nanowires that is distinct from the oscillatory feature above Walker breakdown seen in straight wires [162; 163].
In applications for which DW stochasticity is detrimental, Walker breakdown might need to be minimized and various techniques have been suggested for this. For instance, doping permalloy nanowires with rare-earth elements such as holmium can increase the Gilbert damping parameter and thereby suppress Walker breakdown until much higher magnetic fields [164]. Geometric suppression of Walker breakdown in nanowires has also been achieved using a series of cross-shaped wire junctions in a comb-like geometry to interrupt and reset, at regular intervals, the dynamic pathway of DW dynamics that lead to magnetic oscillations [165].
We have seen in this and previous sections that DWs in nanowires exhibit rich dynamics which can be controlled via a variety of stimuli including magnetic fields, charge and spin-polarised currents, spin-orbit torques and mechanical strain. DW properties such as nucleation, chirality and position can be controlled by many of these stimuli, as well as by other factors such as defects in the nanowires and modifications of the wire geometry. Furthermore, nanowires show complex behaviour such as stochastic switching of domains involving probabilistic pinning/depinning of DWs. In the next sections, we shall describe applications proposed using DWs in nanowires and explain how, while for some applications these complex effects can be detrimental, for other novel applications, these probabilistic processes can be highly desirable.
## IV Applications of domain walls in patterned nanowires
Proposed applications of magnetic DWs in patterned nanowires have mostly focused on information technologies, originally just for digital memories and computing, but more recent proposals have included physical realisations of approaches to unconventional computing. Other uses include magnetic field sensors, various bio-applications using magnetostatic interactions of DWs with magnetic beads, and even externally-addressed cold-atom optics, again using magnetostatic interactions. Here, we review research in each of these areas. Studies which have theoretically or numerically simulated a device will be given a (Type:Sim) descriptor, while an experimentally demonstrative study will be given a (Type:Expt) descriptor.
### DW memories
The first magnetic DW-nanowire memory proposal envisaged positioning of a DW between two positions defined by changing lateral dimensions of an IM ferromagnetic wire [166] (Type:Sim). The element memory might then be defined either as the magnetisation direction of the central region or the position of the DW, depending on the readout scheme employed. Positional bistability has also been created using induced stress patterns in magnetostrictive materials [109], magnetic fields transverse to the nanowire, particularly when combined with DW chirality [41] (refer to Figure 12), and the
Figure 11: (a) (Top) : A diagram showing the geometry of the nanowire simulated in [16]. The initial vortex DW configuration is shown in higher detail. The plots below show the DW velocity and normalised magnetisation in different regimes of DW dynamics. Taken from [16].
position of contact wires for current-induced DW motion [167], which was the basis of the first commercial DW-nanowire MRAM [71].
Other proposed memories use more continuous propagation of DWs in nanowires. Parkin et al. [4; 6] (Type:Expt) proposed a shift register technology known as 'racetrack memory' (RM) in 2008 in which digital data are encoded as a series of DWs or oppositely-magnetised domains along an IM nanowire. This approach relies upon using current-induced DW motion in order to ensure that all DWs move in the same direction, thus defining the direction of shift register operation. This approach also offers relatively simple device integration, since only single elements for data writing and readout need be incorporated into an extended wire. Parkin et al. [168] (Type:Expt) further proposed different versions of the racetrack, with the latest version (v4) making use of synthetic antiferromagnets on heavy metals and exploiting spin-orbit driven effects for driving DWs. These memories promise the advantages of high DW velocity, low power consumption and reduced fringing fields in the device.
Franken et al. [62] (Type:Expt) engineered effective unidirectional field-driven DW motion in an OOPM nanowire ring by patterning a series of gradients in the wire edge profile to make diode structures. These elements prevented back-propagation of DWs, thus allowing a bidirectional out-of-plane magnetic field to result in unidirectional DW motion. The ring structure meant that DWs could continue to circulate controllably for an arbitrary time and fulfil the requirements of a shift register.
Magnetic tunnel junctions (MTJs) [169] comprising ferromagnetic layers with free and fixed magnetisation separated by an electrically-insulating layer are also attractive candidates for memory/logic devices. They are particularly useful for reading out the states of devices using phenomena such as tunneling magnetoresistance (TMR) [170]. Lou et al. [171] (Type:Expt) have proposed an MRAM device operating at multiple levels implemented using MTJs having a DW in the free layer, which is formed due to lithographical imperfections. Subsequently, Raymenants et al. [172] (Type:Expt) proposed an improvement to this by introducing a hybrid layer (an additional free layer in an MTJ) to show a three-operation device for DWs that allows addressable writing, DW transfer, and readout.
In order to increase the storage densities of memories, it is necessary to minimise the size of the magnetic bit. However, the energy barrier to be overcome for the switching of magnetic domains scales with domain size, and thus smaller magnetic bits are more susceptible to thermal effects that can lead to stochastic depinning, which is an existing hurdle towards both dependable switching and long-term information retention in such memories. Another major active research area is the manipulation of DWs for storing information. Magnetic field control of magnetic states is not attractive for devices due to poor control and high power consumption, and so current control of magnetisation is the way forward. However, the current densities required for STT/SOT-driven DW motion in nanowires are still relatively high and their application can lead to significant Joule heating of the device [173]. Furthermore, reading and writing of information requires controlled DW shifting and precise position control (for alignment with read/write heads), and these stringent requirements need further study. These challenges are still to be overcome before the commercialisation of DW-based long-term memory storage devices. The interested reader is referred to other reviews [174; 175; 176] for further information about mechanisms, devices, and materials for DW memories.
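The scaling argument above can be made semi-quantitative with a simple Arrhenius estimate; the sketch below uses assumed, order-of-magnitude parameters (not values from the cited works) for a thin OOPM bit of decreasing diameter.

```python
# A rough, order-of-magnitude illustration (assumed parameters, not values from
# the cited works) of how shrinking a bit degrades thermal stability: the
# barrier E = K_eff * V gives a stability factor Delta = E / (k_B * T) and an
# Arrhenius retention time tau = tau0 * exp(Delta).
import numpy as np

k_B = 1.380649e-23       # J/K
T = 300.0                # K
K_eff = 2e5              # effective anisotropy energy density (J/m^3), assumed
t_film = 1.2e-9          # free-layer thickness (m), assumed
tau0 = 1e-9              # attempt time (s), assumed

for d_nm in (40, 30, 20, 10):                       # bit diameter
    V = np.pi * (0.5 * d_nm * 1e-9) ** 2 * t_film   # cylindrical bit volume
    Delta = K_eff * V / (k_B * T)
    tau = tau0 * np.exp(Delta)                      # retention time in seconds
    print(f"d = {d_nm:3d} nm   Delta = {Delta:6.1f}   tau ~ {tau:.2e} s")
# Retention collapses from effectively permanent (>> 10 years ~ 3e8 s) to
# sub-second as the bit is scaled down, unless K_eff is increased in step.
```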
### Boolean computing
Magnetic-field-driven Boolean computing using DWs in IM nanowires was first demonstrated approximately 20 years ago by Allwood et al. [11] (Type:Expt). They used in-plane rotating magnetic fields to drive DWs around 2D nanowire circuits (refer to Figure 13), with wire corners providing a means to limit the extent of DW propagation during any particular phase of field rotation, thus allowing the external field to act as a circuit clock. The behaviour of DWs at wire junctions allowed nanowire elements to be created that performed operations usually associated with CMOS logic circuit elements (Figure 14) [11]. These included a Boolean NOT-gate, which performed an inversion operation, and an AND/OR-gate, the function of which depended on a dc magnetic field bias direction. They went on to show how these elements could be integrated to realise more complex logic circuits, including using a chain of NOT-gates to create a clocked shift register memory (Figure 13). Allwood et al. [177] (Type:Expt) further proposed a shift register using a three-terminal 'Y-shaped' IM nanowire structure with different widths of the nanowires in the structure. While two of the terminals of the device were used for the NOT operation described in [11], the third terminal supplied an extra, non-inverted output and, by cascading a series of such structures, they realised a shift register operation.
Zheng et al. [178] (Type:Expt) extended this idea to show that field-driven DW motion in a Y-shaped Permalloy nanowire device, which has a "gate" terminal whose magnetisation determines the DW propagation dynamics in the "source" and "drain" arms, can be used to realise a transistor operation. DW depinning at the junction is controlled by the direction of the magnetisation in the gate arm (refer to Figure 15). They proposed in-memory computation using this element, where the junction device can be used to store memory states as well as perform logic operations (such as OR and NAND), while acknowledging the cross-disciplinary challenges of implementing a spintronic arithmetic logic unit for feasible in-memory processing of data.
The above applications with IM materials used nanowires with dimensions that would result in transverse DWs of a random chirality. Vandermeulen et al. [179] (Type:Sim) proposed a simulated device which used the two chiralities of transverse DWs in an IM nanowire as Boolean logic levels to realise NOT/AND/OR logic gates. Around the same time, Goolaup et al. [48] (Type:Expt) demonstrated an IM nanowire device which could be used to invert or rectify the chirality states of transverse DWs. They used magnetic force microscopy to show that the chirality of a T2T transverse DW could be rectified from 'down' to 'up', while in the inverter case, the chiralities of both T2T and H2H DWs could be reversed. The chirality
Figure 12: (a) An SEM image of a nanowire with a notch with axial field hysteresis loops for different transverse fields to assist depinning. (b) A proposal for a multiple state memory using the direction of magnetisation in the wire and orientation of the DW, Taken from [41].
Figure 13: Images of (a) a magnetic nanowire circuit showing a combination of different gate logic element implementations mentioned in Table 14. (b) A 5-bit magnetic shift register implemented using NOT gates and a fan-out junction. Taken from [11].
Figure 14: Simple logic elements with their CMOS and magnetic nanowire implementations. Taken from [11].
inherent in vortex DWs has also been proposed as the basis of chirality-encoded DW logic [42; 127]. Omari et al. [122] (Type:Expt) used the chirality reversal of DWs that propagate through notches in the wire edges to perform inversion operations. They also showed that two-input wire junctions could be programmed to operate as logic gates (AND/NAND/OR/NOR) by controlling the chirality of the DW in the output arm, which in turn was determined by which input arm switched its magnetisation first.
The above proposals and demonstrations were with magnetic fields being the agents to manipulate DWs. We now consider electric-current-driven proposals for Boolean logic devices. Incorvia et al. [180] (Type:Expt) demonstrated a three-terminal DW-based MTJ device for various logic operations including inverter and buffer operations. A memory cell device was modified to be used for logic purposes, with input information encoded into the position of a single DW (as a logic '0' or '1', as shown in Figure 16 (a) and (b)) in a nanowire using STT, and the information state was read out using TMR. They demonstrated an inverter and a buffer operation (refer to Figure 16 (c) and (d)) using the device and subsequently realised a three-inverter network by arranging three devices in series. Manfrini et al. [181] (Type:Expt) used a four-pillar device to propagate DWs over long distances in the common free layer of four MTJs using STT, with the device state read out using the TMR of the output pillar. They then used these device states to simulate the implementation of a spin torque majority gate, which is a candidate for an efficient spin-logic device [182].
While electric current driven control of DWs in nanowires has many advantages over field driven control (refer to Section II.3), the challenges associated with current control of DWs (such as the high current densities required) have limited device proposals. While current-driven DW-based NOT-gates were demonstrated in 2008 using Invar elements [183], it was the use of SOT driven current to control DWs that has led to progressing the state of the art involving proposals of current driven DW based logic devices. Baek et al. [184] (Type:Expt) have shown that a combination of electrical-field-controlled SOT switching and voltage-controlled magnetic anisotropy switching of the magnetisation in \(\mathrm{Ta}/\mathrm{CoFeB}/\mathrm{MgO}/\mathrm{AlO}_{\mathrm{x}}\) can be used to realise a spintronic logic device. They claimed that compared to a CMOS implementation, a half-adder implemented using their system would be an order of magnitude smaller, have lower energy consumption, and offer the flexibility of dynamic reconfigurability. Subsequently, Luo et al. [12] (Type:Expt) were able to demonstrate all-electrical control of DWs in OOPM nanowires to perform all necessary operations for a full logic architecture (Figure 17). This included the development of three-input-one-output wire junctions to create majority gates with a logical AND/OR function governed by the magnetisation direction of a central control wire, and half- and full-adder circuits made of cascaded magnetic nanowire NAND-gates (Figure 17 (c)-(d)). This approach offers the potential of highly efficient and high-speed processing based on DWs in magnetic nanowires.
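As an aside, the AND/OR programmability of such majority gates can be seen directly at the truth-table level: fixing the third (control) input of a three-input majority function to 0 yields AND, while fixing it to 1 yields OR. The short sketch below only illustrates this logical reduction, with 0/1 standing in for the two magnetisation directions; it is not a model of the device physics in [12].

```python
# Schematic truth-table check: a three-input majority gate reduces to AND or OR
# depending on the control input. Logic levels 0/1 stand in for the two
# magnetisation directions of the input and control wires (illustrative encoding).

def majority(a: int, b: int, control: int) -> int:
    """Return 1 if at least two of the three inputs are 1."""
    return 1 if (a + b + control) >= 2 else 0

for control, behaves_as in [(0, "AND"), (1, "OR")]:
    print(f"control={control} -> behaves as {behaves_as}")
    for a in (0, 1):
        for b in (0, 1):
            print(f"  a={a} b={b} out={majority(a, b, control)}")
```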
We now shift our attention to OOPM materials based pro
Figure 15: The operation of a three-terminal nanowire device with a junction where the magnetic state in one arm ('gate') can be used to control the pinning of a DW at the junction, thus controlling the magnetic states in the other arms and realising a transistor operation. The 'calculation' and 'memory' stages show modes of operation that realise both data processing and storage. Taken from [178].
posals and demonstrations. Alamdar et al. [185] (Type:Expt) drove OOPM Ta/CoFeB/MgO DW-MTJ heterojunction devices using SOT switching to realise two device inverter circuits by controlling the DW position in the CoFeB layer, which in turn determined the output resistance state of the MTJ. They optimised the PMA and lithography process to obtain a high TMR ratio and suitable average resistance-area product of the device. Lin et al. [186] (Type:Expt) have proposed a multistate SOT-DW-MTJ device (refer to Figure 18 (a)) for in-memory computation applications. Apart from the typical three-layer stack of an MTJ (they considered CoFeB/MgO/CoFeB), the device has a W layer on each side of the stack to control the position of the DW in the free CoFeB layer using SOT generated by passing current pulses through the W layer. Along with MOKE imaging of the DW states in similar devices, they simulated the operation of buffer, inverter and typical logic gates on single and dual inputs to the device. They also used micromagnetic and circuit level simulations to show the operation of a full adder using their device and expect the read/write latency to be as short as \(1.25/0.22\,\mathrm{ns}\), an average writing energy of \(8.4\,\mathrm{fJ}/\mathrm{bit}\) and a power consumption of \(26.25\,\mathrm{\mu W}\), which they claim is significantly lower than alternative implementations of devices for similar applications. This is promising as, according to previous estimates by Xiao et al. [187], the power consumption of a 32 bit STT/SOT driven DW-MTJ adder circuit is comparable to its CMOS counterpart.
Finally, in proposals involving different DW control stimuli, Zhang et al. [188] (Type:Expt) have supplemented the use of current-induced DW motion with ultrafast laser pulses of controlled helicity in a nanowire device. The current pulses had to be timed with the application of optical pulses and allowed a range of logic operations to be demonstrated at wire junctions, as shown in Figure 19. Due to the lower current densities required to drive DWs, the energy required for this implementation is also lower than for traditional current-driven DW motion.
SOT driven control offers the possibility of device operation at reasonable current densities, and this creates industrial potential in these devices. Moreover, as most of the demonstrations in which individual DW based logic devices are integrated use CMOS compatible interconnects, this should be amenable to industrial development. There are, however, a significant number of challenges that need to be overcome before commercialisation. The individual logic elements (like a Boolean logic gate) need to be scaled down in size for incorporating advanced logic function implementations in DW based Boolean logic devices. Moreover, more complex logic and networks need to be demonstrated and studied for more complex computation, and these need to be tested for repeatability and reliability. They need to be engineered for precise control of
Figure 16: (a) and (b) A DW in a nanowire read out using the resistance state of an MTJ on top of the nanowire. The position of the DW denotes a logic ’0’ or ’1’. (c) and (d). The device showing an inverter and buffer operation on applying clock pulses. Taken from [180].
DW manipulation at high speeds of operation. The stability of DWs becomes a significant issue here as well, and in the next section we describe device proposals which aim to use this instability of DWs for computing operations.
### Non-Boolean/Neuromorphic/Unconventional computing
Neuromorphic computing (NC) describes various forms of 'brain-inspired' computing [189, 190, 191, 192, 193] and is seen as offering exciting routes to reducing the power consumption involved in implementing artificial intelligence algorithms currently implemented using CMOS-based von Neumann architectures [194, 195]. NC involves replacing part or all of a conventional computer with bespoke elements that enable computation. One common such element is a neural network, in which a neuron acts as a processing element that transforms multiple inputs into a processed output. A synapse is a memory element that has a multi-weighted transmission response and conducts neuron outputs, with weights generally trained (using a training algorithm) to solve a particular task. The response of the neuron to a stimulus, called the 'activation function', should have a required degree of nonlinearity to achieve efficient separation of inputs in classification tasks. Neurons may also have more advanced functions including 1) an 'integrate-and-fire' response, in which the neuron outputs a signal when its integrated input crosses a threshold, 2) a 'leaking' property, by virtue of which the energy stored in the neuron is dissipated over time, and 3) 'lateral inhibition', by which a firing neuron prevents other neurons in the same layer from firing. These properties will be useful for understanding spintronic proposals/implementations of NC.
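To make these terms concrete, the following minimal sketch implements a layer of leaky integrate-and-fire neurons with a simple winner-take-all form of lateral inhibition. The discrete-time update, parameter values and reset rule are purely illustrative and are not tied to any of the DW devices discussed below.

```python
import numpy as np

# Minimal sketch of a layer of leaky integrate-and-fire neurons with lateral
# inhibition (winner-take-all). All parameters are illustrative.

def lif_layer(inputs, leak=0.9, threshold=1.0):
    """inputs: array of shape (timesteps, n_neurons) of input currents."""
    n_steps, n_neurons = inputs.shape
    state = np.zeros(n_neurons)            # integrated 'membrane' variable
    spikes = np.zeros_like(inputs)
    for t in range(n_steps):
        state = leak * state + inputs[t]   # leaky integration
        fired = np.flatnonzero(state >= threshold)
        if fired.size:
            winner = fired[np.argmax(state[fired])]  # lateral inhibition:
            spikes[t, winner] = 1.0                  # only the most excited
            state[:] = 0.0                           # neuron fires; layer resets
    return spikes

rng = np.random.default_rng(0)
out = lif_layer(rng.uniform(0.0, 0.5, size=(50, 4)))
print(out.sum(axis=0))   # number of spikes emitted by each neuron
```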
Spintronic implementations of NC [196] have been demonstrated in recent years using spin-torque nano-oscillators [197], superparamagnetic tunnel junctions [198], spin waves in a ring resonator [199], and artificial spin ices (ASIs) [200]. They have also been proposed using interactions in spin-Hall nano-oscillators [201], ensembles of superparamagnetic particles [202], skyrmions [203, 204, 205], dipolar-coupled nanomagnets [206], magnetic states of ASIs [207] and spin waves in a thin film [208, 209] and in coupled nanomagnets on a thin film [210, 211]. Here, we shall focus on approaches that use magnetic DWs in nanowires to perform NC. Table 1 shows a comparative analysis of all these approaches with prospective advantages and challenges.
A DW-based spintronic implementation of a neuron and a synapse was initially proposed in 2016 [227] (Type:Sim) using MTJs (refer to Figure 20 (a)), with OOPM materials used for the free and fixed layers and the free layer hosting a DW. The neuron implementation was suggested to be achieved by moving the position of the DW in the free layer using SOT generated by current pulses. Synaptic behaviour was envisaged via a multi-step conductivity response of the MTJ obtained
Figure 17: (a) A Y junction device which is demonstrated to be a logic circuit. (b) MOKE images showing operation of the junction device when driven using current pulses. (c) An XOR gate demonstrated using NAND gates and (d) a full adder circuit with magnetic force microscopy images showing different states in the gate operations. Taken from [12].
by moving the DW in the free layer between artificial pinning sites. Subsequently, Hassan et al.[228] (Type:Sim) proposed a neuron with lateral inhibition by extending the DW-based MTJ proposed by Incorvia et al.[180] to include an underlying hard IM ferromagnetic track. Integration occurred by passing a current through the DW track, and firing occurred when the DW reached an MTJ at the end of its track, causing the MTJ's resistance state to change. Leaking action was obtained by the dipolar field of the ferromagnetic track on the DW in the MTJ free layer, causing the DW to return to its initial state after the application of the current pulse. They went on to propose the arrangement of multiple devices in an array and could obtain lateral inhibition due to the suppression of DW movement in one track by dipolar coupling with its neighbouring track. Modification of the spacing between tracks and the current passing through them could control the degree of lateral inhibition, and the array response was also used to categorise more than 100 handwritten digits with 94% accuracy.
Further recent proposals of NN functions with DW motion include utilising magnetocrystalline anisotropy gradients[229] and width-modulated wires[230], both of which modify the energy landscapes of DWs for propagation. The leaking action is provided by a restoring force which is induced by the modified energy landscape on a displaced DW. Brigner et al.[229] (Type:Sim) suggest that by introducing anisotropy gradients in a 3-terminal MTJ device, DWs move from regions of higher to lower anisotropy without external stimuli, which leads to a form of leaking action. Upon passing a current through the DW track, the DW movement causes the integration of the current response. Similar to the above, field- and current-free DW movement due to the width gradient in a nanowire[230] (Type:Sim) causes the simultaneous implementation of the leaky and integrate actions required for a neuron. Cui et al.[231] (Type:Sim) simulated current and field-driven DW motion in a pair of adjacent MTJ-DW devices and obtained lateral inhibition by tuning the magnetostatic interaction between the devices at the Walker breakdown field. Wang et al.[232] (Type:Sim) have used micromagnetic simulations of DWs in an OOPM nanowire in which the DW nucleation is stochastic using STT (due to thermal effects) and DW dynamics are deterministic using SOT. This led to both stochastic modifications of synaptic weights and a multilevel output synaptic response. They could simulate machine learning of breast cancer data with an accuracy of 95.7% and estimated an energy consumption of less than 2 fJ for a device based on their proposal.
Experimental realisations of NC based elements using DW-based MTJ devices have concentrated on synaptic behaviour and Siddique et al.[223] (Type:Expt) showed both linear and nonlinear activation functions by using patterned MTJ elements on an underlying DW-carrying CoFeB nanowire (which was the MTJ free layer). DW positions were changed by SOT induced by currents passed through the free layer and read out via the resistance states of output MTJs. This approach allows the precise form of activation function to be specified by the relative size of successive MTJ regions along the DW
Figure 18: (a) The proposed SOT/DW/MTJ device for in-memory computation. (b) The proposed hybrid device to make use of multiple DW states for implementing logic operations. Taken from [186].
nanowire conduit; for example, using successively larger and then smaller MTJs creates a sigmoidal response (refer to Figure 20 (b)). Furthermore, it is relatively fast, being operated with 8 ns current pulses, and energy efficient, since each pulse consumed \(<\) 16 pJ. Leonard et al. [224] (Type:Expt) have demonstrated a DW-based synapse read out using a single MTJ. The device consisted of an MTJ stack with an OOPM CoFeB free layer and an underlying Ta layer for writing magnetic states using SOT. The CoFeB layer had a DW track patterned with notches so as to control DW nucleation and pinning. The MTJ resistance was read out using TMR via electrical contacts and they considered trapezoidal and rectangular shaped DW tracks for different functionalities. Since the trapezoidal device had a varying track width and the threshold voltage to depin the DW is inversely proportional to the track width, it showed deterministically switched resistance states. Also, while it had 9 intermediate notches, the authors were able to obtain only 4 resistance states from it, and they attributed this to inconsistencies in the fabrication process. The rectangular device, on the other hand, had constant width and notch configurations and showed probabilistically switched resistance states. The authors subsequently simulated machine learning of a Fashion-MNIST dataset (using the trapezoidal geometry) and of CIFAR-100 image recognition (using the rectangular geometry). Importantly, the authors also compare the performance of their DW-based synapse with other proposed synapse implementations in terms of write energy, update duration and write noise. Leonard et al. [222] (Type:Expt) have also shown a DW-based device with MTJ readout that used a similar stack configuration and SOT driven stimulus as used in [224]. The device had a voltage dependent activation function and they used this response to implement machine learning using a noisy version of the Fashion-MNIST dataset and claimed that such devices could be used to implement robust networks for NC. The interested reader can refer to the recent review by Hu et al. [233] for more details on such devices.
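The geometric shaping of the activation function described above can be pictured with a toy model: if the read-out is taken to be proportional to the total area of the MTJ pillars the DW has already passed, a small-large-small sequence of pillar areas produces a sigmoid-like dependence of read-out on DW position. The sketch below encodes only this picture with illustrative numbers; it is not the measured response of the devices in [223].

```python
import numpy as np

# Toy model: MTJ pillars sit at equally spaced positions along the DW conduit.
# A pillar contributes to the read-out once the DW has moved past it, with a
# weight proportional to its area (numbers are illustrative).
pillar_positions = np.arange(1, 9)                        # equally spaced
pillar_areas = np.array([1, 2, 4, 8, 8, 4, 2, 1], float)  # small-large-small

def activation(x):
    """Normalised read-out for a DW at position x along the conduit."""
    return pillar_areas[pillar_positions <= x].sum() / pillar_areas.sum()

for x in range(0, 10):
    print(f"DW position {x}: activation = {activation(x):.2f}")
```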
Kumar et al. [234] have proposed a DW based synapse using a meandering OOPM nanowire configuration. The magnetic states were written using a combination of an OOP magnetic field and SOT via an underlying tungsten layer. This layer was grown using a novel procedure involving sub-layers grown at different pressures, which reduced the roughness of the tungsten in contact with the magnetic layer and aided spin current propagation. This led to low SOT current densi
Figure 19: Opto-electronically driven DW motion used for realising (a) AND, (b) OR, (c) NAND, and (d) NOR logic gates. Inputs \(A\) and \(B\) correspond to current and optical pulses used to drive DWs and \(R\) denotes the resistance state of the wire which serves as the logic gate output. Taken from [188].
ties (\(\sim 10^{6}\,\mathrm{A/m^{2}}\)) being required and a low energy consumption of \(0.4\,\mathrm{fJ}\) to move a DW by \(\sim 19\,\mathrm{\mu m}\). The authors varied the distance between neighbouring segments in the meandering nanowire to realise synapses of varying degrees of stochasticity.
The stochastic dynamics of DWs at notches in IM nanowires have also been used to demonstrate functional binary stochastic synapses by Ellis et al. [235] (Type:Expt). They used magnetic field driven nucleation and the Oersted fields of current pulses for DW dynamics in an IM permalloy nanowire with an artificial notch to realise the sigmoid-like passing probability of a DW, measured using MOKE. They went on to simulate and demonstrate machine learning of handwritten digit recognition in the device using a gradient learning rule
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline
**Type of spintronic element** & **Computation performed/proposed** & **Advantages** & **Challenges** \\ \hline \hline
Spin torque nano-oscillators and superparamagnetic tunnel junctions & Magnetic synapse [212], spoken digit recognition and waveform classification [197], vowel recognition [213], generation of cursive letters [214], stochastic computing [215], random number generation [198] (Type:Expt) & Small size, low power consumption, CMOS compatibility, downsizing to potentially atomic dimensions, switching at low current densities (due to aid of thermal perturbations) & \\ \hline
\end{tabular}
\end{table}
that adjusted synaptic stochasticity and energy throughput depending on the number of measurements.
All these explorations indicate that there is promise in realising NC devices and circuits using DW-based devices. There is now a need to scale the elements of these implementations to realise more complex circuits and further novel functionalities. The realisation of neurons and the other proposed foundational NC elements is necessary for this purpose. Furthermore, the possible fabrication and behavioural inconsistencies of DW tracks and nanostructures need to be resolved for technological implementations.
Apart from proposals for neural network elements, other forms of unconventional computing have also been explored. One such paradigm is stochastic computing, where complex computations can be realised using streams of random bits [236]. In this context, the inherent uncertainties of DW dynamics can also be useful for stochastic computing paradigms [237]. Hernandez et al. [19] (Type:Expt) have used the stochastic pinning and propagation of DWs at nanowire junctions to realise a Galton board, an archetypal experiment in statistics. The original Galton board consisted of balls moving down a slanted surface encountering a grid of obstacles, which led to a Gaussian diffusion random walk of the balls. In [19], a magnetic field was used to drive a single DW in a branched network of IM magnetic nanowires to recreate a Galton board (Figure 21 (a)). The distribution of DWs across the eight output wires approximated a normal distribution (Figure 21 (b)), although the final position of successive DWs was highly uncorrelated. The authors passed the binarised output sequence through the NIST Statistical Test Suite for Random and Pseudorandom Number Generators and it passed all 13 tests, indicating a high degree of uncorrelation. Interestingly, removing the central wires in a separate structure resulted in a flattened DW distribution across output wires (Figure 21 (c) and (d)). This highlights how this simple approach could be adapted to different tasks. One might envisage adaptive, addressable tuning of DWs at wire junctions offering training functionality in reconfigurable magnetic nanowire Galton board networks.
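The expected output statistics of such a board can be sketched with a few lines of Monte Carlo: if each junction passes a DW left or right with equal probability, the exit wire index follows a binomial distribution. The 50/50 branching below is an idealisation of the stochastic depinning and is not fitted to the data of [19].

```python
import numpy as np
from collections import Counter

# Idealised nanowire Galton board: at each of 7 branching rows a DW goes
# left (0) or right (1) with equal probability, giving 8 possible output wires.
rng = np.random.default_rng(1)
n_rows, n_walls = 7, 10000
exit_wire = rng.integers(0, 2, size=(n_walls, n_rows)).sum(axis=1)

counts = Counter(exit_wire)
for wire in range(n_rows + 1):
    bar = "#" * (counts.get(wire, 0) // 100)
    print(f"output wire {wire}: {counts.get(wire, 0):5d} {bar}")
```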
Another random number generator was demonstrated by Narasimman et al. [238] (Type:Expt), who reported a 65 nm CMOS readout circuit for an OOPM Hall cross in which the magnetic texture took random orientations due to thermal and demagnetisation effects. They changed the magnetic textures using pulsed currents and detected these textures using the anomalous Hall effect by a modulation and amplification scheme. Their device consumed a power of 126 \(\mu\)W when driven using a 1.2 V supply and could function with voltage drifts of 440 \(\mu\)V.
The final unconventional computing paradigm that we shall consider is that of reservoir computing (RC), which is a bioinspired computational paradigm like NC and has emerged as a prime candidate for processing time-varying signals [239, 240]. A reservoir computer typically consists of an input layer, a reservoir layer and an output layer (refer to Figure 22). A conventional reservoir consists of a recurrent neural network with fixed internal weights. The recurrence in the network, the neuron interactions, and the behaviour of individual neurons themselves combine to provide a time-dependent nonlinear reservoir response to input, which makes RC well-suited for analysing time-varying data. However, unlike traditional recurrent neural networks, only the output layer has to be trained, which makes training very efficient.
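A minimal software analogue of this architecture is the echo state network: a fixed random recurrent reservoir transforms the input, and only a linear read-out is fitted, here by ridge regression. The sketch below, with an arbitrary sine-to-square transformation task and illustrative sizes, is meant only to make the 'train the read-out only' idea concrete.

```python
import numpy as np

# Echo-state-network sketch: fixed random reservoir, trained linear read-out.
rng = np.random.default_rng(0)
n_res, t_len = 100, 2000
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # spectral radius below 1

u = np.sin(np.linspace(0, 40 * np.pi, t_len))       # input signal
y_target = np.sign(u)                               # desired transformed output

states = np.zeros((t_len, n_res))
x = np.zeros(n_res)
for t in range(t_len):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])          # fixed reservoir update
    states[t] = x

ridge = 1e-6                                        # train the read-out only
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                        states.T @ y_target)
y_pred = states @ W_out
print("mean squared error:", np.mean((y_pred - y_target) ** 2))
```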
The fixed internal weights of the reservoir imply that it can be replaced by any dynamical system, and this has led to many _in-materio_ RC schemes, which involve using a physical system's complex response to a stimulus as the reservoir transformation. Ideally, the physical reservoir should offer a means to represent data, nonlinear sensitivity to input, a memory of previous input that recedes over time (fading memory), and have readable output. In practice, it should also be scalable and exhibit reproducible device dynamics. Magnetic systems are attractive candidates for reservoirs due to their inherent hysteretic and nonlinear response to stimulus, e.g. magnetic field or SOT. There is also a host of methods for representing, inputting, and reading data, as we have described here for DW-based nanowire systems. This promise is reflected in the diversity of studies that have proposed magnetic devices for RC, including using skyrmions [205], magnetic states in ASIs [207], and superparamagnetic particles [202], and demonstrations using STNOs [197] and spin waves in ASIs [200]. The interested reader can refer to the review by Allwood et al. [241] for further details about RC using nanomagnetic devices.
We shall focus on proposals and demonstrations of RC which use DW dynamics. Dawidek et al. [20] (Type:Expt) showed that the stochastic nature of DW pinning and depinning in arrays of IM permalloy nanorings is suitable for RC. The arrays were driven by rotating magnetic fields with the DW behaviour in the array changing according to the field strength. Low-strength magnetic fields left DWs pinned at ring junctions, while strong fields caused all DWs to pass through junctions. The stochastic depinning resulting from intermediate fields gave rise to DW annihilation, from a moving DW meeting a DW pinned at a junction, which leads to the formation of magnetic vortex states in some nanorings. However, DWs subsequently moving through a junction within a vortex-state ring repopulated the nanoring with two DWs. A dynamic equilibrium between these DW loss and gain phenomena led to the array magnetisation (via the DW population) becoming a field-dependent emergent property of the ring arrays, as determined by polarised-neutron magnetometry (Figure 23 (a)). Imaging array magnetisation using X-ray photoelectron emission microscopy (X-PEEM) revealed a changing and rich magnetic state population in these arrays (Figure 23 (b)). The authors also simulated machine learning of spoken digit recognition with these arrays and obtained 99.4% classification accuracy for a single speaker and up to 89% for eight speakers. Subsequently, Vidamour et al. [226] (Type:Sim) used a phenomenological model of the ring array system to optimise the dynamics and performance of these arrays for different classification tasks by tuning the scaling and input rate of data into the reservoir. They used task-agnostic metrics to quantify the capabilities of these arrays for computation and showed the association of these metrics with performance in different tasks.
Vidamour et al. [225] (Type:Expt) then went on to report the
fabrication of electrical devices based on these arrays (Figure 23 (c)) and used their anisotropic magnetoresistance (AMR) response for machine learning. They then experimentally evaluated the task-agnostic kernel and generalisation rank metrics for assessing the suitability of the array for RC and used these metrics for identifying the operating parameters of the arrays. They went on to demonstrate the performance of various tasks of varying computational requirements of non-linearity and memory (including signal transformation, spoken digit recognition and nonlinear autoregressive moving average series prediction) with the array and achieved state-of-the-art performance.
Further proposals of RC include that by Ababei et al. [24] (Type:Sim), who proposed using the oscillations of a single DW in the potential landscape between two protrusions in an IM Ni nanowire to perform RC. They encoded the data input into the amplitude of a driving magnetic field at 500 MHz and used the DW position between the two 'anti-notch' potential barriers along the nanowire length to represent output. The complex DW propagation dynamics in this simple one-DW system proved capable of performing a number of classification tasks, including sine/square wave differentiation, spoken digits, and handwritten digits. Subsequently, Hon et al. [243] (Type:Sim) performed RC using micromagnetic simulations of an array of nanowires with \(\lambda\)-shaped junctions (refer to Figure 24). The DW motion, dynamics, and depinning behaviour at junctions under a clocking magnetic field were used to simulate short term memory (STM) and parity check (PC) tasks using RC methodology. The devices would appear to be robust to changes in temperature, with similar performance estimated at 0 and 500 K.
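For readers unfamiliar with these benchmarks, the STM and PC targets are commonly defined from a random binary input sequence u(t): the STM target at delay d is u(t-d), and the PC target is the parity of the last d+1 inputs. The snippet below uses these standard definitions, which may differ in detail from the exact encoding used in [243].

```python
import numpy as np

# Standard definitions of the short-term memory (STM) and parity-check (PC)
# benchmark targets for reservoir computing (illustrative only).
rng = np.random.default_rng(2)
u = rng.integers(0, 2, size=20)          # random binary input sequence

def stm_target(u, delay):
    """STM task: recall the input 'delay' steps in the past."""
    return np.roll(u, delay)             # valid for t >= delay

def pc_target(u, delay):
    """PC task: parity (XOR) of the last delay+1 inputs."""
    return np.array([u[max(0, t - delay):t + 1].sum() % 2 for t in range(len(u))])

print("input :", u)
print("STM(2):", stm_target(u, 2))
print("PC(2) :", pc_target(u, 2))
```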
The temporal signal processing utility of RC lends itself to analysing delayed feedback dynamics, and typical examples of this are the Mackey-Glass equations [244], which are nonlinear, time-delayed differential equations that give rich dynamics and were originally envisaged as simulating the dynamics of regulated biological systems. Mackey-Glass oscillators obey these equations and allow oscillatory and chaotic dynamics to be explored in a simple manner. This is particularly exciting for RC as the nonlinear transformations using 'edge-of-chaos' dynamics should offer rich and tunable performance. Williame et al. [245] (Type:Sim) demonstrated Mackey-Glass oscillator behaviour in simulations of STT-driven DW motion in a nanowire with an elliptical protrusion, although they commented that, in practice, current-driven heating of the device and fabrication uncertainties might affect device performance.
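For reference, the form of the Mackey-Glass equation most often used in this context is \(\dot{x}(t)=\beta\,x(t-\tau)/\bigl(1+x(t-\tau)^{n}\bigr)-\gamma\,x(t)\). The sketch below integrates it with a simple Euler scheme using the standard textbook parameter choice; these values are illustrative and are not those of the simulated DW oscillator in [245].

```python
import numpy as np

# Euler integration of the Mackey-Glass delay differential equation
#   dx/dt = beta * x(t - tau) / (1 + x(t - tau)**n) - gamma * x(t)
# Standard parameter choice; tau around 17 or larger gives chaotic dynamics.
beta, gamma, n, tau = 0.2, 0.1, 10, 17.0
dt, t_max = 0.1, 500.0
steps = int(t_max / dt)
delay = int(tau / dt)

x = np.zeros(steps)
x[:delay + 1] = 1.2                      # constant history as initial condition
for t in range(delay, steps - 1):
    x_tau = x[t - delay]
    x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** n) - gamma * x[t])

print("last few samples:", np.round(x[-5:], 4))
```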
These proposals and demonstrations show that DW dynamics in nanowire devices possess the capabilities required for RC in a variety of tasks. Future work in magnetic implementations of RC will no doubt see a wider exploration of suitable systems, control and tuning of these systems, as well as further demonstrations of machine learning in a variety of tasks and environments.
### Other applications
The global market for magnetic field sensing was USD $2.21 billion in 2018 and projected to rise to $4.22 billion by 2026 [246]. Although this is dominated by Hall sensors, field sensing is a common application of magnetic materials [247] and magnetoresistive sensors made up almost 15% of the global market in 2018. Some proposals of magnetic field sensing have involved DW devices.
Wolfe et al. [21] (Type:Expt) proposed a sensor based on the Faraday effect in an IM magnetic garnet thin film ridged waveguide. When a single DW, created using a gradient magnetic field, crosses the path of light incident on the film, the
Figure 20: (a) A combination of a biological synapse and neuron with its proposed spintronic equivalent. Taken from [227]. (b) Linear and nonlinear activation functions realised using an MTJ based device. Taken from [223]
Faraday rotation changes and thus the field causing the magnetisation change in the film may be detected. Diegel et al.[22] suggested a four-bit DW-based multi-turn counter, primarily for automotive and industrial applications. The sensor layout consists of four rectangular-shaped spirals, two winding clockwise and two anti-clockwise, of soft ferromagnetic nanowire multilayers that exhibit GMR. Electrical connections formed a half-bridge configuration that gave stability against temperature variation. The spiral centres had DW injection pads that acted as a source or sink of DWs in the magnetic free layer, depending on the rotation direction of the external magnetic field. This gave opposite sensitivity in the two spiral pairs to the sense of field rotation and caused either an increase or decrease in each spiral's free-layer DW population. The GMR arrangement meant that these differences could be measured easily to determine the net whole number of turns of magnetic field in either rotation direction.
DWs have also been used for temperature sensing. Klingbeil et al.[24] (Type:Expt) considered meandering DW formation in Bi-substituted rare earth iron garnet films and measured
Figure 21: SEM micrographs of a magnetic nanowire Galton board which are (a) complete and (c) with the central element missing. The distribution of outputs of (b) the complete board which approached a binomial distribution and (d) the board with an element missing. Here the distribution is flattened indicating tuning of the board response. Taken from [19]
out-of-plane hysteresis loops using a Faraday-effect-based setup. They monitored the change in the overall magneto-optical response, the DW nucleation fields and the magneto-optical susceptibility (the derivative of the magneto-optical response with field) as temperature was varied. Using appropriate calibration, they could monitor changes in temperature over a 20-140 \({}^{\circ}\)C range with an accuracy of 0.1 \({}^{\circ}\)C and with a temperature drift of better than 0.15 \({}^{\circ}\)C.
The stray fields from DWs have also been used to control secondary systems. In particular, there have been several proposals and demonstrations of controlling magnetic nanoparticles (MNPs) for biological applications. Vavassori et al.[249] (Type:Expt) considered a micron-sized square ring of IP permalloy with DWs positioned at diagonally opposite corners. When a superparamagnetic bead was also positioned at one of the corners and the corresponding DW was displaced using a magnetic field, a dipole moment was induced in the bead and the stray field due to this caused the field required to displace the DW to increase. They simulated this shift in field and also measured a 12 Oe shift in the AMR response of the square ring when beads were dispersed on it compared to when they weren't. Furthermore, simulations showed that by reducing the width of the square ring, the shift in field could be increased. Bryan et al.[26] (Type:Sim) used finite element micromagnetic simulations to model the magnetostatic interactions between a superparamagnetic bead and a H2H DW in a nanowire and observed the behaviour of the DW around the stationary bead. They then used analytical formulations to model how the hydrodynamic drag on the bead affects the DW movement. They found that the DW was pinned around the superparamagnetic bead below a threshold magnetic field which was proportional to the bead diameter squared. Even for small beads, this effect is sufficient to reduce the domain wall mobility by five orders of magnitude.
Shortly afterwards, Donolato et al.[250] (Type:Expt) controlled the movement of protein carrying beads of different sizes exploiting the positioning of DWs in zig-zag wires of permalloy using pulsed magnetic fields (refer to Figure 25). The beads tended to follow the DWs with some drift as the speed of DW motion is considerably faster than the maximum measured bead velocity of 15 \(\mu\)m/s. The modification of the field pulse sequence was used to change the direction of bead movement as well as hold the bead at the DW constriction for analysis (refer to Figure 25). The authors also demonstrated continuous control of bead movement by considering DWs in a ring structure and applying rotating magnetic fields. Small
Figure 22: (a) A typical implementation of RC, highlighting the layered architecture: an unchanged reservoir layer with individual neurons has time-varying signals fed into it via weighted connections. A trainable output layer then provides a weighted sum of nodal states from the reservoir. Taken from [226]
field steps were used to obtain fine control of nanoparticle positioning with an accuracy of 100 nm. Bryan et al. [27] (Type:Expt) considered different types of nanowires (single planar, two perpendicular and curved) and simulated traps in them to identify the trapping stray field produced. They went on to fabricate these wires and used X-ray imaging to identify that the beads were trapped in the nanowire corners where the DWs were pinned. Subsequently, Rapoport et al. [28] considered a closed loop curvilinear permalloy track and simulated the motion of vortex DWs through curvilinear junctions to find that the DWs actually split into an H2H and a T2T DW (with opposite stray field polarities) upon exiting the junction. They used these DWs to move a superparamagnetic bead at a junction and to sort beads of different sizes.
Similar concepts have also been extended to controlling the movement of atomic size particles. Allwood et al. [24] (Type:Sim) used finite element micromagnetic simulations to show that a magnetic field trap for a single \({}^{87}\)Rb atom could be formed above a DW in a nanowire. Furthermore, they expected that the position of this trap could be modified by moving the DW. Subsequently, West et al. [23] considered an array of undulating IM permalloy nanowires and nucleated DWs in them at remanence after saturating the wires orthogonal to their length. The resulting large array of tiny magnets worked as a magnetic mirror which could reflect \({}^{87}\)Rb atoms via the fringing field of the magnets. They could switch the mirror on and off reproducibly by switching the magnetisation direction of the array of nanowires.
Figure 23: (a) The variation of the magnetisation of the ring array with rotating magnetic field amplitude showing a strongly nonlinear response. (b) An X-ray photoemission microscopy image of the ring array showing the different magnetic states existing at an intermediate rotating field amplitude. The arrows indicate the direction of the magnetisation sensitivity. (c) The variation of state populations in the array with number of field rotations for different rotating field amplitudes. The very different evolution for different fields is indicative of the ‘fading memory’ in the system. Taken from [20]
These studies motivate the use of DW based devices as novel magnetic field, temperature and atomic particle sensors. While they might still be far from commercial application, the versatility of DW dynamics to respond to a variety of stimuli caused by different agents offers new insight into their potential uses.
## V Future Outlooks
Magnetic DWs have rich and varied behaviour and a wide variety of applications have been proposed utilising them. Here we have reviewed the various types of DWs in magnetic thin films and nanowires with both in-plane and out-of-plane anisotropy. We have reviewed the methods to create and manipulate DWs with a variety of stimuli and highlighted how rich the behavioural and stimulus space of DWs is for potential applications. We have also surveyed the different applications that have been suggested using them and reviewed the state of the art in each.
The state of the art with CMOS based devices for Boolean memories and computation is capable of producing 7 nm transistor devices, which are facing issues of repeatable device manufacturing and performance [251]. In order to be a viable alternative to CMOS devices, DW based devices thus need to be studied and tailored to operate at similar sizes and at high speeds with low operational power and latency. Over the last couple of decades, one of the biggest obstacles inhibiting DW proposals for devices from hitting the market on a large scale has been that the current densities required for applications involving Oersted fields or STT are typically too large for technological feasibility. The current densities required for STT have been declining and, with the advent of SOT, there are encouraging signs that we will soon reach reasonable current densities for technological realisations. The materials science research required for identifying candidates for high SOT efficiency is key to making more progress in this area. Furthermore, one of the challenges with SOT based devices is that they are still quite sensitive to fabrication conditions and interface qualities. This is a significant hurdle that has to be overcome for reproducible and reliable devices. Another significant problem for Boolean memory and computing
Figure 24: (a) A schematic of a grid of \(\lambda\)-type junctions used for acting as a reservoir. Also shown is the schematic of a single junction with associated dimensions. The method of inputting data into the grid using a clock field (denoted by the ellipse). The states for an input of ’0’ and of ’1’ are also shown. (b) The variation of the short term memory (STM) and parity check (PC) of the system at different temperatures. Taken from [243]
applications is the stochasticity of DW behaviour, especially as the magnetic element size is decreased for higher scalability and memory density. Again, materials science seems to be important here and the key is to identify material systems whose energy barriers for thermally driven magnetisation changes are high enough that reduction in storage or processing bit size is possible.
On the other hand, the stochasticity of DWs lends itself naturally to non-Boolean applications, as has been discussed in Section IV C. However, before commercial realisation, some important points must be addressed. In the approach where neural network components (like neurons and synapses) are being realised with various DW devices, larger networks and more advanced functionalities need to be demonstrated if these devices are to compete with other existing or proposed technological implementations of NC. Furthermore, the issue of reproducibility and robustness is something that has not been actively pursued in the community and this is going to be important for technological realisation. In the other forms of unconventional computing that we discussed (like reservoir computing), one of the key pursuits should be to identify applications and tasks which can't be easily (or economically) performed with existing CMOS based systems. This will require demonstrating advantages of the DW-based implementations in terms of niche applications, energy throughput or other relevant metrics. We are quite excited to see how this research field progresses and where it takes us.
###### Acknowledgements.
GV, DAA and TJH acknowledge funding from the Horizon 2020 FET-Open SpinEngine (Agreement no 861618), the EPSRC MARCH project EP/V006339/1, the Leverhulme grant RPG-2019-97 and the EPSRC project EP/S009647/1. We would also like to acknowledge C. Swindells, I. Vidamour and A. Welbourne for discussions.
|
2304.11180 | The parton-level structure of Higgs decays to hadrons at N$^3$LO | We present the quantum chromodynamics (QCD) corrections for Higgs boson
decays to hadronic final states at next-to-next-to-next-to-leading order
(N$^3$LO) in the strong coupling constant $\alpha_s$. In particular, we
consider the Higgs boson decay to massless bottom quarks and the Higgs boson
decay to a pair of gluons in the limit of a heavy top quark. The tree-level
five-parton, the one-loop four-parton, the two-loop three-parton, and the
three-loop two-parton matrix elements are integrated separately over the
inclusive phase space and classified by partons appearing in the final state
and by colour structure. As a check, we reproduce known results for the
hadronic $R$-ratios at N$^3$LO. We study patterns of infrared singularity
cancellation within the colour layers of the integrated expressions and observe
an agreement in the highest transcendental weight terms in the decay of
different colour singlets to quarks. We anticipate that our result will be an
essential ingredient for the formulation of N$^3$LO subtraction schemes. | Xuan Chen, Petr Jakubčík, Matteo Marcoli, Giovanni Stagnitto | 2023-04-21T18:00:01Z | http://arxiv.org/abs/2304.11180v2 | # The parton-level structure of Higgs decays to hadrons at N\({}^{3}\)LO
###### Abstract
We present the quantum chromodynamics (QCD) corrections for Higgs boson decays to hadronic final states at next-to-next-to-next-to-leading order (N\({}^{3}\)LO) in the strong coupling constant \(\alpha_{s}\). In particular, we consider the Higgs boson decay to massless bottom quarks and the Higgs boson decay to a pair of gluons in the limit of a heavy top quark. The tree-level five-parton, the one-loop four-parton, the two-loop three-parton, and the three-loop two-parton matrix elements are integrated separately over the inclusive phase space and classified by partons appearing in the final state and by colour structure. As a check, we reproduce known results for the hadronic \(R\)-ratios at N\({}^{3}\)LO. We study patterns of infrared singularity cancellation within the colour layers of the integrated expressions and observe an agreement in the highest transcendental weight terms in the decay of different colour singlets to quarks. We anticipate that our result will be an essential ingredient for the formulation of N\({}^{3}\)LO subtraction schemes.
## 1 Introduction
The Higgs boson plays a special role in the Standard Model. Its discovery in 2012 [1; 2] through the clean decay modes \(H\to\gamma\gamma\), \(H\to 4l\) and \(H\to 2l2\nu\) marked a major milestone in particle physics. Amongst the remaining decay modes, the decay to bottom quarks has the largest branching ratio and allows probing the Higgs boson coupling to third generation fermions. On the other hand, the loop-induced decays of a Higgs boson to gauge bosons
(\(gg\), \(\gamma\gamma\) or \(\gamma Z\)) are key to an indirect determination of the couplings to \(Z\), \(W\) and \(t\). A summary of the current experimental evidence for the Higgs production and decay modes can be found in reviews by ATLAS [3] and CMS [4].
The importance of Higgs decays to quarks and gluons extends beyond phenomenology. Thanks to the simplicity of the \(1\to 2\) Born kinematics, the computation of radiative QCD corrections to these processes has been achieved to very high orders in perturbation theory. The \(H\to gg\) decay rate in an effective theory with the top quark integrated out [5; 6; 7] is known up to N\({}^{4}\)LO [8; 9; 10; 11; 12; 13]. The \(H\to b\bar{b}\) decay rate in a massless approximation where only the bottom Yukawa coupling is kept different from zero is also known up to N\({}^{4}\)LO [14; 15; 16; 17; 8]. The correction to the total decay rate of the Higgs boson to hadrons, including also contributions induced by the effective Higgs-bottom quark interaction where only the dominant \(m_{b}^{2}\) mass terms are retained, has been calculated at order N\({}^{3}\)LO in [18] and at order N\({}^{4}\)LO in [19]. We refer the reader to a review [20] for further discussion on terms suppressed by the top-quark and bottom-quark mass and electroweak effects.
All the above results follow a common approach based on the optical theorem, which amounts to the computation of the imaginary part of the Higgs self-energy. This technique is completely agnostic to the number and species of particles in the final state. Hence it does not reveal the infrared structure of the result and obscures the interplay between real radiation and virtual corrections in the different final-state partonic channels.
In this paper, we extend this analysis by separately integrating all the possible physical cuts of the four-loop QCD correction to the Higgs two-point function. In other words, we analytically integrate the tree-level five-parton, the one-loop four-parton, the two-loop three-parton, and the three-loop two-parton matrix elements over the respective phase space:
\[\sigma^{(3)}=\int\mathrm{d}\Phi_{5}\,M_{5}^{0}+\int\mathrm{d}\Phi_{4}\,M_{4}^ {1}+\int\mathrm{d}\Phi_{3}\,M_{3}^{2}+\int\mathrm{d}\Phi_{2}\,M_{2}^{3}\,, \tag{1}\]
where \(M_{n}^{l}\) denotes the \(l\)-loop matrix element for the decay of the Higgs into \(n\) final state QCD particles. We refer to the four terms in (1) as the triple-real (RRR), double-real-virtual (VRR), double-virtual-real (VVR), and triple-virtual (VVV) _layers_ of the calculation. For completeness we recompute also the next-to-leading order (NLO) and next-to-next-to-leading order (NNLO) corrections. Our method was first described in [21] in the context of the decay of a virtual photon into hadrons and leveraged the reverse unitarity relation [22; 23; 24; 25; 26] to gain access to modern multi-loop techniques.
The presented results are particularly relevant for the development of N\({}^{3}\)LO subtraction schemes. N\({}^{3}\)LO precision is the state-of-the-art for simple processes [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46] but a general local subtraction scheme at this order is still missing. The matrix elements of \(H\to gg\) at NNLO were used for the derivation of gluon-gluon antenna functions in the context of the antenna subtraction scheme [47] in order to encapsulate all the unresolved radiation between a pair of hard gluons [48]. It follows that the matrix elements for \(H\to gg\) at N\({}^{3}\)LO are candidates for gluon-gluon antenna functions one order higher. Moreover, the integrated version of such matrix elements can shed light on how the universal behaviour of N\({}^{3}\)LO matrix elements in unresolved configurations [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63] translates to the integrated level and how it relates to the divergences of virtual corrections.
The paper is organised as follows. In Section 2, we introduce the notation and we briefly describe our method. In Section 3, we analyse our results which are explicitly reported in Appendices A and B. We conclude in Section 4 with an outlook of future applications.
## 2 Method
We consider the \(H\to gg\) and \(H\to b\bar{b}\) decays. In the first case, we work in the heavy-top effective theory, with the QCD Lagrangian supplemented with an effective Lagrangian given by
\[\mathcal{L}_{gg}=-\frac{\lambda_{0}}{4}HG_{a}^{\mu\nu}G_{a,\mu\nu}\,, \tag{1}\]
with \(G_{a}^{\mu\nu}\) the renormalised gluon field-strength, \(H\) the Higgs field and \(\lambda_{0}\) the bare effective coupling, obtained by matching the effective theory to the full Standard Model [10; 64; 65]. In the second case, we implement the standard vertex between the Higgs field and a fermion line
\[\mathcal{L}_{q\bar{q}}=y_{0}^{b}H\bar{\psi}\psi\,, \tag{2}\]
with \(\psi\) the bottom quark field and \(y_{0}^{b}\) the bare Yukawa coupling. In our calculation, the bottom quark is treated as massless but the Yukawa coupling is non-vanishing.
We present results for the integration of renormalised squared amplitudes in the \(\overline{\rm MS}\) scheme. We replace the bare coupling \(\alpha_{0}\) with the renormalised coupling \(\alpha_{s}\) according to [66]
\[\alpha_{0}\,\mu_{0}^{2\epsilon}\,S_{\epsilon} = \alpha_{s}\,\mu^{2\epsilon}\Bigg{[}1-\frac{\beta_{0}}{\epsilon}\left(\frac{\alpha_{s}}{2\pi}\right)+\left(\frac{\beta_{0}^{2}}{\epsilon^{2}}-\frac{\beta_{1}}{2\epsilon}\right)\left(\frac{\alpha_{s}}{2\pi}\right)^{2} \tag{3}\] \[\qquad\qquad-\left(\frac{\beta_{0}^{3}}{\epsilon^{3}}-\frac{7}{6}\frac{\beta_{1}\beta_{0}}{\epsilon^{2}}+\frac{1}{3}\frac{\beta_{2}}{\epsilon}\right)\left(\frac{\alpha_{s}}{2\pi}\right)^{3}+\mathcal{O}(\alpha_{s}^{4})\Bigg{]}\,,\]
with
\[\beta_{0} = \frac{11C_{A}-2N_{F}}{6}\,, \tag{4}\] \[\beta_{1} = \frac{17C_{A}^{2}-5C_{A}N_{F}-3C_{F}N_{F}}{6}\,,\] (5) \[\beta_{2} = \frac{2857C_{A}^{3}}{432}+\frac{C_{F}^{2}N_{F}}{8}-\frac{205C_{F} C_{A}N_{F}}{144}-\frac{1415C_{A}^{2}N_{F}}{432}+\frac{11C_{F}N_{F}^{2}}{72}+ \frac{79C_{A}N_{F}^{2}}{432}\,,\] (6) \[S_{\epsilon} = (4\pi)^{\epsilon}e^{-\epsilon\gamma}\,,\qquad\text{with Euler constant $\gamma=0.5772\ldots$} \tag{7}\]
where \(\alpha_{0}\) is the bare coupling and \(\mu_{0}^{2}\) is the mass parameter introduced in dimensional regularisation to maintain a dimensionless coupling in the bare QCD Lagrangian density. We fix the renormalisation scale \(\mu^{2}\) to be the invariant mass of the decaying particle, \(q^{2}\).
In the calculation of \(H\to b\bar{b}\), the renormalisation of the bare Yukawa coupling \(y_{0}^{b}\) is needed, which is done through the replacement \(y_{0}^{b}=Z_{y}\,y^{b}\), with \(Z_{y}\) as in [67]
\[Z_{y} = 1-\frac{3C_{F}}{2\epsilon}\left(\frac{\alpha_{s}}{2\pi}\right)\]
\[+\Bigg{[}C_{F}^{2}\left(\frac{9}{8\epsilon^{2}}-\frac{3}{16\epsilon} \right)+C_{F}C_{A}\left(\frac{11}{8\epsilon^{2}}-\frac{97}{48\epsilon}\right)+C_ {F}N_{F}\left(-\frac{1}{4\epsilon^{2}}+\frac{5}{24\epsilon}\right)\Bigg{]}\left( \frac{\alpha_{s}}{2\pi}\right)^{2}\] \[+\Bigg{[}C_{F}^{3}\left(-\frac{9}{16\epsilon^{3}}+\frac{9}{32 \epsilon^{2}}-\frac{43}{16\epsilon}\right)+C_{F}^{2}C_{A}\left(-\frac{33}{16 \epsilon^{3}}+\frac{313}{96\epsilon^{2}}+\frac{43}{32\epsilon}\right)\] \[\qquad+C_{F}C_{A}^{2}\left(-\frac{121}{72\epsilon^{3}}+\frac{1679 }{432\epsilon^{2}}-\frac{11413}{2592\epsilon}\right)+C_{F}^{2}N_{F}\left( \frac{3}{8\epsilon^{3}}-\frac{29}{48\epsilon^{2}}+\frac{1}{\epsilon}\left( \frac{23}{24}-\zeta_{3}\right)\right)\] \[\qquad+C_{F}C_{A}N_{F}\left(\frac{11}{18\epsilon^{3}}-\frac{121} {108\epsilon^{2}}+\frac{1}{\epsilon}\left(\frac{139}{324}+\zeta_{3}\right)\right)\] \[\qquad+C_{F}N_{F}^{2}\left(-\frac{1}{18\epsilon^{3}}+\frac{5}{108 \epsilon^{2}}+\frac{35}{648\epsilon}\right)\Bigg{]}\left(\frac{\alpha_{s}}{2 \pi}\right)^{3}+\mathcal{O}(\alpha_{s}^{4})\;. \tag{8}\]
In the calculation of \(H\to gg\), we additionally need to renormalise the effective coupling \(\lambda_{0}=Z_{\lambda}\lambda\)[68] according to [66]
\[Z_{\lambda}=1-\frac{\beta_{0}}{\epsilon}\left(\frac{\alpha_{s}}{2\pi}\right)+ \left(\frac{\beta_{0}^{2}}{\epsilon^{2}}-\frac{\beta_{1}}{\epsilon}\right) \left(\frac{\alpha_{s}}{2\pi}\right)^{2}-\left(\frac{\beta_{0}^{3}}{\epsilon^ {3}}-\frac{2\beta_{1}\beta_{0}}{\epsilon^{2}}+\frac{\beta_{2}}{\epsilon} \right)\left(\frac{\alpha_{s}}{2\pi}\right)^{3}+\mathcal{O}(\alpha_{s}^{4})\,. \tag{9}\]
We follow the strategy outlined in [21], where the relevant decay diagrams are generated with QGRAF [69] as self-energies of the Higgs boson with cut internal propagators. They are matched onto the integral families reported in [21] using Reduze2[70] and the Feynman rules are inserted and evaluated in FORM [71]. The integrals appearing in the matrix elements have up to eleven propagators in the denominator and a maximum of five scalar products in the numerator, compared to four scalar products in the photon decay. The integrals are reduced with the help of Reduze2[70] to a set of 22, 27, 35 and 31 master integrals for the four terms of (1), respectively. The master integrals required for the NNLO calculation can be found in [72] and have been extended up to weight 6 in [21, 66, 73]. The integrals required for the N\({}^{3}\)LO calculation were computed in [73, 74].
## 3 Results
We illustrate the general structure of the different partonic contributions by adopting a notation similar to [21] for ease of reference. Given a set of \(n\) final-state particles denoted by \(\mathcal{I}\), we can generically write the associated amplitude for the \(H\to ij\) process as
\[|\mathcal{M}_{ij}\rangle_{\mathcal{I}}=\mathcal{N}\left[|\mathcal{M}_{ij}^{(0) }\rangle_{\mathcal{I}}+\left(\frac{\alpha_{s}}{2\pi}\right)|\mathcal{M}_{ij}^ {(1)}\rangle_{\mathcal{I}}+\left(\frac{\alpha_{s}}{2\pi}\right)^{2}|\mathcal{ M}_{ij}^{(2)}\rangle_{\mathcal{I}}+\left(\frac{\alpha_{s}}{2\pi}\right)^{3}| \mathcal{M}_{ij}^{(3)}\rangle_{\mathcal{I}}+\ldots\right], \tag{10}\]
with \(\mathcal{N}\) a normalisation factor which is process- and final-state dependent. We denote the integration over the respective phase space of the matrix element \(\langle\mathcal{M}_{ij}|\mathcal{M}_{ij}\rangle_{\mathcal{I}}\) summed over spins, colours and quark flavours as
\[\mathcal{T}_{\mathcal{I}}^{ij,(k,[\ell\times\ell])}=\int\mathrm{d}\Phi_{n}\, \langle\mathcal{M}_{ij}^{(\ell)}|\mathcal{M}_{ij}^{(\ell)}\rangle_{\mathcal{I}} \tag{11}\]
and for \(\ell_{1}\neq\ell_{2}\)
\[{\cal T}^{ij,(k,[\ell_{1}\times\ell_{2}])}_{\cal I}=\int{\rm d}\Phi_{n}\,2\,{\rm Re }\big{[}\langle{\cal M}^{(\ell_{1})}_{ij}|{\cal M}^{(\ell_{2})}_{ij}\rangle_{ \cal I}\big{]}\,, \tag{10}\]
where \(ij=gg,q\bar{q}\). The label \(k\) denotes the perturbative order: contributions with the same \(k\) sum to the N\({}^{k}\)LO result for the total cross section. The long explicit expressions for \({\cal T}^{ij,(3,[\ell_{1}\times\ell_{2}])}_{\cal I}\) are provided in Appendix A, while in Appendix B we report the lower-order results expanded up to transcendental weight six. We denote the coefficient of each colour factor \({\cal C}\) as \({\cal T}^{ij,(k,[\ell_{1}\times\ell_{2}])}_{\cal I}\big{|}_{\cal C}\), and we omit the superscript \([\ell_{1}\times\ell_{2}]\) in case of no ambiguity. All results are also provided in the ancillary files in computer-readable format, with the notation
\[{\cal T}^{ij,(k,[\ell_{1}\times\ell_{2}])}_{\cal I}=\left[\texttt{H}ij\_\_{ \mathcal{I}\_k\_\ell_{1}}\texttt{x}\ell_{2}\right]. \tag{11}\]
All expressions are renormalised and in time-like kinematics. Higher-order results are normalised to
\[{\cal T}^{gg,(0)}_{gg}=\frac{1}{4}\,\lambda\,(N^{2}-1)\,(q^{2})^{2}\,(1- \epsilon)\,P_{2} \tag{12}\]
for the decay to gluons and
\[2\,C_{F}{\cal T}^{q\bar{q},(0)}_{q\bar{q}}=4\,y^{b}\,(N^{2}-1)\,q^{2}\,P_{2} \tag{13}\]
for the decay to bottom quarks, where \(C_{F}=(N^{2}-1)/(2N)\), \({\cal T}^{gg,(0)}_{gg}\) and \({\cal T}^{q\bar{q},(0)}_{q\bar{q}}\) are the respective Born-level cross sections. Note that the factor \(2C_{F}\) in (13) is included in the normalisation of \(H\to b\bar{b}\) as it appears in all colour layers starting from NLO. Finally, \(P_{2}\) is the volume of the two-particle phase space,
\[P_{2}=\int{\rm d}\Phi_{2}=2^{-3+2\epsilon}\,\pi^{-1+\epsilon}\,\frac{\Gamma(1 -\epsilon)}{\Gamma(2-2\epsilon)}\,(q^{2})^{-\epsilon}\,. \tag{14}\]
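Collecting the powers of 2 and \(\pi\), this can equivalently be written in the form often quoted in the literature (a purely algebraic rearrangement of (14)),
\[P_{2}=\frac{1}{8\pi}\,\frac{(4\pi)^{\epsilon}\,\Gamma(1-\epsilon)}{\Gamma(2-2\epsilon)}\,(q^{2})^{-\epsilon}\,.\]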
The structure or (non-)appearance of certain colour factors is key in understanding the cancellation patterns between real and virtual corrections. We therefore summarize the colour factors appearing in the various final-state configurations in Appendix C. In the final state with two quark lines, we explicitly separate the configurations with same- or different-flavour quark pairs. Note that for all colour factors \({\cal C}\) in \({\cal T}^{ij,(k)}_{q\bar{q}q^{\prime}\bar{q}^{\prime}(g)}\), we have that
\[{\cal T}^{ij,(k)}_{q\bar{q}q^{\prime}\bar{q}^{\prime}(g)}\Big{|}_{\cal C}=(N_{ F}-1)\,{\cal T}^{ij,(k)}_{q\bar{q}q\bar{q}(g)}\Big{|}_{{\cal C}/(N_{F}-1)}\,. \tag{15}\]
For \(H\to gg\), in the two-particle final state, the terms proportional to \(N_{F}^{k}\) for \(k=1,2,3\) in \({\cal T}^{gg,(k)}_{gg}\) are introduced as part of renormalisation and as such are not present in the finite part or beyond. Note that the only allowed two-particle final state in the \(H\to gg\) corrections is a pair of gluons, even at higher loops. This is explained by the fact that for a scalar particle to decay into a quark-antiquark pair, a chirality flip along the fermionic line is needed due to spin conservation. Therefore, all the diagrams contributing to the Higgs decay to a massless quark-antiquark pair via QCD interactions vanish.
We perform several checks on our results. First, we can directly compare our expressions for the two-particle final states to the calculations of the quark and gluon form factors up
to three loops [66; 67]. Second, we observe the complete cancellation of infrared (IR) poles in the sum of all the partonic final states at each perturbative order, both in \(H\to gg\) and \(H\to b\bar{b}\). Third, the finite parts agree with the known results for the total cross sections e.g. in [8]. Indeed, the sum of (A.1)-(A.46) is the N\({}^{3}\)LO coefficient of the \(R\)-ratio for \(H\to b\bar{b}\),
\[R^{H\to b\bar{b}}\Big{|}_{\alpha_{s}^{3}} =\Big{(}\frac{\alpha_{s}}{2\pi}\Big{)}^{3}\sum_{\mathcal{C}, \mathcal{I}}\mathcal{T}_{\mathcal{I}}^{q\bar{q},(3)}\Big{|}_{\mathcal{C}}\] \[=\Big{(}\frac{\alpha_{s}}{2\pi}\Big{)}^{3}\left[N^{2}\left(\frac {25999999}{62208}-\frac{3803}{216}\pi^{2}-\frac{4321}{24}\zeta_{3}+\frac{155}{ 6}\zeta_{5}\right)\right.\] \[\qquad\qquad\left.-\frac{76055}{384}+\frac{545}{48}\pi^{2}+\frac {1567}{16}\zeta_{3}-\frac{235}{8}\zeta_{5}\right.\] \[\qquad\qquad\left.+\frac{1}{N^{2}}\left(\frac{23443}{768}-\frac {27}{16}\pi^{2}-\frac{239}{16}\zeta_{3}+\frac{45}{8}\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+NN_{F}\left(-\frac{47731}{486}+\frac{1727}{43 2}\pi^{2}+\frac{371}{12}\zeta_{3}-\frac{\pi^{4}}{120}-\frac{10}{3}\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+\frac{N_{F}}{N}\left(\frac{88}{3}-\frac{65}{48 }\pi^{2}-\frac{65}{4}\zeta_{3}-\frac{\pi^{4}}{120}+5\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+N_{F}^{2}\left(\frac{15511}{3888}-\frac{11}{54 }\pi^{2}-\zeta_{3}\right)\Bigg{]}\,. \tag{3.9}\]
Similarly, by summing (A.47)-(A.103) we obtain the N\({}^{3}\)LO coefficient of the \(R\)-ratio for \(H\to gg\),
\[R^{H\to gg}\Big{|}_{\alpha_{s}^{3}} =\Big{(}\frac{\alpha_{s}}{2\pi}\Big{)}^{3}\sum_{\mathcal{C}, \mathcal{I}}\mathcal{T}_{\mathcal{I}}^{gg,(3)}\Big{|}_{\mathcal{C}}\] \[=\Big{(}\frac{\alpha_{s}}{2\pi}\Big{)}^{3}\left[N^{3}\left(\frac {15420961}{5832}-\frac{2816}{27}\pi^{2}-\frac{44539}{54}\zeta_{3}+\frac{385}{ 3}\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+N_{F}N^{2}\left(-\frac{11918065}{7776}+\frac{4 65}{8}\pi^{2}+\frac{8171}{36}\zeta_{3}-\frac{10}{3}\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+N_{F}\left(\frac{11279}{72}-\frac{143}{72}\pi^ {2}-\frac{389}{4}\zeta_{3}+10\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+\frac{N_{F}}{N^{2}}\left(\frac{221}{96}+6\zeta _{3}-10\zeta_{5}\right)\right.\] \[\qquad\qquad\left.+N_{F}^{2}N\left(\frac{58346}{243}-\frac{359}{3 6}\pi^{2}-\frac{128}{9}\zeta_{3}\right)\right.\] \[\qquad\qquad\left.+\frac{N_{F}^{2}}{N}\left(-\frac{55}{2}+\frac{1 3}{36}\pi^{2}+15\zeta_{3}\right)\right.\] \[\qquad\qquad\left.+N_{F}^{3}\left(-\frac{7127}{729}+\frac{14}{27} \pi^{2}+\frac{8}{27}\zeta_{3}\right)\Bigg{]}\,. \tag{3.10}\]
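For orientation, the colour-decomposed bracket in (3.9) can be evaluated numerically. The sketch below is our own transcription (symbol names are not from the paper); it substitutes the QCD values \(N=3\) and \(N_{F}=5\) into the expression exactly as printed above, with the overall \((\alpha_{s}/2\pi)^{3}\) stripped off.

```python
import sympy as sp

N, NF = sp.symbols('N N_F', positive=True)
z3, z5, pi = sp.zeta(3), sp.zeta(5), sp.pi

# N3LO coefficient of the R-ratio for H -> b bbar, transcribed from (3.9)
R3_bb = (
    N**2*(sp.Rational(25999999, 62208) - sp.Rational(3803, 216)*pi**2
          - sp.Rational(4321, 24)*z3 + sp.Rational(155, 6)*z5)
    - sp.Rational(76055, 384) + sp.Rational(545, 48)*pi**2
    + sp.Rational(1567, 16)*z3 - sp.Rational(235, 8)*z5
    + (sp.Rational(23443, 768) - sp.Rational(27, 16)*pi**2
       - sp.Rational(239, 16)*z3 + sp.Rational(45, 8)*z5)/N**2
    + N*NF*(-sp.Rational(47731, 486) + sp.Rational(1727, 432)*pi**2
            + sp.Rational(371, 12)*z3 - pi**4/120 - sp.Rational(10, 3)*z5)
    + NF/N*(sp.Rational(88, 3) - sp.Rational(65, 48)*pi**2
            - sp.Rational(65, 4)*z3 - pi**4/120 + 5*z5)
    + NF**2*(sp.Rational(15511, 3888) - sp.Rational(11, 54)*pi**2 - z3)
)

# Illustrative numerical value for QCD, N = 3 and N_F = 5 light flavours
print(R3_bb.subs({N: 3, NF: 5}).evalf())
```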
### Infrared structure of the deepest poles
It is interesting to compare the colour and singularity structure of \(H\to b\bar{b}\) to that of \(\gamma^{*}\to q\bar{q}\) derived in our previous work [21], both representing decays of a colour-singlet state into a quark-antiquark pair. Once we normalise with respect to the respective Born-level cross sections, we observe the presence of identical colour factors at all perturbative orders, except for the singlet contributions which are only present in \(\gamma^{*}\to q\bar{q}\). For the Higgs case, the singlet contribution vanishes because the Yukawa interaction in (2) introduces a chirality flip which would break chirality conservation along a closed massless fermionic loop. We notice that for all layers and orders, the two deepest poles appearing in any given colour factor of the integrated renormalised matrix elements coincide between the \(H\to b\bar{b}\) and \(\gamma^{*}\to q\bar{q}\) processes. To clarify, we refer to the two deepest non-vanishing poles in each partonic configuration and colour layer, not simply to the \(\epsilon^{-2k}\) and \(\epsilon^{-2k+1}\) poles at order \(k\). For the two-particle final states, the infrared singularity structure is predicted through universal IR factorisation formulae [75; 76]. In particular, the deepest poles can be interpreted in a completely process-independent way in terms of the \(I_{q\bar{q}}^{(1)}\) insertion operator [77]. For purely real corrections up to NNLO, we can build on a recent algorithmic construction of idealised NNLO antenna functions [78]. In this work, the deepest poles of \(M_{4}^{0}\) in the \(\gamma^{*}\to q\bar{q}\) decay are identified with structures coming from the integrations of double-unresolved limits. Defining the operator \(\mathcal{P}_{n}[\cdot]\) which extracts from an expression the \(n\) deepest non-vanishing poles in \(\epsilon\), the relations read
\[\mathcal{P}_{2}\left[\mathcal{T}_{q\bar{q}g}^{q\bar{q},(1)}\Big{|}_{N^{0}}\right] =\mathcal{P}_{2}\left[\mathcal{S}soft_{g}+2\mathcal{S}col_{q^{h}g}\right]\,,\]
\[\mathcal{P}_{2}\left[\mathcal{T}_{q\bar{q}gg}^{q\bar{q},(2)}\Big{|}_{N^{1}}\right] =\mathcal{P}_{2}\left[\mathcal{D}soft_{gg}+2\mathcal{T}col_{q^{h}gg}\right]\,,\]
\[\mathcal{P}_{2}\left[\mathcal{T}_{q\bar{q}gg}^{q\bar{q},(2)}\Big{|}_{N^{-1}}\right] =\mathcal{P}_{2}\left[-\frac{1}{2}\left(\mathcal{D}soft_{\gamma\gamma}+2\mathcal{T}col_{q^{h}\gamma\gamma}\right)\right]\,,\]
\[\mathcal{P}_{2}\left[\mathcal{T}_{q\bar{q}q^{\prime}\bar{q}^{\prime}}^{q\bar{q},(2)}\Big{|}_{(N_{F}-1)}\right] =\mathcal{P}_{2}\left[\mathcal{D}soft_{q\bar{q}}+2\mathcal{T}col_{q^{h}\bar{q}^{\prime}q^{\prime}}\right]\,,\]
\[\mathcal{P}_{2}\left[\mathcal{T}_{q\bar{q}q\bar{q}}^{q\bar{q},(2)}\Big{|}_{N^{0}}\right] =\mathcal{P}_{2}\left[\mathcal{D}soft_{q\bar{q}}+2\mathcal{T}col_{q^{h}\bar{q}^{\prime}q^{\prime}}\right]\,,\]
\[\mathcal{P}_{1}\left[\mathcal{T}_{q\bar{q}q\bar{q}}^{q\bar{q},(2)}\Big{|}_{N^{-1}}\right] =\mathcal{P}_{1}\left[\mathcal{C}_{4}^{0}\right]\,, \tag{26}\]
where \(\mathcal{S}soft\), \(\mathcal{S}col\), \(\mathcal{D}soft\) and \(\mathcal{T}col\) are given in Appendix A and B of [78], with the superscript \(h\) in \(\mathcal{S}col\) and \(\mathcal{T}col\) indicating the hard radiator. The quantities in the last line of (26) only exhibit a single pole and \(\mathcal{C}_{4}^{0}\), which encapsulates the triple collinear limit of a \(q\parallel\bar{q}\parallel q\) configuration, is given in (5.57) of [78]. Our results confirm that (26) holds for the \(H\to b\bar{b}\) process as well. Beyond the two deepest poles, the finite parts of lower-order integrated matrix elements contribute to the singularities, resulting in differences between \(H\to b\bar{b}\) and \(\gamma^{*}\to q\bar{q}\). An analogous study for single-real emission at one loop will allow for a physically motivated description of the singularities of the real-virtual contribution [79]. The fact that the exact correspondence of the deepest non-vanishing poles (and not just the \(\epsilon^{-2k}\) and \(\epsilon^{-2k+1}\) poles at order \(k\)) between the two processes extends also to N\({}^{3}\)LO in all the layers, colour factors and partonic final states is notable. Therefore, we expect that
a universal and process-independent description exists for unresolved radiation between a hard quark-antiquark pair also at this perturbative order.
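To make the pole-extraction operator concrete, here is a minimal sketch of \(\mathcal{P}_{n}[\cdot]\) acting on a Laurent expansion in \(\epsilon\); the function name and the toy series are ours, not part of the paper.

```python
import sympy as sp

eps = sp.symbols('epsilon')

def deepest_poles(expr, n):
    """Return the n deepest non-vanishing poles in eps of a Laurent expansion,
    mimicking the operator P_n[.] defined in the text."""
    expr = sp.expand(expr)
    poles = {}
    for k in range(-10, 0):          # scan a generous range of negative powers
        c = expr.coeff(eps, k)
        if c != 0:
            poles[k] = c
    kept = sorted(poles)[:n]         # most negative (deepest) powers first
    return sp.Add(*[poles[k] * eps**k for k in kept])

# toy Laurent series: the two deepest non-vanishing poles are 1/eps^4 and -3/eps^3
f = 1/eps**4 - 3/eps**3 + sp.Rational(7, 2)/eps - 5
print(deepest_poles(f, 2))           # keeps the eps^-4 and eps^-3 terms only
```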
We observe that for every power of \(\epsilon\), the terms with the highest transcendental weight always coincide in the \(\gamma^{*}\to q\bar{q}\) and the \(H\to b\bar{b}\) process. This is reflected in the N\({}^{3}\)LO \(R\)-ratio: the weight 4 contribution in the \(N_{F}^{2}\) colour layer is absent in both processes; the weight 5 contribution in the \(N_{F}/N\) colour layer is identically \(5\,\zeta_{5}\); the weight 5 contribution in the \(N_{F}N\) colour layer is identically \(-10/3\,\zeta_{5}\); the weight 6 contributions are absent in both processes. In fact, the correspondence between the highest weight contribution in every colour factor extends to the N\({}^{4}\)LO \(R\)-ratio of \(\gamma^{*}\to q\bar{q}\) and \(H\to b\bar{b}\) [8; 80]. Moreover, we observe that the weight \(2k\) contribution vanishes in the \(R\)-ratio of \(\gamma^{*}\to q\bar{q}\), \(H\to b\bar{b}\) and \(H\to gg\) all the way up to N\({}^{4}\)LO. Since the \(R\)-ratio at N\({}^{k}\)LO is given by the absorptive part of a \((k+1)\)-loop propagator, this is directly related to the maximal transcendental weight appearing in the two-point function. It would be interesting to analyse our observations in the framework of [81].
For \(H\to gg\), no analogous processes have been computed, so it is not possible to perform a comparison as done for \(H\to b\bar{b}\). From [78], it is possible to argue that for the real radiation corrections up to NNLO in the purely gluonic final state, the following relations hold:
\[\mathcal{P}_{2}\left[\mathcal{T}_{ggg}^{gg,(1)}\Big{|}_{N}\right] =\mathcal{P}_{2}\left[\mathcal{S}soft_{g}+2\mathcal{S}col_{gg} \right]\,,\] \[\mathcal{P}_{2}\left[\mathcal{T}_{gggg}^{gg,(2)}\Big{|}_{N^{2}}\right] =\mathcal{P}_{2}\left[4\mathcal{D}soft_{gg}+2\mathcal{D}soft_{ \gamma\gamma}+8\mathcal{T}col_{g^{h}gg}+4\mathcal{T}col_{gg^{h}g}\right]. \tag{3.12}\]
On the other hand, the dependence on lower orders is manifest already at the second-deepest poles in the \(q\bar{q}gg\) and \(q\bar{q}q^{\prime}\bar{q}^{\prime}\) partonic channels [78]. Nonetheless, one can expect a weaker statement to hold also for \(H\to gg\), namely the universality of the \(\epsilon^{-2k}\) and \(\epsilon^{-2k+1}\) poles at order \(k\) across all the partonic final states and colour factors.
### Cancellation of infrared singularities
The pattern of cancellation of infrared singularities among the four layers contributing at N\({}^{3}\)LO is highly non-trivial due to the interplay of different soft currents and splitting functions in the real-emission contributions. Despite recently becoming available in their unintegrated form [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63], such structures are yet to be fully understood at the integrated level. One can expect the cancellation pattern of the deepest poles to be easier to explain in terms of universal factorization properties of QCD. Moreover, the deeper the pole, the fewer infrared-divergent partonic configurations contribute to it, simplifying the analysis. To facilitate the inspection of the deepest singularities, in Tables 1 and 2 we display the coefficients of the \(\epsilon^{-6}\) and \(\epsilon^{-5}\) poles for different colour layers and partonic final states. Trivially, the sum of the coefficients in each column vanishes. In the following, we make some basic observations, postponing a thorough analysis, which would require detailed knowledge of the integrated structures contributing at each layer.
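As an illustration of this cancellation, the \(\epsilon^{-5}\) entries of Table 1 can be summed layer by layer. The following sketch, with the coefficients transcribed by hand from the table, checks that each column adds up to zero.

```python
from fractions import Fraction as F

# eps^-5 coefficients from Table 1 (H -> b bbar), one list per colour layer;
# row order: VVV, VVR, VRR (q qbar g g), VRR (four-quark), RRR (q qbar g g g), RRR (four-quark + g)
table1_eps5 = {
    "N^2":      [F(-17, 8), F(1663, 216), F(-311, 36), F(0),        F(331, 108), F(0)],
    "N^0":      [F(23, 8),  F(-53, 6),    F(217, 24),  F(0),        F(-37, 12),  F(0)],
    "N^-2":     [F(-3, 4),  F(9, 4),      F(-9, 4),    F(0),        F(3, 4),     F(0)],
    "N_F N":    [F(1, 4),   F(-20, 27),   F(1, 2),     F(13, 108),  F(0),        F(-7, 54)],
    "N_F N^-1": [F(-1, 4),  F(7, 12),     F(-1, 3),    F(-11, 108), F(0),        F(11, 108)],
}

for layer, coeffs in table1_eps5.items():
    assert sum(coeffs) == 0, layer
print("All eps^-5 columns of Table 1 sum to zero.")
```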
For \(H\to b\bar{b}\) in Table 1, the easiest colour factor to inspect is \(N^{-2}\), because it only receives contributions from the emission of photon-like gluons from the hard pair of quarks.
For both the virtual and real corrections, the coefficients of the \(\epsilon^{-6}\) and \(\epsilon^{-5}\) poles are proportional to \((I_{q\bar{q}}^{(1)})^{3}\). Moreover, we notice a pair-wise cancellation between the poles of the triple-virtual and triple-real, and the double-real-virtual and double-virtual-real contributions, which can be explained by the following argument. Due to a complete factorization in the deepest poles, each virtual (or real) photon emission from a hard quark-antiquark pair contributes to the poles independently and with a factor \(I_{q\bar{q}}^{(1)}\) (or \(-I_{q\bar{q}}^{(1)}\)). Hence at N\({}^{k}\)LO, the structure in every layer is proportional to \((I_{q\bar{q}}^{(1)})^{k}\) and the relative value of the coefficients between the layers is entirely of combinatorial origin. Namely if the deepest poles of the virtual layer are captured by \(\mathcal{N}(I_{q\bar{q}}^{(1)})^{k}\), then the poles of the \(r\)-fold real, \((k-r)\)-fold virtual photonic correction are given by
\[\mathcal{T}_{\mathcal{I}}^{q\bar{q},(k)}\Big{|}_{\rm abelian} =\mathcal{N}\binom{k}{r}\left(-I_{q\bar{q}}^{(1)}\right)^{r} \left(I_{q\bar{q}}^{(1)}\right)^{k-r}+\mathcal{O}(\epsilon^{-2k+2})\] \[=\mathcal{N}\binom{k}{r}(-1)^{r}\left(I_{q\bar{q}}^{(1)}\right)^ {k}+\mathcal{O}(\epsilon^{-2k+2})\,, \tag{20}\]
since there are \(\binom{k}{r}\) ways to cut open \(r\) of the \(k\) photonic loops and substitute a real emission for a virtual correction. In (20), \(\mathcal{N}\) is an overall normalisation factor common to all layers at order \(k\).
At NLO, (20) trivially yields \(\mathcal{N}\) and \(-\mathcal{N}\) for the virtual and the real corrections, while at NNLO, as one can see in (19), (20), (21) and (22), the coefficients for the double-virtual, real-virtual and double-real contributions are \(\mathcal{N}\), \(-2\mathcal{N}\) and \(\mathcal{N}\). Finally, at N\({}^{3}\)LO, the coefficients multiplying \((I_{q\bar{q}}^{(1)})^{3}\) are \(\mathcal{N}\), \(-3\mathcal{N}\), \(3\mathcal{N}\) and \(-\mathcal{N}\) for the VVV, VVR, VRR and RRR contributions, as shown in Table 1, justifying the pair-wise cancellation. In general, the cancellation of the deepest abelian poles at any order \(k\) is guaranteed by
\[\sum_{r=0}^{k}\mathcal{N}\binom{k}{r}\left(-I_{q\bar{q}}^{(1)}\right)^{r} \left(I_{q\bar{q}}^{(1)}\right)^{k-r}=\mathcal{N}\left(I_{q\bar{q}}^{(1)}-I_{ q\bar{q}}^{(1)}\right)^{k}=0\,. \tag{21}\]
The second term in the previous equation reflects the exponentiation of multiple photon emissions. In other words, the cancellation of infrared singularities proceeds independently for each photon.
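The combinatorial pattern described above is easy to tabulate. The snippet below is a sketch with our own function name; it lists the layer-by-layer coefficients \((-1)^{r}\binom{k}{r}\) and checks that they sum to zero at each order.

```python
from math import comb

# Coefficient of the deepest abelian poles for the r-fold real,
# (k - r)-fold virtual photonic correction: (-1)^r * C(k, r)
def abelian_pole_pattern(k):
    return [(-1)**r * comb(k, r) for r in range(k + 1)]

print(abelian_pole_pattern(1))  # NLO:   [1, -1]
print(abelian_pole_pattern(2))  # NNLO:  [1, -2, 1]
print(abelian_pole_pattern(3))  # N3LO:  [1, -3, 3, -1]

# Their sum vanishes at every order, reflecting (1 - 1)^k = 0
assert all(sum(abelian_pole_pattern(k)) == 0 for k in range(1, 7))
```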
With a similar combinatorial logic, one can predict the highest poles of the individual \(\ell_{1}\times\ell_{2}\) terms within each layer with \(\ell=\ell_{1}+\ell_{2}\) loops. One can show that in the purely abelian case the coefficient is split among the loop configurations according to
\[\frac{\mathcal{T}_{\mathcal{I}}^{q\bar{q},(k,[\ell_{1}\times\ell_{2}])}\Big{|} _{\rm abelian}}{\mathcal{T}_{\mathcal{I}}^{q\bar{q},(k)}\Big{|}_{\rm abelian}} =\frac{1}{2^{\ell-1}}\binom{\ell}{\ell_{1}}\left(\frac{1}{2}\right)^{ \delta_{\ell_{1},\ell_{2}}}+\mathcal{O}(\epsilon^{2})\,. \tag{22}\]
This can be observed in the NNLO results, and at N\({}^{3}\)LO in the \([2\times 1]\) and \([3\times 0]\) two-particle contributions as well as in the \([1\times 1]\) and \([2\times 0]\) three-particle contributions collected in Appendix A.
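The splitting of the deepest abelian pole among the \(\ell_{1}\times\ell_{2}\) loop configurations can likewise be checked mechanically. The sketch below uses our own helper function (not from the paper) and reproduces the \(3/4\) versus \(1/4\) split of the triple-virtual \(N^{-2}\) coefficient quoted in Table 1 and in Appendix A.1.1.

```python
from fractions import Fraction as F
from math import comb

def split(l1, l2):
    """Fraction of the deepest abelian pole carried by the l1 x l2 loop
    configuration inside a layer with l = l1 + l2 loops."""
    l = l1 + l2
    frac = F(1, 2**(l - 1)) * comb(l, l1)
    if l1 == l2:
        frac *= F(1, 2)
    return frac

# ell = 3 (triple-virtual): the [3x0] term carries 1/4, the [2x1] term carries 3/4
print(split(3, 0), split(2, 1))

# consistency: the distinct configurations of each layer add up to 1
for l in range(1, 6):
    assert sum(split(l1, l - l1) for l1 in range((l + 1) // 2, l + 1)) == 1

# e.g. the eps^-6 pole of the [2x1] two-parton N^-2 coefficient, -1/8,
# is 3/4 of the full triple-virtual value -1/6 quoted in Table 1
assert F(-1, 8) == split(2, 1) * F(-1, 6)
```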
The coefficients of the remaining colour factors for purely gluonic emissions, \(N^{2}\) and \(N^{0}\), are more complicated due to the appearance of non-abelian effects. For the triple-virtual contribution, the only colour structure present in the \(\epsilon^{-6}\) coefficients is \(C_{F}^{2}\), as
indicated in the first line of Table 1, in accordance with [67] once we account for the extra colour factor of \(2C_{F}\) due to our normalisation in (3.6). For the other layers, the colour factors \(C_{F}C_{A}\) and \(C_{A}^{2}\) also contribute to the deepest singularities, indicating that the description of the poles associated with real emission contains ingredients different from \(I_{q\bar{q}}^{(1)}\) already at \(\epsilon^{-6}\). An equivalent observation at NNLO was the basis of the analysis of the infrared structure performed in [82]. Additionally, the values of the coefficients in the \(N^{2}\) contributions to the three- and four-particle final state suggest the presence of an integrated structure cancelling between these two layers. For the \(N_{F}N^{-1}\) colour factor, we see a complete cancellation between the double-real-virtual and triple-real layers, explained by a single soft gluon emission on top of a four-quark final state.
For \(H\to gg\) in Table 2, the rows representing final states containing four quarks are left blank since they only exhibit poles starting from \(\epsilon^{-4}\). This happens because the most infrared-divergent behaviour at the triple-real level with four quarks in the final state is given by the emission of two collinear quark-antiquark pairs with one of the pairs also soft, or alternatively by the emission of two collinear quark-antiquark pairs in association with a single soft gluon emission. Both of these configurations yield at most \(\epsilon^{-4}\) poles. The triple-virtual correction does not have deep poles in the \(N_{F}\) and \(N_{F}N^{-2}\) colour factors, so the poles can be interpreted as NNLO-type corrections to the \(q\bar{q}g\) final state on top of a collinear quark-antiquark configuration. Including the \(\epsilon^{-4}\) poles for this colour layer, given in (A.69), (A.75), (A.83) and (A.97), the two deepest poles also follow the pattern \(\mathcal{K}\), \(-2\mathcal{K}\) and \(\mathcal{K}\) for the VVR, VRR and RRR respectively, where
\[\mathcal{K}=-\frac{1}{9}\frac{1}{\epsilon^{5}}-\frac{61}{54}\frac{1}{\epsilon ^{4}}\,. \tag{3.16}\]
One can expect a structure similar to the abelian case above because this colour factor receives contributions only from photon-like gluon emission from the quark-antiquark pair. However, the poles are not directly proportional to \((I_{q\bar{q}}^{(1)})^{2}\) due to the additional integration over the unresolved quark-antiquark pair. For the VVR contribution, (3.15) holds, as can be seen in (A.69) and (A.75). In the \(N_{F}\) colour factor we notice substantial cancellations
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{2}{c}{Final-state \(\mathcal{I}\)} & \(N^{2}\) & \(N^{0}\) & \(N^{-2}\) & \(N_{F}N\) & \(N_{F}N^{-1}\) \\ \hline \hline
VVV & \(q\bar{q}\) & \(-\frac{1}{6}\frac{1}{\epsilon^{6}}-\frac{17}{8}\frac{1}{\epsilon^{5}}\) & \(+\frac{1}{3}\frac{1}{\epsilon^{6}}+\frac{23}{8}\frac{1}{\epsilon^{5}}\) & \(-\frac{1}{6}\frac{1}{\epsilon^{6}}-\frac{3}{4}\frac{1}{\epsilon^{5}}\) & \(+\frac{1}{4}\frac{1}{\epsilon^{5}}\) & \(-\frac{1}{4}\frac{1}{\epsilon^{5}}\) \\ \hline
VVR & \(q\bar{q}g\) & \(+\frac{29}{36}\frac{1}{\epsilon^{6}}+\frac{1663}{216}\frac{1}{\epsilon^{5}}\) & \(-\frac{5}{4}\frac{1}{\epsilon^{6}}-\frac{53}{6}\frac{1}{\epsilon^{5}}\) & \(+\frac{1}{2}\frac{1}{\epsilon^{6}}+\frac{9}{4}\frac{1}{\epsilon^{5}}\) & \(-\frac{20}{27}\frac{1}{\epsilon^{5}}\) & \(+\frac{7}{12}\frac{1}{\epsilon^{5}}\) \\ \hline
VRR & \(q\bar{q}gg\) & \(-\frac{41}{36}\frac{1}{\epsilon^{6}}-\frac{311}{36}\frac{1}{\epsilon^{5}}\) & \(+\frac{3}{2}\frac{1}{\epsilon^{6}}+\frac{217}{24}\frac{1}{\epsilon^{5}}\) & \(-\frac{1}{2}\frac{1}{\epsilon^{6}}-\frac{9}{4}\frac{1}{\epsilon^{5}}\) & \(+\frac{1}{2}\frac{1}{\epsilon^{5}}\) & \(-\frac{1}{3}\frac{1}{\epsilon^{5}}\) \\ \cline{2-7}
 & \(q\bar{q}q^{\prime}\bar{q}^{\prime}+q\bar{q}q\bar{q}\) & & & & \(+\frac{13}{108}\frac{1}{\epsilon^{5}}\) & \(-\frac{11}{108}\frac{1}{\epsilon^{5}}\) \\ \hline
RRR & \(q\bar{q}ggg\) & \(+\frac{1}{2}\frac{1}{\epsilon^{6}}+\frac{331}{108}\frac{1}{\epsilon^{5}}\) & \(-\frac{7}{12}\frac{1}{\epsilon^{6}}-\frac{37}{12}\frac{1}{\epsilon^{5}}\) & \(+\frac{1}{6}\frac{1}{\epsilon^{6}}+\frac{3}{4}\frac{1}{\epsilon^{5}}\) & & \\ \cline{2-7}
 & \(q\bar{q}q^{\prime}\bar{q}^{\prime}g+q\bar{q}q\bar{q}g\) & & & & \(-\frac{7}{54}\frac{1}{\epsilon^{5}}\) & \(+\frac{11}{108}\frac{1}{\epsilon^{5}}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Coefficients of \(\epsilon^{-6}\) and \(\epsilon^{-5}\) poles for different colour factors of \(H\to b\bar{b}\) at N\({}^{3}\)LO. Blank cells indicate vanishing coefficients for these poles. Note that an overall factor of \(2C_{F}\) is factored out as indicated in (3.6).
between the four- and five-particle final states. On the other hand, the \(N_{F}N^{2}\) coefficients exhibit a highly non-trivial interplay between real quark-antiquark pair emissions and fermionic loops.
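The NNLO-like pattern quoted above can be read off Table 2 directly: at \(\epsilon^{-5}\), the \(N_{F}N^{-2}\) entries of the three-, four- and five-parton final states stand in the ratio \(1:-2:1\). A minimal check, with the values transcribed from the table:

```python
from fractions import Fraction as F

# eps^-5 coefficients of the N_F N^-2 colour factor of H -> gg, from Table 2:
# VVR (q qbar g), VRR (q qbar g g), RRR (q qbar g g g)
layers = {"VVR": F(-1, 9), "VRR": F(2, 9), "RRR": F(-1, 9)}

K = layers["VVR"]
assert layers["VRR"] == -2 * K and layers["RRR"] == K   # pattern K, -2K, K
assert sum(layers.values()) == 0                        # layers cancel in the sum
print("N_F N^-2 layer of H -> gg follows the K, -2K, K pattern at 1/eps^5.")
```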
## 4 Conclusions and outlook
In this paper, we carried out the analytical integration over the inclusive phase space for all the possible partonic channels contributing to the decay of a Higgs boson into a pair of gluons and into a bottom quark-antiquark pair up to N\({}^{3}\)LO.
This work is a natural continuation of [21], where an analogous calculation was performed for the decay of a virtual photon into hadrons. Remarkably, the deepest IR poles of the photon and Higgs decays to quarks coincide completely for any partonic final state. We observe a similar agreement in the terms of highest transcendental weight in the coefficients of all powers of \(\epsilon\).
These results constitute an important contribution to the study of unresolved QCD radiation at the integrated level. In the context of the antenna subtraction method, they are necessary for the extraction of N\({}^{3}\)LO gluon-gluon antenna functions in final-final kinematics.
The set of integrated antenna functions at N\({}^{3}\)LO ought to be completed with analogous results for a quark-gluon pair of hard radiators. At NNLO, the expressions were extracted from QCD corrections to a neutralino decay [83]. We foresee extending this analysis to N\({}^{3}\)LO in a forthcoming publication.
## Acknowledgements
We are indebted to Thomas Gehrmann and Nigel Glover for their feedback and encouragement to pursue this work. We thank Oscar Braun-White, Christian Preuss, Kay
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{2}{c}{Final-state \(\mathcal{I}\)} & \(N^{3}\) & \(N_{F}N^{2}\) & \(N_{F}\) & \(N_{F}N^{-2}\) \\ \hline \hline
VVV & \(gg\) & \(-\frac{4}{3}\frac{1}{\epsilon^{6}}-\frac{77}{6}\frac{1}{\epsilon^{5}}\) & \(+\frac{7}{3}\frac{1}{\epsilon^{5}}\) & & \\ \hline
VVR & \(ggg\) & \(+\frac{46}{9}\frac{1}{\epsilon^{6}}+\frac{4609}{108}\frac{1}{\epsilon^{5}}\) & \(-\frac{305}{54}\frac{1}{\epsilon^{5}}\) & & \\ \cline{2-6}
 & \(q\bar{q}g\) & & \(-\frac{4}{3}\frac{1}{\epsilon^{5}}\) & \(+\frac{2}{3}\frac{1}{\epsilon^{5}}\) & \(-\frac{1}{9}\frac{1}{\epsilon^{5}}\) \\ \hline
VRR & \(gggg\) & \(-\frac{113}{18}\frac{1}{\epsilon^{6}}-\frac{1661}{36}\frac{1}{\epsilon^{5}}\) & \(+\frac{10}{3}\frac{1}{\epsilon^{5}}\) & & \\ \cline{2-6}
 & \(q\bar{q}gg\) & & \(+\frac{92}{27}\frac{1}{\epsilon^{5}}\) & \(-\frac{77}{54}\frac{1}{\epsilon^{5}}\) & \(+\frac{2}{9}\frac{1}{\epsilon^{5}}\) \\ \cline{2-6}
 & \(q\bar{q}q^{\prime}\bar{q}^{\prime}+q\bar{q}q\bar{q}\) & & & & \\ \hline
RRR & \(ggggg\) & \(+\frac{5}{2}\frac{1}{\epsilon^{6}}+\frac{440}{27}\frac{1}{\epsilon^{5}}\) & & & \\ \cline{2-6}
 & \(q\bar{q}ggg\) & & \(-\frac{113}{54}\frac{1}{\epsilon^{5}}\) & \(+\frac{41}{54}\frac{1}{\epsilon^{5}}\) & \(-\frac{1}{9}\frac{1}{\epsilon^{5}}\) \\ \cline{2-6}
 & \(q\bar{q}q^{\prime}\bar{q}^{\prime}g+q\bar{q}q\bar{q}g\) & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Coefficients of \(\epsilon^{-6}\) and \(\epsilon^{-5}\) poles for different colour factors of \(H\to gg\) at N\({}^{3}\)LO. Blank cells indicate vanishing coefficients for these poles.
Schönwald, Vasily Sotnikov and Tong-Zhi Yang for elucidating discussions and suggestions on the manuscript. This work was supported by the Swiss National Science Foundation (SNF) under contract 200020-204200 and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme grant agreement 101019620 (ERC Advanced Grant TOPUP).
## Appendix A N\({}^{3}\)LO results
### Higgs to bottom quarks
#### a.1.1 Two-particle final states
\[\mathcal{T}^{q\bar{q},(3,[2\times 1])}_{q\bar{q}}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{6}}\left(-\frac{1}{8}\right)+\frac{1}{ \epsilon^{5}}\left(-\frac{5}{4}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{89} {36}+\frac{7}{96}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(-\frac{1865}{864}+\frac{83}{192}\pi ^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{45169}{10368}+\frac{3155}{34 56}\pi^{2}+\frac{97}{36}\zeta_{3}-\frac{11}{768}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{174337}{15552}-\frac{23893}{2073 6}\pi^{2}-\frac{685}{288}\zeta_{3}-\frac{1619}{23040}\pi^{4}+\frac{11}{48}\pi ^{2}\zeta_{3}-\frac{33}{10}\zeta_{5}\right)\] \[-\frac{548719}{23328}+\frac{8561}{31104}\pi^{2}-\frac{22783}{1728 }\zeta_{3}+\frac{3179}{138240}\pi^{4}\] \[+\frac{3859}{1728}\pi^{2}\zeta_{3}-\frac{56}{15}\zeta_{5}-\frac{3 89}{193536}\pi^{6}+\frac{95}{16}\zeta_{3}^{2}+\mathcal{O}(\epsilon), \tag{10}\]
\[\mathcal{T}^{q\bar{q},(3,[2\times 1])}_{q\bar{q}}\Big{|}_{N^{0}}= +\frac{1}{\epsilon^{6}}\left(\frac{1}{4}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{29}{16}\right)+\frac{1}{\epsilon^{4}}\left(\frac{106 3}{288}-\frac{\pi^{2}}{6}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{3809}{864}-\frac{161}{192}\pi ^{2}-\frac{13}{8}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{24637}{2592}-\frac{77}{54}\pi ^{2}-\frac{599}{72}\zeta_{3}+\frac{59}{5760}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(\frac{5902}{243}+\frac{2335}{5184}\pi^{ 2}-\frac{469}{36}\zeta_{3}+\frac{3047}{23040}\pi^{4}+\frac{37}{32}\pi^{2}\zeta _{3}+\frac{9}{40}\zeta_{5}\right)\] \[+\frac{336379}{5832}-\frac{29917}{15552}\pi^{2}-\frac{19637}{432} \zeta_{3}-\frac{29719}{69120}\pi^{4}\] \[+\frac{6239}{1728}\pi^{2}\zeta_{3}-\frac{1469}{120}\zeta_{5}+\frac {6647}{241920}\pi^{6}+\frac{125}{8}\zeta_{3}^{2}+\mathcal{O}(\epsilon), \tag{11}\]
\[\mathcal{T}^{q\bar{q},(3,[2\times 1])}_{q\bar{q}}\Big{|}_{N^{-2}}= +\frac{1}{\epsilon^{6}}\left(-\frac{1}{8}\right)+\frac{1}{ \epsilon^{5}}\left(-\frac{9}{16}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{39 }{32}+\frac{3}{32}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(-\frac{9}{4}+\frac{13}{32}\pi^{2}+ \frac{13}{8}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{659}{128}+\frac{197}{384}\pi ^{2}+\frac{45}{8}\zeta_{3}+\frac{47}{11520}\pi^{4}\right)\]
\[\left.\mathcal{T}_{q\bar{q}}^{q\bar{q},(3,[3\times 0])}\right|_{N^{0}}= +\frac{1}{\epsilon^{6}}\left(\frac{1}{12}\right)+\frac{1}{\epsilon^{5}}\left(\frac{17}{16}\right)+\frac{1}{\epsilon^{4}}\left(\frac{595}{288}-\frac{3}{8}\pi^{2}\right)\]
\[\mathcal{T}_{q\bar{q}}^{q\bar{q},(3,[3\times 0])}\Big{|}_{N_{F}N^{-1}}= +\frac{1}{\epsilon^{5}}\left(-\frac{1}{8}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{35}{144}\right)+\frac{1}{\epsilon^{3}}\left(\frac{23 }{432}+\frac{17}{96}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{641}{2592}-\frac{199}{864} \pi^{2}+\frac{55}{72}\zeta_{3}\right)\]
\[\mathcal{T}_{q\bar{q}}^{q\bar{q},(3,[3\times 0])}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{4}}\left(-\frac{11}{162}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{1}{243}\right)+\frac{1}{\epsilon^{2}}\left(\frac{23} {324}+\frac{\pi^{2}}{108}\right)\] \[+\frac{1}{\epsilon}\left(\frac{2417}{17496}-\frac{5}{324}\pi^{2} -\frac{\zeta_{3}}{81}\right)\] \[+\frac{259}{6561}+\frac{97}{972}\pi^{2}-\frac{25}{243}\zeta_{3}+ \frac{43}{9720}\pi^{4}+\mathcal{O}(\epsilon),\] (A.11)
#### a.1.2 Three-particle final states
\[\mathcal{T}_{q\bar{q}g}^{q\bar{q},(3,[1\times 1])}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{6}}\left(\frac{29}{72}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{71}{24}\right)+\frac{1}{\epsilon^{4}}\left(\frac{168 1}{144}-\frac{373}{864}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{2231}{48}-\frac{367}{96}\pi^{ 2}-\frac{685}{72}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{6565}{32}-\frac{23713}{1728} \pi^{2}-\frac{4135}{72}\zeta_{3}-\frac{2737}{34560}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\bigg{(}\frac{533087}{576}-\frac{36157}{576} \pi^{2}-\frac{111853}{432}\zeta_{3}+\frac{4807}{11520}\pi^{4}\] \[\qquad\qquad+\frac{9749}{864}\pi^{2}\zeta_{3}-\frac{12349}{120} \zeta_{5}\bigg{)}\] \[+\frac{1644281}{384}-\frac{332527}{1152}\pi^{2}-\frac{180955}{144 }\zeta_{3}-\frac{182003}{207360}\pi^{4}\] \[+\frac{22273}{288}\pi^{2}\zeta_{3}-\frac{70603}{120}\zeta_{5}- \frac{94961}{967680}\pi^{6}+\frac{18773}{144}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.12)
\[\mathcal{T}_{q\bar{q}g}^{q\bar{q},(3,[1\times 1])}\Big{|}_{N^{0}}= +\frac{1}{\epsilon^{6}}\left(-\frac{5}{8}\right)+\frac{1}{ \epsilon^{5}}\left(-\frac{179}{48}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{ 631}{48}+\frac{199}{288}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(-\frac{4975}{96}+\frac{695}{144}\pi ^{2}+\frac{137}{8}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{5299}{24}+\frac{2227}{144}\pi ^{2}+\frac{5729}{72}\zeta_{3}+\frac{6817}{34560}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{186863}{192}+\frac{1107}{16}\pi^ {2}+\frac{7607}{24}\zeta_{3}-\frac{1981}{6912}\pi^{4}-\frac{7411}{288}\pi^{2} \zeta_{3}+\frac{1903}{8}\zeta_{5}\right)\] \[-\frac{1701065}{384}+\frac{356963}{1152}\pi^{2}+\frac{54101}{36} \zeta_{3}+\frac{66763}{34560}\pi^{4}\] \[-\frac{107867}{864}\pi^{2}\zeta_{3}+\frac{39227}{40}\zeta_{5}+ \frac{607987}{2903040}\pi^{6}-\frac{5805}{16}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.13)
\[\mathcal{T}^{q\bar{q},(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N^{-2}}= +\frac{1}{\epsilon^{5}}\left(\frac{1}{4}\right)+\frac{1}{\epsilon^{ 5}}\left(\frac{9}{8}\right)+\frac{1}{\epsilon^{4}}\left(\frac{61}{16}-\frac{13 }{48}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{469}{32}-\frac{45}{32}\pi^{2} -\frac{85}{12}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{481}{8}-\frac{817}{192}\pi^{2 }-\frac{191}{8}\zeta_{3}-\frac{41}{384}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(\frac{24727}{96}-\frac{439}{24}\pi^{2} -\frac{4279}{48}\zeta_{3}+\frac{29}{768}\pi^{4}+\frac{1505}{144}\pi^{2}\zeta_{ 3}-\frac{2113}{20}\zeta_{5}\right)\] \[+\frac{440579}{384}-\frac{5059}{64}\pi^{2}-\frac{1211}{3}\zeta_{ 3}-\frac{16933}{23040}\pi^{4}\] \[+\frac{1159}{32}\pi^{2}\zeta_{3}-\frac{11721}{40}\zeta_{5}-\frac {161467}{1451520}\pi^{6}+\frac{1265}{8}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.14)
\[\mathcal{T}^{q\bar{q},(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N ^{p}N}= +\frac{1}{\epsilon^{5}}\left(-\frac{5}{24}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{67}{72}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{1 25}{48}+\frac{13}{48}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{157}{16}+\frac{365}{432}\pi^{ 2}+\frac{55}{18}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{5203}{144}+\frac{865}{288}\pi^{2 }+\frac{601}{54}\zeta_{3}-\frac{41}{576}\pi^{4}\right)\] \[-\frac{406}{3}+\frac{3389}{288}\pi^{2}+\frac{315}{8}\zeta_{3}- \frac{2611}{25920}\pi^{4}\] \[-\frac{149}{36}\pi^{2}\zeta_{3}+\frac{143}{6}\zeta_{5}+\mathcal{O }(\epsilon),\] (A.15)
\[\mathcal{T}^{q\bar{q},(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N ^{p}N^{-1}}= +\frac{1}{\epsilon^{5}}\left(\frac{1}{6}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{1}{2}\right)+\frac{1}{\epsilon^{3}}\left(\frac{73}{4 8}-\frac{2}{9}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{131}{24}-\frac{23}{48}\pi^{2} -\frac{53}{18}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{1879}{96}-\frac{85}{48}\pi^{2}- \frac{13}{2}\zeta_{3}+\frac{19}{432}\pi^{4}\right)\] \[+\frac{13763}{192}-\frac{1261}{192}\pi^{2}-\frac{863}{36}\zeta_{3 }+\frac{103}{1920}\pi^{4}\] \[+\frac{493}{108}\pi^{2}\zeta_{3}-\frac{299}{10}\zeta_{5}+\mathcal{ O}(\epsilon),\] (A.16)
\[\mathcal{T}^{q\bar{q},(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N ^{2}}= +\frac{1}{\epsilon^{4}}\left(\frac{1}{36}\right)+\frac{1}{ \epsilon^{3}}\left(\frac{1}{24}\right)+\frac{1}{\epsilon^{2}}\left(\frac{7}{4 8}-\frac{7}{432}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(\frac{127}{288}-\frac{7}{288}\pi^{2}- \frac{25}{108}\zeta_{3}\right)\] \[+\frac{85}{64}-\frac{49}{576}\pi^{2}-\frac{25}{72}\zeta_{3}- \frac{71}{51840}\pi^{4}+\mathcal{O}(\epsilon),\] (A.17)
\[\mathcal{T}^{q\bar{q}\bar{q}g}_{q\bar{q}g}\Big{|}_{N^{2}} =+\frac{1}{\epsilon^{6}}\left(\frac{29}{72}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{128}{27}\right)+\frac{1}{\epsilon^{4}}\left(\frac{219 07}{1296}-\frac{343}{288}\pi^{2}\right)\] \[\quad+\frac{1}{\epsilon^{3}}\left(\frac{455953}{7776}-\frac{18173 }{2592}\pi^{2}-\frac{557}{72}\zeta_{3}\right)\] \[\quad+\frac{1}{\epsilon^{2}}\left(\frac{11841379}{46656}-\frac{39 3785}{15552}\pi^{2}-\frac{5489}{72}\zeta_{3}+\frac{99149}{103680}\pi^{4}\right)\] \[\quad+\frac{1}{\epsilon}\Bigg{(}\frac{300578239}{279936}-\frac{10 451453}{9312}\pi^{2}-\frac{123899}{432}\zeta_{3}+\frac{55093}{20736}\pi^{4}\] \[\qquad\qquad+\frac{20885}{864}\pi^{2}\zeta_{3}-\frac{31687}{360} \zeta_{5}\Bigg{)}\] \[\quad+\frac{7825448299}{1679616}-\frac{287499737}{559872}\pi^{2} -\frac{3288509}{2592}\zeta_{3}+\frac{7339967}{622080}\pi^{4}\] \[\quad+\frac{113437}{864}\pi^{2}\zeta_{3}-\frac{92413}{120}\zeta _{5}-\frac{5842331}{26127360}\pi^{6}+\frac{14893}{144}\zeta_{3}^{2}+\mathcal{ O}(\epsilon), \tag{101}\]
\[\mathcal{T}^{q\bar{q}\bar{q}g}_{q\bar{q}g}\Big{|}_{N^{0}} =+\frac{1}{\epsilon^{6}}\left(-\frac{5}{8}\right)+\frac{1}{ \epsilon^{5}}\left(-\frac{245}{48}\right)+\frac{1}{\epsilon^{4}}\left(-\frac {1127}{72}+\frac{547}{288}\pi^{2}\right)\] \[\quad+\frac{1}{\epsilon^{3}}\left(-\frac{12541}{216}+\frac{6911} {864}\pi^{2}+\frac{135}{8}\zeta_{3}\right)\] \[\quad+\frac{1}{\epsilon^{2}}\left(-\frac{1276453}{5184}+\frac{14 1881}{5184}\pi^{2}+\frac{7249}{72}\zeta_{3}-\frac{17101}{11520}\pi^{4}\right)\] \[\quad+\frac{1}{\epsilon}\Bigg{(}-\frac{32867617}{31104}+\frac{37 92287}{31104}\pi^{2}+\frac{37321}{108}\zeta_{3}-\frac{30331}{10368}\pi^{4}\] \[\qquad\qquad-\frac{16825}{288}\pi^{2}\zeta_{3}+\frac{5579}{24} \zeta_{5}\Bigg{)}\] \[\quad-\frac{875425903}{186624}+\frac{104433767}{186624}\pi^{2}+ \frac{2151911}{1296}\zeta_{3}-\frac{1784969}{124416}\pi^{4}\] \[\quad-\frac{58523}{288}\pi^{2}\zeta_{3}+\frac{124961}{120}\zeta _{5}+\frac{2075989}{8709120}\pi^{6}-\frac{17695}{48}\zeta_{3}^{2}+\mathcal{O} (\epsilon), \tag{102}\]
\[\mathcal{T}^{q\bar{q}\bar{q}g}_{q\bar{q}g}\Big{|}_{N^{-2}} =+\frac{1}{\epsilon^{6}}\left(\frac{1}{4}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{9}{8}\right)+\frac{1}{\epsilon^{4}}\left(\frac{61}{1 6}-\frac{37}{48}\pi^{2}\right)\] \[\quad+\frac{1}{\epsilon^{3}}\left(\frac{233}{16}-\frac{65}{32}\pi ^{2}-\frac{103}{12}\zeta_{3}\right)\] \[\quad+\frac{1}{\epsilon^{2}}\left(\frac{3871}{64}-\frac{511}{64} \pi^{2}-\frac{239}{8}\zeta_{3}+\frac{3289}{5760}\pi^{4}\right)\] \[\quad+\frac{1}{\epsilon}\Bigg{(}\frac{100315}{384}-\frac{38461}{ 1152}\pi^{2}-\frac{1863}{16}\zeta_{3}+\frac{7363}{11520}\pi^{4}\]
#### a.1.3 Four-particle final states

\[\mathcal{T}_{q\bar{q}gg}^{q\bar{q},(3,[2\times 0])}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{6}}\left(-\frac{41}{36}\right)+\frac{1}{\epsilon^{5}}\left(-\frac{311}{36}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{54325}{1296}+\frac{1151}{432}\pi^{2}\right)\]
\[+\frac{1}{\epsilon^{3}}\left(-\frac{1695641}{7776}+\frac{45337}{2592}\pi^{2}+\frac{380}{9}\zeta_{3}\right)\]
\[+\frac{1}{\epsilon^{2}}\left(-\frac{8560249}{7776}+\frac{358123}{3888}\pi^{2}+\frac{33013}{108}\zeta_{3}-\frac{16537}{10368}\pi^{4}\right)\]
\[\mathcal{T}_{q\bar{q}gg}^{q\bar{q},(3)}\Big{|}_{N_{F}N}= +\frac{1}{\epsilon^{5}}\left(\frac{1}{2}\right)+\frac{1}{\epsilon^{ 4}}\left(\frac{65}{36}\right)+\frac{1}{\epsilon^{3}}\left(\frac{941}{108}- \frac{13}{18}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{16667}{432}-\frac{589}{216} \pi^{2}-\frac{71}{6}\zeta_{3}\right)\]
\[\mathcal{T}_{q\bar{q}q\bar{q}}^{q\bar{q},(3)}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{5}}\left(-\frac{11}{108}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{7627}{1944}+\frac{133}{432}\pi^{2}\right)\]
\[+\frac{1}{\epsilon^{2}}\left(\frac{194443}{5832}-\frac{9217}{38888}\pi^{2}-\frac{265}{36}\zeta_{3}\right)\]
\[+\frac{1}{\epsilon}\left(\frac{794069}{4374}-\frac{309253}{23328}\pi^{2}-\frac{593}{12}\zeta_{3}+\frac{3239}{51840}\pi^{4}\right)\]
\[+\frac{202587241}{209952}-\frac{10661983}{139968}\pi^{2}-\frac{18379}{72}\zeta_{3}+\frac{37319}{77760}\pi^{4}\]
\[+\frac{8261}{432}\pi^{2}\zeta_{3}-\frac{9121}{60}\zeta_{5}+\mathcal{O}(\epsilon),\] (A.29)
\[\mathcal{T}_{q\bar{q}q\bar{q}\bar{q}}^{q\bar{q},(3)}\Big{|}_{N^{0}}= +\frac{1}{\epsilon^{3}}\left(-\frac{65}{48}+\frac{5}{24}\pi^{2}- \frac{5}{6}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{2753}{144}+\frac{179}{144} \pi^{2}+\frac{187}{18}\zeta_{3}-\frac{101}{1080}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{135293}{864}+\frac{12803}{1728} \pi^{2}+\frac{3491}{54}\zeta_{3}+\frac{1469}{6480}\pi^{4}+\frac{169}{72}\pi^ {2}\zeta_{3}-\frac{117}{2}\zeta_{5}\right)\] \[-\frac{1353227}{1296}+\frac{609173}{10368}\pi^{2}+\frac{309323} {1296}\zeta_{3}+\frac{72677}{38880}\pi^{4}\] \[-\frac{14069}{432}\pi^{2}\zeta_{3}+\frac{3187}{9}\zeta_{5}-\frac {8171}{90720}\pi^{6}+\frac{110}{3}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.30)
\[\mathcal{T}_{q\bar{q}q\bar{q}\bar{q}}^{q\bar{q},(3)}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{5}}\left(-\frac{11}{108}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{ 7627}{1944}+\frac{133}{432}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{5}}\left(-\frac{11}{108}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(-\frac {7627}{1944}+\frac{133}{432}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{11}{108}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(-\frac {7627}{1944}+\frac{133}{432}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{11}{108}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(- \frac{7627}{1944}+\frac{133}{432}\pi^{2}\right)\] \[-\frac{135327}{1296}+\frac{609173}{10368}\pi^{2}+\frac{309323}{12 96}\zeta_{3}+\frac{72677}{38880}\pi^{4}\] \[-\frac{14069}{432}\pi^{2}\zeta_{3}+\frac{3187}{9}\zeta_{5}-\frac {8171}{90720}\pi^{6}+\frac{110}{3}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.31)
\[\mathcal{T}^{q\bar{q},(3)}_{q\bar{q}q\bar{q}^{\prime}\bar{q}^{\prime}} \Big{|}_{(N_{F}-1)N}= +\frac{1}{\epsilon^{5}}\left(\frac{13}{108}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{679}{648}\right)+\frac{1}{\epsilon^{3}}\left(\frac{114 11}{1944}-\frac{425}{1296}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{194443}{5832}-\frac{9217}{3888 8}\pi^{2}-\frac{265}{36}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{794069}{4374}-\frac{309253}{23328} \pi^{2}-\frac{593}{12}\zeta_{3}+\frac{3239}{51840}\pi^{4}\right)\] \[+\frac{202587241}{209952}-\frac{10661983}{139968}\pi^{2}-\frac{18 379}{72}\zeta_{3}+\frac{37319}{77760}\pi^{4}\]
\[\mathcal{T}_{q\bar{q}ggg}^{q\bar{q},(3)}\Big{|}_{N^{0}}= +\frac{1}{\epsilon^{6}}\left(-\frac{7}{12}\right)+\frac{1}{ \epsilon^{5}}\left(-\frac{79}{486}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{5 3}{81}+\frac{\pi^{2}}{18}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{16639}{8748}+\frac{73}{648}\pi^ {2}+\frac{76}{81}\zeta_{3}\right)\] \[-\frac{17879}{13122}+\frac{143}{3888}\pi^{2}+\frac{239}{486} \zeta_{3}+\frac{41}{38880}\pi^{4}+\mathcal{O}(\epsilon),\] (A.37)
#### a.1.4 Five-particle final states
\[\mathcal{T}_{q\bar{q}ggg}^{q\bar{q},(3)}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{6}}\left(\frac{1}{2}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{331}{108}\right)+\frac{1}{\epsilon^{4}}\left(\frac{ 12653}{648}-\frac{31}{24}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{296155}{2592}-\frac{10745}{1 296}\pi^{2}-\frac{439}{18}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{939217}{1458}-\frac{414311}{ 7776}\pi^{2}-\frac{6239}{36}\zeta_{3}+\frac{21853}{25920}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\Bigg{(}\frac{990443209}{279936}-\frac{2917765 3}{93312}\pi^{2}-\frac{183905}{162}\zeta_{3}+\frac{75767}{17280}\pi^{4}\] \[\qquad\qquad+\frac{13993}{216}\pi^{2}\zeta_{3}-\frac{10946}{45} \zeta_{5}\Bigg{)}\] \[+\frac{889159591}{46656}-\frac{246959419}{139968}\pi^{2}-\frac{5 2077425}{7776}\zeta_{3}+\frac{8315387}{311040}\pi^{4}\] \[+\frac{67895}{144}\pi^{2}\zeta_{3}-\frac{103894}{45}\zeta_{5}- \frac{93257}{1306368}\pi^{6}+\frac{7861}{12}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.38)
\[\mathcal{T}_{q\bar{q}ggg}^{q\bar{q},(3)}\Big{|}_{N^{0}}= +\frac{1}{\epsilon^{6}}\left(-\frac{7}{12}\right)+\frac{1}{ \epsilon^{5}}\left(-\frac{37}{12}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{1 67}{9}+\frac{25}{16}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(-\frac{45169}{432}+\frac{76}{9}\pi^ {2}+\frac{63}{2}\zeta_{3}\right)\]
\[\mathcal{T}_{q\bar{q}q\bar{q}g}^{q\bar{q},(3)}\Big{|}_{N^{0}}= +\frac{1}{\epsilon^{3}}\left(\frac{65}{48}-\frac{5}{24}\pi^{2}+\frac{5}{6}\zeta_{3}\right)\]
\[+\frac{1}{\epsilon^{2}}\left(\frac{145}{8}-\frac{157}{144}\pi^{2}-11\zeta_{3}+\frac{101}{1080}\pi^{4}\right)\]
\[+\frac{1}{\epsilon}\left(\frac{30593}{192}-\frac{1613}{192}\pi^{2}-\frac{451}{8}\zeta_{3}-\frac{19}{80}\pi^{4}-\frac{19}{8}\pi^{2}\zeta_{3}+\frac{176}{3}\zeta_{5}\right)\]
\[+\frac{55511}{48}-\frac{39751}{576}\pi^{2}-295\zeta_{3}-\frac{19237}{17280}\pi^{4}\]
\[+\frac{1627}{48}\pi^{2}\zeta_{3}-\frac{1437}{4}\zeta_{5}+\frac{167}{1701}\pi^{6}-\frac{511}{12}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.42)
\[\mathcal{T}^{q\bar{q},(3)}_{q\bar{q}q\bar{q}g}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{5}}\left(\frac{11}{108}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(\frac{169 55}{3888}-\frac{47}{144}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{629825}{23328}-\frac{1685}{864 }\pi^{2}-\frac{979}{108}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{22310435}{139968}-\frac{65447}{5 184}\pi^{2}-\frac{31345}{648}\zeta_{3}+\frac{1777}{51840}\pi^{4}\right)\] \[+\frac{762800561}{839808}-\frac{2391413}{31104}\pi^{2}-\frac{1163 287}{3888}\zeta_{3}+\frac{35567}{62208}\pi^{4}\] \[+\frac{1261}{48}\pi^{2}\zeta_{3}-\frac{34651}{180}\zeta_{5}+ \mathcal{O}(\epsilon),\] (A.43)
\[\mathcal{T}^{q\bar{q},(3)}_{q\bar{q}q\bar{q}g}\Big{|}_{N^{-2}}= +\frac{1}{\epsilon^{3}}\left(-\frac{65}{48}+\frac{5}{24}\pi^{2}- \frac{5}{6}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{71}{4}+\frac{47}{48}\pi^{2}+ \frac{21}{2}\zeta_{3}-\frac{7}{90}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{29933}{192}+\frac{4651}{576}\pi^ {2}+\frac{142}{3}\zeta_{3}+\frac{17}{144}\pi^{4}+\frac{27}{8}\pi^{2}\zeta_{3} -44\zeta_{5}\right)\] \[-\frac{109309}{96}+\frac{39031}{576}\pi^{2}+\frac{4449}{16}\zeta _{3}+\frac{4751}{17280}\pi^{4}\] \[-\frac{1559}{48}\pi^{2}\zeta_{3}+\frac{1025}{4}\zeta_{5}+\frac{2 2229}{272160}\pi^{6}+\frac{345}{4}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.44)
\[\mathcal{T}^{q\bar{q},(3)}_{q\bar{q}q\bar{q}^{\prime}\bar{q}g} \Big{|}_{(N_{F}-1)N}= +\frac{1}{\epsilon^{5}}\left(-\frac{7}{54}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{101}{108}\right)+\frac{1}{\epsilon^{3}}\left(-\frac {12461}{1944}+\frac{247}{648}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{473479}{11664}+\frac{10579}{ 3888}\pi^{2}+\frac{493}{54}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{5692453}{23328}+\frac{426841}{23 328}\pi^{2}+\frac{21035}{324}\zeta_{3}-\frac{3613}{25920}\pi^{4}\right)\] \[-\frac{592211495}{419904}+\frac{15998635}{139968}\pi^{2}+\frac{27 3361}{648}\zeta_{3}-\frac{148181}{155520}\pi^{4}\] \[-\frac{205}{8}\pi^{2}\zeta_{3}+\frac{13757}{90}\zeta_{5}+ \mathcal{O}(\epsilon),\] (A.45)
\[\mathcal{T}^{q\bar{q},(3)}_{q\bar{q}q^{\prime}\bar{q}^{\prime}g} \Big{|}_{(N_{F}-1)N^{-1}}= +\frac{1}{\epsilon^{5}}\left(\frac{11}{108}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{425}{648}\right)+\frac{1}{\epsilon^{3}}\left(\frac{16 955}{3888}-\frac{47}{144}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{629825}{23328}-\frac{1685}{864 }\pi^{2}-\frac{979}{108}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{22310435}{139968}-\frac{65447}{51 84}\pi^{2}-\frac{31345}{648}\zeta_{3}+\frac{1777}{51840}\pi^{4}\right)\] \[+\frac{762800561}{839808}-\frac{2391413}{31104}\pi^{2}-\frac{116 3287}{3888}\zeta_{3}+\frac{35567}{62208}\pi^{4}\] \[+\frac{1261}{48}\pi^{2}\zeta_{3}-\frac{34651}{180}\zeta_{5}+ \mathcal{O}(\epsilon),\] (A.46)
### Higgs to gluons
#### a.2.1 Two-particle final states
\[\mathcal{T}^{gg,(3,[2\times 1])}_{gg}\Big{|}_{N^{3}}= +\frac{1}{\epsilon^{6}}\left(-1\right)+\frac{1}{\epsilon^{5}}\left( -\frac{33}{4}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{133}{8}+\frac{2}{3} \pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(-\frac{1189}{216}+\frac{583}{144} \pi^{2}+\frac{13}{2}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{1445}{54}+\frac{599}{108}\pi^ {2}+\frac{352}{9}\zeta_{3}-\frac{59}{1440}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{14669}{81}-\frac{14795}{1296}\pi ^{2}+\frac{3385}{54}\zeta_{3}-\frac{11429}{17280}\pi^{4}-\frac{37}{8}\pi^{2} \zeta_{3}-\frac{9}{10}\zeta_{5}\right)\] \[-\frac{592597}{972}+\frac{15629}{972}\pi^{2}+\frac{63739}{324} \zeta_{3}+\frac{50081}{17280}\pi^{4}-\frac{2585}{144}\pi^{2}\zeta_{3}+\frac{9 13}{15}\zeta_{5}\] \[-\frac{6647}{60480}\pi^{6}-\frac{125}{2}\zeta_{3}^{2}+\mathcal{O }(\epsilon),\] (A.47)
\[\mathcal{T}^{gg,(3,[2\times 1])}_{gg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(\frac{3}{2}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{58}{9}\right)+\frac{1}{\epsilon^{3}}\left(\frac{449} {108}-\frac{53}{72}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{1001}{162}-\frac{529}{216}\pi ^{2}-\frac{46}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{70685}{972}+\frac{4811}{1296}\pi^ {2}-\frac{479}{54}\zeta_{3}+\frac{151}{960}\pi^{4}\right)\] \[+\frac{820825}{2916}-\frac{65887}{7776}\pi^{2}-\frac{1295}{324} \zeta_{3}-\frac{1813}{2880}\pi^{4}+\frac{37}{24}\pi^{2}\zeta_{3}-\frac{46}{15 }\zeta_{5}\] \[+\mathcal{O}(\epsilon),\] (A.48)
\[\mathcal{T}^{gg,(3,[2\times 1])}_{gg}\Big{|}_{N_{F}}= +\frac{1}{\epsilon^{3}}\left(\frac{1}{4}\right)+\frac{1}{ \epsilon^{2}}\left(-\frac{7}{3}+2\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{691}{36}-\frac{7}{144}\pi^{2}+ \frac{34}{3}\zeta_{3}+\frac{\pi^{4}}{27}\right)\] \[-\frac{17393}{216}+\frac{2771}{864}\pi^{2}+\frac{1609}{36}\zeta_ {3}+\frac{17}{81}\pi^{4}-\frac{31}{18}\pi^{2}\zeta_{3}+8\zeta_{5}+\mathcal{O} (\epsilon),\] (A.49)
\[\mathcal{T}^{gg,(3,[2\times 1])}_{gg}\Big{|}_{N_{F}^{2}N}= +\frac{1}{\epsilon^{4}}\left(-\frac{11}{18}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{61}{54}\right)+\frac{1}{\epsilon^{2}}\left(\frac{155 }{324}+\frac{\pi^{2}}{4}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{6337}{1944}-\frac{25}{108}\pi^{2} +\frac{23}{27}\zeta_{3}\right)\] \[-\frac{170581}{11664}+\frac{305}{648}\pi^{2}-\frac{95}{81}\zeta _{3}+\frac{77}{1620}\pi^{4}+\mathcal{O}(\epsilon),\] (A.50)
\[\mathcal{T}^{gg,(3,[2\times 1])}_{gg}\Big{|}_{N_{F}^{2}N^{-1}}= +\frac{1}{\epsilon^{2}}\left(-\frac{1}{12}\right)+\frac{1}{ \epsilon}\left(\frac{67}{72}-\frac{2}{3}\zeta_{3}\right)\]
\[\left.\mathcal{T}^{gg,(3,[3\times 0])}_{gg}\right|_{N_{F}N^{-2}}= +\frac{1}{\epsilon}\left(-\frac{1}{48}\right)+\frac{19}{9}+\frac{37}{6}\zeta_{3}-10\zeta_{5}+\mathcal{O}(\epsilon),\] (A.56)
\[\mathcal{T}^{gg,(3,[3\times 0])}_{gg}\Big{|}_{N_{F}^{2}N}= +\frac{1}{\epsilon^{4}}\left(-\frac{85}{162}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{499}{486}\right)+\frac{1}{\epsilon^{2}}\left(\frac{1 13}{108}+\frac{37}{324}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{54305}{17496}-\frac{215}{972}\pi^ {2}+\frac{13}{81}\zeta_{3}\right)\] \[+\frac{483479}{104976}-\frac{53}{648}\pi^{2}-\frac{275}{243}\zeta _{3}+\frac{59}{19440}\pi^{4}+\mathcal{O}(\epsilon), \tag{100}\]
\[\mathcal{T}^{gg,(3,[3\times 0])}_{gg}\Big{|}_{N_{F}^{2}N^{-1}}= +\frac{1}{\epsilon^{2}}\left(-\frac{7}{36}\right)+\frac{1}{ \epsilon}\left(\frac{53}{54}-\frac{2}{3}\zeta_{3}\right)\] \[-\frac{2881}{1296}+\frac{\pi^{2}}{72}+\frac{19}{9}\zeta_{3}- \frac{\pi^{4}}{90}+\mathcal{O}(\epsilon), \tag{101}\]
\[\mathcal{T}^{gg,(3,[3\times 0])}_{gg}\Big{|}_{N_{F}^{3}}= +\frac{1}{\epsilon^{3}}\left(\frac{2}{27}\right) \tag{102}\]
#### a.2.2 Three-particle final states
\[\mathcal{T}^{gg,(3,[1\times 1])}_{ggg}\Big{|}_{N^{3}}= +\frac{1}{\epsilon^{6}}\left(\frac{23}{9}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{1309}{72}\right)+\frac{1}{\epsilon^{4}}\left(\frac{1 625}{24}-\frac{301}{108}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{171727}{648}-\frac{187}{8} \pi^{2}-\frac{607}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{4600213}{3888}-\frac{52463}{6 48}\pi^{2}-\frac{4543}{12}\zeta_{3}-\frac{3311}{4320}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\bigg{(}\frac{10480321}{1944}-\frac{934837}{25 92}\pi^{2}-\frac{85027}{54}\zeta_{3}\] \[\qquad+\frac{79387}{51840}\pi^{4}+\frac{10253}{108}\pi^{2}\zeta _{3}-\frac{13393}{15}\zeta_{5}\bigg{)}\] \[+\frac{3526701253}{139968}-\frac{13179161}{7776}\pi^{2}-\frac{166 1833}{216}\zeta_{3}-\frac{275233}{38880}\pi^{4}\] \[+\frac{242561}{432}\pi^{2}\zeta_{3}-\frac{88407}{20}\zeta_{5}- \frac{101317}{120960}\pi^{6}+\frac{23447}{18}\zeta_{3}^{2}+\mathcal{O}( \epsilon), \tag{103}\]
\[\mathcal{T}^{gg,(3,[1\times 1])}_{ggg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(-\frac{9}{4}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{121}{12}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{ 257}{9}+\frac{71}{24}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{988}{9}+\frac{341}{36}\pi^{2} +36\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{179999}{432}+\frac{13967}{432} \pi^{2}+\frac{1133}{9}\zeta_{3}-\frac{199}{288}\pi^{4}\right)\] \[-\frac{12462217}{7776}+\frac{113855}{864}\pi^{2}+\frac{5365}{12} \zeta_{3}-\frac{1991}{1728}\pi^{4}\] \[-\frac{470}{9}\pi^{2}\zeta_{3}+\frac{1612}{5}\zeta_{5}+\mathcal{ O}(\epsilon), \tag{104}\]
\[\mathcal{T}^{gg,(3,[1\times 1])}_{ggg}\Big{|}_{N_{F}^{2}N}= +\frac{1}{\epsilon^{4}}\left(\frac{1}{2}\right)+\frac{1}{\epsilon^{3}}\left(\frac{11}{12}\right)+\frac{1}{\epsilon^{2}}\left(\frac{25}{8}-\frac{7}{24}\pi^{2}\right)\]
\[+\frac{1}{\epsilon}\left(\frac{13037}{1296}-\frac{77}{144}\pi^{2}-\frac{25}{6}\zeta_{3}\right)\]
\[+\frac{83119}{2592}-\frac{547}{288}\pi^{2}-\frac{275}{36}\zeta_{3}-\frac{71}{2880}\pi^{4}+\mathcal{O}(\epsilon),\] (A.62)
\[\mathcal{T}^{ggg,(3,[2\times 0])}_{ggg}\Big{|}_{N^{3}}= +\frac{1}{\epsilon^{6}}\left(\frac{23}{9}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{5291}{216}\right)+\frac{1}{\epsilon^{4}}\left(\frac {14053}{162}-\frac{139}{18}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{1204027}{3888}-\frac{50765}{1 296}\pi^{2}-\frac{1195}{18}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{32569141}{23328}-\frac{10655 09}{7776}\pi^{2}-\frac{17017}{36}\zeta_{3}+\frac{15613}{2592}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\bigg{(}\frac{864831877}{139968}-\frac{2941988 9}{46656}\pi^{2}-\frac{197453}{108}\zeta_{3}\] \[\qquad+\frac{97097}{6480}\pi^{4}+\frac{48313}{216}\pi^{2}\zeta_ {3}-\frac{77033}{90}\zeta_{5}\bigg{)}\] \[+\frac{23345187541}{839808}-\frac{859773641}{279936}\pi^{2}-\frac {695786}{81}\zeta_{3}+\frac{18651233}{311040}\pi^{4}\] \[+\frac{416647}{432}\pi^{2}\zeta_{3}-\frac{893563}{180}\zeta_{5}- \frac{716087}{816480}\pi^{6}+\frac{24409}{18}\zeta_{3}^{2}+\mathcal{O}( \epsilon),\] (A.63)
\[\mathcal{T}^{ggg,(3,[2\times 0])}_{ggg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(-\frac{367}{108}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{1256}{81}\right)+\frac{1}{\epsilon^{3}}\left(-\frac {76247}{1944}+\frac{613}{162}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{1986005}{11664}+\frac{44611} {3888}\pi^{2}+\frac{99}{2}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{46628939}{69984}+\frac{922633}{23 328}\pi^{2}+\frac{6091}{36}\zeta_{3}-\frac{971}{8640}\pi^{4}\right)\] \[-\frac{1069628111}{419904}+\frac{26618059}{139968}\pi^{2}+\frac {40385}{72}\zeta_{3}+\frac{324821}{155520}\pi^{4}\] \[-\frac{10991}{216}\pi^{2}\zeta_{3}+\frac{37241}{90}\zeta_{5}+ \mathcal{O}(\epsilon),\] (A.64)
\[\mathcal{T}^{ggg,(3,[2\times 0])}_{ggg}\Big{|}_{N_{F}}= +\frac{1}{\epsilon^{3}}\left(-\frac{3}{4}\right)+\frac{1}{ \epsilon^{2}}\left(\frac{101}{24}-4\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{4883}{144}-\frac{109}{144}\pi^{2}- \frac{68}{3}\zeta_{3}-\frac{2}{27}\pi^{4}\right)\] \[+\frac{52751}{288}-\frac{14305}{864}\pi^{2}-\frac{4019}{36}\zeta _{3}-\frac{34}{81}\pi^{4}\] \[+\frac{103}{9}\pi^{2}\zeta_{3}-16\zeta_{5}+\mathcal{O}(\epsilon),\] (A.65)
\[\mathcal{T}^{ggg,(3,[2\times 0])}_{ggg}\Big{|}_{N_{F}^{2}N}= +\frac{1}{\epsilon^{4}}\left(\frac{5}{6}\right)+\frac{1}{\epsilon^{ 3}}\left(\frac{55}{36}\right)+\frac{1}{\epsilon^{2}}\left(\frac{1117}{216}- \frac{35}{72}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(\frac{2359}{144}-\frac{385}{432}\pi^{2} -\frac{125}{18}\zeta_{3}\right)\] \[+\frac{14621}{288}-\frac{7877}{2592}\pi^{2}-\frac{1375}{108}\zeta _{3}-\frac{71}{1728}\pi^{4}+\mathcal{O}(\epsilon), \tag{101}\]
\[\mathcal{T}^{gg,(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(-\frac{2}{3}\right)+\frac{1}{ \epsilon^{4}}\left(-6\right)+\frac{1}{\epsilon^{3}}\left(-\frac{8939}{324}+ \frac{20}{27}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{24287}{216}+\frac{1289}{162} \pi^{2}+\frac{154}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{960457}{1944}+\frac{23275}{648} \pi^{2}+\frac{3674}{27}\zeta_{3}+\frac{1087}{6480}\pi^{4}\right)\] \[-\frac{80043379}{34992}+\frac{3689513}{23328}\pi^{2}+\frac{18431} {27}\zeta_{3}-\frac{8843}{38880}\pi^{4}\] \[-\frac{1193}{54}\pi^{2}\zeta_{3}+\frac{6127}{30}\zeta_{5}+ \mathcal{O}(\epsilon), \tag{102}\]
\[\mathcal{T}^{gg,(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N_{F}}= +\frac{1}{\epsilon^{5}}\left(\frac{1}{3}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{115}{36}\right)+\frac{1}{\epsilon^{3}}\left(\frac{13 87}{72}-\frac{17}{36}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{67231}{648}-\frac{173}{36}\pi ^{2}-13\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{130232}{243}-\frac{39373}{1296} \pi^{2}-\frac{1033}{9}\zeta_{3}-\frac{799}{4320}\pi^{4}\right)\] \[+\frac{5296811}{1944}-\frac{1315985}{7776}\pi^{2}-\frac{77185}{1 08}\zeta_{3}-\frac{27481}{25920}\pi^{4}\] \[+\frac{265}{12}\pi^{2}\zeta_{3}-\frac{3238}{15}\zeta_{5}+\mathcal{ O}(\epsilon), \tag{103}\]
\[\mathcal{T}^{gg,(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N_{F}N^{-2}}= +\frac{1}{\epsilon^{5}}\left(-\frac{1}{18}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{61}{108}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{ 661}{162}+\frac{17}{216}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{49697}{1944}+\frac{1037}{129 6}\pi^{2}+\frac{49}{18}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{1736189}{11664}+\frac{5821}{972 }\pi^{2}+\frac{2773}{108}\zeta_{3}+\frac{517}{8640}\pi^{4}\right)\] \[-\frac{14505527}{17496}+\frac{898633}{23328}\pi^{2}+\frac{15074} {81}\zeta_{3}+\frac{26713}{51840}\pi^{4}\] \[-\frac{1229}{216}\pi^{2}\zeta_{3}+\frac{306}{5}\zeta_{5}+ \mathcal{O}(\epsilon), \tag{104}\]
\[\mathcal{T}^{gg,(3,[1\times 1])}_{q\bar{q}g}\Big{|}_{N_{F}^{2}N}= +\frac{1}{\epsilon^{4}}\left(\frac{4}{9}\right)+\frac{1}{\epsilon ^{3}}\left(2\right)+\frac{1}{\epsilon^{2}}\left(\frac{53}{27}-\frac{31}{54}\pi ^{2}\right)\] \[-\frac{1193}{54}\pi^{2}\zeta_{3}+\frac{6127}{30}\zeta_{5}+ \mathcal{O}(\epsilon), \tag{105}\]
\[\mathcal{T}^{gg,(3,[2\times 0])}_{q\bar{q}g}\Big{|}_{N_{F}N^{-2}}= +\frac{1}{\epsilon^{5}}\left(-\frac{1}{18}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{61}{108}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{6 61}{162}+\frac{41}{216}\pi^{2}\right)\]
#### a.2.3 Four-particle final states

\[\mathcal{T}^{gg,(3)}_{gggg}\Big{|}_{N^{3}}= +\frac{1}{\epsilon^{6}}\left(-\frac{113}{18}\right)+\frac{1}{\epsilon^{5}}\left(-\frac{1661}{36}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{147871}{648}+\frac{3233}{216}\pi^{2}\right)\]
\[+\frac{1}{\epsilon^{3}}\left(-\frac{4615997}{3888}+\frac{122815}{1296}\pi^{2}+\frac{4685}{18}\zeta_{3}\right)\]
\[+\frac{1}{\epsilon^{2}}\left(-\frac{15593323}{2592}+\frac{3877639}{7776}\pi^{2}+\frac{179729}{108}\zeta_{3}-\frac{200657}{25920}\pi^{4}\right)\]
\[+\frac{1}{\epsilon}\bigg{(}-\frac{4232245261}{139968}+\frac{125073985}{46656}\pi^{2}+\frac{5912125}{648}\zeta_{3}\]
\[\qquad-\frac{2118721}{51840}\pi^{4}-\frac{48431}{72}\pi^{2}\zeta_{3}+\frac{316459}{90}\zeta_{5}\bigg{)}\]
\[-\frac{127277031491}{839808}+\frac{3914403535}{279936}\pi^{2}+\frac{65526025}{1296}\zeta_{3}-\frac{66797881}{311040}\pi^{4}\]
\[-\frac{1657051}{432}\pi^{2}\zeta_{3}+\frac{3630913}{180}\zeta_{5}+\frac{15983183}{6531840}\pi^{6}-\frac{234529}{36}\zeta_{3}^{2}+\mathcal{O}(\epsilon),\] (A.79)
\[\mathcal{T}^{gg,(3)}_{gggg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(\frac{10}{3}\right)+\frac{1}{\epsilon^ {4}}\left(\frac{121}{9}\right)+\frac{1}{\epsilon^{3}}\left(\frac{1171}{18}- \frac{44}{9}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{31807}{108}-\frac{1067}{54} \pi^{2}-\frac{758}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{5025205}{3888}-\frac{94729}{972} \pi^{2}-\frac{9196}{27}\zeta_{3}+\frac{41}{45}\pi^{4}\right)\] \[+\frac{130080583}{23328}-\frac{869467}{1944}\pi^{2}-\frac{281887} {162}\zeta_{3}+\frac{36811}{9720}\pi^{4}\] \[+\frac{3415}{27}\pi^{2}\zeta_{3}-\frac{6596}{9}\zeta_{5}+\mathcal{ O}(\epsilon), \tag{101}\]
\[\mathcal{T}^{gg,(3)}_{q\bar{q}gg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(\frac{92}{27}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{2539}{81}\right)+\frac{1}{\epsilon^{3}}\left(\frac{17 3873}{972}-\frac{658}{81}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{1406861}{1458}-\frac{32485}{48 6}\pi^{2}-\frac{410}{3}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{176604815}{34992}-\frac{4580383}{ 11664}\pi^{2}-\frac{10718}{9}\zeta_{3}+\frac{2981}{648}\pi^{4}\right)\] \[+\frac{5457831455}{209952}-\frac{76313957}{34992}\pi^{2}-\frac{23 65255}{324}\zeta_{3}+\frac{144191}{4860}\pi^{4}\] \[+\frac{18689}{54}\pi^{2}\zeta_{3}-\frac{72307}{45}\zeta_{5}+ \mathcal{O}(\epsilon), \tag{102}\]
\[\mathcal{T}^{gg,(3)}_{q\bar{q}gg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(-\frac{77}{54}\right)+\frac{1}{ \epsilon^{4}}\left(-\frac{4559}{324}\right)+\frac{1}{\epsilon^{3}}\left(- \frac{181577}{1944}+\frac{787}{216}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{6463727}{11664}+\frac{44281} {1296}\pi^{2}+\frac{3775}{54}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{218284493}{69984}+\frac{1789471} {7776}\pi^{2}+\frac{214975}{324}\zeta_{3}-\frac{42967}{25920}\pi^{4}\right)\] \[-\frac{7149077831}{419904}+\frac{64782577}{46656}\pi^{2}+\frac{8 782471}{1944}\zeta_{3}-\frac{2218093}{155520}\pi^{4}\] \[-\frac{40945}{216}\pi^{2}\zeta_{3}+\frac{33079}{30}\zeta_{5}+ \mathcal{O}(\epsilon), \tag{103}\]
\[\mathcal{T}^{gg,(3)}_{q\bar{q}gg}\Big{|}_{N_{F}N^{2}}= +\frac{1}{\epsilon^{5}}\left(\frac{2}{9}\right)+\frac{1}{ \epsilon^{4}}\left(\frac{61}{27}\right)+\frac{1}{\epsilon^{3}}\left(\frac{53 87}{324}-\frac{31}{54}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{103885}{972}-\frac{1873}{324} \pi^{2}-\frac{118}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(\frac{3730075}{5832}-\frac{166547}{3888} \pi^{2}-\frac{6757}{54}\zeta_{3}+\frac{833}{6480}\pi^{4}\right)\] \[+\frac{7996690}{2187}-\frac{3227527}{11664}\pi^{2}-\frac{293857}{ 324}\zeta_{3}+\frac{69983}{38880}\pi^{4}\]
\[\left.\mathcal{T}^{gg,(3)}_{q\bar{q}q\bar{q}}\right|_{N_{F}N^{-2}}= +\frac{1}{\epsilon^{4}}\left(-\frac{5}{6}+\frac{2}{3}\zeta_{3} \right)+\frac{1}{\epsilon}\left(-\frac{223}{18}+\frac{95}{18}\zeta_{3}+\frac{ \pi^{4}}{18}\right)\] \[-\frac{12643}{108}+\frac{55}{24}\pi^{2}+\frac{3785}{108}\zeta_{3 }+\frac{511}{1080}\pi^{4}-\frac{11}{6}\pi^{2}\zeta_{3}+\frac{59}{3}\zeta_{5}+ \mathcal{O}(\epsilon),\] (A.89)
\[\mathcal{T}^{gg,(3)}_{q\bar{q}q\bar{q}^{\prime}\bar{q}^{\prime}} \Big{|}_{N^{2}_{P}}= +\frac{1}{\epsilon^{3}}\left(\frac{2}{27}\right)+\frac{1}{ \epsilon^{2}}\left(\frac{7}{27}\right)+\frac{1}{\epsilon}\left(-\frac{85}{486} -\frac{\pi^{2}}{54}\right)\] \[-\frac{28699}{2916}+\frac{7}{12}\pi^{2}-\frac{2}{81}\zeta_{3}+ \mathcal{O}(\epsilon),\] (A.90)
\[\mathcal{T}^{gg,(3)}_{q\bar{q}q\bar{q}^{\prime}\bar{q}^{\prime}} \Big{|}_{N^{2}_{P}N^{-1}}= +\frac{1}{\epsilon}\left(-\frac{5}{9}+\frac{4}{9}\zeta_{3} \right)-\frac{313}{54}+\frac{46}{27}\zeta_{3}+\frac{\pi^{4}}{27}+\mathcal{O}( \epsilon),\] (A.91)
\[\mathcal{T}^{gg,(3)}_{q\bar{q}q\bar{q}^{\prime}\bar{q}^{\prime}} \Big{|}_{N_{P}(N_{F}-1)N}= +\frac{1}{\epsilon^{4}}\left(-\frac{2}{9}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{64}{27}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{26 5}{18}+\frac{29}{54}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{39211}{486}+\frac{829}{162}\pi^{ 2}+\frac{86}{9}\zeta_{3}\right)\] \[-\frac{819503}{1944}+\frac{20695}{648}\pi^{2}+\frac{7640}{81} \zeta_{3}-\frac{325}{1296}\pi^{4}+\mathcal{O}(\epsilon),\] (A.92)
\[\mathcal{T}^{gg,(3)}_{q\bar{q}q^{\prime}\bar{q}^{\prime}} \Big{|}_{N_{P}(N_{F}-1)N^{-1}}= +\frac{1}{\epsilon^{4}}\left(\frac{1}{9}\right)+\frac{1}{\epsilon ^{3}}\left(\frac{31}{27}\right)+\frac{1}{\epsilon^{2}}\left(\frac{77}{9}- \frac{11}{36}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(\frac{54263}{972}-\frac{341}{108}\pi^{2 }-\frac{61}{9}\zeta_{3}\right)\] \[+\frac{989915}{2916}-\frac{2551}{108}\pi^{2}-\frac{1819}{27} \zeta_{3}+\frac{1153}{12960}\pi^{4}+\mathcal{O}(\epsilon),\] (A.93)
#### A.2.4 Five-particle final states
\[\mathcal{T}^{gg,(3)}_{ggggg}\Big{|}_{N^{3}}= +\frac{1}{\epsilon^{6}}\left(\frac{5}{2}\right)+\frac{1}{ \epsilon^{5}}\left(\frac{440}{27}\right)+\frac{1}{\epsilon^{4}}\left(\frac{16 909}{162}-\frac{53}{8}\pi^{2}\right)\] \[+\frac{1}{\epsilon^{3}}\left(\frac{99671}{162}-\frac{27995}{648} \pi^{2}-\frac{1213}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon^{2}}\left(\frac{10112047}{2916}-\frac{1090571 }{3888}\pi^{2}-\frac{31427}{36}\zeta_{3}+\frac{94121}{25920}\pi^{4}\right)\] \[+\frac{1}{\epsilon}\bigg{(}\frac{41548462}{2187}-\frac{38894531} {23328}\pi^{2}-\frac{3780355}{648}\zeta_{3}\] \[\qquad\qquad+\frac{105677}{4320}\pi^{4}+\frac{19835}{54}\pi^{2} \zeta_{3}-\frac{15767}{9}\zeta_{5}\bigg{)}\] \[+\frac{1190169835}{11664}-\frac{1320846413}{139968}\pi^{2}-\frac{ 137338601}{3888}\zeta_{3}+\frac{480287}{3240}\pi^{4}\] \[+\frac{1023077}{432}\pi^{2}\zeta_{3}-\frac{388685}{36}\zeta_{5}- \frac{5079149}{6531840}\pi^{6}+\frac{47611}{12}\zeta_{3}^{2}+\mathcal{O}( \epsilon),\] (A.94)
\[\mathcal{T}^{gg,(3)}_{q\bar{q}ggg}\Big{|}_{N_{F}N}= +\frac{1}{\epsilon^{5}}\left(-\frac{113}{54}\right)+\frac{1}{\epsilon^{4}}\left(-\frac{623}{36}\right)+\frac{1}{\epsilon^{3}}\left(-\frac{230443}{1944}+\frac{3569}{648}\pi^{2}\right)\]
\[\mathcal{T}^{gg,(3)}_{q\bar{q}q\bar{q}g}\Big{|}_{N_{F}}= +\frac{1}{\epsilon^{2}}\left(-\frac{73}{72}+\frac{\pi^{2}}{36}+ \frac{5}{9}\zeta_{3}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{3563}{216}+\frac{83}{216}\pi^{2} +\frac{269}{54}\zeta_{3}+\frac{103}{1620}\pi^{4}\right)\]
\[\left.\mathcal{T}^{gg,(3)}_{q\bar{q}q^{\prime}\bar{q}^{\prime}g} \right|_{N_{F}(N_{F}-1)N^{-1}}= +\frac{1}{\epsilon^{4}}\left(-\frac{7}{54}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{443}{324}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{ 2257}{216}+\frac{235}{648}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{812683}{11664}+\frac{14903}{3888 }\pi^{2}+\frac{455}{54}\zeta_{3}\right)\] \[-\frac{30173765}{69984}+\frac{76093}{2592}\pi^{2}+\frac{28471}{32 4}\zeta_{3}-\frac{9487}{77760}\pi^{4}+\mathcal{O}(\epsilon),\] (A.100)
\[\left.\mathcal{T}^{gg,(3)}_{q\bar{q}q^{\prime}\bar{q}g} \right|_{N_{F}(N_{F}-1)N^{-1}}= +\frac{1}{\epsilon^{4}}\left(-\frac{7}{54}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{443}{324}\right)\] \[+\frac{1}{\epsilon^{2}}\left(-\frac{2257}{216}+\frac{235}{648} \pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{812683}{11664}+\frac{14903}{3888 }\pi^{2}+\frac{455}{54}\zeta_{3}\right)\] \[-\frac{30173765}{69984}+\frac{76093}{2592}\pi^{2}+\frac{28471}{32 4}\zeta_{3}-\frac{9487}{77760}\pi^{4}+\mathcal{O}(\epsilon),\] (A.101)
\[\left.\mathcal{T}^{gg,(3)}_{q\bar{q}q^{\prime}\bar{q}g} \right|_{N_{F}(N_{F}-1)N^{-1}}= +\frac{1}{\epsilon^{4}}\left(\frac{7}{54}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{443}{324}\right)+\frac{1}{\epsilon^{2}}\left(-\frac {2257}{216}+\frac{235}{648}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{812683}{11664}+\frac{14903}{3888 }\pi^{2}+\frac{455}{54}\zeta_{3}\right)\] \[-\frac{30173765}{69984}+\frac{76093}{2592}\pi^{2}+\frac{28471}{32 4}\zeta_{3}-\frac{9487}{77760}\pi^{4}+\mathcal{O}(\epsilon),\] (A.102)
## Appendix B Lower order results
### Higgs to bottom quarks
#### B.1.1 NLO
\[\left.\mathcal{T}^{q\bar{q},(1)}_{q\bar{q}}\right|_{N^{0}}= -\frac{1}{\epsilon^{2}}+\frac{1}{\epsilon}\left(-\frac{3}{2} \right)+\left(-1+\frac{7}{12}\pi^{2}\right)+\epsilon\left(-2+\frac{7}{3}\zeta _{3}\right)\]
\[\mathcal{T}_{q\bar{q}}^{q\bar{q},(2,[1\times 1])}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{4}}\left(-\frac{1}{4}\right)+\frac{1}{\epsilon ^{3}}\left(-\frac{3}{4}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{17}{16}+ \frac{1}{24}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{7}{4}+\frac{7}{16}\pi^{2}+\frac{ 7}{6}\zeta_{3}\right)+\left(-\frac{15}{4}+\frac{1}{12}\pi^{2}+\frac{7}{4}\zeta _{3}+\frac{7}{480}\pi^{4}\right)\] \[+\epsilon\left(-8+\frac{29}{48}\pi^{2}+\frac{7}{3}\zeta_{3}-\frac {73}{1920}\pi^{4}-\frac{7}{36}\pi^{2}\zeta_{3}+\frac{31}{10}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}-17+\frac{5}{4}\pi^{2}+\frac{77}{12}\zeta_{3 }+\frac{7}{240}\pi^{4}\] \[\qquad\qquad-\frac{49}{48}\pi^{2}\zeta_{3}+\frac{93}{20}\zeta_{5 }+\frac{31}{12096}\pi^{6}-\frac{49}{18}\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}),\] (B.3)
\[\mathcal{T}_{q\bar{q}}^{q\bar{q},(2,[1\times 1])}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{4}}\left(\frac{1}{4}\right)+\frac{1}{\epsilon ^{3}}\left(\frac{3}{4}\right)+\frac{1}{\epsilon^{2}}\left(\frac{17}{16}-\frac {1}{24}\pi^{2}\right)+\frac{1}{\epsilon}\left(\frac{7}{4}-\frac{7}{16}\pi^{2} -\frac{7}{6}\zeta_{3}\right)\] \[+\left(\frac{15}{4}-\frac{1}{12}\pi^{2}-\frac{7}{4}\zeta_{3}-\frac {7}{480}\pi^{4}\right)\] \[+\epsilon\left(8-\frac{29}{48}\pi^{2}-\frac{7}{3}\zeta_{3}+\frac {73}{1920}\pi^{4}+\frac{7}{36}\pi^{2}\zeta_{3}-\frac{31}{10}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}+17-\frac{5}{4}\pi^{2}-\frac{77}{12}\zeta_{3 }-\frac{7}{240}\pi^{4}\] \[\qquad\qquad+\frac{49}{48}\pi^{2}\zeta_{3}-\frac{93}{20}\zeta_{5 }-\frac{31}{12096}\pi^{6}+\frac{49}{18}\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}),\] (B.5)
\[\mathcal{T}_{q\bar{q}}^{q\bar{q},(2,[2\times 0])}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{4}}\left(-\frac{1}{4}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{3}{4}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{17} {16}+\frac{13}{24}\pi^{2}\right)\] \[+\epsilon\left(-8+\frac{29}{48}\pi^{2}+\frac{7}{3}\zeta_{3}-\frac {73}{1920}\pi^{4}-\frac{7}{36}\pi^{2}\zeta_{3}+\frac{31}{10}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}-17+\frac{5}{4}\pi^{2}+\frac{77}{12}\zeta_{3 }+\frac{7}{240}\pi^{4}\] \[\qquad\qquad-\frac{49}{48}\pi^{2}\zeta_{3}+\frac{93}{20}\zeta_{5 }+\frac{31}{12096}\pi^{6}-\frac{49}{18}\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}),\] (B.6)
\[\mathcal{T}_{q\bar{q}}^{q\bar{q},(2,[1\times 1])}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{4}}\left(-\frac{1}{4}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{3}{4}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{17} {16}+\frac{13}{24}\pi^{2}\right)\] \[+\epsilon\left(-8+\frac{29}{48}\pi^{2}+\frac{7}{3}\zeta_{3}-\frac {73}{1920}\pi^{4}-\frac{7}{36}\pi^{2}\zeta_{3}+\frac{31}{10}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}-17+\frac{5}{4}\pi^{2}+\frac{77}{12}\zeta_{3 }+\frac{7}{240}\pi^{4}\] \[\qquad\qquad-\frac{49}{48}\pi^{2}\zeta_{3}+\frac{93}{20}\zeta_{5 }+\frac{31}{12096}\pi^{6}-\frac{49}{18}\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}),\] (B.7)
\[\mathcal{T}_{q\bar{q}g}^{q\bar{q},(2,2[2\times 0])}\Big{|}_{N_{F}}= +\frac{1}{\epsilon^{4}}\left(\frac{1}{4}\right)+\frac{1}{\epsilon^{ 3}}\left(\frac{17}{8}\right)+\frac{1}{\epsilon^{2}}\left(\frac{217}{144}- \frac{1}{2}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{491}{864}-\frac{13}{24}\pi^{2}+ \frac{7}{12}\zeta_{3}\right)+\left(\frac{455}{162}+\frac{377}{432}\pi^{2}- \frac{47}{36}\zeta_{3}+\frac{263}{1440}\pi^{4}\right)\] \[+\epsilon\left(\frac{2557}{486}+\frac{727}{1296}\pi^{2}+\frac{110 5}{108}\zeta_{3}-\frac{1169}{5760}\pi^{4}-\frac{13}{8}\pi^{2}\zeta_{3}+\frac{1 63}{20}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}+\frac{15239}{1458}+\frac{2869}{1944}\pi^{2 }+\frac{5171}{324}\zeta_{3}-\frac{3889}{25920}\pi^{4}\] \[\qquad\qquad\qquad-\frac{2797}{432}\pi^{2}\zeta_{3}+\frac{101}{12 }\zeta_{5}-\frac{631}{15120}\pi^{6}-\frac{403}{36}\zeta_{3}^{2}\Bigg{)}+ \mathcal{O}(\epsilon^{3}),\] (B.7)
\[\mathcal{T}_{q\bar{q}g}^{q\bar{q},(2,2[2\times 0])}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon^{4}}+\frac{3}{\epsilon^{3}}+\frac{1}{\epsilon^ {2}}\left(\frac{73}{8}-\frac{4}{3}\pi^{2}\right)+\frac{1}{\epsilon}\left( \frac{131}{4}-\frac{23}{8}\pi^{2}-\frac{53}{3}\zeta_{3}\right)\] \[+\frac{1879}{16}-\frac{85}{8}\pi^{2}-39\zeta_{3}+\frac{19}{72}\pi ^{4}\Bigg{)}\] \[+\epsilon\left(\frac{13763}{32}-\frac{1261}{32}\pi^{2}-\frac{863} {6}\zeta_{3}+\frac{103}{320}\pi^{4}+\frac{493}{18}\pi^{2}\zeta_{3}-\frac{897 }{5}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}+\frac{102725}{64}-\frac{9461}{64}\pi^{2}- \frac{13385}{24}\zeta_{3}+\frac{9979}{5760}\pi^{4}\] \[\qquad\qquad\qquad+\frac{387}{8}\pi^{2}\zeta_{3}-\frac{1707}{5} \zeta_{5}-\frac{827}{11340}\pi^{6}+\frac{1931}{9}\zeta_{3}^{2}\Bigg{)}+ \mathcal{O}(\epsilon^{3}),\] (B.8)
\[\mathcal{T}_{q\bar{q}g}^{q\bar{q},(2)}\Big{|}_{N_{F}}= +\frac{1}{\epsilon^{3}}\left(\frac{1}{3}\right)+\frac{1}{\epsilon^{2}}\left(\frac{1}{2}\right)+\frac{1}{\epsilon}\left(\frac{7}{4}-\frac{7}{36}\pi^{2}\right)+\left(\frac{127}{24}-\frac{7}{24}\pi^{2}-\frac{25}{9}\zeta_{3}\right)\]
\[\mathcal{T}^{q\bar{q}\bar{q}g}_{q\bar{q}gg}\Big{|}_{N^{-1}}= +\frac{1}{\epsilon}\left(\frac{13}{16}-\frac{1}{8}\pi^{2}+\frac{1} {2}\zeta_{3}\right)+\left(\frac{253}{32}-\frac{3}{8}\pi^{2}-6\zeta_{3}+\frac{2 }{45}\pi^{4}\right)\]
\[+\epsilon\left(\frac{3331}{64}-\frac{221}{96}\pi^{2}-17\zeta_{3}- \frac{1}{6}\pi^{4}-\frac{11}{12}\pi^{2}\zeta_{3}+22\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}+\frac{37001}{128}-\frac{2909}{192}\pi^{2}- \frac{803}{12}\zeta_{3}-\frac{37}{90}\pi^{4}\] \[\qquad\qquad+\frac{59}{6}\pi^{2}\zeta_{3}-\frac{489}{4}\zeta_{5}+ \frac{277}{11340}\pi^{6}-\frac{95}{6}\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}), \tag{113}\]
\[\mathcal{T}_{q\bar{q}Q\bar{Q}}^{q\bar{q}(2)}\Big{|}_{(N_{F}-1)}= +\frac{1}{\epsilon^{3}}\left(-\frac{1}{12}\right)+\frac{1}{ \epsilon^{2}}\left(-\frac{7}{18}\right)+\frac{1}{\epsilon}\left(-\frac{443}{ 216}+\frac{11}{72}\pi^{2}\right)\] \[+\left(-\frac{12923}{1296}+\frac{17}{27}\pi^{2}+\frac{67}{18} \zeta_{3}\right)\] \[+\epsilon\left(-\frac{358115}{7776}+\frac{4171}{1296}\pi^{2}+ \frac{695}{54}\zeta_{3}+\frac{137}{4320}\pi^{4}\right)\] \[+\epsilon^{2}\Bigg{(}-\frac{9579035}{46656}+\frac{119635}{7776} \pi^{2}+\frac{5051}{81}\zeta_{3}-\frac{49}{6480}\pi^{4}\] \[\qquad\qquad-\frac{629}{108}\pi^{2}\zeta_{3}+\frac{1651}{30}\zeta _{5}\Bigg{)}+\mathcal{O}(\epsilon^{3}) \tag{114}\]
### Higgs to gluons
#### B.2.1 NLO
\[\mathcal{T}_{gg}^{gg,(1)}\Big{|}_{N_{F}}= +\frac{1}{\epsilon}\left(\frac{2}{3}\right), \tag{115}\]
\[\mathcal{T}_{gg}^{gg,(1)}\Big{|}_{N}= +\frac{1}{\epsilon^{2}}\left(-2\right)+\frac{1}{\epsilon}\left(- \frac{11}{3}\right)+\left(+\frac{7}{6}\pi^{2}\right)+\epsilon\left(-2+\frac{1 4}{3}\zeta_{3}\right)\] \[+\epsilon^{2}\left(-6-\frac{73}{720}\pi^{4}\right)+\epsilon^{3} \left(-14+\frac{7}{6}\pi^{2}-\frac{49}{18}\pi^{2}\zeta_{3}+\frac{62}{5}\zeta_ {5}\right)\] \[+\epsilon^{4}\left(-30+\frac{7}{2}\pi^{2}+\frac{14}{3}\zeta_{3}- \frac{437}{60480}\pi^{6}-\frac{49}{9}\zeta_{3}^{2}\right)+\mathcal{O}( \epsilon^{5}), \tag{116}\]
\[\mathcal{T}_{ggg}^{gg,(1)}\Big{|}_{N}= +\frac{1}{\epsilon^{2}}\left(2\right)+\frac{1}{\epsilon}\left( \frac{11}{3}\right)+\left(\frac{73}{6}-\frac{7}{6}\pi^{2}\right)+\epsilon \left(\frac{451}{12}-\frac{77}{36}\pi^{2}-\frac{50}{3}\zeta_{3}\right)\] \[+\epsilon^{2}\left(\frac{2729}{24}-\frac{511}{72}\pi^{2}-\frac{27 5}{9}\zeta_{3}-\frac{71}{720}\pi^{4}\right)\] \[+\epsilon^{3}\left(\frac{16411}{48}-\frac{3157}{144}\pi^{2}- \frac{1825}{18}\zeta_{3}-\frac{781}{4320}\pi^{4}+\frac{175}{18}\pi^{2}\zeta_{3} -\frac{482}{5}\zeta_{5}\right)\] \[+\epsilon^{4}\Bigg{(}+\frac{98513}{96}-\frac{19103}{288}\pi^{2}- \frac{11275}{36}\zeta_{3}-\frac{5183}{8640}\pi^{4}\] \[\qquad\qquad+\frac{1925}{108}\pi^{2}\zeta_{3}-\frac{2651}{15}\zeta _{5}-\frac{4027}{60480}\pi^{6}+\frac{625}{9}\zeta_{3}^{2}\Bigg{)}+\mathcal{O} (\epsilon^{5}), \tag{117}\]
\[\mathcal{T}^{gg,(2,[2\times 0])}_{gg}\Big{|}_{N_{F}N}= +\frac{1}{\epsilon}\left(-\frac{2}{3}\right)+\left(-\frac{7}{3} \right)+\epsilon\left(-\frac{15}{2}+\frac{7}{18}\pi^{2}\right)\] \[+\epsilon^{2}\left(-\frac{93}{4}+\frac{49}{36}\pi^{2}+\frac{50} {9}\zeta_{3}\right)+\epsilon^{3}\left(-\frac{567}{8}+\frac{35}{8}\pi^{2}+ \frac{175}{9}\zeta_{3}+\frac{71}{2160}\pi^{4}\right)\] \[+\epsilon^{4}\left(-\frac{3429}{16}+\frac{217}{16}\pi^{2}+\frac{ 125}{2}\zeta_{3}+\frac{497}{4320}\pi^{4}-\frac{175}{54}\pi^{2}\zeta_{3}+\frac{ 482}{15}\zeta_{5}\right)\] \[+\mathcal{O}(\epsilon^{5}), \tag{103}\]
#### B.2.2 NNLO
\[\mathcal{T}^{gg,(2,[1\times 1])}_{gg}\Big{|}_{N_{F}^{2}}= +\frac{1}{\epsilon^{2}}\left(\frac{1}{9}\right), \tag{104}\]
\[\mathcal{T}^{gg,(2,[1\times 1])}_{gg}\Big{|}_{N_{F}N}= +\frac{1}{\epsilon^{3}}\left(-\frac{2}{3}\right)+\frac{1}{ \epsilon^{2}}\left(-\frac{11}{9}\right)+\frac{1}{\epsilon}\left(+\frac{7}{18 }\pi^{2}\right)+\left(-\frac{2}{3}+\frac{14}{9}\zeta_{3}\right)\] \[+\epsilon\left(-2-\frac{73}{2160}\pi^{4}\right)+\epsilon^{2} \left(-\frac{14}{3}+\frac{7}{18}\pi^{2}-\frac{49}{54}\pi^{2}\zeta_{3}+\frac{6 2}{15}\zeta_{5}\right)\] \[+\mathcal{O}(\epsilon^{3}), \tag{105}\]
\[\mathcal{T}^{gg,(2,[1\times 1])}_{gg}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{4}}\left(1\right)+\frac{1}{\epsilon^{3}} \left(\frac{11}{3}\right)+\frac{1}{\epsilon^{2}}\left(\frac{121}{36}-\frac{1 }{6}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(2-\frac{77}{36}\pi^{2}-\frac{14}{3}\zeta _{3}\right)+\left(\frac{29}{3}-\frac{77}{9}\zeta_{3}-\frac{7}{120}\pi^{4}\right)\] \[+\epsilon\left(25-\frac{1}{3}\pi^{2}+\frac{803}{4320}\pi^{4}+ \frac{7}{9}\pi^{2}\zeta_{3}-\frac{62}{5}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}+\frac{170}{3}-\frac{113}{36}\pi^{2}-\frac{2 8}{3}\zeta_{3}\] \[\qquad\qquad+\frac{539}{108}\pi^{2}\zeta_{3}-\frac{341}{15}\zeta _{5}-\frac{31}{3024}\pi^{6}+\frac{98}{9}\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}), \tag{106}\]
\[\mathcal{T}^{gg,(2,[2\times 0])}_{gg}\Big{|}_{N_{F}N^{-1}}= +\frac{1}{\epsilon}\left(-\frac{1}{4}\right)+\left(\frac{67}{24 }-2\zeta_{3}\right)+\epsilon\left(\frac{2027}{144}-\frac{43}{72}\pi^{2}-\frac {23}{3}\zeta_{3}-\frac{1}{27}\pi^{4}\right)\] \[+\epsilon^{2}\left(\frac{47491}{864}-\frac{2621}{432}\pi^{2}- \frac{281}{9}\zeta_{3}-\frac{23}{162}\pi^{4}+\frac{41}{9}\pi^{2}\zeta_{3}-8 \zeta_{5}\right)\] \[+\mathcal{O}(\epsilon^{3}), \tag{107}\]
\[\mathcal{T}^{gg,(2,[2\times 0])}_{gg}\Big{|}_{N_{F}^{2}}= +\frac{1}{\epsilon^{2}}\left(\frac{2}{9}\right), \tag{108}\]
\[\mathcal{T}^{gg,(2,[2\times 0])}_{gg}\Big{|}_{N_{F}N}= +\frac{1}{\epsilon^{3}}\left(-\frac{7}{6}\right)+\frac{1}{ \epsilon^{2}}\left(-\frac{13}{6}\right)+\frac{1}{\epsilon}\left(\frac{155}{108 }+\frac{13}{36}\pi^{2}\right)\]
\[+\epsilon^{2}\Bigg{(}-\frac{3663205}{23328}+\frac{21911}{1296}\pi^{2}+ \frac{548}{81}\zeta_{3}+\frac{527}{1296}\pi^{4}\] \[\qquad\qquad+\frac{175}{54}\pi^{2}\zeta_{3}-\frac{9}{5}\zeta_{5} \Bigg{)}+\mathcal{O}(\epsilon^{3}),\] (B.24)
\[\mathcal{T}^{gg,(2,[2\times 0])}_{gg}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{4}}\left(1\right)+\frac{1}{\epsilon^{3}} \left(\frac{77}{12}\right)+\frac{1}{\epsilon^{2}}\left(\frac{175}{36}-\frac{2 5}{12}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{119}{27}-\frac{143}{72}\pi^{2}- \frac{25}{6}\zeta_{3}\right)+\left(\frac{8237}{324}+\frac{335}{72}\pi^{2}- \frac{33}{2}\zeta_{3}+\frac{31}{40}\pi^{4}\right)\] \[+\epsilon\left(\frac{200969}{1944}-\frac{83}{54}\pi^{2}-\frac{11 39}{54}\zeta_{3}-\frac{5071}{4320}\pi^{4}+\frac{323}{36}\pi^{2}\zeta_{3}+ \frac{71}{10}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}+\frac{4082945}{11664}-\frac{25735}{648}\pi ^{2}-\frac{13109}{81}\zeta_{3}-\frac{15343}{4320}\pi^{4}\] \[\qquad\qquad+\frac{781}{108}\pi^{2}\zeta_{3}-\frac{341}{10}\zeta _{5}+\frac{491}{10080}\pi^{6}+\frac{901}{18}\zeta_{3}^{2}\Bigg{)}+\mathcal{O} (\epsilon^{3}),\] (B.25)
\[\mathcal{T}^{gg,(2)}_{ggg}\Big{|}_{NFN}= +\frac{1}{\epsilon^{3}}\left(2\right)+\frac{1}{\epsilon^{2}} \left(\frac{11}{3}\right)+\frac{1}{\epsilon}\left(\frac{37}{3}-\frac{7}{6} \pi^{2}\right)+\left(\frac{467}{12}-\frac{77}{36}\pi^{2}-\frac{50}{3}\zeta_{3}\right)\] \[+\epsilon\left(\frac{26149}{216}-\frac{529}{72}\pi^{2}-\frac{275} {9}\zeta_{3}-\frac{71}{720}\pi^{4}\right)\] \[+\epsilon^{2}\left(\frac{162827}{432}-\frac{3445}{144}\pi^{2}- \frac{629}{6}\zeta_{3}-\frac{781}{4320}\pi^{4}+\frac{175}{18}\pi^{2}\zeta_{3} -\frac{482}{5}\zeta_{5}\right)\] \[+\mathcal{O}(\epsilon^{3}),\] (B.26)
\[\mathcal{T}^{gg,(2)}_{ggg}\Big{|}_{N^{2}}= +\frac{1}{\epsilon^{4}}\left(-\frac{9}{2}\right)+\frac{1}{ \epsilon^{3}}\left(-\frac{121}{6}\right)+\frac{1}{\epsilon^{2}}\left(-\frac{1 70}{3}+\frac{71}{12}\pi^{2}\right)\] \[+\frac{1}{\epsilon}\left(-\frac{23195}{108}+\frac{341}{18}\pi^{2 }+72\zeta_{3}\right)\] \[+\left(-\frac{173249}{216}+\frac{13831}{216}\pi^{2}+\frac{2266}{9 }\zeta_{3}-\frac{199}{144}\pi^{4}\right)\] \[+\epsilon\left(-\frac{11793239}{3888}+\frac{332557}{1296}\pi^{2}+ \frac{15799}{18}\zeta_{3}-\frac{1991}{864}\pi^{4}-\frac{940}{9}\pi^{2}\zeta_{3 }+\frac{3224}{5}\zeta_{5}\right)\] \[+\epsilon^{2}\Bigg{(}-\frac{90432685}{7776}+\frac{7855165}{7776} \pi^{2}+\frac{1168555}{324}\zeta_{3}-\frac{53491}{5184}\pi^{4}\] \[\qquad\qquad-\frac{31361}{108}\pi^{2}\zeta_{3}+\frac{10186}{5} \zeta_{5}+\frac{29303}{90720}\pi^{6}-728\zeta_{3}^{2}\Bigg{)}+\mathcal{O}( \epsilon^{3}),\] (B.27)
\[\mathcal{T}^{gg,(2)}_{q\bar{q}g}\Big{|}_{NFN^{-1}}= +\frac{1}{\epsilon^{3}}\left(-\frac{1}{3}\right)+\frac{1}{ \epsilon^{2}}\left(-\frac{41}{18}\right)+\frac{1}{\epsilon}\left(-\frac{325} {27}+\frac{1}{2}\pi^{2}\right)\]
\[\mathcal{T}^{gg,(2)}_{q\bar{q}gg}\Big{|}_{N_{F}N^{-1}}= +\frac{1}{\epsilon^{3}}\left(\frac{1}{3}\right)+\frac{1}{\epsilon^ {2}}\left(\frac{41}{18}\right)+\frac{1}{\epsilon}\left(\frac{1327}{108}-\frac{ 1}{2}\pi^{2}\right)+\left(\frac{4864}{81}-\frac{41}{12}\pi^{2}-\frac{86}{9} \zeta_{3}\right)\] \[+\epsilon\left(\frac{134897}{486}-\frac{1331}{72}\pi^{2}-\frac{17 09}{27}\zeta_{3}+\frac{23}{1080}\pi^{4}\right)\]
\[+\epsilon^{2}\Bigg{(}+\frac{898435}{729}-\frac{4865}{54}\pi^{2}- \frac{27545}{81}\zeta_{3}+\frac{419}{1296}\pi^{4}\] \[\qquad\qquad\qquad+\frac{131}{9}\pi^{2}\zeta_{3}-\frac{1672}{15} \zeta_{5}\Bigg{)}+\mathcal{O}(\epsilon^{3}), \tag{103}\]
\[\mathcal{T}^{gg,(2)}_{q\bar{q}gg}\Big{|}_{N_{F}N}= +\frac{1}{\epsilon^{3}}\left(-\frac{3}{2}\right)+\frac{1}{ \epsilon^{2}}\left(-\frac{155}{18}\right)+\frac{1}{\epsilon}\left(-\frac{523 }{12}+\frac{79}{36}\pi^{2}\right)\] \[+\left(-\frac{16579}{81}+\frac{1385}{108}\pi^{2}+37\zeta_{3}\right)\] \[+\epsilon\left(-\frac{74282}{81}+\frac{42251}{648}\pi^{2}+\frac{ 6065}{27}\zeta_{3}-\frac{1007}{2160}\pi^{4}\right)\] \[+\epsilon^{2}\Bigg{(}-\frac{5799143}{1458}+\frac{74429}{243}\pi ^{2}+\frac{10376}{9}\zeta_{3}-\frac{2921}{1296}\pi^{4}\] \[\qquad\qquad\qquad-\frac{2971}{54}\pi^{2}\zeta_{3}+\frac{1503}{5} \zeta_{5}\Bigg{)}+\mathcal{O}(\epsilon^{3}), \tag{104}\]
\[\mathcal{T}^{gg,(2)}_{q\bar{q}q\bar{q}}\Big{|}_{N_{F}N^{-1}}= +\left(-\frac{5}{12}+\frac{1}{3}\zeta_{3}\right)+\epsilon\left( -\frac{313}{72}+\frac{23}{18}\zeta_{3}+\frac{1}{36}\pi^{4}\right)\] \[+\epsilon^{2}\left(-\frac{12521}{432}+\frac{5}{8}\pi^{2}+\frac{1 27}{27}\zeta_{3}+\frac{23}{216}\pi^{4}-\frac{1}{2}\pi^{2}\zeta_{3}+12\zeta_{5 }\right)\] \[+\mathcal{O}(\epsilon^{3}), \tag{105}\]
\[\mathcal{T}^{gg,(2)}_{q\bar{q}Q\bar{Q}}\Big{|}_{N_{F}(N_{F}-1)}= +\frac{1}{\epsilon^{2}}\left(\frac{1}{9}\right)+\frac{1}{ \epsilon}\left(\frac{7}{9}\right)+\left(\frac{677}{162}-\frac{1}{6}\pi^{2} \right)+\epsilon\left(\frac{241}{12}-\frac{7}{6}\pi^{2}-\frac{80}{27}\zeta_{3}\right)\] \[+\epsilon^{2}\left(\frac{529217}{5832}-\frac{677}{108}\pi^{2}- \frac{560}{27}\zeta_{3}+\frac{29}{1080}\pi^{4}\right)+\mathcal{O}(\epsilon^{ 3}) \tag{106}\] |
2307.08181 | Nematic spin correlations pervading the phase diagram of
FeSe$_{1-x}$S$_{x}$ | We use resonant inelastic X-ray scattering (RIXS) at the Fe-L$_3$ edge to
study the spin excitations of uniaxial-strained and unstrained
FeSe$_{1-x}$S$_{x}$ ($0\leq x\leq0.21$) samples. The measurements on unstrained
samples reveal dispersive spin excitations in all doping levels, which show
only minor doping dependence in energy dispersion, lifetime, and intensity,
indicating that high-energy spin excitations are only marginally affected by
sulfur doping. RIXS measurements on uniaxial-strained samples reveal that the
high-energy spin-excitation anisotropy observed previously in FeSe is also
present in the doping range $0< x\leq0.21$ of FeSe$_{1-x}$S$_{x}$. The
spin-excitation anisotropy persists to a high temperature up to $T>200$ K in
$x=0.18$ and reaches a maximum around the nematic quantum critical doping
($x_c\approx0.17$). Since the spin-excitation anisotropy directly reflects the
existence of nematic spin correlations, our results indicate that high-energy
nematic spin correlations pervade the regime of nematicity in the phase diagram
and are enhanced by the nematic quantum criticality. These results emphasize
the essential role of spin fluctuations in driving electronic nematicity and
open the door for uniaxial strain tuning of spin excitations in quantum
materials hosting strong magnetoelastic coupling and electronic nematicity. | Ruixian Liu, Wenliang Zhang, Yuan Wei, Zhen Tao, Teguh C. Asmara, Yi Li, Vladimir N. Strocov, Rong Yu, Qimiao Si, Thorsten Schmitt, Xingye Lu | 2023-07-17T00:55:39Z | http://arxiv.org/abs/2307.08181v1 | # Nematic spin correlations pervading the phase diagram of FeSe\({}_{1-x}\)S\({}_{x}\)
###### Abstract
We use resonant inelastic X-ray scattering (RIXS) at the Fe-L\({}_{3}\) edge to study the spin excitations of uniaxial-strained and unstrained FeSe\({}_{1-x}\)S\({}_{x}\) (\(0\leq x\leq 0.21\)) samples. The measurements on unstrained samples reveal dispersive spin excitations in all doping levels, which show only minor doping dependence in energy dispersion, lifetime, and intensity, indicating that high-energy spin excitations are only marginally affected by sulfur doping. RIXS measurements on uniaxial-strained samples reveal that the high-energy spin-excitation anisotropy observed previously in FeSe is also present in the doping range \(0<x\leq 0.21\) of FeSe\({}_{1-x}\)S\({}_{x}\). The spin-excitation anisotropy persists to a high temperature up to \(T>200\) K in \(x=0.18\) and reaches a maximum around the nematic quantum critical doping (\(x_{c}\approx 0.17\)). Since the spin-excitation anisotropy directly reflects the existence of nematic spin correlations, our results indicate that high-energy nematic spin correlations pervade the regime of nematicity in the phase diagram and are enhanced by the nematic quantum criticality. These results emphasize the essential role of spin fluctuations in driving electronic nematicity and open the door for uniaxial strain tuning of spin excitations in quantum materials hosting strong magnetoelastic coupling and electronic nematicity.
Nematic order in quantum materials refers to an electronic state characterized by broken rotational symmetry and preserved translational symmetry [1]. In iron-based superconductors (FeSCs), nematic order manifests as strong \(C_{2}\) symmetric electronic anisotropies in the paramagnetic orthorhombic state with a small orthorhombic lattice distortion \(\delta=(a-b)/(a+b)\)[2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. Under uniaxial strain (\(\varepsilon\)) along the \(a/b\) axis, such anisotropy can persist to a temperature range beyond the nematic transition, revealing the fluctuating regime in the tetragonal state [7; 8; 9; 10; 11; 12]. Nematic order/fluctuation has been demonstrated to be a universal feature of FeSCs [4; 5; 10]. Since electronic nematicity is essential for determining the exotic electronic properties of FeSCs and driving other emergent orders therein, it has attracted tremendous research interest in the past decade [2; 3; 4; 5; 6].
FeSe\({}_{1-x}\)S\({}_{x}\), hosting exotic electronic properties and a pristine nematic transition without subsequent magnetic transition, has been a focus for studying electronic nematicity and its correlation with other intertwined order/fluctuations [4; 6; 13; 14]. Figure 1(a) illustrates the compositional phase diagram of FeSe\({}_{1-x}\)S\({}_{x}\)[6]. The parent compound FeSe exhibits a tetragonal-to-orthorhombic structural (nematic) transition at \(T_{s}\approx 90\) K and a superconducting transition at \(T_{c}\approx 8\) K [15; 16]. The nematic state at \(T<T_{s}\) is a quantum-disordered magnetic state with intense antiferromagnetic (AF) spin excitations as revealed by neutron scattering and RIXS studies of twinned FeSe [17; 18; 19; 20; 21; 22]. Furthermore, our previous RIXS study on uniaxial-strain detwinned FeSe revealed underdamped spin excitations with strong anisotropy between the excitations along \([H,0]\) and \([0,K]\) directions, which persists to a temperature slightly above \(T_{s}\) under moderate uniaxial strain and has been suggested to be driven by spin nematicity [23]. With increasing sulfur doping, the nematic transition is gradually suppressed to \(T_{s}=0\) at the putative nematic quantum critical point (NQCP) (\(x_{c}\approx 0.17\)), while \(T_{c}\) increases slightly from \(T_{c}\approx 8\) K for \(x=0\) to \(T_{c}\approx 10\) K at \(x\approx 0.08\) and decrease to \(T_{c}\approx 5\) K across the NQCP [6]. Beyond the nematic ordering region at \(T\lesssim T_{s}\) and \(x\lesssim x_{c}\), the nematic susceptibility of FeSe\({}_{1-x}\)S\({}_{x}\) derived from elastoresistance measurements revealed a widely spread nematic fluctuating regime in the tetragonal phase ( \(T\gtrsim T_{s}\) and \(x\gtrsim x_{c}\)) [24]. The NQCP leads to maximized nematic susceptibility [24], and induces dramatic changes in various electronic properties [4; 25; 26; 27]. However, it is unclear how the AF spin excitations evolve across the nematic ordering phase boundary and the NQCP. In particular, on sulfur doping, as the nematic order weakens and disappears at the NQCP, it remains to be explored experimentally to what extent the spin-excitation anisotropy (nematic spin correlations) will pervade the phase diagram and be affected by the NQCP.
To address these issues, we use Fe-L\({}_{3}\) RIXS to measure the spin excitations of unstrained and uniaxial-strained FeSe\({}_{1-x}\)S\({}_{x}\) (\(x=0-0.21\)) across the NQCP (Fig. 1(a)) [28; 29; 30; 31]. To explore the nematic spin correlations, one needs to apply uniaxial strain along \(a/b\) axis of FeSe\({}_{1-x}\)S\({}_{x}\) and measure the possible spin-excitation anisotropy [23]. In this work, we use a uniaxial strain device based on differential thermal expansions of invar alloy and aluminum to apply uniaxial strain on FeSe\({}_{1-x}\)S\({}_{x}\) (Fig. 1(b)) [32; 33]. While the measurements on unstrained samples reveal persistent spin excitations with minor doping dependence (Figs. 2 and 3), the measurements on uniaxial-strained samples reveal an enhancement of the spin-excitation anisotropy near the NQCP. Moreover, the prominent spin-excitation anisotropy persists in high doping
levels (\(x=0.21\)) and temperature (\(T>100\) K in \(x=0.11\) and \(T>200\) K in \(x=0.18\)) far beyond the nematic ordering region, demonstrating that the high-energy nematic spin correlations pervade a wide doping and temperature range in the phase diagram of FeSe\({}_{1-x}\)S\({}_{x}\). Our results demonstrate that the electronic instability underlying the nematic fluctuating regime dominates a wide temperature and doping range in the phase diagram and significantly affects high-energy spin excitations. This discovery corroborates the spin-nematic picture and provides new insight for understanding the interplay between the intertwined orders/fluctuations in FeSe\({}_{1-x}\)S\({}_{x}\).
The doping levels studied in this work are marked by vertical red bars in the phase diagram of FeSe\({}_{1-x}\)S\({}_{x}\) (Fig. 1(a)). Figure 1(b) illustrates the scattering geometry and the reciprocal space of the RIXS measurements, which were performed near the Fe-L\({}_{3}\) edge (incident energy \(E_{i}\approx 708\) eV) as shown in the total fluorescence yield (TFY) X-ray absorption spectroscopy (XAS) data in Fig. 1(c). The RIXS and XAS measurements were carried out with the RIXS experimental station at the ADRESS beamline of the Swiss Light Source at the Paul Scherrer Institut [33; 38; 39].
To apply uniaxial strain on the sample, a thin FeSe\({}_{1-x}\)S\({}_{x}\) crystal was glued onto the titanium bridge in the center of the uniaxial-strain device using epoxy (Fig. 1(b)). Upon cooling, the differential thermal expansion coefficients between the aluminum frame (\(\alpha\approx-24\times 10^{-6}\)/K) and the invar-alloy blocks (\(\alpha\approx-2\times 10^{-6}\)/K) can generate a uniaxial strain up to \(\varepsilon=\varepsilon_{xx}-\varepsilon_{yy}\approx-0.8\%\) on the neck of the titanium bridge at low temperature, which can be transferred to the thin crystal glued on it [32; 33]. The uniaxial strain on the sample's surface can be accurately measured using a microscope [33].
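As a quick back-of-the-envelope check, the magnitude of the achievable strain follows from the mismatch of the two expansion coefficients; the short Python estimate below uses the magnitudes quoted above and an assumed cooling range of about 280 K, and neglects the exact device geometry and the compliance of the epoxy, so it only reproduces the quoted \(\approx-0.8\%\) to within a factor of order one.

```python
# Order-of-magnitude estimate of the strain generated by differential thermal
# contraction (magnitudes of the expansion coefficients taken from the text;
# the temperature range and perfect strain transfer are assumptions).
alpha_al = 24e-6     # aluminum frame, 1/K
alpha_invar = 2e-6   # invar-alloy blocks, 1/K
dT = 280.0           # cooling from ~300 K to ~20 K

strain = -(alpha_al - alpha_invar) * dT   # compressive strain along x
print(f"estimated strain: {strain:.2%}")  # about -0.6%, same order as the quoted -0.8%
```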
To facilitate the discussion of the results presented below, we denote the RIXS spectra and the spin excitations along \(H,K\) and \([H,H]\) directions as \(I_{h/k/hh}(q_{\parallel})\) and \(S_{h/k/hh}(q_{\parallel})\), respectively, where \(q_{\parallel}\) denotes the in-plane momentum transfer.
To extract the information about the spin excitations and understand the spin-excitation anisotropy quantitatively, we use a damped harmonic oscillator function [19; 23] to describe the spin excitations collected with RIXS:
\[S(q,E)=A\,\frac{2\,\gamma\,E}{\left(E^{2}-E_{0}^{2}\right)^{2}+(\gamma\,E)^{2}}, \tag{2}\]
where \(E_{0}(\mathbf{q}_{\parallel})\) is the undamped mode energy, \(\gamma\) is the damping factor, and \(A\) is an overall intensity scale factor.
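As an illustration of how Eq. (2) is used in practice, the short Python sketch below evaluates the damped-harmonic-oscillator lineshape and fits it to a synthetic energy cut with scipy; the numbers are purely illustrative, and the resolution convolution and elastic/background contributions of the real analysis are omitted.

```python
# Minimal sketch: fit the damped-harmonic-oscillator lineshape of Eq. (2)
# to a (synthetic) RIXS energy cut at fixed momentum transfer.
import numpy as np
from scipy.optimize import curve_fit

def dho(E, A, E0, gamma):
    """Damped harmonic oscillator response of Eq. (2)."""
    return A * 2.0 * gamma * E / ((E**2 - E0**2)**2 + (gamma * E)**2)

E = np.linspace(0.01, 0.6, 120)                  # energy loss (eV)
true = dho(E, A=1.0, E0=0.20, gamma=0.12)        # "true" excitation (made up)
data = true + 0.02 * np.random.default_rng(0).normal(size=E.size)

popt, pcov = curve_fit(dho, E, data, p0=(1.0, 0.15, 0.10))
A_fit, E0_fit, gamma_fit = popt
print(f"E0 = {E0_fit*1e3:.0f} meV, gamma = {gamma_fit*1e3:.0f} meV")
```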
decreases only by \(\sim 15\%\) when warmed to \(T=100\) K (\(>T_{s}=65\) K) (inset of Fig. 4(a)), indicating that the nematic spin correlations can persist to higher temperatures well above the \(T_{s}\) in the nematic ordering regime (\(x<x_{c}\)). Furthermore, we show in Fig. 4(a) \(I_{h}(q_{\parallel})/I_{k}(q_{\parallel})\)
for \(x=0.15\) (\(\varepsilon\approx-0.6\%\), blue), \(x=0.17\) (\(\varepsilon\approx-0.6\%\), red), \(x=0.18\) (\(\varepsilon\approx-0.4\%\), green), and \(x=0.21\) (\(\varepsilon\approx-0.4\%\), black), revealing a decreasing tendency for higher doping. We note that the anisotropy of the RIXS spectra persists to an energy scale \(\sim 1\) eV of the electron-hole pair tail, which could be attributed to an anisotropy in incoherent charge scattering [41].
Figure 4(c) summarizes the doping-dependent spin excitation anisotropy \(S_{h}(q_{\mathrm{i}})/S_{k}(q_{\mathrm{i}})\) with \(q_{\mathrm{i}}=0.375\) extracted from the fitting of the spin excitations [33]. Consistent with the enhancement at \(x=0.17\) (Fig. 3(g)), the anisotropy in Fig. 4(c) reaches a maximum in the doping region \(x\approx 0.15-0.18\). Such a doping evolution of the nematic spin fluctuations is in line with the exotic electronic properties at the NQCP (\(x_{c}\approx 0.17\)) [24; 25; 26; 27]. It is well known that quantum critical fluctuations usually dominate static electronic transport properties and low-energy charge/spin dynamics [42]. The \(E\sim 200\) meV nematic spin correlations exhibiting a maximum near the NQCP strongly suggest that the nematic fluctuations can also dominate the physics at a much higher energy scale (under a uniaxial-strain field).
The nematic fluctuating regime in FeSCs has been well established in various studies of (quasi-)static properties, such as the softening of the shear modulus \(C_{66}\)[43; 44; 45], the divergent nematic susceptibility obtained from elastoresistance \(-2m_{66}\)[10; 24], and the persistence of local orthorhombicity (short-range orthorhombic structure) in the tetragonal state of both iron pnictides and FeSe [36; 46; 47; 48]. As such local orthorhombicity persists in both iron pnictides and iron chalcogenides, it should be a common feature of the nematic fluctuating regime of FeSCs. Our identification of spin-excitation anisotropy at high energies is consistent with the general notion that high-energy fluctuations are concomitant with short-spatial-range correlations. Our results, thus, uncover a common feature that high-energy nematic spin fluctuations permeate across FeSCs.
In summary, we find the high-energy spin-excitations in FeSe\({}_{1-x}\)S\({}_{x}\) (\(x\lesssim 0.21\)) are only marginally affected by sulfur doping, and illustrate that the nematic spin correlations pervade wide doping (\(0\lesssim x\lesssim 0.21\)) and temperature region of the phase diagram, corroborating the spin-nematic picture. The enhancement of the nematic spin correlations near the NQCP suggests that the nematic fluctuations could affect the physics at a higher energy scale. The strain-induced spin-excitation anisotropy in the tetragonal state of FeSe\({}_{1-x}\)S\({}_{x}\) establishes uniaxial strain as an effective way to tune spin and/or charge fluctuations in similar quantum materials hosting electron-lattice coupling.
The work at Beijing Normal University is supported by the National Key Projects for Research and Development of China (Grant No. 2021YFA1400400) and the National Natural Science Foundation of China (Grant Nos. 12174029, and 11922402) (X.L.). The RIXS experiments were carried out at the ADRESS beamline of the Swiss Light Source at the Paul Scherrer Institut (PSI). The work at PSI is supported by the Swiss National Science Foundation through project no. 200021_207904, 200021_178867, and the Sinergia network Mott Physics Beyond the Heisenberg Model (MPBH) (project numbers CRSII2 160765/1 and CRSII2 141962). Y.W. and T.C.A. acknowledge financial support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 884104 (Y.W.) and No. 701647 (T.C.A.) (PSI-FELLOW-III-3i) Work at Renmin has in part been supported by the National Science Foundation of China Grant No. 12174441. Work at Rice was primarily supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, under Award No. DE-SC0018197, and by the Robert A. Welch Foundation Grant No. C-1411. Q.S. acknowledges the hospitality of the Kavli Institute for Theoretical Physics, supported in part by the National Science Foundation under Grant No. NSF PHY1748958, during the program "A Quantum Universe in a Crystal: Symmetry and Topology across the Correlation Spectrum", as well as the hospitality of the Aspen Center for Physics, which is supported by NSF grant No. PHY-1607611.
|
2306.06285 | Circular Rectification of 3D Video and Efficient Modification of 3D-HEVC | Video acquired from multiple cameras located along a line is often rectified
to video virtually obtained from cameras with ideally parallel optical axes
collocated on a single plane and principal points on a line. Such an approach
simplifies video processing including depth estimation and compression.
Nowadays, for many applications, like virtual reality or virtual
navigation, video content is often acquired by cameras located nearly on a
circle or on an arc. Therefore, we introduce a new operation of circular
rectification that results in multiview video virtually obtained from cameras
located on an ideal arc and with optical axes that are collocated on a single
plane and intersect in a single point. For the circularly rectified video,
depth estimation and compression are simplified. The standard 3D-HEVC codec was
designed for rectified video and its efficiency is limited for video acquired
from cameras located on an arc. Therefore, we developed a 3D-HEVC codec
modified to compress circularly rectified video efficiently. The
experiments demonstrate better performance than that of the standard 3D-HEVC
codec. | Jarosław Samelak, Marek Domański | 2023-06-09T22:25:29Z | http://arxiv.org/abs/2306.06285v1 | # Circular Rectification of 3D Video
###### Abstract
Video acquired from multiple cameras located along a line is often rectified to video virtually obtained from cameras with ideally parallel optical axes collocated on a single plane and principal points on a line. Such an approach simplifies video processing including depth estimation and compression. Nowadays, for many application video, like virtual reality or virtual navigation, the content is often acquired by cameras located nearly on a circle or on a part of that. Therefore, we introduce new operation of circular rectification that results in multiview video virtually obtained from cameras located on an ideal arc and with optical axes that are collocated on a single plane and they intersect in a single point. For the circularly rectified video, depth estimation and compression are simplified. The standard 3D-HEVC codec was designed for rectified video and its efficiency is limited for video acquired from cameras located on an arc. Therefore, we developed a 3-D HEVC codec modified in order to compress efficiently circularly rectified video. The experiments demonstrate its better performance than for the standard 3D-HEVC codec.
multiview video, 3D video coding, circular camera arrangement, inter-view prediction
## I Introduction
At the beginning of the second decade of 2000s, extensive efforts were aimed at development of multiview and 3D video coding technology for the content acquired from multiple cameras densely distributed on a line. The research resulted in development of multiview and 3D extensions of Advanced Video Coding (AVC) [4] and High Efficiency Video Coding (HEVC) [5] like MV-HEVC (Multi-View HEVC) and 3D-HEVC [6]. This development was related to expected applications of autostereoscopic displays that simultaneously display several dozens of views related to the locations slightly shifted along a straight line. Unfortunately, autostereoscopic displays still have not gained sufficient popularity until now. This is one of the reasons why multiview and 3D profiles of AVC and HEVC are of limited usage now.
More recently, the rapid development of virtual reality technology has raised interest in compression of multiview or 3D video acquired by cameras located around a scene, often nearly on a circle or on an arc [8-11]. Unfortunately, both MV-HEVC and 3D-HEVC were designed for video content acquired from cameras densely distributed on a line, and for such content they provide substantial bitrate reduction as compared to simulcast HEVC [6, 7]. Unfortunately, this gain reduces to very small or even negligible values for the content obtained from cameras sparsely distributed on a circle or even arbitrarily located around a scene [1, 2]. This effect results from the usage of simple disparity-compensated inter-view predictions in MV-HEVC and 3D-HEVC. In the references, this problem [3] was dealt with by modifications of 3D-HEVC codecs that use real 3D mappings for inter-view predictions instead of the simple disparity-compensated prediction [1, 2]. Unfortunately, the usage of real 3D mappings results in a substantial increase of the computational effort for inter-view predictions.
In 3D-HEVC, it is assumed that input video was acquired with cameras located ideally on a line with their optical axes being parallel on a single plane [6]. Such a requirement is impossible to meet in practice due to differences between cameras and difficulties in positioning them ideally on a line. Therefore, prior to compression, multiview video is usually rectified, which corresponds to correcting positions of cameras and suppressing the results of differences in their properties (Fig. 1). In multiview and 3D video processing, video acquired from multiple cameras located along a line is usually rectified in order to process video virtually obtained from cameras with ideally parallel optical axes collocated on a single plane [12, 13]. It should be stressed that rectification does not correct real positions of cameras, but transforms the views obtained from the cameras onto a common plane, which is tantamount to acquiring video with ideally positioned cameras. Such an approach simplifies video processing including depth estimation and compression.
Here, we propose to perform a similar operation for multiview/3D video obtained from cameras located on an arc. Obviously, they are never located on an ideal arc. Therefore, in this paper, we introduce a new operation of circular rectification that results in multiview video virtually obtained from cameras located on an ideal arc and with optical axes that are collocated on a single plane and intersect in a single point. In the paper, we describe the procedure for circular rectification of multiview video. For the circularly rectified video, depth estimation and compression are simplified.
Fig. 1: Linear camera setup before and after rectification.
As mentioned before, the 3D-HEVC codec was developed and tested predominantly for rectified video and its efficiency is limited for video acquired from cameras located on an arc. Therefore, in this paper, we develop a 3D-HEVC codec modified to compress circularly rectified video efficiently. The experiments demonstrate better performance than that of the standard 3D-HEVC codec.
Let us recapitulate: for this paper, the two main goals are:
- develop the concept and the procedure for circular rectification,
- propose an efficient modification of 3D-HEVC codec for processing of circularly rectified 3D video.
## II Main idea
Our main idea is to introduce a similar approach as used for applications of 3D-HEVC, but for cameras arranged ideally on a circle, with their optical axes lying on a plane and directed towards centre of the circle (Fig. 2). Setting up such multi-camera system in practice would be even more challenging, however it provides much more information about the scene, which is crucial for many applications.
The proposed approach constitutes an alternative processing path to two approaches already described in the references and depicted by the two left paths in Fig. 3. The first step of the proposal is circular rectification, i.e. correction of real camera positions to points located on a circle, with cameras' optical axes parallel to the ground and directed towards the centre of the circle, and transformation of the input 3D video according to the change.
## III Circular rectification
In this section, the proposed process of circular rectification is described, including derivation of circle parameters, modification of camera parameters, and transformation of test sequences.
### _Camera parameters_
In the paper, it is assumed that all camera parameters are known and represented by intrinsic parameter matrix [3\(\times\)3] \(\mathbb{K}\) (1), rotation matrix [3\(\times\)3] \(\mathbb{R}\) and 3-component translation vector \(\mathbb{T}\). The derivation of the camera parameters is described elsewhere [14, 15, 16].
\[\mathbb{K}=\begin{bmatrix}f_{x}&c&o_{x}\\ 0&f_{y}&o_{y}\\ 0&0&1\end{bmatrix}\ \, \tag{1}\]
where: \(f_{x}\), \(f_{y}\) - focal lengths, \(o_{x}\), \(o_{y}\) - coordinates of the optical centre, \(c\) - skew factor.
The abovementioned intrinsic and extrinsic camera parameters can be used to calculate the projection matrix [4\(\times\)4] \(\mathbb{P}\) for each camera using Eq. (2),
\[\mathbb{P}=\begin{bmatrix}\mathbb{K}&0\\ 0&1\end{bmatrix}\begin{bmatrix}\mathbb{R}&\mathbb{T}\\ 0^{T}&1\end{bmatrix}. \tag{2}\]
Then, the positions of corresponding points in two camera views \(A\) and \(B\) can be derived according to Formula (3) that can be used to transform the video by "moving" the location of the camera (matrix \(\mathbb{P}\) is usually nonsingular).
\[\begin{bmatrix}z_{B}\cdot x_{B}\\ z_{B}\cdot y_{B}\\ z_{B}\\ 1\end{bmatrix}=\mathbb{P}_{B}\cdot\mathbb{P}_{A}^{-1}\begin{bmatrix}z_{A}\cdot x_{A}\\ z_{A}\cdot y_{A}\\ z_{A}\\ 1\end{bmatrix}\, \tag{3}\]
where: (\(x_{A}\), \(y_{A}\)), (\(x_{B}\), \(y_{B}\)) are the positions of corresponding points in view \(A\) and \(B\), respectively,
\(z_{A}\), \(z_{B}\) are the depth values of corresponding points in view \(A\) and \(B\), respectively,
\(\mathbb{P}_{A}\), \(\mathbb{P}_{B}\) are the projection matrices for view \(A\) and \(B\).
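To make Eqs. (1)-(3) concrete, the Python sketch below assembles the \(4\times 4\) projection matrices of two cameras and re-projects a pixel with known depth from view \(A\) to view \(B\); all numerical camera parameters are illustrative values, not those of any test sequence.

```python
# Sketch of Eqs. (1)-(3): build 4x4 projection matrices and re-project a pixel
# with known depth between two views (all parameter values are illustrative).
import numpy as np

def intrinsics(fx, fy, ox, oy, c=0.0):
    """Intrinsic matrix K of Eq. (1)."""
    return np.array([[fx,  c, ox],
                     [0., fy, oy],
                     [0., 0., 1.]])

def projection(K, R, T):
    """4x4 projection matrix P of Eq. (2)."""
    E = np.eye(4); E[:3, :3] = R; E[:3, 3] = T   # extrinsic part [R T; 0 1]
    Kh = np.eye(4); Kh[:3, :3] = K               # intrinsic part [K 0; 0 1]
    return Kh @ E

K = intrinsics(fx=1000., fy=1000., ox=960., oy=540.)
R_A, T_A = np.eye(3), np.zeros(3)
a = np.deg2rad(10.0)                             # view B rotated by 10 degrees
R_B = np.array([[ np.cos(a), 0., np.sin(a)],
                [ 0.,        1., 0.       ],
                [-np.sin(a), 0., np.cos(a)]])
T_B = np.array([0.3, 0.0, 0.05])

P_A, P_B = projection(K, R_A, T_A), projection(K, R_B, T_B)

# Pixel (x_A, y_A) with depth z_A in view A -> corresponding pixel in view B, Eq. (3).
x_A, y_A, z_A = 700.0, 500.0, 2.5
h_B = P_B @ np.linalg.inv(P_A) @ np.array([z_A * x_A, z_A * y_A, z_A, 1.0])
z_B = h_B[2]
x_B, y_B = h_B[0] / z_B, h_B[1] / z_B
print(x_B, y_B, z_B)
```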
It is also assumed that the cameras are located roughly on a circle around the scene, otherwise rectification could introduce unacceptable distortions to the input video. Similar multi-camera systems have already been successfully set up and used to record 3D video test sequences (e.g. in [9, 22]).
### _Derivation of circle parameters and new camera positions_
The first step of circular rectification is finding the parameters of the circle, based on the positions (represented by translation vectors \(\mathbb{T}\)) of the cameras. The circle is represented by the position of its centre (\(x_{cen}\), \(0\), \(z_{cen}\)) and radius \(r\). In order to find the aforementioned parameters, we use circle Equation (4) with positions (\(x_{i}\), \(0\), \(z_{i}\)) of each of the \(N\) cameras, and perform non-linear regression by minimizing the sum of squares \(S\) according to Equation (5).
\[(x_{i}-x_{cen})^{2}+(z_{i}-z_{cen})^{2}=r^{2} \tag{4}\]
\[S=\sum\nolimits_{i=1}^{N}(\sqrt{(x_{i}-x_{cen})^{2}+(z_{i}-z_{cen})^{2}}-r)^{2} \tag{5}\]
It should be noted that vertical positions are ignored (\(y_{con}\) = 0, \(y_{i}\) = 0) because proposed rectification assumes that all cameras, as well as the centre of the circle, are located at the same height.
After derivation of circle parameters, the next step is to find for each camera its modified position (\(x_{i}\)', \(0\), \(z_{i}\)') on the
Fig. 3: Available and proposed paths of processing and encoding of 3D video.
Fig. 2: Circular camera setup before (top) and after proposed rectification (bottom).
circle, the closest to the original location. Figure 4 presents both original and new camera positions of one of real 3D video test sequences.
### _Rotation of cameras into ideally circular arrangement_
In the proposed process of circular rectification the goal is not only to correct the locations of the cameras, but also to direct their optical axes precisely towards the centre of a circle derived in the previous subsection. To achieve that, modification of rotation matrices is necessary.
Rotation matrix \(\mathbb{R}\) represents combined rotation of a camera around 3 orthogonal axes. In the ideally circular arrangement, optical axes are assumed to be on a single plane (parallel to the ground). Such camera rotation can be represented by the following matrix (6):
\[\mathbb{R}_{i}^{\prime}=\begin{bmatrix}\cos\alpha_{i}&0&\sin\alpha_{i}\\ 0&1&0\\ -\sin\alpha_{i}&0&\cos\alpha_{i}\end{bmatrix}\, \tag{6}\]
where \(\alpha_{i}\) is the angle between position of \(i\)-th camera and circle centre, therefore:
\[\cos\alpha_{i}=\frac{z_{i}^{\prime}-z_{cen}}{r},\qquad\sin\alpha_{i}=\frac{x_{i}^{\prime}-x_{cen}}{r} \tag{7}\]
All the necessary parameters: the circle centre position \((x_{\text{rem}},z_{\text{rem}})\) and its radius \(r\), as well as the modified \(i\)-th camera position \((x_{i}^{\prime},z_{i}^{\prime})\) are already derived, thus there is no need to provide additional input parameters to find the rotation matrix of rectified cameras.
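For completeness, the construction of the rectified rotation matrix of Eqs. (6)-(7) is sketched below; the sign convention for \(\alpha_{i}\) is an assumption, and the numerical values are illustrative.

```python
# Sketch of Eqs. (6)-(7): rotation matrix of a rectified camera, built from its
# snapped position on the circle (angle convention is an assumption).
import numpy as np

def rectified_rotation(x_i, z_i, x_cen, z_cen, r):
    sin_a = (x_i - x_cen) / r
    cos_a = (z_i - z_cen) / r
    return np.array([[ cos_a, 0.0, sin_a],
                     [ 0.0,   1.0, 0.0 ],
                     [-sin_a, 0.0, cos_a]])

# A camera 30 degrees along a circle of radius 3 m centred at the origin.
x_cen, z_cen, r = 0.0, 0.0, 3.0
x_i, z_i = r * np.sin(np.deg2rad(30.0)), r * np.cos(np.deg2rad(30.0))
print(rectified_rotation(x_i, z_i, x_cen, z_cen, r))
```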
### _Modification of intrinsic camera parameters_
Previous subsections \(B\) and \(C\) present the process of deriving extrinsic parameters of cameras located on an ideal circle. This subsection briefly describes how to evaluate internal camera parameters of the circularly rectified sequence.
First, the skew factor is set to \(c\)=0, similarly to linear rectification in the state-of-the-art 3D-HEVC [7, 17]. Then, focal length \(f_{y}\) and vertical component of principal point \(o_{y}\) are averaged and set equal for every camera. For comparison, in 3D-HEVC the values of \(f_{y}\) and \(o_{y}\) were not used at all. A more sophisticated approach is required to derive the horizontal component of principal point coordinate \(o_{x}\). The cameras in a quasi-circular arrangement are usually directed towards the centre of recorded scene, which can be (and often is) much closer to the cameras than the centre of a circle. Therefore, due to modification of rotation matrices towards the centre of a circle, the field of view of each camera can be significantly modified. This can result in only small proportion of original field of view being covered by given camera after circular rectification (Fig. 5). In such a case, rectified views would contain only a small part of original content, which is highly unwanted.
Shifting the camera field of view can be achieved not only by rotating the camera, but also by changing its principal point (Fig. 6). In the proposed circular rectification technique, a new principal point \(o_{x}\)' is calculated for each camera to assert roughly the same coverage of the recorded scene as without rectification. It is done by projecting the point equal to the original optical centre \(o_{x}\) onto 3D space. The new value of \(o_{x}\)' should compensate for the rectification of the rotation matrix, thus projecting it onto 3D space should result in the same location.
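One plausible way to realise this step is sketched below, under the assumption that the goal is to keep the scene point originally imaged at the principal point at the same pixel column after the rotation is rectified; the reference depth and all camera parameters are illustrative, and this is not necessarily the exact procedure used by the authors.

```python
# One plausible implementation of the principal-point shift (an assumption,
# not necessarily the authors' exact procedure): choose o_x' so that the scene
# point that the original camera imaged at its principal point stays at the
# same pixel column after the rotation is rectified.
import numpy as np

def rot_y(deg):
    a = np.deg2rad(deg)
    return np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])

def shifted_principal_point(fx, ox, R_orig, R_rect, C_orig, C_rect, z_ref):
    # 3D point on the original optical axis at the (assumed) reference depth.
    P = C_orig + z_ref * (R_orig.T @ np.array([0.0, 0.0, 1.0]))
    # Its coordinates in the rectified camera frame.
    Pc = R_rect @ (P - C_rect)
    # With o_x' = 0 the point would project to fx*Pc[0]/Pc[2]; shift it back to ox.
    return ox - fx * Pc[0] / Pc[2]

# Illustrative numbers: camera originally toed-in by 5 degrees, rectified to 12.
C = np.array([0.6, 0.0, 0.1])   # camera centre kept fixed in this toy example
ox_new = shifted_principal_point(1000.0, 960.0, rot_y(5.0), rot_y(12.0), C, C, z_ref=2.5)
print(ox_new)
```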
## IV Inter-view prediction for circularly rectified video and modified HEVC codec
As mentioned before, inter-view prediction in 3D-HEVC is simplified due to video rectification. On the other hand, inter-view prediction that uses full perspective projection requires complex operations on matrices and is noticeably slower. The proposed circular rectification is a trade-off between the abovementioned approaches. On one hand, the cameras can be located on a circle, which results in better coverage of the recorded scene. On the other hand, the number of camera parameters required to describe the system is significantly reduced after circular rectification. The authors observed that rectification of camera parameters may be used to optimize the inter-view prediction for faster compression of circularly rectified 3D video. After applying the rectified intrinsic and extrinsic camera parameters derived in Section III to the full projection equation (3), the authors derived simplified formulas for projecting points between views \(A\) and \(B\):
\[z_{B}=\left(x_{A}-o_{xA}\right)\frac{z_{A}}{f_{x}}\sin\Delta\alpha+\left(z_{A}-r\right)\cos\Delta\alpha+r \tag{8}\]
\[y_{B}=o_{y}+\frac{z_{A}}{z_{B}}\left(y_{A}-o_{y}\right) \tag{9}\]
\[x_{B}=o_{xB}+\frac{1}{z_{B}}\left[(x_{A}-o_{xA})z_{A}\cos\Delta\alpha-(z_{A}-r)f_{x}\sin\Delta\alpha\right] \tag{10}\]
where \(\Delta\alpha=\alpha_{B}-\alpha_{A}\).
The above formulas allow the position of a point in view \(B\) to be predicted from its position in view \(A\) and the circular camera parameters. The authors have modified the inter-view prediction in 3D-HEVC by replacing the standard disparity derivation with point projection using the above equations. Moreover, a number of prediction techniques that exploit inter-view similarities have been modified, e.g. Inter-view Motion Prediction, View Synthesis Prediction, Neighboring Block Disparity Vector, Depth-oriented Neighboring Block Disparity Vector, and Illumination Compensation. Table I compares the parameters used by the state-of-the-art 3D-HEVC and modified 3D-HEVC encoders for compression of circular and arbitrary 3D video. One may observe that the rectified circular camera setup requires far fewer parameters than an arbitrary one, and only 2 more values than unmodified 3D-HEVC. In the proposed encoder, all parameters, including the non-standard \(o_{y}\), \(a\), \(r\), are transmitted in the bitstream in the Video Parameter Set (VPS).
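For illustration, a minimal sketch of how Eqs. (8)–(10) can be evaluated when projecting a pixel with known depth from view \(A\) to view \(B\) is given below; the function signature and the way parameters are passed are illustrative assumptions rather than the actual encoder data structures.

```python
import math

def project_point_circular(x_a, y_a, z_a, o_xa, o_xb, o_y, f_x, r, d_alpha):
    """Project pixel (x_a, y_a) with depth z_a from view A to view B for
    circularly rectified cameras (Eqs. 8-10); d_alpha = alpha_B - alpha_A."""
    # Eq. (8): depth of the point as seen from view B
    z_b = ((x_a - o_xa) * z_a / f_x) * math.sin(d_alpha) \
          + (z_a - r) * math.cos(d_alpha) + r
    # Eq. (9): vertical coordinate scales with the depth ratio
    y_b = o_y + (z_a / z_b) * (y_a - o_y)
    # Eq. (10): horizontal coordinate after rotation around the circle centre
    x_b = o_xb + ((x_a - o_xa) * z_a * math.cos(d_alpha)
                  - (z_a - r) * f_x * math.sin(d_alpha)) / z_b
    return x_b, y_b, z_b
```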
Obviously, the changes in the bitstream result in the proposed codec not being compliant with the 3D-HEVC
Fig. 4: Top view of a multi-camera system with original camera positions (blue dots) and positions shifted to the ideal circle (orange dots) for the Breakdancers test sequence.
Fig. 5: Problem of a shifted field of view after circular rectification.
Fig. 6: Rectified camera directed towards the circle centre, with modified optical centre \(o_{x}\)’ shifting its field of view.
standard. Nevertheless, the authors prove that support for rectified circular 3D video compression could be added with only minor changes in the bitstream syntax.
## V Methodology of experiments
The goal of the experiments is to assess the compression efficiency and encoding time of the aforementioned codecs with respect to the standard 3D-HEVC codec. Additionally, the authors compared the encoding time of inter-view prediction only, for both modified 3D-HEVC encoders (circular and arbitrary). Compression efficiency was compared by measuring the average bitrate reduction for the luma component of texture views, using the Bjontegaard metric [20].
The experiments were conducted by encoding 7 views of 4 commonly-used multiview test sequences [21-23]. Encoding was performed at 4 _QP_ values (25, 30, 35, 40) over 100 frames. The test sequences were rectified using the full perspective projection (3). Naturally, a rectified view may contain some unfilled areas; these are interpolated from the surrounding content. Both texture and depth maps were rectified. The encoders were configured identically, following the Common Test Conditions for 3D video experiments [18]. The only difference was in the input camera parameters, which were prepared according to the requirements of each encoder (Table I). Circular rectification of the test sequences was performed in a pre-processing phase, so it does not affect the encoding time results. Moreover, all three encoders were based on the same version of the state-of-the-art, publicly available 3D-HEVC test model HTM-13.0 [19].
## VI Experimental results
Table II shows the bitrate reduction of the modified 3D-HEVC encoder for compression of circularly rectified video against unmodified 3D-HEVC and against the modified 3D-HEVC for arbitrary camera locations (from [1]). Our proposal reduces the bitrate on average by 6% compared to the state-of-the-art technique. This is because 3D-HEVC does not perform accurate inter-view prediction if the video was acquired with camera arrangements other than linear. Compared to the encoder that supports any camera setup, the solution optimized for a circular arrangement provides slightly better results. The difference is caused by the lower number of camera parameters required by the proposal and the simpler inter-view prediction, which results in a reduced number of errors.
Table III presents the reduction of total encoding time, while Table IV compares the inter-view prediction time between the two modified encoders. It should be noted that the proposed encoder is up to 10% faster than the encoder with full perspective projection, and at the same time its inter-view prediction is 44 times faster, due to the much simpler projection equations optimized for circularly rectified 3D video. Surprisingly, the proposed encoder was also roughly 4% faster than plain 3D-HEVC, even though the inter-view prediction of the former is more complex than the state-of-the-art technique. The reason for this lies in the compression efficiency results (Table II): as mentioned before, the modified 3D-HEVC is more accurate in predicting the content of circularly rectified video.
## VII Conclusions
In the paper, the authors propose a novel approach to 3D video compression. 3D video acquired by cameras located nearly on an arc is proposed to undergo the circular rectification described in Section III. The state-of-the-art 3D-HEVC technique is modified for efficient compression of such video; the modification mostly concerns the inter-view prediction. The total bitrate for the modified codec appears to be lower by about 6% as compared to the standard 3D-HEVC applied to the standard MPEG 3D video test sequences captured by cameras distributed on an arc. The authors developed a process for correcting the camera parameters to an ideal circle together with circular video rectification. Moreover, projection equations for optimized inter-view prediction of circularly rectified 3D video were derived and implemented on top of the 3D-HEVC reference test model. The proposed modifications were evaluated experimentally and compared to unmodified 3D-HEVC and to the 3D-HEVC codec adapted to the compression of video acquired by cameras at arbitrary locations as proposed in [1]. The latter appears to be more complex, as its inter-view prediction is 44-fold slower than the inter-view prediction developed in this paper for circularly rectified video. Therefore, the technique seems to be an interesting proposal for applications within MPEG Immersive Video [10], where the total bitrate for immersive video content can be reduced thanks to efficient exploitation of the inter-view redundancy.
2308.01046 | Flexible Coherent Optical Access: Architectures, Algorithms, and
Demonstrations | To cope with the explosive bandwidth demand, significant progress has been
made in the ITU-T standardization sector to define a higher-speed passive
optical network (PON) with a 50Gb/s line rate. Recently, 50G PON becomes mature
gradually, which means it is time to discuss beyond 50G PON. For ensuring an
acceptable optical power budget, beyond 50G PON will potentially use coherent
technologies, which can simultaneously promote the applications of flexible
multiple access such as time/frequency-domain multiple access (TFDMA). In this
paper, we will introduce the architectures, algorithms, and demonstrations for
TFDMA-based coherent PON. The system architectures based on an ultra-simple
coherent transceiver and specific signal spectra are designed to greatly reduce
the cost of ONUs. Meanwhile, fast and low-complexity digital signal processing
(DSP) algorithms are proposed for dealing with upstream and downstream signals.
Based on the architectures and algorithms, we experimentally demonstrate the
first real-time TFDMA-based coherent PON, which can support at most 256 end
users, and peak line rates of 100Gb/s and 200Gb/s in the upstream and
downstream scenarios, respectively. In conclusion, the proposed technologies
for the coherent PON make it more possible to be applied in the future beyond
50G PON. | Ji Zhou, Zhenping Xing, Haide Wang, Kuo Zhang, Xi Chen, Qiguang Feng, Keshuang Zheng, Yijia Zhao, Zhen Dong, Tao Gui, Zhicheng Ye, Liangchuan Li | 2023-08-02T09:41:14Z | http://arxiv.org/abs/2308.01046v1 | # Flexible Coherent Optical Access: Architectures, Algorithms, and Demonstrations
###### Abstract
To cope with the explosive bandwidth demand, significant progress has been made in the ITU-T standardization sector to define a higher-speed passive optical network (PON) with a 50Gb/s line rate. Recently, 50G PON becomes mature gradually, which means it is time to discuss beyond 50G PON. For ensuring an acceptable optical power budget, beyond 50G PON will potentially use coherent technologies, which can simultaneously promote the applications of flexible multiple access such as time/frequency-domain multiple access (TFDMA). In this paper, we will introduce the architectures, algorithms, and demonstrations for TFDMA-based coherent PON. The system architectures based on an ultra-simple coherent transceiver and specific signal spectra are designed to greatly reduce the cost of ONUs. Meanwhile, fast and low-complexity digital signal processing (DSP) algorithms are proposed for dealing with upstream and downstream signals. Based on the architectures and algorithms, we experimentally demonstrate the first real-time TFDMA-based coherent PON, which can support at most 256 end users, and peak line rates of 100Gb/s and 200Gb/s in the upstream and downstream scenarios, respectively. In conclusion, the proposed technologies for the coherent PON make it more possible to be applied in the future beyond 50G PON.
Time/frequency-domain multiple access, ultra-simple coherent transceiver, beyond 50G, passive optical network.
## I Introduction
Driven by the sustainably growing traffic demand, the line rate of passive optical network (PON) is rising steadily. Fig. 1 shows the roadmap for IEEE and ITU-T standards of PON. Significant progress has been made in the ITU-T standardization sector to define higher-speed (HS) PON with a line rate of 50Gb/s [1, 2]. Recently, 50G PON becomes mature gradually, which means it is time to discuss beyond 50G PON [3, 4, 5]. For ensuring an acceptable optical power budget, beyond 50G PON will potentially use coherent technologies [6, 7, 8]. Among all the potential solutions, a coherent PON based on flexible time-domain/frequency-domain multiple access (TFDMA) is one of the most appealing options, which can combine the statistical multiplexing capability of TDMA and the dedicated frequency allocation capability of FDMA [9, 10, 11]. Based on digital subcarrier multiplexing (DSCM), FDMA-based coherent PON allows each optical network unit (ONU) to transmit and receive only a subset of subcarriers, which significantly reduces the bandwidth of its transceiver [12, 13, 14]. Other benefits of coherent PON include: 1) it improves the receiver sensitivity due to the use of a local oscillator (LO), and 2) it can use C-band wavelength resources that have not been used in previous PON since the dispersion can be effectively compensated by digital signal processing (DSP) [15, 16, 17].
Unfortunately, the use of a full coherent transceiver at each ONU is still cost-prohibitive for the PON scenario. To further reduce the cost of an ONU, it has been proposed to use a single-polarization heterodyne receiver based on Alamouti coding rather than a fully coherent receiver in the downstream scenario [18, 19, 20]. In the upstream scenario, the ONU can also use a single Mach-Zehnder modulator (MZM) instead of a dual-polarization in-phase and quadrature MZM. Such an ultra-simple coherent transceiver contains one digital-to-analog converter (DAC), one single MZM, one optical coupler, one balanced photo-detector (BPD), one analog-to-digital converter (ADC), and two lasers. Two lasers become the major cost in the ultra-simple coherent transceiver [21]. The high-cost external cavity laser used in traditional coherent transceivers cannot meet the requirement of the ultra-simple coherent transceiver.
In this paper, we experimentally demonstrate the first real-time TFDMA-based coherent PON using an ultra-simple transceiver. The proposed PON can support a splitting ratio up to 1:256, and peak line rates of 100Gb/s and 200Gb/s in the upstream and downstream scenarios, respectively. In addition, we prove that high-precision DSP-aided frequency locking makes the cost-effective distributed feedback (DFB) laser feasible for the ultra-simple coherent transceiver. This paper is an extended version of our published post-deadline paper in OFC 2023 [22]. More detailed information about the DSP algorithms is added in this extended version.
Fig. 1: The roadmap for IEEE and ITU-T standards of PONs.
The main contributions of this paper are as follows:
* The system architectures based on an ultra-simple coherent transceiver and specific signal spectra are designed to greatly reduce the cost of ONUs.
* We propose fast and low-complexity DSP algorithms for effectively processing upstream and downstream signals in TFDMA-based coherent PON.
* We demonstrate the first real-time TFDMA-based coherent PON, which can support at most 256 end users, and peak line rates of 100Gb/s and 200Gb/s in the upstream and downstream scenarios, respectively.
The remainder of this paper is organized as follows. The system architectures and specific signal spectra are shown in Section II. In Section III, the DSP algorithms for TFDMA-based coherent PON are introduced in detail. In Section IV, the experimental setups and results are given. Finally, the paper is concluded in Section V.
## II System Architecture and Spectra Design
In this section, we will introduce the system architectures with an ultra-simple coherent transceiver at the ONU and a full coherent transceiver at the OLT. Meanwhile, the specific signal spectra are designed for the upstream and downstream scenarios in the TFDMA-based coherent PON.
The system architecture of the coherent PON is shown in Fig. 2(a), which is designed to support at most 256 ONUs by using 1:4, 1:8, and 1:8 passive optical splitters. Without loss of generality, we test 4 ONUs on the edge of the network. The transmitter and receiver devices of the ultra-simple coherent transceiver at the ONU are depicted in Fig. 2(b). The ultra-simple coherent transceiver consists of one DAC, one single MZM, one optical coupler, one ADC, one single BPD, and two DFB lasers. Two DFB lasers are used as the LO and optical carrier for the downstream and upstream scenarios, respectively. Therefore, single-polarization heterodyne detection and unidimensional signal generation are implemented by the ultra-simple coherent transceiver. For the downstream scenario, an Alamouti-coding signal should be received at the ONU to avoid the state-of-polarization (SOP)-caused signal disappearance. For the upstream scenario, the transmitted signal at the ONU should be a real-valued carrier-less amplitude phase (CAP) signal to tolerate direct-current (DC) leakage. For generating the Alamouti-coding signal and detecting the CAP signal at the OLT, a full coherent transceiver can be deployed, as shown in Fig. 2 (c). The full coherent transceiver at the OLT makes it possible to gradually evolve the line rate by updating the transceiver at the ONU.
Figure 3 (a) shows the designed signal spectra with a bandwidth granularity of 12.5GHz for the downstream scenario. In the downstream scenario, 5\(\times\)12.5Gbaud digital subcarriers with a granularity of 12.5GHz are filled in the whole bandwidth. Only 4 digital subcarriers carry data, and the central subcarrier is blank for implementing the heterodyne detection. Then, Alamouti coding with a rate of 1/2 is used to implement the single-polarization detection. However, the capacity of the Alamouti-based dual-polarization optical system is equivalent to that of a single-polarization optical system with the same bandwidth. Therefore, the peak line rate of the downstream scenario is only 200Gb/s when 16QAM is modulated on 50GHz bandwidth. We use probabilistic constellation shaping 16QAM (PCS-16QAM) to achieve flexible-rate adaptation from 100Gb/s to 200Gb/s for making full use of the optical power budget [23, 24, 25].
In the upstream scenario, each ONU transmits a real-valued
Fig. 3: (a) Designed spectra with a bandwidth granularity of 12.5GHz for the downstream scenario. (b) Designed spectra with a bandwidth granularity of 6.25GHz for the upstream scenario. SC: Subcarrier. BW: Bandwidth. DS: Downstream. US: Upstream.
Fig. 2: (a) The architecture diagram of the coherent PON with 256 splitter ratio. (b) Transmitter (Tx) and receiver (Rx) devices of the ultra-simple coherent transceiver at the ONU. (c) Tx and Rx devices of the full coherent transceiver at the OLT.
CAP signal, as shown in Fig. 3 (b). The reasons are that 1) a single MZM can only modulate a real-valued signal, and 2) a guard band around DC frequency is required. The MZM at the ONU is biased at the null point. To achieve steep rising and falling edges, the burst-mode signals are generated by switching on/off the DAC rather than the laser. However, due to the limited extinction ratio of the MZM, the DC leakage of laser power is not negligible when the DAC is off, hence the guard band is necessary. We further break each 12.5Gbaud subcarrier into 2\(\times\)6.25Gbaud subcarriers for providing fine-granularity transmission in the upstream scenario. More frequency resources can be provided for dedicated usage in the upstream scenario. Dedicated subcarriers allow high-end users to get free from rogue ONUs, which only exist in the upstream scenario. Each ONU can transmit either the inner (Type-1) or the outer (Type-2) subcarrier in CAP with two subcarriers. In our work, only quadrature phase shift keying (QPSK) was modulated for the upstream scenario to obtain a peak line rate of 25Gb/s per ONU and a total peak line rate of 100 Gb/s from the OLT's perspective.
In conventional FDMA-based PON, four subcarriers for the downstream scenario and eight subcarriers for the upstream scenario cannot support the bandwidth allocation for 256 ONUs. However, if the subcarrier number is increased, it becomes hard to compensate for the phase noise with the DSP algorithm, and the guard bands increase, which decreases the spectral efficiency. TDMA is therefore added on top of the subcarriers to implement TFDMA, which is a feasible method for increasing the number of ONUs. In the TFDMA, the subcarriers can be individually allocated to low-latency and bandwidth-hungry ONUs. Meanwhile, for the common ONUs, a subcarrier can be divided into time slots to provide flexible bandwidth allocation. In conclusion, the system architectures, specific signal spectra, and TFDMA can be used to support low-latency or low-cost ONUs.
## III DSP Algorithms for Coherent PON
Traditional coherent DSP algorithms are not suitable for TFDMA-based coherent PON. In this section, we will introduce the specially designed DSP algorithms for the TFDMA-based coherent PON. The DSP algorithms for the upstream and downstream scenarios work in the burst mode and continuous mode [26, 27, 28], respectively. For the upstream scenario, burst-mode DSP algorithms based on training sequences are used to achieve fast convergence for reducing the overhead and improving spectral efficiency. For the downstream scenario, the continuous-mode DSP algorithms at the ONU are sensitive to computational complexity and power consumption. Only a part of the DSP algorithms should be always turned on to track the dynamical distortions, such as frequency-offset estimation (FOE), timing error detection, coefficient estimation of the equalizer, and carrier-phase estimation (CPE). At the time slots of other ONUs, the DSP algorithms processing the payloads and forward error correction (FEC) can be turned off to reduce power consumption. In addition, frame synchronization is required only once when the ONU is initially registered.
### _Frame Detection and Coarse FOE_
In this subsection, periodic sequences are designed to simultaneously implement frame detection and coarse FOE.
#### Iii-A1 Frame Detection
For the upstream scenario, frame detection is required to recognize two symmetric frequency tones for confirming the arrival of a burst frame. The symmetric frequency tones are generated by a periodic sequence. Frame detection is not necessary for the continuous-mode downstream scenario. The frame detection can be implemented by the following steps. Firstly, a sliding window operation is applied to the received signal. Then, the extracted signal is down-sampled and transferred to the frequency domain by a fast Fourier transform (FFT). Finally, the average power of the non-zero points is calculated, and the frequency points with a power less than the average are filtered out. These operations are repeated three times to find the accurate frequency tones.
#### Iii-A2 Coarse FOE
After the frame detection, the frequency offset should be compensated to ensure the signal spectrum within the frequency-domain range of the matched filter. Fig. 4 shows the frequency tones with and without frequency offset. The detected frequency tones after the frame detection can be used to estimate the frequency offset by
\[\Delta f_{\text{Coarse}}=\frac{1}{2}\times(f_{1}+f_{2}) \tag{1}\]
where \(f_{1}\) and \(f_{2}\) are the frequencies of the \(-f_{0}+\Delta f\) and \(f_{0}+\Delta f\) tones, respectively, and \(\Delta f\) is the actual frequency offset. The FFT size, which is limited by the parallelism of the real-time implementation, confines the accuracy of the FOE. For example, when the FFT size is 32 and the signal bandwidth is 8GHz, the resolution of one frequency point is only 250MHz, which may lead to a \(\pm 125\)MHz deviation of the FOE. Therefore, the FOE based on the detected frequency tones is coarse, and a finer FOE is required.
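A minimal sketch of the coarse FOE step is shown below; the simple two-peak picking stands in for the iterative power-threshold filtering of the frame-detection stage described above, and the sampling rate, FFT size, and variable names are illustrative assumptions.

```python
import numpy as np

def coarse_foe(window, fs, nfft=32):
    """Coarse frequency-offset estimate, Eq. (1): locate the two shifted
    preamble tones -f0+df and +f0+df in a short FFT and average them."""
    spectrum = np.fft.fftshift(np.fft.fft(window[:nfft], nfft))
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=1.0 / fs))
    power = np.abs(spectrum) ** 2
    i1, i2 = np.argsort(power)[-2:]        # two strongest bins = the two tones
    return 0.5 * (freqs[i1] + freqs[i2])   # Eq. (1)
```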
### _Burst- and Continuous-Mode Timing Recovery_
For the upstream scenario of the TFDMA-based coherent PON, burst-mode timing recovery should be used to accelerate convergence to reduce the overhead. For the downstream scenario, the timing recovery works in continuous mode.
Fig. 5 shows the burst-mode timing recovery with sampling phase initialization for the upstream scenario, and continuous-mode timing recovery for the downstream scenario. The common structure of burst-mode and continuous-mode timing recovery is the frequency-domain Godard algorithm. For the upstream scenario, an appropriate sampling phase initialization plays a significant role in reducing the convergence time of timing recovery. Fortunately, the initialized sampling phase
Fig. 4: The frequency tones with and without frequency offset (FO).
offset \(\tau_{0}\) can be estimated by using two frequency tones in Subsection III-A, which can be calculated by
\[\tau_{0}=\frac{1}{4\pi f_{0}}\text{arg}[R(f_{0})\times R^{*}(-f_{0})] \tag{2}\]
where \((.)^{*}\) denotes the conjugate operation. \(R(f)\) is the received frequency tones with the initialized sampling phase offset such as \(\delta(f-f_{0})e^{j2\pi f_{0}\tau+\phi_{0}}\) and \(\delta(f+f_{0})e^{-j2\pi f_{0}\tau+\phi_{0}}\). \(\phi_{0}\) is the phase noise, which does not influence the estimation of sampling phase offset. The initialized sampling phase offset is injected into the frequency-domain Godard algorithm to reduce the convergence time.
The frequency-domain Godard algorithm is implemented using the signal spectrum \(\mathbf{S}\) after match filtering with a roll-off factor \(\beta\). The timing error is estimated as
\[e=\sum_{k=\frac{(1-\beta)N}{2\,sps}}^{\frac{(1+\beta)N}{2\,sps}-1}\operatorname{Im}\left[S_{k}\cdot S_{k+(1-1/sps)N}^{*}\right] \tag{3}\]
where \(\operatorname{Im}(\cdot)\) denotes the imaginary part of a complex value, \(sps\) is the number of samples per symbol, and \(N\) is the number of frequency points of the \(N\)-FFT, which should be chosen such that the upper and lower summation bounds are integers. Based on Eq. (3), only \(K\) frequency points are used to estimate the timing error with relatively low complexity, where \(K\) is equal to \(\beta N/sps-1\). Finally, the estimated phase is updated after every iteration.
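The two building blocks of the burst-mode timing recovery, the sampling-phase initialization of Eq. (2) and the frequency-domain Godard error detector of Eq. (3), can be sketched as follows; block sizes, the periodic wrapping of spectral indices, and variable names are illustrative assumptions.

```python
import numpy as np

def init_phase_offset(R_pos, R_neg, f0):
    """Initial sampling-phase offset from the two preamble tones, Eq. (2);
    R_pos and R_neg are the received frequency-domain samples at +f0 and -f0."""
    return np.angle(R_pos * np.conj(R_neg)) / (4.0 * np.pi * f0)

def godard_timing_error(S, sps, beta):
    """Frequency-domain Godard timing error detector, Eq. (3); S is the
    N-point spectrum of one block after matched filtering."""
    N = len(S)
    k_lo = int((1.0 - beta) * N / (2 * sps))
    k_hi = int((1.0 + beta) * N / (2 * sps))          # exclusive upper bound
    shift = int((1.0 - 1.0 / sps) * N)
    k = np.arange(k_lo, k_hi)
    return np.sum(np.imag(S[k] * np.conj(S[(k + shift) % N])))
```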
### _Frame Synchronization and Fine FOE_
In this subsection, training sequences are designed to simultaneously implement frame synchronization and fine FOE.
#### Iii-C1 Frame Synchronization
Frame synchronization is implemented based on the specially designed sequences \(\left[\mathbf{s_{1}}\ \mathbf{s_{2}}\right]\) and \(\left[\mathbf{s_{3}}\ \mathbf{s_{4}}\right]\) for X polarization and Y polarization, respectively. The \(\mathbf{s_{1}}\) and \(\mathbf{s_{3}}\) consist of QPSK symbols with different pseudorandom binary sequences. The \(\mathbf{s_{2}}\) and \(\mathbf{s_{4}}\) are separately generated by
\[s_{2/4}(i)=s_{1/3}(i)\times pn(i) \tag{4}\]
where \(i\) is from 1 to \(L\), and \(\mathbf{pn}\) is a pseudo-random noise (PN) sequence of length \(L\). Finally, the specially designed sequences are repeated three times to generate the successive sequences \(\left[\mathbf{s_{1/3}}\ \mathbf{s_{2/4}},\ \mathbf{s_{1/3}}\ \mathbf{s_{2/4}},\ \mathbf{s_{1/3}}\ \mathbf{s_{2/4}}\right]\).
Frame synchronization is realized by using a sliding window and the cross-correlation on each polarization, which is expressed as
\[P(d)=\sum_{i=0}^{L-1}r(d+i)\times[pn(i)\times r^{*}(d+i+L)] \tag{5}\]
where \(\mathbf{r}\) is the received signal of \(\mathbf{s}\). The frame synchronization is based on the timing metric, which is calculated by
\[M(d)=\frac{|P(d)|^{2}}{P_{r}^{2}(d)}. \tag{6}\]
where the half-symbol energy \(P_{r}(d)\) is defined as
\[P_{r}(d)=\frac{1}{2}\sum_{i=0}^{2L-1}|r(d+i)|^{2}. \tag{7}\]
There are five sharp peaks in the \(M(d)\). To enhance the tolerance for noise, \(M(d)\) with \(0\), \(L\), \(2L\), \(3L\) and \(4L\) delay are stacked over, which can be expressed as
\[M^{\prime}(d)=\sum_{i=0}^{4}M(d+i\times L), \tag{8}\]
The highest-peak position of \(M^{\prime}(d)\) is the accurate frame synchronization position.
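A compact sketch of the frame-synchronization metric of Eqs. (5)–(8) on one polarization is given below; the brute-force sliding window is written for readability only, and array lengths and names are illustrative assumptions.

```python
import numpy as np

def sync_position(r, pn):
    """Frame synchronization, Eqs. (5)-(8): slide over the received sequence r,
    correlate with the PN-modulated repetition, and stack the five peaks."""
    L = len(pn)
    n_d = len(r) - 2 * L + 1
    M = np.empty(n_d)
    for d in range(n_d):
        P = np.sum(r[d:d + L] * pn * np.conj(r[d + L:d + 2 * L]))   # Eq. (5)
        Pr = 0.5 * np.sum(np.abs(r[d:d + 2 * L]) ** 2)              # Eq. (7)
        M[d] = np.abs(P) ** 2 / Pr ** 2                             # Eq. (6)
    # Eq. (8): stack M(d) with 0, L, 2L, 3L, 4L delays to improve noise tolerance
    Mp = np.array([M[d] + M[d + L] + M[d + 2 * L] + M[d + 3 * L] + M[d + 4 * L]
                   for d in range(n_d - 4 * L)])
    return int(np.argmax(Mp))    # highest peak = frame synchronization position
```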
#### Iii-C2 Fine FOE
The successive sequences for frame synchronization can also be used to implement fine FOE. For convenient analysis, we define \(\left[\mathbf{s_{1/3}}\ \mathbf{s_{2/4}},\ \mathbf{s_{1/3}}\ \mathbf{s_{2/4}},\ \mathbf{s_{1/3}}\ \mathbf{s_{2/4}}\right]\) as \(\left[\mathbf{s_{B1}},\ \mathbf{s_{B2}},\ \mathbf{s_{B3}}\right]\). The received signal of \(\mathbf{s_{B1}},\ \mathbf{s_{B2}}\) and \(\mathbf{s_{B3}}\) can be defined as \(\mathbf{r_{B1}},\ \mathbf{r_{B2}}\) and \(\mathbf{r_{B3}}\), respectively. When the fine frequency offset is considered, the received signal can be defined as
\[r_{\text{Bi}}(n)=s_{\text{Bi}}(n)\cdot\exp(j\frac{2\pi n}{R_{s}}\cdot\Delta f_ {\text{fine}}) \tag{9}\]
where \(i\) is from 1 to 3. \(\Delta f_{\text{Fine}}\) is the fine frequency offset. \(R_{s}\) is the baud rate. The fine FOE estimates the frequency offset by [29]
\[\Delta f_{\text{Fine}} =\frac{R_{s}}{4\pi L}\cdot\arg\left(\mathbf{r_{B2}}\times\mathbf{ r_{B1}^{H}}+\mathbf{r_{B3}}\times\mathbf{r_{B2}^{H}}\right) \tag{10}\] \[=\frac{R_{s}}{4\pi L}\cdot\arg\left[2|\mathbf{s_{B1}}|^{2}\cdot \exp\left(j\frac{4\pi L}{R_{s}}\cdot\Delta f_{\text{Fine}}\right)\right]\]
where \(\arg(\cdot)\) represents the operation of taking the angle of a complex value and \((\cdot)^{\mathbf{H}}\) denotes the conjugate transpose operation. Due to the average effect, the fine FOE is more accurate with the increase of \(L\). However, the range of fine FOE is from \(-R_{s}/4L\) to \(R_{s}/4L\), which decreases with the increase of \(L\). Therefore, the \(L\) should be set to a suitable value for balancing the accuracy and range. When the \(L\) is set to \(10\) and \(R_{s}\) is set to 8Gbaud, the range of fine FOE is from \(-200\)MHz to \(200\)MHz. If the frequency offset of the laser is locked within \(\pm 100\)MHz, the coarse FOE is not required.
### _Burst- and Continuous-Mode MIMO Equalizer_
For the upstream scenario, the burst-mode multiple-input-multiple-output (MIMO) equalizer is used to accelerate convergence for reducing the overhead. For the downstream scenario, the MIMO equalizer works in continuous mode. Fig.
Fig. 5: The burst-mode timing recovery with sampling phase initialization for the upstream scenario, and continuous-mode timing recovery for the downstream scenario. Interp.: Interpolation. NCO: Numerically controlled oscillator. Est.: estimation.
6 shows the block diagram of the burst- and continuous-mode MIMO equalizer. Different from the continuous-mode MIMO equalizer, the burst-mode MIMO equalizer has a coefficient initialization. To converge the coefficients quickly, LS-based coefficient estimation is implemented using a designed constant amplitude zero auto-correlation (CAZAC) training sequence [30]. After the coefficient initialization, it switches to the decision-directed LMS (DD-LMS) algorithm to track the coefficients. For the continuous-mode MIMO equalizer, the DD-LMS algorithm tracks the coefficients all the time. Before the DD-LMS algorithm, the carrier-phase noise should be estimated and compensated by a pilot-based CPE. In our work, the MIMO equalizer at the ONU is specially designed for dealing with Alamouti-coded received signals [31, 32]. Meanwhile, the MIMO equalizer at the OLT can be simplified to a multiple-input-single-output equalizer because the transmitted signal is single-polarization and real-valued.
### _Pilot-Based CPE_
As Fig. 7 shows, a pilot-based CPE integrated with the maximum likelihood algorithm is used to estimate the carrier phase noise with low computational complexity. One pilot symbol is periodically inserted into every \(M\) payload symbol for the pilot-based CPE, which estimates the phase noise of the pilot symbol as
\[\phi_{\text{Pilot}}(n)=\arg\left[r_{p}^{*}(n)\times p(n)\right] \tag{11}\]
where \(r_{p}(n)\) is the \(n\)-th received pilot symbol after the MIMO equalizer and \(p(n)\) denotes the \(n\)-th pilot symbol. The phase noise of the \(M-1\) symbols between the \(n\)-th and the \((n+1)\)-th pilot symbols is initialized as \(\phi_{\text{Pilot}}(n)\). Then, the following maximum likelihood algorithm is employed to estimate the residual phase noise as
\[\phi_{\text{ML}}(i)=\frac{1}{2Q+1}\sum_{l=i-Q}^{i+Q}\arg\left[q(l)\times\hat{q }^{*}(l)\right] \tag{12}\]
where \(q(l)\) denotes the received signal after the carrier phase compensation. \(\hat{q}(l)\) is its decision. \(Q\) is the half-length of the average filter. After the pilot-based CPE and compensation, the QAM can be decided to regenerate the bit sequence, which can finally be sent into FEC to correct the error bits.
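The pilot-based CPE of Eq. (11) and its maximum-likelihood refinement of Eq. (12) can be sketched as below; the placement of pilots in the symbol stream and the helper names are illustrative assumptions.

```python
import numpy as np

def pilot_cpe(rx, pilots, M):
    """Pilot-based CPE, Eq. (11): one pilot every M symbols; the estimated
    pilot phase is held for the following M-1 payload symbols."""
    phase = np.zeros(len(rx))
    for n in range(len(pilots)):
        phi = np.angle(np.conj(rx[n * M]) * pilots[n])   # Eq. (11)
        phase[n * M:(n + 1) * M] = phi
    return rx * np.exp(1j * phase), phase                # compensated signal

def ml_refine(q, decisions, Q):
    """Maximum-likelihood residual phase, Eq. (12): sliding average of the
    decision-directed phase errors over a window of half-length Q."""
    err = np.angle(q * np.conj(decisions))
    kernel = np.ones(2 * Q + 1) / (2 * Q + 1)
    return np.convolve(err, kernel, mode='same')
```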
## IV Experimental Setups and Results
As Fig. 8 (a) and Fig. 8 (b) show, we developed the linecards including a field-programmable gate array (FPGA) integrated with 31GSa/s DACs and ADCs for the upstream and downstream scenarios. At the OLT, a commercial coherent driver modulator (CDM) and integrated coherent receiver (ICR) were used to implement the full coherent transceiver. One FPGA integrated with four DACs and one FPGA integrated with four ADCs were used to generate the transmitted signal and process the received signal, respectively. At the ONU, a single-polarization MZM and a single-polarization heterodyne ICR were used to implement the ultra-simple coherent transceiver. Only one FPGA integrated with one DAC and one ADC was employed to generate the transmitted signal and process the received signal. The proposed ultra-simple coherent transceiver for the ONU has low cost and low power consumption.
The measured downstream and upstream spectra are shown in Fig. 9 (a). Due to the limited sampling rate of the DACs, two experimental cases were done for the downstream scenario: 1) using a linecard to generate one subcarrier with data, and 2)
Fig. 8: (a) The linecards including field-programmable gate array (FPGA) integrated with 31GSa/s DACs and ADCs for upstream and downstream scenarios. (b) Schematic diagrams of the full coherent transceiver and the simplified coherent transceiver. AWG: arbitrary waveform generator. CDM: coherent driver modulator. ICR: integrated coherent receiver. SP: single polarization. Het.: heterodyne.
Fig. 6: The block diagram for the burst- and continuous-mode multiple-input-multiple-output (MIMO) equalizer.
Fig. 7: The structure of a pilot-based CPE integrated with the maximum likelihood (ML) algorithm.
using an AWG to generate four subcarriers with data and one blank subcarrier. In both cases, we used a linecard for implementing the real-time DSP to process the downstream signals at the ONU, and the same card was also used to generate an upstream CAP signal with two subcarriers. Specifically, we generated a CAP signal with either the inner subcarrier or the outer subcarrier at each ONU, and then we used another linecard for the burst-mode DSP to process the upstream signal at the OLT. Fig. 9 (b) depicts the waveforms of the downstream TDM signals and the upstream TDMA signals. It is worth noting that the downstream TDM signal is a continuous signal and the upstream TDMA signal is a burst signal. Fig. 9 (c) zooms in on the rising edges and the falling edges of the burst signals. The double-sided arrows denote the dimension of 10ns. Obviously, the rising and falling edges are much less than 1ns. Simultaneously, the receiver DSPs can provide real-time estimations of laser frequency offsets. The estimated laser frequency offsets were sent to the microcontroller units (MCUs), which then control the temperature and the driving injection currents of the DFB lasers at the ONUs to achieve real-time frequency locking.
Figure 10 shows the time and frequency source allocations for the downstream and upstream tests. As Fig. 10 (a) shows, when the AWG was used at the transmitter for the downstream scenario, ONU1 and ONU2 were assigned with the dedicated subcarriers on the frequency of f4 and f3, respectively. ONU3 and ONU4 shared the subcarrier on the frequency of f2, which occupied half of the time slots. Note that although an ONU only detects one subcarrier, it can receive the power of all four data subcarriers. As Fig. 10 (b) depicts, when the FPGA linecard was used at the transmitter of OLT, we only tested the dedicated subcarrier cases with different spectral efficiency. Fig. 10 (c) depicts that ONU1 and ONU2 were assigned to different time slots of the subcarrier on the frequency of f5 for the upstream scenario. ONU3 and ONU4 shared the subcarrier on the frequency of f6.
As Fig. 11 (a) depicts, to achieve total line rates of 100Gb/s, 150Gb/s, and 200Gb/s in the downstream scenario (assuming sending the same modulation formats to all ONUs), the required ROPs are \(-36.8\)dBm, \(-32\)dBm, and \(-25\)dBm at the BER threshold of \(2\times 10^{-2}\), respectively. \(15\%\) overhead soft-decision FEC with a BER threshold of \(2\times 10^{-2}\) is considered. An EDFA was employed after the CDM at the OLT to amplify optical output power to 7dBm. Therefore, the optical power budgets are 43.8dB, 39dB, and 32dB for line rates of 100Gb/s, 150Gb/s, and 200Gb/s, respectively. It is worth noting that the EDFA can be replaced by a cost-effective booster SOA for commercial deployment. As Fig. 11 (b) shows, the required ROPs for 25Gb/s QPSK, 37.5Gb/s PCS-16QAM, and 50Gb/s 16QAM flexible-rate transmission are \(-41.8\)dBm, \(-37\)dBm, and \(-31\)dBm, respectively. The power of one subcarrier was measured as the ROP in the test cases.
Switching from O-band wavelength to C-band wavelength can reduce fiber loss. The measured total loss of 1:256 splitter and 20 km fiber is 31 dB in the lab. The situation in a real-world network may be worse, and it might be challenging
Fig. 10: Time and frequency source allocations for (a) downstream test with AWG at the transmitter and FPGA at the receiver, (b) downstream test with FPGA, and (c) upstream test with FPGA.
Fig. 9: (a) Measured spectra of downstream and upstream scenarios. (b) Measured waveforms of TDM downstream and TDMA upstream signals. (c) Rising and falling edges of the burst-mode signals.
to achieve 1:256 splitting ratio when 16QAM is modulated. Fortunately, the optical power budgets of PCS-16QAM and QPSK are less stringent. With the development of high-bandwidth transceiver devices, it is possible to further increase the symbol rate and reduce the entropy of PCS-16QAM. Thus, it is promising to simultaneously achieve 1:256 splitting ratio and 200 Gb/s downstream line rate in the real-world network.
Figure 11 (c) shows that the required ROP is approximately \(-45\)dBm for the upstream scenario with 12.5Gb/s per ONU. The ONUs have the ability to transmit data on both the inner and the outer CAP with two subcarriers for achieving the maximum rate of 25Gb/s per ONU. The required ROP should be approximately \(-42\)dBm for the upstream scenario with 25Gb/s per ONU. We used one single MZM at the transmitter of each ONU, thus the insertion loss of the ONU transmitter is much less than that of the OLT transmitter. The launch optical power was approximately 1dBm without any optical amplifier at the ONUs. Therefore, the optical power budget of the upstream scenario can achieve 43dB for a peak line rate of 100Gb/s.
Fig. 12 shows the real-time frequency-locking results for the DFB laser when the environment temperature was varied between 10\({}^{\text{o}}\)C and 60\({}^{\text{o}}\)C. The top subfigure shows the estimated frequency offset of the DFB laser using the DSP. The middle and bottom subfigures show the injection current and target temperature of the TEC (thermo-electric cooler) set by the MCU, respectively. When the environment temperature was changed, the injection current and target temperature of the TEC fluctuated transiently and immediately settled down. By controlling both the current and the temperature, the frequency offset is strictly confined within (\(-100\), \(100\)) MHz, which can be well compensated by the receiver DSP. The experimental results verify that the cost-effective DFB laser can meet the requirement for the coherent PON.
## V Conclusions
In this paper, we introduce the architectures, algorithms, and demonstrations for the TFDMA-based coherent PONs. The system architectures consist of a fully coherent transceiver at the OLT and an ultra-simple coherent transceiver at the ONU, which can greatly reduce the cost and power consumption of the ONU. Meanwhile, the Alamouti coding and specific spectra are designed to implement the proposed system architectures. Fast and low-complexity DSP algorithms are used for processing upstream and downstream signals. Based on the system architectures and the DSP algorithms, the first real-time TFDMA-based coherent PON was experimentally demonstrated to support at most 256 end users, and the peak line rates of 100Gb/s and 200Gb/s in the upstream and downstream scenarios, respectively. Meanwhile, the real-time frequency locking within a frequency offset of (\(-100\), \(100\)) MHz verifies the feasibility of a cost-effective DFB laser. In conclusion, the system architectures, DSP algorithms, and real-time demonstrations enable TFDMA-based coherent PON in the future beyond 50G PON.
Fig. 11: The BER versus ROP for (a) downstream test with AWG at the transmitter and FPGA at the receiver, (b) downstream test with FPGA at both transmitter and receiver, and (c) upstream test with FPGA at both transmitter and receiver.
Fig. 12: The real-time frequency-locking results for the DFB laser when the environment temperature was varied between 10 \({}^{\text{o}}\)C and 60\({}^{\text{o}}\)C. |
2310.04374 | Rock anisotropy promotes hydraulic fracture containment at depth | We report laboratory experiments and numerical simulations demonstrating that
the anisotropic characteristics of rocks play a major role in the elongation of
hydraulic fractures propagating in a plane perpendicular to bedding. Transverse
anisotropy leads to larger hydraulic fracture extension in the
parallel-to-bedding/divider direction compared to the
perpendicular-to-bedding/arrester direction. This directly promotes vertical
containment of hydraulic fractures in most sedimentary basins worldwide even in
the absence of any favorable in-situ stress contrasts or other material
heterogeneities. More importantly, the ratio of the energy dissipated in fluid
viscous flow in the fracture to the energy dissipated in the creation of new
surfaces is found to play a critical role on fracture elongation, with
fracture-energy dominated hydraulic fractures being the most elongated while
the viscous dominated ones remain more circular. These results open the door to
a better engineering and control of hydraulic fractures containment at depth in
view of the competition between material anisotropy and injection parameters
(fluid viscosity and rate of injection). | Guanyi Lu, Seyyedmaalek Momeni, Carlo Peruzzo, Fatima-Ezzahra Moukhtari, Brice Lecampion | 2023-10-06T17:00:32Z | http://arxiv.org/abs/2310.04374v1 | # Rock anisotropy promotes hydraulic fracture containment at depth
###### Abstract
In this paper, we present a novel approach to the study of the robustness of hydraulic fracture containment at depth. We present a novel approach to the study of the robustness of hydraulic fracture containment at depth. We present a novel approach to the study of the robustness of hydraulic fracture containment at depth. We present a novel approach to the robustness of hydraulic fracture containment at depth.
###### Abstract
We report laboratory experiments and numerical simulations demonstrating that the anisotropic characteristics of rocks play a major role in the elongation of hydraulic fractures propagating in a plane perpendicular to bedding. Transverse anisotropy leads to larger hydraulic fracture extension in the parallel-to-bedding/divider direction compared to the perpendicular-to-bedding/arrester direction. This directly promotes vertical containment of hydraulic fractures in most sedimentary basins worldwide even in the absence of any favorable in-situ stress contrasts or other material heterogeneities. More importantly, the ratio of the energy dissipated in fluid viscous flow in the fracture to the energy dissipated in the creation of new surfaces is found to play a critical role on fracture elongation, with fracture-energy dominated hydraulic fractures being the most elongated while the viscous dominated ones remain more circular. These results open the door to a better engineering and control of hydraulic fractures containment at depth in view of the competition between material anisotropy and injection parameters (fluid viscosity and rate of injection).
## Plain Language Summary
The widespread application of hydraulic fracturing for unconventional hydrocarbon production has prompted concerns about fractures extending vertically to sensitive rock layers, highlighting the need to understand fluid-driven fracturing for informed public discourse and improved industrial practices. Through laboratory experiments and numerical simulations, we show that the intrinsic anisotropic characteristics of sedimentary rocks lead to limited hydraulic fracture height growth across the bedding planes in the most common geological situations. Furthermore, we quantify the roles of elastic constants, fracture toughness, as well as the fluid injection conditions in shaping hydraulic fracture in transversely isotropic rocks. Our findings suggest that the hydraulic fracture is most elongated in the toughness-dominated regime, and the impact of rock anisotropy vanishes when the fracture propagates in the viscosity-dominated regime.
## 1 Introduction
Hydraulic fractures are widely used for the production enhancement of wells in unconventional hydro-carbon resources among other applications (Detournay, 2016). These tensile fractures propagate quasi-statically in rocks due to the injection of fluid at pressures greater than the minimum in-situ compressive stress. These fractures grow perpendicular to the minimum in-situ stress direction, which is in most sedimentary basins horizontal, such that hydraulic fractures propagate vertically (Hubbert & Willis, 1957). Controlling the vertical height growth of a hydraulic fracture has long been considered a key factor for successful applications since the desire is to create a fracture that extends to the full height of the reservoir, while preventing excessive vertical growth that could create communication pathways for unwanted fluid migration into adjacent strata (Economides & Nolte, 2000; Fisher & Warpinski, 2012; Bunger & Lecampion, 2017). Over the years, concerns have been raised over serious environmental issues, such as contamination of underground drinking water resources by upward migration of fracturing fluid (Howarth & Ingraffea, 2011; Osborn et al., 2011; Warner et al., 2012; Vidic et al., 2013; Vengosh et al., 2014; EPA, 2016) and compromised seal integrity of the caprock in geologic carbon sequestration (Schrag, 2007; Fu et al., 2017), both of which may happen as a result of unbounded vertical fracture growth. Therefore, it is essential to accurately predict and control the vertical propagation of hydraulic fractures. However, predicting fracture height is particularly challenging, as numerous field evidences suggest that the actual height of a hydraulic fracture often differs from what is predicted by state of the art hydraulic fracturing models (Smith & Montgomery, 2015). Microseismic and tiltmeter monitoring data from thousands of hydraulic fracturing treatments indicates that the induced fractures are generally more constrained in
the vertical direction and are longer laterally compared to theoretical predictions (Fisher & Warpinski, 2012; Flewelling et al., 2013).
The limitations of the vertical growth of hydraulic fractures are traditionally thought to be a result of strong variation of in-situ stresses and material properties across rock formations (Simonson et al., 1978; N. Warpinski et al., 1982; van Eekelen, 1982; Jeffrey & Bunger, 2009; Xing et al., 2018), as well as interaction with pre-existing discontinuities in/across different rock formations (Teufel & Clark, 1984; N. R. Warpinski & Teufel, 1987; X. Zhang et al., 2007; Zhou et al., 2008). However, hydraulic fractures more elongated horizontally than vertically have also been observed in homogeneous formations not exhibiting any increase in confining stress vertically that could explain this limited height growth (Ciezobka et al., 2018; Kohli & Zoback, 2021). We argue that - rock anisotropy - an intrinsic characteristic of sedimentary rocks has a first order impact on the shape of hydraulic fractures and thus their ultimate vertical extent. Unconventional hydrocarbon reservoirs are formed primarily in sedimentary basins, which have strong anisotropic material properties at a fine scale thanks to their deposition and diagenesis (Hornby et al., 1994; Sone & Zoback, 2013). More specifically, the anisotropy is caused by a common directional feature of sedimentary rocks - beds, which are generally sub-horizontal planes formed during the deposition of the sediments. Mechanical properties of sedimentary rocks, such as mudstones and shales, are found to vary substantially along different directions with respect to the bedding planes (Heng et al., 2015; S. Zhang et al., 2018; Moukhtari, 2020; Lu et al., 2021). They are widely modeled as a transversely isotropic material at the continuum scale (Jones & Wang, 1981; Johnston & Christensen, 1995; Wang, 2002b; Moukhtari et al., 2020; Lu et al., 2021). A recent theoretical study has shown that the shape of a vertical hydraulic fracture that grows perpendicular to the bedding direction in a transversely isotropic material differs remarkably from what would be expected in an isotropic medium, indicating a strong impact by the rock's anisotropic characteristics on the vertical containment of hydraulic fractures at depth (Moukhtari et al., 2020).
In the following, we bring together laboratory hydraulic fracturing experiments and numerical simulations to uncover the key factors that govern the vertical containment of hydraulic fractures at depth in a transversely isotropic rock formation. It is important to reinforce that we are here mostly interested in the propagation of planar hydraulic fractures in a plane perpendicular to bedding - a configuration of most practical relevance at depth - and do not address propagation in the bedding plane (which is typically favored at shallow depth / low confining stresses).
## 2 Extensive acoustic monitoring methods to capture laboratory hydraulic fracture evolution
A total of four hydraulic fracturing experiments are carried out on cubic blocks of Del Carmen slate in a true-triaxial load frame (Figure 1). Del Carmen slate is a finely laminated metamorphic rock with an extremely small porosity from La Bana, Leon, North-West Spain. It exhibits two typical transversely isotropic properties: (1) an anisotropic variation of the critical energy to propagate a fracture as function of the fracture growth direction with respect to the plane of isotropy (bedding plane in transversely isotropic rocks), \(K_{\rm IC}(\theta)\) (Figure 1A), and (2) five independent elastic constants \(C_{\rm ij}\) (or the corresponding anisotropic elastic moduli \(E^{\prime}(\theta)\) (Chertov, 2012; Laubie & Ulm, 2014; Moukhtari et al., 2020) and Thomsen parameters (Thomsen, 1986)). These properties have been measured in laboratory (Moukhtari, 2020) and are reported in Appendix A.
The slate specimens used in the experiments are 250\(\times\)250\(\times\)250-mm cubic blocks (Figure 1B). All samples are prepared with an axisymmetric notch (10-mm radius) that emanates from the center of the horizontal wellbore with 8-mm radius. The fracture is driven by the injection of a Newtonian fluid in the axisymmetric notch through a wellbore drilled in the center of the specimen. The injection system is separated into two parts by a choke valve:
(1) the upstream that starts from the pump and ends before the valve (under constant pumping rate, \(Q_{0}\), and pump pressure, \(P_{\rm pump}\)), and (2) the downstream that consists of the fluid passing through the valve and flowing into the fracture (with a fluid influx of \(Q_{\rm in}\) and wellbore pressure of \(P_{\rm w}\)). Three types of fluid are used as the injection fluid (Table 1): (1) Mixture of glycerol and water is used in K1 to facilitate fracture growth in a toughness-dominated regime, (2) T2 uses 99% glycerol for maintaining the propagation in a transition regime, and (3) glucose is used in M3 and M4 to target for the viscosity-dominated regime hydraulic fracturing growth. The bedding plane is set to be orthogonal to the fracture plane to replicate the in-situ condition of a vertical hydraulic fracture growth at depth in sedimentary basins (Figure 1A). We apply a sufficiently large vertical stress (normal to the bedding plane) in a true triaxial frame with the following confining stresses for all four tests: \(\sigma_{v}=20\) MPa, \(\sigma_{H\mathrm{max}}=13\) MPa, \(\sigma_{h\mathrm{min}}=0.5\) MPa. This setup maximizes the vertical extent of the created fracture and avoid any deviation of the fracture into a bedding plane.
(1) the upstream that starts from the pump and ends before the valve (under constant pumping rate, \(Q_{0}\), and pump pressure, \(P_{\rm pump}\)), and (2) the downstream that consists of the fluid passing through the valve and flowing into the fracture (with a fluid influx of \(Q_{\rm in}\) and wellbore pressure of \(P_{\rm w}\)). Three types of fluid are used as the injection fluid (Table 1): (1) Mixture of glycerol and water is used in K1 to facilitate fracture growth in a toughness-dominated regime, (2) T2 uses 99% glycerol for maintaining the propagation in a transition regime, and (3) glucose is used in M3 and M4 to target for the viscosity-dominated regime hydraulic fracturing growth. The bedding plane is set to be orthogonal to the fracture plane to replicate the in-situ condition of a vertical hydraulic fracture growth at depth in sedimentary basins (Figure 1A). We apply a sufficiently large vertical stress (normal to the bedding plane) in a true triaxial frame with the following confining stresses for all four tests: \(\sigma_{v}=20\) MPa, \(\sigma_{H\rm max}=13\) MPa, \(\sigma_{\rm{\it{min}}}=0.5\) MPa. This setup maximizes the vertical extent of the created fracture and avoid any deviation of the fracture into a bedding plane.
Extensive acoustic measurements, via both passive and active acoustic methods, are used to image the hydraulic fracture propagation. The appearance of micro-cracks adjacent to the macro-scale fracture is accompanied by the emission of transient elastic waves due
Figure 1: A, Cross sectional view of a propagating hydraulic fracture in the experiments on an anisotropic slate (right) with a conceptual sketch of the passive (acoustic emissions) and active (wave transmission) acoustic monitoring system (left). The experiments are designed to mimic a vertical hydraulic fracture propagating in a layered rock formation at depth. B, Photograph of the Carmen slate block under confinements. C, Active and passive acoustic sensor layout.
to the release of strain energy, which is referred to as acoustic emissions (AEs) (Lockner, 1993; Shah & Labuz, 1995; Chang & Lee, 2004; Hampton et al., 2018, 2019; Lu et al., 2021). Our passive acoustic monitoring network consists of 16 piezoelectric sensors mounted on all six surfaces of the block as shown in Figure 1C. Throughout the experiments, each of the 16 VS150-M Vallen resonant (at 150 KHz) piezoelectric sensors, covering frequencies from 100 KHz to 1 MHz, records AEs in a continuous mode with a sampling rate of 10 MHz. The 3D hypocenter location of the AE events are obtained by a semi-automatic algorithm using a modified Time Difference Of Arrival (TDOA) method (Kundu, 2014; Momeni et al., 2021), with the compressional-wave velocities of the rock at different orientations measured for intact specimens. The relative magnitudes of the AEs are estimated based on wave amplitudes and source-to-receiver distance (Zang et al., 1998).
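As an illustration of the localization step, a simplified least-squares TDOA solver is sketched below; it assumes a single effective P-wave velocity for readability, whereas the actual processing accounts for the direction-dependent velocities measured on intact specimens, and the function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_event(sensor_xyz, arrival_times, v_p, x0=None):
    """Least-squares TDOA hypocenter location: find the source position that
    best explains the arrival-time differences relative to the first-hit sensor
    (homogeneous-velocity simplification)."""
    sensor_xyz = np.asarray(sensor_xyz, dtype=float)   # shape (n_sensors, 3)
    t = np.asarray(arrival_times, dtype=float)
    ref = np.argmin(t)                                 # earliest arrival as reference

    def residuals(x):
        dist = np.linalg.norm(sensor_xyz - x, axis=1)
        return (dist - dist[ref]) / v_p - (t - t[ref])

    if x0 is None:
        x0 = sensor_xyz.mean(axis=0)                   # start at the array centroid
    return least_squares(residuals, x0).x
```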
In parallel to passive monitoring, an active acoustic array consisting of 16 source-receiver sensor pairs allows us to track the evolution of the macro-scale fracture. These source-receiver pairs, mounted on two opposite vertical faces parallel to the hydraulic fracture plane (Figure 1A and C), enable estimation of the fracture width at 16 locations via an analysis of transmitted waves with a 90\({}^{\circ}\) incident angle. In a three-layer geometry (rock-fluid-rock) as shown in Figure 1A, the thickness of the fluid layer (i.e., fracture width), \(w_{\rm f}\), is evaluated by matching the spectrum of the transmitted signals travelling between two facing source-receiver transducers with the predicted values (Groenenboom & Fokkema, 1998; Liu et al., 2020; Liu & Lecampion, 2022). More specifically, we solve for \(w_{\rm f}\) by minimizing the difference between the transmitted signal and the product of a reference signal and a transmission coefficient in the frequency domain (Groenenboom & Fokkema, 1998; Liu et al., 2020). Repetitive acoustic surveys are carried out at a fixed time interval (every 10 seconds), using a total of 32 Controltech resonant (at 750 KHz) piezoelectric compressional-wave transducers (16 source-receiver pairs) with frequency coverage from 100 KHz to 4 MHz. Each survey consists of 50 source excitations using the Ricker function with a peak frequency of 750 KHz that are stacked to improve the signal-to-noise ratio, and \(w_{\rm f}\) is computed at every survey.
The simultaneous passive and active acoustic monitoring provides a wealth of information on both micro-fracturing and macro-scale hydraulic fracture width evolution. Integrating these methods allows us to successfully capture the three-dimensional (3D) evolution of hydraulic fracture growth in these experiments.
Our aim is to study the impact of transverse isotropy on the overall shape of hydraulic fractures propagating in both the toughness- and viscosity-dominated propagation regimes.
The relative influence of these two dissipative mechanisms (surfaces creation and viscous flow) on hydraulic fracture growth can be quantified by a dimensionless fracture toughness \(\mathcal{K}_{m}\) obtained from scaling considerations (Savitski & Detournay, 2002; Detournay, 2004; Bunger & Detournay, 2007; Hu & Garagash, 2010; Lu et al., 2017; Lecampion et al., 2017). In fact, it is computed as the square root of the ratio of fracture creation to viscous flow energy dissipation. Accounting for a time-varying injection from a point source, it is given by
\[\mathcal{K}_{m}(t)=\frac{K_{\mathrm{IC}}t^{5/18}}{{E^{\prime}}^{13/18}{\mu^{\prime}}^{5/18}V_{\mathrm{in}}(t)^{1/6}} \tag{1}\]
where \(K_{\mathrm{IC}}\) is the fracture toughness, \(E^{\prime}=E/(1-\nu^{2})\) represents the plane strain elastic modulus, \(\mu^{\prime}=12\mu\) with \(\mu\) the dynamic viscosity of the injection fluid, and \(t=T-T_{0}\) where \(T\) and \(T_{0}\) are the absolute and fracture initiation time, respectively. \(V_{\mathrm{in}}(t)=\int_{0}^{t}Q_{\mathrm{in}}(\tau)\mathrm{d}\tau\) represents the total volume of fluid in the fracture, where \(Q_{\mathrm{in}}\) is the fluid influx into the fracture accounting for wellbore compressibility (Liu & Lecampion, 2022). A radial hydraulic fracture in an isotropic material (Savitski & Detournay, 2002) grows in the toughness-dominated regime for \(\mathcal{K}_{m}\geq 1.1\), and in the viscosity-dominated regime when \(\mathcal{K}_{m}\leq 0.32\). We consider any value ranging from 0.32 to 1.1 as a transitional regime between the two limits. Similar scaling laws hold for a transversely isotropic rock (Moukhtari et al., 2020) pending the use of an average characteristic value for the toughness and elastic modulus. The experimental conditions are summarized in Table 1. By varying the fluid injection conditions (fracturing fluid viscosity and injection rate), we aim for specific propagation regimes (toughness-dominated, viscosity-dominated, and transition regimes) in different experiments. Figure 2A gives the evolution of \(\mathcal{K}_{m}\) with normalized testing time, \(t/T_{\mathrm{exp}}\). Two experiments were performed under viscosity-dominated regime (hereafter denoted as M3 and M4), experiment K1 was in toughness-dominated regime, and T2 is considered to be in the transition regime. The pressure and AE events histories in all four experiments are plotted in Figure 3.
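A minimal sketch of how the dimensionless toughness of Eq. (1) can be evaluated and used to classify the propagation regime is given below (SI units assumed; the function name is illustrative and the thresholds follow the text above).

```python
def propagation_regime(K_Ic, E_prime, mu_prime, V_in, t):
    """Dimensionless toughness K_m(t) of Eq. (1) and the corresponding
    propagation regime of a radial hydraulic fracture."""
    K_m = K_Ic * t**(5 / 18) / (E_prime**(13 / 18) * mu_prime**(5 / 18) * V_in**(1 / 6))
    if K_m >= 1.1:
        regime = "toughness-dominated"
    elif K_m <= 0.32:
        regime = "viscosity-dominated"
    else:
        regime = "transition"
    return K_m, regime
```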
First, we focus on the toughness-dominated regime test, K1. AE data is collected throughout the experiment. A majority of the events are concentrated along the final fracture plane as the AE hypocenters (Figure 4A) overlap with the fracture plane highlighted in the post-test photograph of sample surface (also confirmed by the AE density plot in Figure S1 in Supporting Information S1). The upward growth of the hydraulic fracture along positive \(\mathbf{e}_{3}\) direction (perpendicular-to-bedding) was stopped by a specific bedding plane located \(\sim\)2 cm above the wellbore (which was visible on the specimen surfaces before the test when wetted). Although there was no interruption of the hydraulic fracture by any bedding plane below the wellbore, the fracture plane did not reach the bottom face. On the contrary, we observe larger fracture length along \(\mathbf{e}_{1}\) direction (parallel-to-bedding), as the fracture eventually extended to the full length of 250 mm in \(\mathbf{e}_{1}\) direction. The event locations projected on the 2D \(\mathbf{e}_{1}\mathbf{e}_{3}\)-plane in Figure 4C also suggest that micro-cracking extends further along the parallel-to-bedding direction compared to the perpendicular-to-bedding direction.
This finding is consistent with numerical and analytical studies that suggest an ellipse-like shape for a fluid driven fracture propagating in a transversely isotropic rock (Laubie & Ulm, 2014; Bessmerthykh & Dontsov, 2018; Dontsov, 2019; Moukhtari et al., 2020). Following these previous works, and considering that AEs generally take place in the adjacent areas of the growing fracture front, it is sensible to assume that the frontier formed by the AEs also expands with an elliptical shape. This assumption enables us to reconstruct a generalized AE front by solving a least-squares problem (Text S2 in Supporting Information S1) to fit the best ellipse for the outermost events that occur within a given time interval (green ellipse in Figure 4D). Four snapshots of the reconstructed fronts are shown in Figure 4D. The major and minor semi-axes are found to be aligned generally with the
\(\mathbf{e}_{1}\) and \(\mathbf{e}_{3}\) directions, implying a clear elongation of the hydraulic fracture growth along the parallel-to-bedding direction. To investigate the elongation of the hydraulic fracture, we measure the ratio of the fracture extent along the \(\mathbf{e}_{1}\) direction, \(a\), over its value along the \(\mathbf{e}_{3}\) direction, \(b\). As shown in Figure 4C, \(a\) increases to as high as \(\sim\)3 times \(b\), indicating a significant elongation of the hydraulic fracture along the parallel-to-bedding direction.
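The exact least-squares formulation of Text S2 is not reproduced here; as an illustration of the idea, the sketch below fits an axis-aligned ellipse (consistent with the semi-axes being aligned with the bedding-parallel and bedding-normal directions) to the outermost event locations by linear least squares. The synthetic points, center, and noise level are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_axis_aligned_ellipse(x, z):
    """Least-squares fit of A*x^2 + B*z^2 + C*x + D*z = 1 to the outermost
    AE hypocenters; returns the center (x0, z0) and semi-axes (a, b)."""
    M = np.column_stack([x**2, z**2, x, z])
    A, B, C, D = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    x0, z0 = -C / (2 * A), -D / (2 * B)
    F = 1 + C**2 / (4 * A) + D**2 / (4 * B)
    return (x0, z0), (np.sqrt(F / A), np.sqrt(F / B))

# Synthetic check: noisy points on an ellipse with a = 90 mm, b = 30 mm.
theta = np.linspace(0, 2 * np.pi, 40)
x = 10.0 + 90.0 * np.cos(theta) + rng.normal(0, 2, theta.size)
z = -5.0 + 30.0 * np.sin(theta) + rng.normal(0, 2, theta.size)
center, axes = fit_axis_aligned_ellipse(x, z)
print(center, axes, "aspect ratio a/b =", axes[0] / axes[1])
```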
To demonstrate that such elongated fracture growth is due to the rock's transversely isotropic characteristics, instead of being caused by the specific bedding plane interrupting the fracture propagation, the experimental results are compared with numerical predictions by two models: an isotropic model and a transversely isotropic one for the rock. These simulations are carried out using an extensively verified planar 3D hydraulic fracturing solver (Zia & Lecampion, 2020; Moukhtari et al., 2020) as detailed in Text S2 in Supporting Information S1. The first model solves the problem of a hydraulic fracture propagating in an isotropic medium with a constant fracture toughness (Zia & Lecampion, 2020) (independent of the propagation direction). To prevent the fracture from advancing across the observed
specific bedding plane, a jump in fracture toughness is imposed at the bedding location, to a level that is much higher than the uniform fracture toughness of the medium. As a result, the fracture initially grows in a radial shape until hitting the bedding (Figure 4D). As its propagation is partially disrupted by the bedding plane, its center begins to shift downward in an effort to maintain a somewhat radial shape, and eventually reaches the bottom surface prior to hitting both vertical faces. Substantial discrepancies are found between the numerical predictions and the reconstructed AE fronts. To summarize, for an isotropic rock, the arrest of the propagation of a planar hydraulic fracture on one side would enhance, instead of suppressing, its growth in the opposite direction. In the second model, we account for the transversely isotropic features of the medium (Zia & Lecampion, 2020; Moukhtari et al., 2020). More specifically, the rock's elastic deformation and resistance to creation of new fracture surfaces induced by fluid pressure now depend on five transversely isotropic elastic constants, as well as the anisotropic fracture toughness. Detailed comparisons between the AE fronts and the predicted fracture fronts provided in Figure 4E reveal that: (1) The AE front constantly propagates ahead of the hydraulic fracture front; (2) both fronts advance at roughly the same pace; (3) the AE clusters are scattered initially and become more concentrated in the predicted fracture front region. The elliptical AE front has a better agreement with the predicted fracture front when accounting for transverse isotropy compared to the radial shape as typically observed in hydraulic fractures in an isotropic medium. We will use the transversely isotropic model as numerical predictions for hydraulic fracture growth hereafter.
Figure 3: Evolution of pump pressure, \(P_{\rm pump}\), wellbore pressure, \(P_{\rm w}\), and number of AEs with time of all experiments. The time of hydraulic fracture (HF) initiation and end of growth are determined by clear signs such as pressure change, and the location and number of AEs. In both M3 and M4, very few AEs were detected in the first \(\sim\)100 seconds after the fracture initiation (initiation time determined by change in slope of the wellbore pressure as fluid starts to flow into the hydraulic fracture). Therefore, this period of time was disregarded in the analysis of AE front evolution.
## 4 Effect of viscous fluid dissipation on fracture elongation
Next, we demonstrate the fracture growth in experiments under other regimes: experimental results for the transition regime (T2) and the viscosity-dominated regime tests (M3
Figure 4: Experimental and simulation results of test K1. A, Hypocenter location of the AEs plotted on \(\mathbf{e_{2}e_{3}}\)-plane, superimposed on the post-test photograph of the sample surface, with the event occurrence time and magnitude indicated by the color and size of the circles, respectively. The final hydraulic fracture (HF) plane (white dashed line) is seen to be completely stopped by a visible bedding plane (purple). B, C, 3D and \(\mathbf{e_{1}e_{3}}-\)planar view of the event hypocenter locations. D, Four snapshots taken in different times demonstrating the comparisons between the reconstructed AE front and predicted fracture from assuming hydraulic fracture growth in an isotropic medium (projected on the same 250\(\times\)250-mm \(\mathbf{e_{1}e_{3}}-\)plane as in C). Events that occur within the last \(\Delta t=18\) secs before the time of every snapshot are also plotted. In the elliptical front reconstruction for K1, the events located above the bedding plane are disregarded since the hydraulic fracture was stopped by the weak plane, and these events are considered as pure micro-cracks that do not coalesce into the macro-scale fracture. E, Comparisons between AE front and predicted fracture front obtained from the hydraulic fracture solver considering a transversely isotropic medium. The grey area highlights the advancement of the AE front between two snapshots.
& M4) are displayed in Figure 5, Figure 6 and Figure S3 in Supporting Information S1, respectively. In all experiments, the final fracture plane remains vertical with little inclination (as illustrated by the AE density plots in Figure S1 in Supporting Information S1). The evolution of the AE frontier in all tests indicates that: (1) the effect of rock anisotropy is most significant in the toughness-dominated regime, which evidently promotes fracture containment in the perpendicular-to-bedding direction; (2) as the propagation regime transitions towards the viscosity-dominated regime (decreasing \(\mathcal{K}_{m}\)), the reconstructed front becomes less elliptical, and the AEs are found to be more scattered across the entire fracture plane. Post-test visual examination on the fracture path confirms larger vertical growth in the two viscosity-dominated tests compared to the toughness-dominated and transition tests.
Notably, the predicted fracture width in T2 converges to the measured one at multiple locations near the wellbore (Figure 5E), confirming that the hydraulic fracture was centered at the wellbore and remained vertical during the experiment. In the viscosity-dominated tests, regardless of the scattering in the events, it is seen that the AE front matches well with the predicted fracture front throughout the lifetime of both specimens. The fracture width in M3 (Figure S4 in Supporting Information S1) increases together with the numerical solution in both early and late times. However, a drop at an intermediate time in most measurement locations (#10, 11 and 15) is observed. This phenomenon is possibly associated with the occurrence of a fluid lag - as often observed in viscosity-dominated hydraulic fracture tests (Bunger and Detournay, 2008). Strong elasto-hydrodynamics coupling in the near-tip region of a hydraulic fracture induces cavitation such that the fluid front lags behind the fracture tip (Garagash and Detournay, 2000). Consequently, the acoustic signal cannot travel through the hydraulic fracture when it hits this near-tip nonwetted zone, which leads to erroneous estimations of the fracture opening (Liu and Lecampion, 2023).
Theoretically, the impact of rock anisotropy was found to vary between propagation regimes (Moukhtari et al., 2020). However, for a given regime, the fracture shape evolves in a self-similar manner and can be grasped by the aspect ratio between the major and minor semi-axes of the fracture footprint noted as \(a/b\). Two relations have been proposed for \(a/b\) corresponding to hydraulic fracture propagation respectively in the toughness- and viscosity-dominated regimes (Moukhtari et al., 2020). The aspect ratio is the lowest in the viscosity-dominated regime, and is found to evolve as
\[a/b\approx\left[0.76(E_{3}^{\prime}/E_{1}^{\prime})^{1/3}+0.24\right]^{-1}\]
In the toughness-dominated regime, the elongation is more pronounced and the aspect ratio scales as
\[a/b\approx\left(\frac{K_{\text{IC},3}\cdot E_{1}^{\prime}}{K_{\text{IC},1}\cdot E _{3}^{\prime}}\right)^{2}\]
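These two limiting expressions can be evaluated directly; a minimal sketch with illustrative elastic moduli and toughness values (the measured properties are given in Table A1):

```python
def aspect_ratio_viscosity(E1p, E3p):
    """Viscosity-dominated limit of the footprint aspect ratio a/b."""
    return 1.0 / (0.76 * (E3p / E1p) ** (1.0 / 3.0) + 0.24)

def aspect_ratio_toughness(K1, K3, E1p, E3p):
    """Toughness-dominated limit of the footprint aspect ratio a/b."""
    return (K3 * E1p / (K1 * E3p)) ** 2

# Illustrative anisotropy ratios only.
print(aspect_ratio_viscosity(E1p=60e9, E3p=45e9))
print(aspect_ratio_toughness(K1=1.0e6, K3=1.3e6, E1p=60e9, E3p=45e9))
```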
The values of \(a/b\) measured from lab experiments are compared to these two analytical solutions, as well as the numerical predictions (Figure 2B and C). We find in K1 that \(a/b\) firstly rises above the toughness regime limit when the hydraulic fracture growth is partially stopped by the bedding. Its value then drops and converges to the toughness limit as the hydraulic fracture regains the elliptical shape as its center is shifted downward. The reconstructed AE front in T2 is seen to be less elliptical compared to the model predictions at the beginning as the AEs are more scattered, but it approaches the numerical predictions as the hydraulic fracture propagation continues.
Interestingly, we observe an unexpected elliptical front shape in both viscosity-dominated regime tests at the beginning of propagation, and \(a/b\) ultimately decreases to \(\sim\)1 (approaching a circular footprint). Such initial uneven fracture growth is likely related to the fluid lag at the beginning of fracture growth. It has been established (Garagash, 2006; Lecampion
& Detournay, 2007; Bunger & Detournay, 2007) that although the fluid lag may be large at early-time of the propagation, it ultimately coalesces with the fracture front over a characteristic time-scale of order \({E^{\prime}}^{3}\mu^{\prime}/\sigma_{h\min}^{3}\) (where \(\sigma_{h\min}\) is the minimum confining stress). In the case of an initially significant fluid lag, the shape of the hydraulic fracture is primarily determined by the effect of the anisotropic fracture toughness (as the tip is dry) and elastic constants, which explains why an elliptical shape is observed at early time, while the fracture becomes ultimately more radial as the fluid reaches the fracture front.
The intrinsic anisotropy of the rock is also evident in the topography of the created fracture surfaces. The post-test photograph and the 3D roughness profile of part of the
Figure 5: Experimental and simulation results for the T2 test (transition regime). A\(-\)D, Post-test photograph, 3D and \(\mathbf{e_{1}e_{3}}-\)planar view of the event hypocenters compared with model predictions. E, Width evolution at source-receiver (S-R) pairs #3, 12, and 13 in T2, with sensor locations indicated on the \(\mathbf{e_{1}e_{3}}\)-plane. The width is 5\(\sim\)10 \(\mu\)m larger than the model prediction. Such discrepancy can be explained by the fact that elastic constants used in numerical modeling are based on ultrasonic wave-speed measurements, which, in general, are higher than their quasi-static values. The numerical solver thus likely underestimates the fracture width due to overestimation of the elastic stiffness constants.
fracture plane created in M3 (Figure 7A and B), as well as the main principal surface curvature plots in Figure 7C show a clear direction-dependent rough surface characterized by parallel grooves aligned with the orientation of bedding planes. The importance of heterogeneity on controlling fracture roughness has been recently well quantified for a model material (hydrogel) (Steinhardt & Rubinstein, 2022). The anisotropic fracture roughness, with a rougher texture across the bedding and a smoother texture in the parallel-to-bedding direction, can thus be attributed to different length-scales of heterogeneity in the perpendicular and parallel-to-bedding directions associated with the rock deposition. Naturally, the propagation of fractures along the rougher direction necessitates higher energy consumption compared to the smoother direction, which is thereby speculated to be one factor that causes the elongation of these fractures.
## 5 Conclusions
Clear correlation between the aspect ratio of a hydraulic fracture, \(a/b\), and the dimensionless toughness, \(\mathcal{K}_{m}\), is revealed by both laboratory experiments and numerical simulations (Figure 2A and B) - larger \(\mathcal{K}_{m}\) results in a more elongated hydraulic fracture shape that restricts the fracture growth in the perpendicular-to-bedding orientation, whereas smaller \(\mathcal{K}_{m}\) leads to a more isotropic propagation. We conclude that rock anisotropy has a dominating effect on the vertical containment/horizontal elongation of hydraulic fractures in the absence of variation in confining stresses and material properties. We have clearly demonstrated that the intrinsic layering of shale formations, which is reflected in their transversely isotropic behavior, can favor the containment of hydraulic fracture at depth when the orientation
Figure 6: Experimental and simulation results of the M3 test (viscosity-dominated regime). A\(-\)D, Post-test photographs, 3D and \(\boldsymbol{e_{1}e_{3}}-\)planar view of the event hypocenters compared with model predictions.
of the material isotropy plane is perpendicular to the minimum confining stress. Such a configuration is ubiquitous in sedimentary basins worldwide. Including the effect of rock anisotropy more systematically - with proper material characterization - is clearly needed in order to reconcile field observations further. More importantly, our results highlight the fact that the injection parameters (larger fracturing fluid viscosity and larger injection rate) can suppress the beneficial impact of material anisotropy on fracture containment by increasing the energy spent in viscous fluid flow (see Eq. (1)). The results presented here open the door to a more consistent engineering design of hydraulic fracturing treatments in shales with respect to their confinement at depth.
Figure 7: A, Post-test photograph of the direction-dependent rough fracture surface of a part of the fracture plane created in M3 (80 mm \(\times\) 100 mm). B, 3D profile of the same partial fracture plane mapped by a Keyence VR-3200 optical profilometer with a voxel resolution of 47\(\mu\)m\({}^{3}\). C, An estimate for the main principal surface curvature \(\Upsilon\) (see Text S3 in Supporting Information S1) of the original surface elevation profile is plotted at three different scales (from left to right: \(\sigma=235\mu m\), \(\sigma=471\mu m\), \(\sigma=941\mu m\)). The repeated bands of stark contrast representing high curvature are oriented in the same direction as the bedding (\(\mathbf{e}_{1}\)) and coincide with parallel grooves recognisable by direct inspection of the fracture surface.
## Appendix A Rock characterization
Mineral composition and organic content of the Del Carmen slate are determined through X-ray Powder Diffraction analysis (Moukhtari, 2020). We observe a concentration of 35.7% of laminated silicates (in particular chlorite and mica), 43.61% of quartz, and some minor constituents such as plagioclase (12.84%) and feldspars (3.15%). Values of elastic constants \(C_{\rm ij}\) are determined by ultrasonic measurements of compressional- and shear-wave velocities (Tsvankin, 2012; Wang, 2002a) (Table A1). The fracture toughness is measured by three-point loading on semicircular bending specimens (Kuruppu et al., 2014). Samples are prepared in two orientations with respect to the bedding planes to measure two extreme values of \(K_{\rm IC}\) along the parallel-to-bedding (\(K_{\rm IC,1}\)) and perpendicular-to-bedding (\(K_{\rm IC,3}\)) directions, respectively. The direction dependent anisotropic fracture toughness, \(K_{\rm IC}(\theta)\), used in the numerical predictions is then determined by a specific form (Moukhtari, 2020) (function of \(K_{\rm IC,1}\), \(K_{\rm IC,3}\), \(\theta\), and \(C_{\rm ij}\)) that ensures an exact elliptical shape for a planar fracture under uniform loading. The elastic properties appear to be highly consistent among all samples, as the standard deviations of the ultrasonic wave-speeds are, on average, less than 1% of the mean values (for example, \(V_{P}(\theta=0)=6432\pm 41\) m/s).
## Appendix B Open Research Section
The raw active acoustic data, as well as the processed experimental data, including the fluid injection records and detailed acoustic emission results are available on Zenodo ([https://doi.org/10.5281/zenodo.7738236](https://doi.org/10.5281/zenodo.7738236)). We also provide processed experimental data as source data for the figures used in this paper. The raw passive acoustic dataset for the 16 channels recorded in continuous mode is too large (several TBs) to share in a public repository but can be made available upon request.
The hydraulic fracture simulator Pyfrac is available at [https://github.com/GeoEEnergyLab-EPFL/Pyfrac](https://github.com/GeoEEnergyLab-EPFL/Pyfrac). The source code for fracture width estimation from the active acoustic measurement is available at [https://github.com/GeoEEnergyLab-EPFL/FracMonitoring.git](https://github.com/GeoEEnergyLab-EPFL/FracMonitoring.git).
This work was funded by the Swiss National Science Foundation through grants no. 160577 & no. 192237. We are grateful to Prof. Pedro M. Reis for generously providing access to the surface roughness measurement equipment. We thank Dr. T. Adatte for the X-ray powder diffraction measurement of the Del Carmen slate mineralogy.
## Author Contributions
Conceptualization: Guanyi Lu, Brice Lecampion
Data curation: Guanyi Lu, Seyyedmaalek Momeni, Carlo Peruzzo
Formal analysis: Guanyi Lu, Seyyedmaalek Momeni, Carlo Peruzzo, Fatima-Ezzahra Moukhtari, Brice Lecampion
Funding acquisition: Brice Lecampion
Investigation: Guanyi Lu, Seyyedmaalek Momeni, Carlo Peruzzo
Methodology: Guanyi Lu, Seyyedmaalek Momeni, Carlo Peruzzo, Brice Lecampion
Supervision: Brice Lecampion
## Author Declaration
The authors declare no competing interests.
|
2303.13730 | Bayesian modeling of population variance for aggregated measurements | Growth curves are commonly used in modeling aimed at crop yield prediction.
Fitting such curves often depends on availability of detailed observations,
such as individual grape bunch weight or individual apple weight. However, in
practice, aggregated weights (such as a bucket of grape bunches or apples) are
available instead. While treating such bucket averages as if they were
individual observations is tempting, it may introduce bias particularly with
respect to population variance. In this paper we provide an elegant solution
which enables estimation of individual weights using Dirichlet priors within
Bayesian inferential framework. | Elena Moltchanova, Daniel Gerhard, Rory Ellis | 2023-03-24T01:02:31Z | http://arxiv.org/abs/2303.13730v1 | # Bayesian modeling of population variance for aggregated measurements.
###### Abstract
Growth curves are commonly used in modeling aimed at crop yield prediction. Fitting such curves often depends on availability of detailed observations, such as individual grape bunch weight or individual apple weight. However, in practice, aggregated weights (such as a bucket of grape bunches or apples) are available instead. While treating such bucket averages as if they were individual observations is tempting, it may introduce bias particularly with respect to population variance. In this paper we provide an elegant solution which enables estimation of individual weights using Dirichlet priors within Bayesian inferential framework.
## 1 Introduction
Crop yield prediction is important. It enables the growers to make decision about crop management throughout the season as well as to allocate resources for the final harvesting and to make decisions about eventual processing and sales. Entering the string "crop yield prediction" into Google Scholar results in over 900 publications for the year 2020 alone. Approximately one tenth of these mention grapes. Other often studied crops include apples (106 mentions), strawberries (50 mentions), peaches (31 mentions), and kiwifruit (13 mentions). The fruit growth is usually modeled using some variety of a non-linear curve (with Fernandes et al. (2017), Lakso et al. (1995), and Coombe and McCarthy (2000) being just a few examples). For grapes, a double-sigmoidal curve Coombe and McCarthy (2000) is a common way to model both the berry and the bunch growth.
The models such as the ones in de la Fuente et al. (2015), and more recently (Ellis et al., 2020), are based on grape bunch/cluster weight data collected throughout the season. Although there is an increased push towards
using sensors and computer vision in identifying fruit sizes and weight (see Bulanon et al. (2020) for a recent review), many models are still based on physical measurements of grape bunch or cluster weights. However, in many instances, measuring each individual grape bunch (or an apple or any other fruit) becomes too arduous and expensive. Therefore in practice, bucket weights are obtained instead of individual weights. Thus, for example, rather than be provided with 20 individual bunch weights, we can instead be informed that the aggregate weight of 20 bunches was 3000 g.
A quick-and-dirty solution is to simply obtain the average weight, in this case, \(3000/20=150g\), and treat it as a single observation. However, such a simplification completely disregards the uncertainty associated with each average obtained in this way. Although such aggregation may not impact the estimate of the mean bunch weight, it does affect the estimation of variance, and thus the perceived accuracy of the prediction, which is often the ultimate goal of the exercise. Moreover, in more complex modeling, the parameter estimates may in fact be biased as well, especially if the bucket sizes differ wildly.
Therefore, in this manuscript, we propose an elegant solution to this problem. We use a Bayesian framework to assign a Dirichlet prior to the distribution of individual bunch weights within the bucket, and construct an MCMC algorithm to estimate them. We then use simulation studies to illustrate the performance of the proposed method.
## 2 Theory
Consider a continuous random variable
\[x\sim N(\mu,\tau),\]
where \(\mu\) is the mean and \(\tau\) is the precision (i.e. inverse variance) of the normal distribution. An average of a sample of size \(n\), \(\bar{x}_{n}\) will then also have a normal distribution:
\[\bar{x}\sim N(\mu,n\tau).\]
Assume a conjugate normal prior for the normal mean \(\mu\):
\[\mu\sim N(\mu_{0},\tau_{0}),\]
and a conjugate Gamma prior for the normal precision \(\tau\):
\[\tau\sim Gamma(a,b).\]
If our data consist of \(K\) samples of sizes \(n_{1},...,n_{K}\) with the respective averages \(\bar{x}_{1},...,\bar{x}_{K}\), we can derive
\[\bar{x}_{k}\sim N(\mu,n_{k}\tau),\]
and use Bayes' formula to derive the conditional posterior distributions as:
\[\mu|\cdot\sim N\left(\frac{\tau\sum_{k}n_{k}\bar{x}_{k}+\tau_{0}\mu_{0}}{\tau\sum_ {k}n_{k}+\tau_{0}},\tau\sum_{k}n_{k}+\tau_{0}\right) \tag{1}\]
and
\[\tau|\cdot\sim Gamma\left(a+0.5K,b+0.5\sum_{k}n_{k}(\bar{x}_{k}-\mu)^{2} \right). \tag{2}\]
These conditional distributions can then be used in the context of Gibbs sampling to produce the posterior distributions for the parameters \(\mu\) and \(\tau\).
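For the normal case, Eqs. (1) and (2) translate directly into a two-step Gibbs sampler. The following is a minimal sketch; the prior hyperparameters, random seed, and toy data are illustrative only, not the settings used later in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_aggregated_normal(xbar, n, n_iter=5000,
                            mu0=0.0, tau0=1e-4, a=0.01, b=0.01):
    """Gibbs sampler for (mu, tau) given bucket averages xbar of bucket
    sizes n, using the full conditionals in Eqs. (1) and (2)."""
    K = len(xbar)
    mu, tau = np.mean(xbar), 1.0
    draws = np.empty((n_iter, 2))
    for it in range(n_iter):
        prec = tau * np.sum(n) + tau0
        mean = (tau * np.sum(n * xbar) + tau0 * mu0) / prec
        mu = rng.normal(mean, 1.0 / np.sqrt(prec))
        tau = rng.gamma(a + 0.5 * K,
                        1.0 / (b + 0.5 * np.sum(n * (xbar - mu) ** 2)))
        draws[it] = mu, tau
    return draws

# Toy data: 100 buckets of 10 bunches each, true mean 250 and sd 25.
n = np.full(100, 10)
xbar = rng.normal(250.0, 25.0 / np.sqrt(n))
post = gibbs_aggregated_normal(xbar, n)
print(post[1000:].mean(axis=0))   # posterior means of (mu, tau)
```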
However, it is not always possible to derive analytically the distribution of a sum (or a sample average) of observations.
Consider the following model, where the observations have some parametric p.d.f. \(g(x_{ij}|\theta)\) with \(j=1,...,n_{i}\) and \(i=1,...,K\) dependent on parameter \(\theta\). Let the observations be aggregated into \(K\) groups of perhaps different sizes \(n_{1},n_{2},...,n_{K}\) where only the sums
\[y_{i}=\sum_{j=1}^{n_{i}}x_{ij}\qquad j=1,...,n_{i},\text{ and }i=1,...,K,\]
are reported.
Given the prior distribution \(p(\theta)\), within the Bayesian framework, one can construct an MCMC algorithm to produce a sample from the posterior distribution
\[p(\theta|\mathbf{x})\propto\prod_{i}\prod_{j}g(x_{ij}|\theta)p( \theta). \tag{3}\]
When \(x_{ij}\) are not observed directly, one can attempt to derive the distribution of the aggregate \(y_{i}\) and substitute it into Equation 3. However, since this is not always possible, we propose to amend the Metropolis-Hastings algorithm to recover the \(x_{ij}\) observations. In this case, the latent observations \(x_{ij}\) are treated as parameters.
To make this explicit, we add the following equation to our model:
\[p(y|x)=Pr(y_{i}=y^{\prime}|x_{i1},...,x_{in_{i}})=\begin{cases}1 &\text{if }\sum_{j=1}^{n_{i}}x_{ij}=y^{\prime},\\ 0&\text{otherwise}.\end{cases} \tag{4}\]
for \(i=1,...,K\).
We can thus write out the joint posterior distribution for the model as
\[p(\theta,\mathbf{x}|\mathbf{y})\propto p(\mathbf{y}|\mathbf{x})p(\mathbf{x}| \theta)p(\theta).\]
Now, for a Metropolis-Hastings step, we can propose new values of \(\mathbf{x}^{*}\) given the current ones, using a proposal distribution \(q(\mathbf{x}^{*}|\mathbf{x})\), and evaluate the rejection ratio as:
\[R = \frac{p({\bf y}|{\bf x}^{*})p({\bf x}^{*}|\theta)p(\theta)}{p({\bf y} |{\bf x})p({\bf x}|\theta)p(\theta)}\frac{q({\bf x}|{\bf x}^{*})}{q({\bf x}^{*}|{ \bf x})} \tag{5}\] \[= \frac{p({\bf x}^{*}|\theta)}{p({\bf x}|\theta)}\frac{q({\bf x}|{ \bf x}^{*})}{q({\bf x}^{*}|{\bf x})}\]
Note, that if \(\sum_{j=1}^{n_{i}}x_{ij}^{*}\neq y_{i}\) for all \(i=1,...,K\), then \(R=0\) and the proposal will be rejected. Thus, intuitively, a good proposal distribution would ensure the correct sums, while sampling around the current values of \(x_{ij}\). We thus propose the following sampling scheme.
1. Sample the weights \(w_{ij}\) from a Dirichlet distribution \[w_{i1},...,w_{in_{i}}\sim\mbox{Dirichlet}(\delta_{i}x_{i1},...,\delta_{i}x_{in _{i}}),\qquad i=1,...,K,\] (6) for some chosen \(\delta_{1},...,\delta_{K}\), and \(\sum_{j=1}^{n_{i}}w_{ij}=1\).
2. Evaluate the proposed values as \[x_{ij}^{*}=w_{ij}y_{i}\qquad\forall i,j.\]
Note, that since the weights for each group \(i=1,...,K\) come from a Dirichlet distribution
\[\sum_{j=1}^{n_{i}}x_{ij}^{*}=\sum_{j=1}^{n_{i}}w_{ij}y_{i}=y_{i}.\]
The expected value of each proposed element is the current value:
\[E(x_{ij}^{*}|x_{ij})=y_{i}\frac{\delta x_{ij}}{\sum_{j=1}^{n_{i}}(\delta x_{ ij})}=y_{i}\frac{x_{ij}}{y_{i}}=x_{ij},\]
and the variance
\[Var(x_{ij}^{*}|{\bf x})=y_{i}^{2}\frac{\frac{\delta x_{ij}}{\delta y_{i}}\left( 1-\frac{\delta x_{ij}}{\delta y_{i}}\right)}{\delta y_{i}+1}=\frac{x_{ij}(y_{ i}-x_{ij})}{\delta y_{i}+1}\]
increases when \(\delta\) decreases.
The density function for the proposal distribution for one group \(q({\bf x}_{\bf i}^{*}|{\bf x_{i}})\) can be written as
\[q({\bf x}_{\bf i}^{*}|{\bf x_{i}})=\frac{\Gamma(\delta y_{i})}{\prod_{j=1}^{n _{i}}\Gamma(\delta x_{ij})}\prod_{j=1}^{n_{i}}(x_{ij}^{*})^{\delta x_{ij}-1},\]
where \(\Gamma()\) is the gamma function. By substituting the relevant expressions into Equation 5 for the rejection rate, and accepting the proposed values of \({\bf x}\) with probability \(\min(R,1)\), one can estimate these latent variables.
The overall estimation algorithm consists of two parts: (i) sample \(\mathbf{x}\) given \(\mathbf{y}\) and \(\theta\), and (ii) sample \(\theta\) given \(\mathbf{x}\).
The above method is applicable whatever the likelihood \(p(x|\theta)\) is.
The choice of parameters \(\delta_{i}\) for the proposal distribution will intuitively have an effect on the convergence and acceptance ratio of the algorithm. Note, that it does not have to be the same for all groups \(i=1,...,K\). However, if the groups are of similar sizes, there is no reason to fine-tune group-specific \(\delta_{i}\).
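To make the two-part scheme concrete, the sketch below implements one Metropolis-Hastings update of the latent weights for a single bucket, using the Dirichlet proposal of Eq. (6), the rejection ratio of Eq. (5), and the proposal density given above. The log-normal likelihood anticipates the model of Section 3; the value of \(\delta\), the toy bucket, and the helper names are illustrative only, and a full sampler would loop this update over all buckets and alternate it with draws of \(\theta\):

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(2)

def log_q(x_eval, x_cond, delta):
    """log q(x_eval | x_cond) for one bucket, in the form given in the text."""
    y = x_cond.sum()
    a = delta * x_cond
    return gammaln(delta * y) - gammaln(a).sum() + ((a - 1) * np.log(x_eval)).sum()

def log_lik(x, mu, tau):
    """Example likelihood p(x | theta): log-normal individual weights."""
    z = np.log(x)
    return (0.5 * np.log(tau / (2 * np.pi)) * x.size
            - 0.5 * tau * np.sum((z - mu) ** 2) - z.sum())

def update_bucket(x, mu, tau, delta):
    """One Metropolis-Hastings update of the latent weights in one bucket."""
    y = x.sum()
    x_prop = rng.dirichlet(delta * x) * y   # proposal preserves the sum y
    logR = (log_lik(x_prop, mu, tau) - log_lik(x, mu, tau)
            + log_q(x, x_prop, delta) - log_q(x_prop, x, delta))
    return x_prop if np.log(rng.uniform()) < logR else x

# One bucket: 10 bunches summing to 2500 g, initialised at the bucket average.
x = np.full(10, 250.0)
for _ in range(1000):
    x = update_bucket(x, mu=np.log(250.0), tau=1.0 / 0.1**2, delta=0.5)
print(x, x.sum())   # the individual weights still sum to 2500
```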
## 3 Simulations. Log-normally distributed response.
In order to demonstrate the performance of the suggested method in recovering the original population variance from the aggregate data, we have run a simulation case study for the log-normal model. Due to the computational intensity of the algorithm with the Dirichlet step, only one simulation was run for the following set-up.
\[\log(x_{i})\sim N(\mu,\tau),\]
where \(\mu=\log(250)\) and \(\tau^{-\frac{1}{2}}=0.10\), i.e., the individual bunches are expected to be within \(20\%\) of the average bunch weight of \(250g\). The data were simulated for \(1000\) bunches aggregated into \(100\) buckets of \(10\) bunches each. We have used a vague \(Gamma(0.01,0.01)\) and informative \(Gamma(2.5,0.025)\) priors for the precision \(\tau\), and the initial values based on the actual precision and the estimate based on the aggregated sample respectively.
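A minimal sketch of this simulation set-up (the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 bunches in 100 buckets of 10; individual weights log-normal with
# mu = log(250) and sd of log-weight equal to 0.10.
mu, sigma = np.log(250.0), 0.10
weights = rng.lognormal(mu, sigma, size=(100, 10))
bucket_sums = weights.sum(axis=1)      # the observed aggregates y_i
print(bucket_sums[:3], weights.std())
```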
In each case \(10^{8}\) iterations were run, with the resulting traces shown in Figure 1; the posterior distributions after a burn-in of \(5\times 10^{6}\) iterations, thinned at the rate of \(2\times 10^{4}\), are shown in Figure 2. The estimated posterior predictive distributions for an individual observation \(p(\tilde{x}|x)\) are shown in Figure 3.
The results show that the initial values have an effect on convergence. Because there is a relatively large amount of data involved in this example, the prior does not have an appreciable effect on the final result. The standard deviation \(\sigma\) was accurately recovered, as was the distribution of an individual observation \(\tilde{x}\).
## 4 Discussion
In this paper we have demonstrated the Bayesian method of accurately estimating the population variance based on aggregated data. While the work was motivated by an application to viticulture, it can be used more widely in any context where aggregates are recorded instead of individual observations.
Treating aggregate-based averages as if they were individual observations will clearly underestimate the underlying population variance as well as potentially bias the estimation of the population mean. It is thus clearly an easy but erroneous way to go. As we demonstrate, in some situations, such as a normally distributed response, the posterior distributions for the parameters of interest
can be obtained analytically. However, in most situations, analytic derivation is not an option. The method we suggest provides an elegant solution to the problem by recovering the individual weights.
Although the method works in the sense of recovering the original parameters, there are issues that still need to be addressed. For the log-normal model, we have found the convergence to be extremely slow, both in terms of the number of iterations required and in terms of the computing time needed to complete those iterations. The two are obviously related, but sometimes a great number of iterations can be completed within a reasonable period of time. Unfortunately, that was not the case here, and it was also the reason why our simulation study of the log-normal response was very limited.
One obvious solution is to find a more efficient way to code the algorithm. Another is to make the algorithm itself more efficient. The aspect most relevant to the efficient exploration of the parameter space is the proposal distribution for the Dirichlet weights, tuned via the parameter \(\delta\). It should be noted that with each new proposal, new values for every single unobserved individual bunch weight are provided. In our simulation study, we had a total of 1000 bunches. However, in practice, there can easily be many more. Sampling so many parameters simultaneously often means that the deviations from the status quo need to be tiny for the proposal to be accepted; especially so once the
Figure 1: Log-normal Model Performance: Posterior sample traces for the standard deviation \(\tau^{-1/2}\) and an individual bunch \(x_{1}\) for a single simulated data set for informative and vague priors with the starting point for \(\tau\) being either the true value or the estimate based on the aggregated data..
sampler has converged. However, it may also result in slow mixing and a very long burn-in, particularly if the starting values are far away from the truth. This need to have larger steps during the burn-in phase and shorter steps during the convergence phase may be accommodated via an adaptive MCMC algorithm (Andrieu and Thoms (2008), Roberts and Rosenthal (2009)), and will be the next obvious step in this work.
Despite the large space for improvement in terms of computational efficiency, we believe that the proposed method demonstrates a nice way to solve the problem of incorporating the aggregated measurements into the model. Its immediate practical applications include extending the double sigmoidal grape bunch growth model of Ellis et al. (2020), but we are certain it will find use in other agricultural applications and beyond.
Figure 2: Log-normal Model Performance: Estimated posterior distributions for the mean \(\mu\) for a single simulated data set for informative and vague priors with the starting point for \(\tau\) being either the true value or the estimate based on the aggregated data.
Figure 3: Log-normal Model Performance: Estimated posterior predictive distributions for a random individual observation \(\tilde{x}\) for a single simulated data set for informative and vague priors with the starting point for \(\tau\) being either the true value or the estimate based on the aggregated data. |
2307.10916 | Enhanced photo-excitation and angular-momentum imprint of gray excitons
in WSe$_{2}$ monolayers by spin-orbit-coupled vector vortex beams | A light beam can be spatially structured in the complex amplitude to possess
orbital angular momentum (OAM), which introduces a new degree of freedom
alongside the intrinsic spin angular momentum (SAM) associated with circular
polarization. Moreover, super-imposing two twisted lights with distinct SAM and
OAM produces a vector vortex beam (VVB) in non-separable states where not only
complex amplitude but also polarization are spatially structured and entangled
with each other. In addition to the non-separability, the SAM and OAM in a VVB
are intrinsically coupled by the optical spin-orbit interaction and constitute
the profound spin-orbit physics in photonics. In this work, we present a
comprehensive theoretical investigation, implemented on the first-principles
base, of the intriguing light-matter interaction between VVBs and WSe$_{2}$
monolayers (WSe$_{2}$-MLs), one of the best-known and promising two-dimensional
(2D) materials in optoelectronics dictated by excitons, encompassing bright
exciton (BX) as well as various dark excitons (DXs). One of the key findings of
our study is the substantial enhancement of the photo-excitation of gray
excitons (GXs), a type of spin-forbidden dark exciton, in a WSe$_2$-ML through
the utilization of a twisted light that possesses a longitudinal field
associated with the optical spin-orbit interaction. Our research demonstrates
that a spin-orbit-coupled VVB surprisingly allows for the imprinting of the
carried optical information onto gray excitons in 2D materials, which is robust
against the decoherence mechanisms in materials. This observation suggests a
promising method for deciphering the transferred angular momentum from
structured lights to excitons. | Oscar Javier Gomez Sanchez, Guan-Hao Peng, Wei-Hua Li, Ching-Hung Shih, Chao-Hsin Chien, Shun-Jen Cheng | 2023-07-20T14:40:12Z | http://arxiv.org/abs/2307.10916v2 | Enhanced photo-excitation and angular-momentum imprint of gray excitons in WSe\({}_{2}\) monolayers by spin-orbit-coupled vector vortex beams
###### Abstract
A light beam can be spatially structured in the complex amplitude to possess orbital angular momentum (OAM), which introduces a new degree of freedom alongside the intrinsic spin angular momentum (SAM) associated with circular polarization. Moreover, super-imposing two twisted lights with distinct SAM and OAM produces a vector vortex beam (VVB) in non-separable states where not only complex amplitude but also polarization are spatially structured and entangled with each other. In addition to the non-separability, the SAM and OAM in a VVB are intrinsically coupled by the optical spin-orbit interaction and constitute the profound spin-orbit physics in photonics. In this work, we present a comprehensive theoretical investigation, implemented on the first-principles base, of the intriguing light-matter interaction between VVBs and WSe\({}_{2}\) monolayers (WSe\({}_{2}\)-MLs), one of the best-known and promising two-dimensional (2D) materials in optoelectronics dictated by excitons, encompassing bright exciton (BX) as well as various dark excitons (DXs). One of the key findings of our study is the substantial enhancement of the photo-excitation of gray excitons (GXs), a type of spin-forbidden dark exciton, in a WSe\({}_{2}\)-ML through the utilization of a twisted light that possesses a longitudinal field associated with the optical spin-orbit interaction. Our research demonstrates that a spin-orbit-coupled VVB surprisingly allows for the imprinting of the carried optical information onto gray excitons in 2D materials, which is robust against the decoherence mechanisms in materials. This observation suggests a promising method for deciphering the transferred angular momentum from structured lights to excitons.
gray exciton; two-dimensional materials; transition-metal dichalcogenide; twisted light; vector vortex beam; WSe\({}_{2}\).
## I Introduction
A spatially structured light beam with a cylindrically twisted phase front introduces quantized orbital angular momenta (OAM), \(L_{z}=\ell\hbar\), which serves as a novel degree of freedom for light alongside spin angular momentum (SAM), \(S_{z}=\pm\hbar\), associated with polarization of light. [1; 2; 3; 4] Such a cylindrically structured beam, also referred to as twisted light (TL) or optical vortex (OV), characterized by an unbounded quantum number \(\ell\) has been demonstrated advantageous in a variety of advanced photonic and quantum applications, ranging from optical tweezers, [5; 6] optical trapping, [7; 8] high-resolution optical microscope, [9; 10; 11] optical communication, [12; 13] to high dimensional quantum information. [14; 15; 16; 17] Besides, the co-existence of SAM and OAM in a structured light beam gives rise to intriguing optical spin-orbit-coupled phenomena, [18; 19; 20] including photonic spin Hall effect, [21; 22; 23; 24] spin-based plasmonics, [25] photonic wheel, [26; 27] optical transverse spin, [28] and longitudinal field of light. [29; 30; 31; 32; 33]
Furthermore, a structured light beam can be tailored by the controlled superposition of TLs with distinct SAM and OAM, forming a vector vortex beam (VVB) in non-separable states, where not only the complex amplitude but also the polarization of light are spatially structured and entangled with each other. [34; 35; 36; 37; 38] The exceptional characteristics of VVBs as light sources have been demonstrated to enable advanced photonics applications, [39] particle acceleration, [40; 41] vector beam multiplexing communication, [42; 43] high dimensional quantum entanglement, [44; 45] and vector vortex quantum steering. [46] The non-separability of the SAM and OAM, further coupled by the optical spin-orbit interaction (SOI), in a VVB embodies the profound spin-orbit physics of optics and naturally affect its interaction with matters, which, however, remain largely unexplored so far. Following the rapid advancement in the TL-based optics, [47] it is timely crucial to investigate the physics of the interaction between structured lights and the emergent nano-materials suited for the prospective TL-based optoelectronics.
Atomically thin transition-metal dichalcogenide monolayer (TMD-ML) is one of the most promising optoelectronic 2D materials with superior light-matter interactions that are dictated by excitons. [48; 49; 50; 51; 52] In TMD-MLs, excitons are strongly bound by the enhanced Coulomb interaction, leading to the atypical band dispersion and exciton fine structures associated with the diverse degrees of freedom inherent in excitons, including spin and valley properties as well as the center-of-mass motion of exciton. [53; 54; 55; 56; 57] The remarkable exciton fine structure of a TMD-ML enables the unambiguous spectral resolution of diverse exciton complexes, such as the BX and various DX states, [58] each possessing distinct degrees of freedom. In darkish W-based TMD-MLs, e.g. WSe\({}_{2}\), [59] the
intravalley repulsive exchange energy combined with the conduction band splitting shifts the dipole-allowed bright exciton states upwards by tens of meV and leaves the spin-forbidden dark exciton doublet as the excitonic ground states. Furthermore, the lowest doublet of dark excitons undergoes valley-mixing, resulting from weak intervalley exchange interaction, and exhibits a slight energy splitting, yielding a completely dark exciton and a slightly optically active state known as a gray exciton (GX). [60; 61; 62]
Notably, GXs have recently garnered significant attention because they combine the advantages of both bright excitons (BXs) and dark excitons (DXs), i.e., brightness and a long lifetime. [63; 56] These characteristics are highly desirable for future dark-exciton-based quantum technologies and devices. [64; 65] Nevertheless, optically accessing the GX states remains a non-trivial task and usually requires the additional aid of external fields or post-processed sample structures, such as in-plane magnetic fields, [66; 59; 62] plasmonic fields, [67] or photonic crystals in close proximity. [68; 56] The fascinating attributes of twisted light have recently stimulated a few pioneering investigations concerning their interactions with bright excitons in 2D systems. [69; 70; 71; 72; 73; 74; 75; 76; 77; 78] However, the interplay between twisted light and GXs remains largely unexplored.
In this study, we present a comprehensive theoretical investigation based on first principles, focusing on the interaction between spin-orbit-coupled VVBs and exciton states in a WSe\({}_{2}\) monolayer, including both BX and GX. We reveal that structured light can serve as an exceptional light source that optically enhances the photo-excitation of GXs in a WSe\({}_{2}\)-ML through the coupling of the longitudinal field component associated with the SOI. Furthermore, we show that a spin-orbit-coupled VVB enables the imprinting of optical information onto the optical transitions of GXs in the 2D materials.
In Section II, we begin by reviewing the electromagnetic theory of structured light and introducing the formalism for twisted lights in the Laguerre-Gaussian (LG) modes, and present the generalized theory for the light-matter interaction between generic structured light and excitons in 2D materials.
In Section III, we present the calculated results and engage in a thorough physical discussion. We calculate the momentum-dependent optical matrix elements of the twisted-light-excited exciton states in a WSe\({}_{2}\)-ML. Specifically, we focus on the photo-excitation of GXs in a WSe\({}_{2}\)-ML by spin-orbit-coupled VVBs that are formed by the controlled superposition of two twisted lights with distinct angular momenta. Finally, in Section IV, we conclude our work.
## II Theory
### Theory of Laguerre-Gaussian beams
#### ii.1.1 Vector potentials in the real space
To describe a twisted light with SAM (\(\sigma\hbar\)) and OAM (\(\ell\hbar\)), we begin with the ansatz of the vector potential in the Lorentz gauge, \(\mathbf{A}^{\sigma\ell pq_{0},L}(\mathbf{r})=\hat{\mathbf{\varepsilon}}_{\parallel}^{\sigma}u_{\ell p}(\mathbf{r})e^{iq_{0}z}\), that satisfies the paraxial Helmholtz equation, [79] where \(\mathbf{r}=(x,y,z)=(\mathbf{\rho},z)\) is the 3D coordinate position, \(\hat{\mathbf{\varepsilon}}_{\parallel}^{\sigma}=\frac{1}{\sqrt{2}}(\hat{\mathbf{x}}+i\sigma\hat{\mathbf{y}})\) is the transverse polarization labelled by the optical helicity \(\sigma=\pm 1\), \(\ell\) (\(p\)) is the index of the azimuthal (radial) mode of light, and \(q_{0}\) is the wave number of light propagating along the \(z\)-direction. [80]
\[\mathbf{A}^{\sigma\ell pq_{0},C}(\mathbf{r})=\mathbf{A}^{\sigma\ell pq_{0},L}(\mathbf{r})+ \frac{\mathbf{\nabla}\left(\mathbf{\nabla}\cdot\mathbf{A}^{\sigma\ell pq_{0},L}(\mathbf{r}) \right)}{q_{0}^{2}}, \tag{1}\]
which is established by equating the electric field expressed in terms of the vector potential in the Coulomb gauge and that in the Lorentz gauge, as shown by Refs. [83; 84]. For brevity, hereafter we shall remove the superscript \(C\), \(q_{0}\), and \(p\) and preserve only the indices of SAM (\(\sigma\)) and OAM (\(\ell\)) for the vector potential of a twisted LG beam in the fundamental radial mode (\(p=0\)), which is the main focus of this work. In the Rayleigh range, where the amplitude of light remains nearly constant along the \(z\) coordinate, the vector potential of a circularly polarized LG TL in the Coulomb gauge is solved as \(\mathbf{A}^{\sigma,\ell}(\mathbf{r})=e^{iq_{0}z}\mathbf{A}^{\sigma,\ell}(\mathbf{\rho})=e^{iq_{0}z}[\hat{\mathbf{\varepsilon}}_{\parallel}^{\sigma}A_{\parallel}^{\ell}(\mathbf{\rho})+\hat{\mathbf{z}}A_{z}^{\sigma,\ell}(\mathbf{\rho})]\), a 3D-structured light with both transverse and longitudinal components, [84] which are, respectively, given by
\[A_{\parallel}^{\ell}(\mathbf{\rho}) \approx A_{0}f_{|\ell|p}(\rho)e^{i\ell\phi}\,, \tag{2}\] \[A_{z}^{\sigma,\ell}(\mathbf{\rho}) \approx i\frac{A_{0}}{\sqrt{2}q_{0}\rho}\left((|\ell|-\sigma\ell)-\frac{2 \rho^{2}}{w_{0}^{2}}\right)f_{|\ell|p}(\rho)e^{i(\sigma+\ell)\phi}\,, \tag{3}\]
where \(\rho=\sqrt{x^{2}+y^{2}}\), \(\phi=\tan^{-1}(y/x)\) is the azimuthal angle, \(A_{0}\) is the amplitude of light, \(\ell=0,\pm 1,\pm 2,\pm 3,...\) (\(p=0,1,2,...\)) is the index of azimuthal (radial) mode, \(f_{|\ell|p}(\rho)=C_{p}^{|\ell|}L_{p}^{|\ell|}\left(\frac{2\rho^{2}}{w_{0}^{2}}\right)\left(\frac{\sqrt{2}\rho}{w_{0}}\right)^{|\ell|}\exp\left(-\frac{\rho^{2}}{w_{0}^{2}}\right)\) is the radial distribution function, \(C_{p}^{|\ell|}=\sqrt{2\,p!/\left[\pi\left(|\ell|+p\right)!\right]}\) is the normalization constant, \(L_{p}^{|\ell|}(x)\) is the associated Laguerre polynomial, and \(w_{0}\) is the beam waist of the light beam. Throughout this work, we consider the beam waist, \(w_{0}=1.5\mu\)m, and the wavelength,
532 nm, of the twisted light, with the wave number \(q_{0}\) resonant with the exciton transition, \(E_{B\mathbf{0}}^{X}=1.7\,\)eV. [85; 86] From Eq.(3), one notes that the strength of the longitudinal field in a TL increases with decreasing \(w_{0}\) and critically depends on the signs of \(\sigma\) and \(\ell\). The product \(\sigma\ell\) appearing in Eq.(3) manifests the effect of the optical SOI in the longitudinal field component.
Remarkably, the longitudinal field, \(A_{z}^{\sigma,\ell}(\mathbf{\rho})\), in Eq.(3) carries the phase term of the total angular momentum (TAM), \(e^{i(\sigma+\ell)\phi}\), while the transverse field, \(A_{\parallel}^{\ell}(\mathbf{\rho})\), in Eq.(2) is structured with the OAM only. As the electric field of a light beam is \(\mathbf{E}=i\omega\mathbf{A}\propto\mathbf{A}\) in the Coulomb gauge, such 3D-structured TLs with longitudinal field components naturally enable the photo-excitation of exciton states with out-of-plane dipole moments, such as the GX state of a TMD-ML, as shown by Fig.1(e).
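As a quick numerical check of Eqs. (2) and (3), the relative strength of the longitudinal component can be evaluated directly. The sketch below uses the beam parameters quoted above (\(w_{0}=1.5\,\mu\)m, \(\lambda=532\) nm, \(p=0\)); the overall amplitude and the sample point \((\rho,\phi)\) are arbitrary choices for illustration:

```python
import numpy as np
from scipy.special import genlaguerre, factorial

w0, lam = 1.5e-6, 532e-9
q0 = 2 * np.pi / lam
A0 = 1.0   # overall amplitude (arbitrary units)

def f_radial(rho, ell, p=0):
    """Radial profile f_{|l|p}(rho) of a Laguerre-Gaussian mode."""
    C = np.sqrt(2 * factorial(p) / (np.pi * factorial(abs(ell) + p)))
    s = np.sqrt(2) * rho / w0
    return C * genlaguerre(p, abs(ell))(s**2) * s**abs(ell) * np.exp(-rho**2 / w0**2)

def A_transverse(rho, phi, ell):
    """Transverse component of Eq. (2)."""
    return A0 * f_radial(rho, ell) * np.exp(1j * ell * phi)

def A_longitudinal(rho, phi, sigma, ell):
    """Longitudinal component of Eq. (3), with the spin-orbit factor
    (|l| - sigma*l) and the total-angular-momentum phase (sigma + l)*phi."""
    pref = 1j * A0 / (np.sqrt(2) * q0 * rho)
    return pref * ((abs(ell) - sigma * ell) - 2 * rho**2 / w0**2) \
        * f_radial(rho, ell) * np.exp(1j * (sigma + ell) * phi)

rho, phi = 0.8e-6, 0.3
for sigma, ell in [(+1, +1), (-1, -1), (+1, -1)]:
    ratio = abs(A_longitudinal(rho, phi, sigma, ell)) / abs(A_transverse(rho, phi, ell))
    print(sigma, ell, ratio)   # |A_z| / |A_par| depends on the sign of sigma*l
```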
#### ii.1.2 Vector potentials in the momentum space
For the integration with the light-matter interaction based on the exciton band structures of 2D materials, it is necessary to transform the vector potentials into the angular spectrum representation through a 2D Fourier transform. Following Eqs.(2) and (3), the Fourier transform of the vector potential, \(\mathbf{\mathcal{A}}^{\sigma,\ell}(\mathbf{q}_{\parallel})\equiv\frac{1}{\Omega}\int d^{2}\mathbf{\rho}\,\mathbf{A}^{\sigma,\ell}(\mathbf{\rho})e^{-i\mathbf{q}_{\parallel}\cdot\mathbf{\rho}}=\mathbf{\hat{\varepsilon}}_{\parallel}^{\sigma}\mathcal{A}_{\parallel}^{\ell}(\mathbf{q}_{\parallel})+\hat{\mathbf{z}}\mathcal{A}_{z}^{\sigma,\ell}(\mathbf{q}_{\parallel})\), has the transverse component derived as
\[\mathcal{A}_{\parallel}^{\ell}(\mathbf{q}_{\parallel})=\tilde{F}_{|\ell|}(q_{ \parallel})e^{i\ell\phi_{\mathbf{q}}}\,, \tag{4}\]
and the longitudinal one as
\[\mathcal{A}_{z}^{\sigma,\ell}(\mathbf{q}_{\parallel})\approx-(\mathbf{\hat{\varepsilon }}_{\parallel}^{\sigma}\cdot\hat{\mathbf{q}})\mathcal{A}_{\parallel}^{\ell}(\mathbf{q }_{\parallel})\,, \tag{5}\]
as detailed in Section SII of Supplemental Material, where \(\hat{\mathbf{q}}=\mathbf{q}/|\mathbf{q}|\) with \(\mathbf{q}=(\mathbf{q}_{\parallel},q_{0})\), \(\mathbf{q}_{\parallel}=(q_{x},q_{y})\) and \(\phi_{\mathbf{q}}=\tan^{-1}(q_{y}/q_{x})\). The complex-valued radial function is \(\tilde{F}_{|\ell|}(q_{\parallel})=(-i)^{|\ell|}F_{|\ell|}(q_{\parallel})\) with \(F_{|\ell|}(q_{\parallel})=\frac{2\pi}{\Omega}\,A_{0}^{LG}\mathbb{H}_{|\ell|}[f_{|\ell|}(\rho),q_{\parallel}]\), [77] where \(f_{|\ell|}(\rho)\equiv f_{|\ell|,p=0}(\rho)\) and \(\mathbb{H}_{|\ell|}[f_{|\ell|}(\rho),q_{\parallel}]\) is the Hankel transformation of order \(|\ell|\) of \(f_{|\ell|}(\rho)\), defined as \(\mathbb{H}_{|\ell|}[f_{|\ell|}(\rho),q_{\parallel}]\equiv\int_{0}^{\infty}d\rho\,\rho\,J_{|\ell|}(q_{\parallel}\rho)f_{|\ell|}(\rho)\), where \(J_{|\ell|}(q_{\parallel}\rho)\) represents the Bessel function of the first kind of order \(|\ell|\). [87] In turn, the vector potential as a function of coordinate position in the real space can be expressed as \(\mathbf{A}^{\sigma,\ell}(\mathbf{\rho})=\sum_{\mathbf{q}_{\parallel}}\mathbf{\mathcal{A}}^{\sigma,\ell}(\mathbf{q}_{\parallel})e^{i\mathbf{q}_{\parallel}\cdot\mathbf{\rho}}\), via the inverse Fourier transform. [73; 77]
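The radial spectrum \(F_{|\ell|}(q_{\parallel})\) and the spin-orbit factor \((\hat{\mathbf{\varepsilon}}_{\parallel}^{\sigma}\cdot\hat{\mathbf{q}})\) entering Eq.(5) can be evaluated by direct numerical quadrature of the Hankel transform; the following is a minimal sketch in which the normalization constant of \(f_{|\ell|}\), the \(2\pi/\Omega\) prefactor, and the sampled momenta are left out or chosen arbitrarily:

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

w0, lam = 1.5e-6, 532e-9
q0 = 2 * np.pi / lam

def F_hankel(q_par, ell):
    """Hankel transform of order |l| of the p = 0 radial profile, i.e. the
    radial part of the transverse amplitude in Eq. (4) up to prefactors.
    The scaled radius u = rho / w0 keeps the quadrature well conditioned."""
    def integrand(u):
        s = np.sqrt(2.0) * u
        f = s ** abs(ell) * np.exp(-u**2)        # normalization omitted
        return u * jv(abs(ell), q_par * w0 * u) * f
    val, _ = quad(integrand, 0.0, 10.0, limit=200)
    return w0**2 * val

def soi_factor(q_par, sigma, phi_q):
    """Optical spin-orbit factor (eps_sigma . q_hat) of Eq. (5)."""
    sin_theta = q_par / np.sqrt(q_par**2 + q0**2)
    return sin_theta / np.sqrt(2.0) * np.exp(1j * sigma * phi_q)

# |A_par(q)| forms rings that peak at larger q_par for larger |l|,
# while the SOI factor grows with q_par.
for ell in (0, 1, 2):
    print(ell, [abs(F_hankel(q / w0, ell)) for q in (0.5, 1.0, 2.0, 4.0)])
print([abs(soi_factor(q / w0, +1, 0.0)) for q in (0.5, 1.0, 2.0, 4.0)])
```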
The appearance of \(\mathcal{A}_{\parallel}^{\ell}(\mathbf{q}_{\parallel})\) in Eq.(5) reflects the fact that the longitudinal field in a TL fully inherits the OAM-encoded transverse spatial structures described by Eq.(4). Notably, the term \((\mathbf{\hat{\varepsilon}}_{\parallel}^{\sigma}\cdot\hat{\mathbf{q}})=(\mathbf{\hat{\varepsilon}}_{\parallel}^{\sigma}\cdot\mathbf{q}_{\parallel})/q\) appearing in Eq.(5) manifests itself as the optical SOI that couples the optical spin \((\mathbf{\hat{\varepsilon}}_{\parallel}^{\sigma})\) and the in-plane momentum
Figure 1: (a) Schematics of a vector vortex beam (VVB) formed by the superposition of two twisted lights with distinct angular momenta as a light source for the photo-generation of excitons in a WSe\({}_{2}\)-ML. BS is the abbreviation of beam splitter. In a VVB, not only the complex amplitude but also the polarization of light are structured spatially. The left square inset shows the spatially varied polarization of a VVB considered in Fig.4. The right rectangular inset presents the vector field of a circularly polarized twisted light propagating along the \(z\)-axis located at some in-plane position. (b) The exciton band structure of a WSe\({}_{2}\)-ML sandwiched by semi-infinite hBN layers calculated by solving the BSE in the Wannier tight-binding scheme established on the first-principles base. (c) The exciton fine structure of the low-lying exciton states, comprising the valley-split bright exciton bands (green circles) and the lowest gray (blue circles) and dark exciton ones (un-filled circles). (d) [(e)] shows the transverse component, \(\mathbf{D}_{S\mathbf{Q}}^{X,\parallel}\) [longitudinal component, \(\mathbf{D}_{S\mathbf{Q}}^{X,z}\) ], of the BSE-calculated transition dipole moments of the bright (green lines) and gray exciton (blue lines) states at the edge of the light cone, \(Q_{c}\approx q_{0}\equiv 2\pi/\lambda_{0}\). \(D_{B\mathbf{0}}^{X}=|\mathbf{D}_{B\mathbf{0}}^{X}|\) represents the magnitude of the dipole momentum of the bright exciton state at \(\mathbf{Q}=0\).
component (\(\mathbf{q}_{\parallel}\)) carried by the longitudinal field. Alternatively, \((\hat{\mathbf{\varepsilon}}_{\parallel}^{\sigma}\cdot\hat{\mathbf{q}})=\frac{\sin\theta_{\mathbf{q}}}{\sqrt{2}}e^{i\sigma\phi_{\mathbf{q}}}\) with \(\sin\theta_{\mathbf{q}}\equiv\frac{q_{\parallel}}{\sqrt{q_{\parallel}^{2}+q_{0}^{2}}}\) can be expressed in spherical coordinates, showing that the optical spin \(\sigma\) is fully transferred to the longitudinal field and that the strength of the optical SOI increases with increasing \(q_{\parallel}\). Combining \(\mathcal{A}_{\parallel}^{\ell}(\mathbf{q}_{\parallel})\) and \((\hat{\mathbf{\varepsilon}}_{\parallel}^{\sigma}\cdot\hat{\mathbf{q}})\), the longitudinal field expressed by Eq.(5) is shown to be imprinted with \(\hbar(\sigma+\ell)\equiv\hbar J\), the total angular momentum of the TL in the paraxial regime. [18]
Fig.2(b) and (c) [(d) and (e)] show the squared magnitude, the real part, and the imaginary part of the complex vector potential \(\mathcal{A}_{\parallel}^{\ell}(\mathbf{q}_{\parallel})\) [\(\mathcal{A}_{z}^{\sigma,\ell}(\mathbf{q}_{\parallel})\)], as functions of \(\mathbf{q}_{\parallel}\) for the polarized TLs in the LG modes with \(p=0\) and the optical angular momenta \((\sigma,\ell)=(1,1)\) and \((\sigma,\ell)=(-1,-1)\), respectively. Basically, the squared magnitudes of the vector potentials of the TLs carrying finite OAM (\(|\ell|>0\)) in the fundamental radial mode (\(p=0\)) present ring-shaped distributions over the \(\mathbf{q}_{\parallel}\) plane, whose ring sizes increase with increasing \(\ell\). [77] This indicates that TLs with greater \(\ell\) comprise more components of large \(\mathbf{q}_{\parallel}\) and, according to the momentum-conservation law, likely couple more exciton states with large in-plane momentum, \(\mathbf{Q}\). Moreover, the effects of the optical SOI become more important in the TLs with greater \(\ell\). One also notes that the ring size of the \(\mathbf{q}_{\parallel}\)-dependent magnitude of the longitudinal component, \(\left|\mathcal{A}_{z}^{\sigma,\ell}(\mathbf{q}_{\parallel})\right|^{2}\), is slightly larger than that of the transverse one, \(\left|\mathcal{A}_{\parallel}^{\ell}(\mathbf{q}_{\parallel})\right|^{2}\). With no effects of SOI, the _transverse_ component of the vector potential is decoupled from the SAM (see Eq.(4)) and remains the same for \(\sigma=+1\) and \(\sigma=-1\). Indeed, the patterns of \(\text{Re}\!\left(\mathcal{A}_{\parallel}^{\ell=1}\left(\mathbf{q}_{\parallel}\right)\right)\) and \(\text{Im}\!\left(\mathcal{A}_{\parallel}^{\ell=1}\left(\mathbf{q}_{\parallel}\right)\right)\) in Fig.2(a.2)-(a.3) are dumbbell-like, reflecting the OAM \(\ell=1\) carried by the TL. As pointed out previously, the longitudinal field in a TL inherits the total angular momentum, \(J=\sigma+\ell\), of the light. Thus, as seen in Fig.2(d.2) and (d.3) [(e.2) and (e.3)], the in-plane patterns of the real and imaginary parts of \(\mathcal{A}_{z}^{\sigma=\pm 1,\ell=\pm 1}(\mathbf{q}_{\parallel})\) are double-dumbbell-like, reflecting the TAM, \(J=\sigma+\ell=\pm 2\).
### Exciton fine structures of TMD monolayers: DFT-based studies
For the studies of excitons, we employ the theoretical methodology developed in Refs.[57; 88] to solve the Bethe-Salpeter equation (BSE) established on a first-principles base for the exciton fine structure spectra of encapsulated 2D materials. First, we calculate the quasi-particle band structures, \(\epsilon_{n\mathbf{k}}\), and the Bloch wave functions, \(\psi_{n\mathbf{k}}(\mathbf{r})\), of WSe\({}_{2}\)-MLs by using the first-principles Quantum Espresso package [89; 90] in density-functional theory (DFT) with the consideration of SOI. Figure S1 in Supplemental Material shows the calculated quasi-particle band structure of the WSe\({}_{2}\)-ML (see Section SI of Supplemental Material for details). In terms of the calculated Bloch states, the exciton states of a 2D material are expressed as \(\left|S,\mathbf{Q}\right\rangle=\frac{1}{\sqrt{\Omega}}\sum_{vc\mathbf{k}}\Lambda_{S\mathbf{Q}}(vc\mathbf{k})\,\hat{c}_{c\mathbf{k}+\mathbf{Q}}^{\dagger}\hat{h}_{v-\mathbf{k}}^{\dagger}\left|GS\right\rangle\), where \(\Omega\) is the area of the 2D material, \(\hat{c}_{c\mathbf{k}}^{\dagger}\) (\(\hat{h}_{v-\mathbf{k}}^{\dagger}\)) is the particle operator creating an electron (hole) of wavevector \(\mathbf{k}\) (\(-\mathbf{k}\)) in conduction band \(c\) (valence band \(v\)), \(\left|GS\right\rangle\) denotes the ground state of the material, \(\Lambda_{S\mathbf{Q}}(vc\mathbf{k})\) is the amplitude of the electron-hole configuration \(\hat{c}_{c\mathbf{k}+\mathbf{Q}}^{\dagger}\hat{h}_{v-\mathbf{k}}^{\dagger}\left|GS\right\rangle\) and corresponds to the solution of the BSE for the exciton in momentum space, \(S\) is the band index of the exciton state, and \(\mathbf{Q}\) is the center-of-mass momentum of the exciton. By using the Wannier90 package, [91] we transform the calculated Bloch states into a complete set of maximally localized Wannier function (MLWF) basis, in which the Kohn-Sham Hamiltonian in DFT is reformulated as a tight-binding matrix of small dimension. In the Wannier tight-binding scheme, we establish and efficiently solve the BSE with the Coulomb kernel consisting of the screened _e-h_ direct interaction and the unscreened exchange interaction to calculate the momentum-space wave function, \(\Lambda_{S\mathbf{Q}}(vc\mathbf{k})\), and the energy, \(E_{S\mathbf{Q}}^{X}\), of the exciton state, \(\left|S,\mathbf{Q}\right\rangle\) (see Section SI of Supplemental Material for more details). With the enhanced _e-h_ Coulomb interaction in a 2D material, the low-lying exciton fine structure spectrum of a TMD-ML features significant fine structure splittings, spectrally resolving the BX and various DX states. For a WSe\({}_{2}\)-ML, the DX states, which are the exciton ground states, lie \(\sim 48.8\) meV below the bright ones, as shown in Fig.1(b) and (c). [92; 93] Carefully examining the lowest DX states, one notes a small splitting within the DX doublet. Combined with the spin-orbit interaction of the quasi-particles, the inter-valley exchange interaction splits the lowest DX doublet and renders one of the two states, referred to as the gray exciton (GX), slightly bright. [61] With finite \(\mathbf{Q}\), the inter-valley _e-h_ exchange interaction splits the valley BX bands into a quasi-linear upper band, \(\left|B+,\mathbf{Q}\right\rangle\), and a parabolic lower band, \(\left|B-,\mathbf{Q}\right\rangle\). 
[94; 95] At the light cone edge where \(\left|\mathbf{Q}\right|=q_{0}\equiv Q_{c}\), the valley splitting between the upper and lower BX bands is merely 1-2 meV, much smaller than the energy separation of the BX and DX/GX states. The transition dipole moment of an exciton state is evaluated by \(\mathbf{D}_{S\mathbf{Q}}^{X}=\frac{1}{\sqrt{\Omega}}\sum_{vc\mathbf{k}}\Lambda_{S\mathbf{Q}}\left(vc\mathbf{k}\right)\mathbf{d}_{v\mathbf{k},c\mathbf{k}}\) (Eq.S3), where \(\mathbf{d}_{v\mathbf{k},c\mathbf{k}}\equiv e\left\langle\psi_{v\mathbf{k}}\right|\mathbf{r}\left|\psi_{c\mathbf{k}}\right\rangle=\frac{e\hbar}{im_{0}\left(\epsilon_{c\mathbf{k}}-\epsilon_{v\mathbf{k}}\right)}\left\langle\psi_{v\mathbf{k}}\right|\mathbf{p}\left|\psi_{c\mathbf{k}}\right\rangle\) is the dipole moment of the single-electron transition evaluated by using the theoretical method in Section SI of Supplemental Material, [95; 96; 97; 98] with \(\mathbf{p}\) the operator of linear momentum and \(m_{0}\) (\(\left|e\right|\)) the mass (the magnitude of the charge) of a free electron. Figures 1(d) and 1(e) show the in-plane and out-of-plane projections of the transition dipoles of the exciton states in the fine structure of the WSe\({}_{2}\)-ML. The transition dipole moments of the BX (GX) states are mainly in-plane
(out-of-plane) oriented. Neglecting the very slight variation of the dipole moments with respect to \(\mathbf{Q}\), the transition dipoles of the upper and lower BX, and the GX states are described by \(\mathbf{D}^{X}_{+\mathbf{Q}}=D^{X}_{B}\mathbf{\hat{Q}}\), \(\mathbf{D}^{X}_{-\mathbf{Q}}=iD^{X}_{B}\mathbf{\hat{Q}}_{\perp}\), and \(\mathbf{D}^{X}_{G\mathbf{Q}}=D^{X}_{G}\mathbf{\hat{z}}\), respectively, where \(\mathbf{\hat{Q}}=\mathbf{Q}/|\mathbf{Q}|\) and \(\mathbf{\hat{Q}}_{\perp}\equiv-\sin\phi_{Q}\hat{x}+\cos\phi_{Q}\hat{y}\) (\(\mathbf{\hat{Q}}_{\perp}\cdot\mathbf{\hat{Q}}=\mathbf{\hat{Q}}_{\perp}\cdot\mathbf{\hat{z}}=0\)), and \(D^{X}_{B}\) (\(D^{X}_{G}\)) is the magnitude of the dipole moment of the BX (GX). In addition to the strong exciton-photon interaction, the fine structure spectrum of the WSe\({}_{2}\)-ML, consisting of various exciton states with distinctly oriented dipoles, serves as an excellent test bed to explore the distinct field components in 3D-structured lights. In turn, twisted lights carrying controlled SAM and OAM enable us to selectively access and distinguish a variety of exciton states of 2D materials.
### Exciton-light interaction
In the time-dependent perturbation theory, the Hamiltonian of light-matter interaction with respect to a light described by the vector potential \(\mathbf{A}(\mathbf{r})\) is given by \(H_{LMI}\approx\frac{|e|}{2m_{0}}\mathbf{A}(\mathbf{r})\cdot\mathbf{p}\) in the weak field and rotating wave approximations. [98] Accordingly, the optical matrix element of an exciton state, \(|S,\mathbf{Q}\rangle\), is derived as \(\tilde{M}^{\sigma,\ell}_{S\mathbf{Q}}=\frac{1}{\sqrt{\Omega}}\sum_{vc\mathbf{k}}\Lambda_{S\mathbf{Q}}(vc\mathbf{k})\langle\psi_{c\mathbf{k}+\mathbf{Q}}|\frac{|e|}{2m_{0}}\mathbf{A}(\mathbf{r})\cdot\mathbf{p}|\psi_{v\mathbf{k}}\rangle\), [77] which measures the amplitude of the optical transition of the exciton state, \(|S,\mathbf{Q}\rangle\), induced by the incident TL carrying the angular momenta \(\sigma\) and \(\ell\). In terms of the optical matrix element, Fermi's golden rule formulates the rate of incoherently photo-exciting the finite-momentum exciton state, \(|S,\mathbf{Q}\rangle\), by using a TL with \((\sigma,\ell)\), as \(\Gamma^{\sigma,\ell}_{S\mathbf{Q}}=\frac{2\pi}{\hbar}|\tilde{M}^{\sigma,\ell}_{S\mathbf{Q}}|^{2}\rho(\hbar\omega=E^{X}_{S\mathbf{Q}})\), where \(\rho(\hbar\omega)\) is the density of states of light at angular frequency \(\omega\). In the electric dipole approximation, one derives
\[\tilde{M}^{\sigma,\ell}_{S\mathbf{Q}}\approx\frac{E_{g}}{2i\hbar}\mathbf{A}^{\sigma, \ell}(\mathbf{Q})\cdot\mathbf{D}^{X*}_{S\mathbf{Q}}\,, \tag{6}\]
where \(\mathbf{A}^{\sigma,\ell}(\mathbf{Q})=\mathbf{\hat{\varepsilon}}^{\sigma}_{\parallel}\mathbf{A }^{\ell}_{\parallel}(\mathbf{Q})+\mathbf{\hat{z}}\mathbf{A}^{\sigma,\ell}_{z}(\mathbf{Q})\) is the Fourier transform of the vector potential of structured light with the transverse and longitudinal components as given by Eqs.(4) and (5), and \(E_{g}=\epsilon_{c_{1}\mathbf{K}}-\epsilon_{v_{1}\mathbf{K}}\) is the energy gap of the material, where \(c_{1}\) (\(v_{1}\)) is the lowest conduction (topmost valence) band. The optical matrix elements of Eq.(6) for BX and GX states under the excitation of a TL in the LG mode with \((\sigma,\ell)\) are derived in the cylindrical coordinate and explicitly shown as below,
\[\tilde{M}^{\sigma,\ell}_{B\pm\mathbf{Q}}\approx\sigma^{(1\mp 1)/2}\frac{E_{g}}{2 \sqrt{2}i\hbar}\tilde{F}_{|\ell|}\left(Q\right)D^{X}_{B}e^{i(\sigma+\ell)\phi \mathbf{Q}} \tag{7}\]
and
\[\tilde{M}^{\sigma,\ell}_{G\mathbf{Q}}\approx-\frac{E_{g}}{2\sqrt{2}i\hbar}\tilde{ F}_{|\ell|}\left(Q\right)D^{X}_{G}e^{i(\sigma+\ell)\phi\mathbf{Q}}\sin\theta_{\mathbf{Q}}\,, \tag{8}\]
Figure 2: (a) The polarization field (pink arrows) of a TL with the SAM \(\hbar\sigma\) and OAM \(\hbar\ell\) coupled by the spin-orbital interaction (SOI). Because of the SOI, the polarization field is not purely transverse but possesses also the longitudinal field component. In the Coulomb gauge, the transverse (gray arrows) and longitudinal (red arrows) fields are parallel to the transverse and longitudinal components of the vector potential, \(\mathcal{A}^{\ell}_{\parallel}(\mathbf{q}_{\parallel})\) and \(\mathcal{A}^{\sigma,\ell}_{z}(\mathbf{q}_{\parallel})\), respectively. The gray circular arrow represents the projection of circular polarization onto the \(x\)-\(y\) plane. (b.1)-(b.3): The distributions of the squared magnitude, real part, and imaginary part of the transverse component, \(\mathcal{A}^{\ell=1}_{\parallel}(\mathbf{q}_{\parallel})\), of the vector potential for the TL with \((\sigma,\ell)=(1,1)\) over the \(\mathbf{q}_{\parallel}\)-plane. The dumbbell-like pattern of Re\((\mathcal{A}^{\ell=1}_{\parallel}(\mathbf{q}_{\parallel}))\) and Im\((\mathcal{A}^{\ell=1}_{\parallel}(\mathbf{q}_{\parallel}))\) reflects the optical OAM, \(\ell=1\), carried by the TL. The length of the white scale bar is \(q=0.1q_{0}\), for reference. (c.1)-(c.3): Same as (b.1)-(b.3) but for the TL with \((\sigma,\ell)=(-1,-1)\). Note that the _transverse_ components of the vector potentials for the TLs with the opposite angular momenta remain the same in the squared magnitudes, as shown by (b.1) and (c.1). (d.1)-(e.3): Same as (b.1)-(c.3) but for the longitudinal components, \(\mathcal{A}^{\sigma=1,\ell=1}_{z}(\mathbf{q}_{\parallel})\) and \(\mathcal{A}^{\sigma=-1,\ell=-1}_{z}(\mathbf{q}_{\parallel})\), of the vector potentials of the same TLs. Differing from the transverse components, the distribution patterns of Re\((\mathcal{A}^{\pm 1,\pm 1}_{z}(\mathbf{q}_{\parallel}))\) and Im\((\mathcal{A}^{\pm 1,\pm 1}_{z}(\mathbf{q}_{\parallel}))\) over the \(\mathbf{q}_{\parallel}\)-plane are double-dumbbell-like, resulting from the TAM, \(J=\sigma+\ell=\pm 2\), carried by the longitudinal components.
where the exponential term \(e^{i(\sigma+\ell)\phi_{\mathbf{Q}}}\) accounts for the TAM transfer from a TL to a GX and the term \(\sin\theta_{\mathbf{Q}}\equiv\frac{Q}{\sqrt{Q^{2}+q_{0}^{2}}}\) arises from the SOI, which forbids a normally incident TL from exciting a GX with \(Q=0\) but enhances the photo-generation of GX states with large \(Q\) as \(\ell\) increases. Examining the \(\mathbf{Q}\)-dependence of the optical matrix element of an exciton allows us to infer its angle-dependent optical properties, [63; 77; 99] thereby inferring the optically transferred TAM in the excited GX state. Since the valley splitting between the lower and upper BX bands is merely \(\sim 1\) meV and normally spectrally unresolvable, as seen in Fig.1(c), [100] the total transition rate of the BX doublet, \(|B\pm,\mathbf{Q}\rangle\), under the photo-excitation of a TL can be counted by \(\Gamma_{B,\mathbf{Q}}^{\sigma,\ell}\equiv\sum_{S=B\pm}\Gamma_{S\mathbf{Q}}^{\sigma,\ell}\propto\sum_{S=B\pm}|\tilde{M}_{S\mathbf{Q}}^{\sigma,\ell}|^{2}\). By contrast, the transition rate of a GX state that is spectrally well apart from the BX states can be evaluated by the optical matrix element of the specific state alone, \(\Gamma_{G\mathbf{Q}}^{\sigma,\ell}\propto|\tilde{M}_{G\mathbf{Q}}^{\sigma,\ell}|^{2}\).
## III Results and Discussion
### Photo-excitation of exciton by a single twisted light
Figure 3(a) and (b) show the contour plots of the optical transition rates, \(\Gamma_{S\mathbf{Q}}^{\sigma,\ell}\), as functions of \(\mathbf{Q}\) for the finite-momentum BX and GX states of a WSe\({}_{2}\)-ML excited by polarized TLs with \((\sigma,\ell)=(1,1)\), \((1,5)\) and \((1,15)\). Overall, the \(\Gamma_{S\mathbf{Q}}^{\sigma,\ell}\) for the non-zero \(\ell=1,5,15\) exhibit similar ring-shaped patterns over the \(\mathbf{Q}\)-plane, with the ring sizes increasing with increasing \(\ell\). This indicates that a TL with greater \(\ell\) enables the photo-generation of exciton states (both BX and GX ones) with larger \(\mathbf{Q}\), whose superposition forms a spatially more localized wave packet, as previously pointed out by Ref.[77]. Analytically, one can show that a TL with \(\ell\) most likely excites the finite-momentum BX state with \(Q=q_{\parallel}^{\ell}=\sqrt{2(\ell+1)}/w_{0}\), where the squared magnitude of \(A_{\parallel}^{\ell}(Q)\) is maximal, i.e., \(\frac{d|\mathcal{A}_{\parallel}^{\ell}(Q)|^{2}}{dQ}|_{Q=q_{\parallel}^{\ell}}=0\). Figure 3(c) shows the total transition rates \(\Gamma_{S}^{\sigma,\ell}\propto\sum_{\mathbf{Q}}\Gamma_{S\mathbf{Q}}^{\sigma,\ell}\), which take into account all the finite-momentum states of BX and GX excited by the TLs with \(\ell=0,1,...15\). Notably, the rate of photo-exciting the GX superposition states, \(\Gamma_{G}^{\sigma,\ell}\), using a TL with \(\ell\) is shown to increase linearly with increasing \(\ell\), while the rate of photo-exciting the BX ones, \(\Gamma_{B}^{\sigma,\ell}\), remains nearly unchanged against \(\ell\). Increasing the OAM of the incident TL from \(\ell=1\) to \(\ell=15\), \(\Gamma_{G}^{\sigma,\ell}\) is enhanced by over one order of magnitude. The \(\ell\)-enhanced photo-generation of GX is associated with the term of SOI, \((\mathbf{\hat{e}}_{\parallel}^{\sigma}\cdot\mathbf{\hat{q}})=\frac{q_{\parallel}}{\sqrt{2(q_{\parallel}^{2}+q_{0}^{2})}}e^{i\sigma\phi_{\mathbf{q}}}\approx\frac{1}{\sqrt{2}}\frac{q_{\parallel}}{q_{0}}e^{i\sigma\phi_{\mathbf{q}}}\propto q_{\parallel}\), in the longitudinal field of the TL as expressed by Eq.(5). Recall that \(q_{\parallel}^{\ell}\propto\sqrt{\ell+1}\). Thus, with increasing \(\ell\) of a TL, the in-plane component of momentum, \(q_{\parallel}\), carried by the TL increases, and so do the strength of the optical SOI and the magnitude of the longitudinal field, \(A_{z}^{\sigma,\ell}(\mathbf{Q})\), of Eq.(5).
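For illustration, the \(\ell\)-scaling of the GX photo-generation rate can be traced numerically from the two in-text relations \(q_{\parallel}^{\ell}=\sqrt{2(\ell+1)}/w_{0}\) and \(\sin\theta=q_{\parallel}/\sqrt{q_{\parallel}^{2}+q_{0}^{2}}\). The following minimal Python sketch is not the BSE calculation of this work; the wavelength and beam waist \(w_{0}\) below are assumed values, chosen only so that the scaling with \(\ell\) can be displayed.

```python
# Minimal numeric sketch (not the paper's BSE calculation): trace the l-scaling
# of the GX photo-generation rate through the SOI factor. Assumed illustrative
# parameters: lambda0 = 750 nm and beam waist w0 = 2 um; only the trend in l
# matters here, not the absolute numbers.
import numpy as np

lam0 = 750e-9                  # assumed light wavelength, m
q0 = 2 * np.pi / lam0          # light-cone edge momentum, q0 = 2*pi/lambda0
w0 = 2e-6                      # assumed beam waist, m

for l in range(0, 16):
    q_par = np.sqrt(2 * (l + 1)) / w0        # most likely excited |Q| (in-text relation)
    sin2 = q_par**2 / (q_par**2 + q0**2)     # SOI factor sin^2(theta) entering the GX rate
    print(f"l = {l:2d}   q_par/q0 = {q_par/q0:.4f}   sin^2(theta) = {sin2:.3e}")

# For q_par << q0, sin^2(theta) ~ (q_par/q0)^2 = 2(l+1)/(q0*w0)^2: the GX rate
# grows roughly linearly with l, while the BX rate carries no such factor.
```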
Although the TAM phase term, \(e^{i(\sigma+\ell)\phi_{\mathbf{Q}}}\), is encoded in the complex optical matrix elements of the BX and GX states, as shown in Eqs.(7) and (8), the phase information is not preserved in the squared magnitudes of the optical matrix elements. These squared magnitudes, which measure the optical transition rates of the exciton states under incoherent conditions, therefore cannot reveal the angular momenta transferred to the exciton states. However, we will demonstrate that the combination of TLs with different optical angular momenta, forming so-called vector vortex beams (VVBs), serves as an exceptional light source that enables the optical angular momenta transferred from the TLs to the GXs to be revealed even under incoherent conditions.
### Photo-excitation of excitons by using a VVB
Generally, the superposition of two TLs, denoted by \(|\sigma\ell\rangle\) and \(|\sigma^{\prime}\ell^{\prime}\rangle\), respectively, can be expressed by
\[|\sigma\ell,\sigma^{\prime}\ell^{\prime};\alpha,\beta\rangle= \cos\left(\beta/2\right)|\sigma\ell\rangle+e^{i\alpha}\sin\left(\beta/2 \right)|\sigma^{\prime}\ell^{\prime}\rangle\,, \tag{9}\]
in terms of the azimuthal angle \(\alpha\) and the polar angle \(\beta\) in the higher-order Poincaré sphere representation, [101; 102; 35; 36] as presented in Fig.4. In Fig.4, the north and south poles represent the TL basis states in single LG modes, \(|\sigma\ell\rangle\) and \(|\sigma^{\prime}\ell^{\prime}\rangle\), respectively. The superposition state, \(|\sigma\ell,\sigma^{\prime}\ell^{\prime};\alpha,\beta\rangle\), with \(\beta\neq 0,\pi\) is represented by points located on the sphere surface in between the poles.
In fact, the superposition state of structured light, \(|\sigma\ell,\sigma^{\prime}\ell^{\prime};\alpha,\beta\rangle\), with distinct SAM (\(\sigma^{\prime}=-\sigma\)) and OAM (\(\ell\neq\ell^{\prime}\)) forms a VVB, [37; 34; 103] which is structured in both polarization and amplitude over 3D space [104] and is promising for frontier photonic applications, [105] e.g. laser material processing, [106; 107] optical encoding/decoding in communication, [108] and microscopy. [109]
The vector potential of such a VVB is given by \(\mathbf{\mathcal{A}}^{\sigma,\ell,-\sigma,\ell^{\prime}}(\mathbf{q}_{\parallel};\alpha,\beta)=\cos\left(\beta/2\right)\mathbf{\mathcal{A}}^{\sigma,\ell}(\mathbf{q}_{\parallel})+e^{i\alpha}\sin\left(\beta/2\right)\mathbf{\mathcal{A}}^{-\sigma,\ell^{\prime}}(\mathbf{q}_{\parallel})\) and leads to the corresponding complex optical matrix element for an exciton in the state \(|S,\mathbf{Q}\rangle\), \(\tilde{M}_{S\mathbf{Q}}^{\sigma,\ell,-\sigma,\ell^{\prime}}(\alpha,\beta)=\cos\left(\beta/2\right)\tilde{M}_{S\mathbf{Q}}^{\sigma,\ell}+e^{i\alpha}\sin\left(\beta/2\right)\tilde{M}_{S\mathbf{Q}}^{-\sigma,\ell^{\prime}}\). Following Eq.(4), one can show that the squared magnitude of the transverse component of the vector potential in the angular spectrum representation is \(\left|\mathbf{\mathcal{A}}_{\parallel}^{\sigma,\ell,-\sigma,\ell^{\prime}}(\mathbf{q}_{\parallel};\alpha,\beta)\right|^{2}=\cos^{2}(\beta/2)F_{|\ell|}(q_{\parallel})^{2}+\sin^{2}(\beta/2)F_{|\ell^{\prime}|}(q_{\parallel})^{2}\). One notes that \(\left|\mathbf{\mathcal{A}}_{\parallel}^{\sigma,\ell,-\sigma,\ell^{\prime}}(\mathbf{q}_{\parallel};\alpha,\beta)\right|^{2}\) is independent of the azimuthal angle, \(\phi_{\mathbf{q}}\). Hence, \(\left|\mathbf{\mathcal{A}}_{\parallel}^{1,1,-1,-1}(\mathbf{q}_{\parallel};\alpha,\beta)\right|^{2}\) exhibits the isotropic contours over the \(\mathbf{q}_{\parallel}\)-plane, as shown in
Fig.4(a), and does not preserve the optical information of \(\ell\) carried by the TL basis that is encoded in the phase term, \(e^{i\ell\phi_{q}}\), of Eq.(4). In Fig.4, the dark circular panels present the spatially varying polarizations of the VVBs over the \(\mathbf{q}_{\parallel}\)-plane. [35; 105; 110]
By contrast, the squared magnitude of the longitudinal component of the vector potential of the same VVB is derived as \(|\mathbf{\mathcal{A}}_{z}^{\sigma,\ell,-\sigma,\ell^{\prime}}(\mathbf{q}_{\parallel};\alpha,\beta)|^{2}=\frac{\sin^{2}\theta_{q}}{2}\left[\cos^{2}\frac{\beta}{2}F_{|\ell|}\left(q_{\parallel}\right)^{2}+\sin^{2}\frac{\beta}{2}F_{|\ell^{\prime}|}\left(q_{\parallel}\right)^{2}+\sin\beta\,F_{|\ell|}\left(q_{\parallel}\right)F_{|\ell^{\prime}|}\left(q_{\parallel}\right)\cos\left(\Delta J\,\phi_{q}+\alpha-\frac{\pi}{2}(|\ell^{\prime}|-|\ell|)\right)\right]\) and is \(\phi_{q}\)-dependent, as long as \(\beta\neq 0,\pi\) and \(\Delta J\equiv J^{\prime}-J=(\sigma^{\prime}+\ell^{\prime})-(\sigma+\ell)\neq 0\).
Interestingly, the \(\phi_{q}\)-dependence of \(|\mathbf{\mathcal{A}}_{z}^{\sigma,\ell,-\sigma,\ell^{\prime}}(\mathbf{q}_{\parallel};\alpha,\beta)|^{2}\) is characterized by the winding number \(\Delta J=\ell^{\prime}-\ell-2\sigma\), which reflects the difference of TAM between the two TL basis states. Fig.4(b) shows the distribution of the squared magnitude of the longitudinal component of the vector potential, \(|\mathbf{\mathcal{A}}_{z}^{1,\,1,\,-1,\,-1}(\mathbf{q}_{\parallel};\alpha,\beta)|^{2}\), over the \(\mathbf{q}_{\parallel}\)-plane for the VVB superposed from the TLs with \((\sigma,\ell)=(1,1)\) and \((\sigma^{\prime},\ell^{\prime})=(-1,-1)\), respectively. Indeed, we observe anisotropic patterns of \(|\mathbf{\mathcal{A}}_{z}^{1,\,1,\,-1,\,-1}(\mathbf{q}_{\parallel};\alpha,\beta=\pi/2)|^{2}\) with four-fold rotational symmetry, matching \(\Delta J=-4\) of the VVB.
Further, from Eq.(7) one can derive the total transition rate of the spectrally unresolvable BX doublet with \(\mathbf{Q}\) under the excitation of a VVB, \(\Gamma_{B,\mathbf{Q}}^{\sigma,\ell,-\sigma,\ell^{\prime}}\propto\sum_{S=\pm}|\tilde{M}_{S\mathbf{Q}}^{\sigma,\ell,-\sigma,\ell^{\prime}}|^{2}=\left(\frac{E_{g}D_{B}^{X}}{2\hbar}\right)^{2}\left[\cos^{2}\frac{\beta}{2}F_{|\ell|}(Q)^{2}+\sin^{2}\frac{\beta}{2}F_{|\ell^{\prime}|}(Q)^{2}\right]\). As expected, the transition rate of the BX doublet, \(\Gamma_{B\mathbf{Q}}^{\sigma,\ell,-\sigma,\ell^{\prime}}\), excited by a VVB is \(\phi_{Q}\)-independent and exhibits an isotropic distribution over the \(\mathbf{Q}\)-plane, as shown by Fig.5(a) for the VVB with \((\sigma,\ell)=(1,1)\) and \((\sigma^{\prime},\ell^{\prime})=(-1,-1)\).
For a GX, the transition rate, \(\Gamma_{G,\mathbf{Q}}^{\sigma,\ell,\sigma^{\prime},\ell^{\prime}}(\alpha,\beta) \equiv|M_{G,\mathbf{Q}}^{\sigma,\ell,\sigma^{\prime},\ell^{\prime}}(\alpha,\beta) |^{2}\), is derived as
\[\Gamma_{G,\mathbf{Q}}^{\sigma,\ell,\sigma^{\prime},\ell^{\prime}}(\alpha,\beta)=\left(\frac{E_{g}D_{G}^{X}}{2\hbar}\right)^{2}\frac{\sin^{2}\theta_{Q}}{2}\left[\cos^{2}\frac{\beta}{2}F_{|\ell|}(Q)^{2}+\sin^{2}\frac{\beta}{2}F_{|\ell^{\prime}|}(Q)^{2}+F_{|\ell|}(Q)F_{|\ell^{\prime}|}(Q)\sin\beta\cos\left(\Delta J\,\phi_{Q}+\left(\alpha-\Delta|\ell|\frac{\pi}{2}\right)\right)\right]\,, \tag{10}\]
where \(\Delta|\ell|\equiv|\ell^{\prime}|-|\ell|\).
The first two terms in Eq.(10) can be viewed as the sum of the squared magnitudes of the optical matrix elements of the GX under the excitation of the two non-interfering TL basis states of the VVB; they depend only on the magnitude of \(\mathbf{Q}\) and remain invariant as \(\phi_{\mathbf{Q}}\) varies. The last term, the cross-term, arises from the coherent interference between the two TL basis states and explicitly shows the \(\phi_{Q}\)-dependence, which is importantly associated with the difference of TAM between the TL basis states, \(\Delta J\). As \(\alpha-\Delta|\ell|\,\pi/2\) is simply a constant phase offset, the cross-term, \(\propto\cos\left(\Delta J\,\phi_{\mathbf{Q}}+\alpha-\Delta|\ell|\frac{\pi}{2}\right)\), varies sinusoidally with the winding number \(n=|\Delta J|\) as \(\phi_{\mathbf{Q}}\) is rotated.
Figure 3: (a1)-(a3): Density plots of the optical transition rates, \(\Gamma_{B\mathbf{Q}}^{\sigma\ell}\) as functions of \(\mathbf{Q}\) for the finite momentum BX states of a WSe\({}_{2}\)-ML under the excitation of polarized TLs with the SAM and OAM, \((\sigma,\ell)=(1,1),(1,5)\) and \((1,15)\), respectively. (b1)-(b3): Density plots of \(\Gamma_{G\mathbf{Q}}^{\sigma\ell}\) for the TL-excited finite-momentum GX states. All of the contour plots follow the same colormap on the leftmost side. For reference, the length of the horizontal bar in white color represents the magnitude of \(0.1Q_{c}\). (c) The total transition rate of all TL-excited finite-momentum BX (green) and GX states (blue) as a function of \(\ell\) of TL. Note that the transition rate of a GX linearly increases with increasing \(\ell\), while that of a BX remains nearly unchanged against \(\ell\).
Therefore, by utilizing non-separable VVBs as light sources, one can decode the angular momentum difference (\(\Delta J\)) within the VVB by analyzing the angle-dependent optical spectrum that is correlated with the \(\mathbf{Q}\)-dependence of \(\Gamma^{\sigma,\ell,\sigma^{\prime}\ell^{\prime}}_{G,\mathbf{Q}}\), [63] thereby inferring the optically transferred TAM in the excited GX state. The cross-term is sizeable only when the product \(F_{|\ell|}(Q)F_{|\ell^{\prime}|}(Q)\) and the factor \(\sin\beta\) are both significant; these are maximized for \(\ell=-\ell^{\prime}\) and \(\beta=\pi/2\), respectively. For a VVB with \(\ell=-\ell^{\prime}\) and \(\sigma=-\sigma^{\prime}\), the winding number of the cross-term is \(n=|\Delta J|=|\ell^{\prime}+\sigma^{\prime}-\ell-\sigma|=2|\ell+\sigma|\). Figure 5(a) shows the squared magnitudes of the optical matrix elements, \(\Gamma^{+1,1,-1,-1}_{B,\mathbf{Q}}(\alpha,\beta)\), of the BX doublet under the excitation of the VVBs formed by the superposition of the TLs, \(|1,1\rangle\) and \(|-1,-1\rangle\), with the different geometric angles, \((\alpha,\beta)=(0,0)\), \((0,\pi)\), \((0,\pi/2)\) and \((\pi,\pi/2)\). The four selected vector vortex beams are indicated by the north pole, south pole, and the two positions at the equator of the higher-order Poincaré sphere. Under the excitation of the same vector vortex beams, Figure 5(b) shows the squared magnitudes of the \(\mathbf{Q}\)-dependent optical matrix elements, \(\Gamma^{+1,1,-1,-1}_{G,\mathbf{Q}}\), for the GX states.
As expected from the preceding analysis, the donut-like distribution of \(\Gamma^{+1,1,-1,-1}_{B,\mathbf{Q}}(\alpha,\beta)\) over the \(\mathbf{Q}\)-space for the BX doublet under the excitation of the superposed TLs remains invariant under variations of \(\alpha\) and \(\beta\) (see Sec. SIII). By contrast, the distribution of \(\Gamma^{+1,1,-1,-1}_{G,\mathbf{Q}}(\alpha,\beta)\) over the \(\mathbf{Q}\)-plane for the GX states varies with the geometric angles \(\alpha\) and \(\beta\). In particular, at the equator (\(\beta=\pi/2\)), where the VVB is the maximal superposition of TLs, the \(\phi_{Q}\)-varying
patterns of \(\Gamma_{G,\mathbf{Q}}^{+1,1,-1,-1}(\alpha,\pi/2)\) exhibit anisotropic patterns with the four-fold (\(n=4\)) rotational symmetry that directly reflects the \(|\Delta J|=4\) carried by the incident VVB. Generalizing the analysis to TLs carrying arbitrary OAM, one can show that the OAM transferred to a GX can be inferred from the \(n\)-fold pattern of \(\Gamma_{G,\mathbf{Q}}^{+1,\ell,-1,-\ell}(0,\pi/2)\) according to the formulation,
\[|\ell|=(n-2)/2 \tag{11}\]
The calculated \(\mathbf{Q}\)-dependent patterns of \(\Gamma_{G,\mathbf{Q}}^{+1,\ell,-1,-\ell}(\alpha,\pi/2)\) for the GX states excited by the VVBs in the higher order modes with \(\ell=2,3,4\) are presented in Fig.S2 of Supplemental Material, confirming the prescription for extracting the transferred angular momentum from the \(n\)-fold rotational symmetry of the \(\mathbf{Q}\)-dependent pattern of the magnitudes of the optical matrix elements of the GXs excited by VVBs. Note that the angular momenta are encoded in the \(n\)-fold petal-like pattern of the magnitude of the GX transition rate and should be robust against decoherence in materials.
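As a quick numerical cross-check of the angular structure behind Eq.(11), the short Python sketch below evaluates the \(\phi_{Q}\)-dependent part of Eq.(10) for a VVB built from \((\sigma,\ell)\) and \((-\sigma,-\ell)\), counts the dominant angular harmonic \(n\), and applies the read-out rule \(|\ell|=(n-2)/2\). The radial form factors are set to unity, an assumption made purely to isolate the angular pattern, so the sketch probes only the winding number and not the absolute rates.

```python
# Minimal sketch of the angular structure of Eq.(10) for a VVB with (sigma,l)
# and (-sigma,-l): count the n-fold symmetry of the phi_Q-dependent cross-term
# and check the read-out rule |l| = (n-2)/2 of Eq.(11). The radial form factors
# F_{|l|}(Q) are set to 1 (assumed), since only the angular pattern is probed.
import numpy as np

def lobe_count(sigma, l, alpha=0.0, beta=np.pi/2):
    lp, sp = -l, -sigma                       # the second TL of the VVB
    dJ = (sp + lp) - (sigma + l)              # total-angular-momentum difference
    d_absl = abs(lp) - abs(l)
    phi = np.linspace(0, 2*np.pi, 4096, endpoint=False)
    # Eq.(10) with F_{|l|}(Q) = F_{|l'|}(Q) = 1 and the overall prefactor dropped
    rate = (np.cos(beta/2)**2 + np.sin(beta/2)**2
            + np.sin(beta) * np.cos(dJ*phi + alpha - d_absl*np.pi/2))
    spec = np.abs(np.fft.rfft(rate - rate.mean()))
    return int(np.argmax(spec))               # dominant angular harmonic = n

for l in (1, 2, 3, 4):
    n = lobe_count(sigma=+1, l=l)             # expected: n = 2|l + sigma| = 2(l+1)
    print(f"l = {l}: n-fold pattern n = {n}, inferred |l| = (n-2)/2 = {(n-2)//2}")
```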
## IV Conclusion
In conclusion, we present a comprehensive investigation based on first principles, focusing on the light-matter interaction between structured lights carrying optical angular momenta and tightly bound excitons in 2D materials. We show that the photo-excitation of a specific type of spin-forbidden dark exciton, i.e. the gray exciton, is greatly enhanced by incident twisted lights that carry orbital angular momentum and possess the longitudinal field component associated with the interaction between spin and orbital angular momenta. Moreover, we investigate the superposition of two twisted lights with distinct SAM and OAM, resulting in the formation of a vector vortex beam (VVB) that is spatially engineered in both complex amplitude and polarization. Our research demonstrates that a spin-orbit-coupled VVB in a non-separable form surprisingly allows for the imprinting of the carried optical information onto gray excitons in 2D materials, which is robust against the decoherence
Figure 5: Density plots of the \(\mathbf{Q}\)-dependent transition rates, (a) \(\Gamma_{B,\mathbf{Q}}^{+1,1,-1,-1}(\alpha,\beta)\), of the finite-momentum BX doublet, \(|B\pm,\mathbf{Q}\rangle\), (b) \(\Gamma_{G,\mathbf{Q}}^{+1,1,-1,-1}(\alpha,\beta)\), of the finite-momentum GX state, \(|G,\mathbf{Q}\rangle\), under the excitation of the VVBs in the TL-superposition states, \(|1,1;-1,-1;\alpha,\beta\rangle\), with the different geometric angles, \((\alpha,\beta)=(0,0)\), \((0,\pi)\), \((0,\pi/2)\) and \((\pi,\pi/2)\), in the higher order Poincaré sphere representation. The schematic inset in each panel illustrates the optical transition corresponding to each case. For reference, the length of the horizontal bar in white color represents the magnitude of \(Q=0.1Q_{c}\). Notably, \(\Gamma_{G,\mathbf{Q}}^{+1,1,-1,-1}(\alpha,\beta=\pi/2)\) exhibits the anisotropic patterns with the four-fold rotation symmetry (\(n=4\)), from which the angular momenta, \(\ell\) and \(\ell^{\prime}\) \((=-\ell)\), carried by the TL basis can be inferred according to Eq.(11).
mechanisms in materials. These studies unveil the indispensable role of gray excitons in twisted-light-based optoelectronics and suggest the utilization of VVB for transferring optical information onto 2D materials.
## Acknowledgement
This study is supported by the National Science and Technology Council of Taiwan, under contract MOST 109-2112-M-009-018-MY3.
|
2304.09917 | Power law cosmology in modified theory with thermodynamics analysis | In this paper, we consider a cosmological model in $ f(R, G) $ gravity in a
flat space-time, where $ R $ is the Ricci scalar and $ G $ is the Gauss-Bonnet
invariant. The function $ f(R, G) $ is taken as a linear combination of $ R $
and an exponential function of $ G $. We analyze the observational constraints
under a power law cosmology which depends on two physical parameters: the
Hubble constant $ H_0 $ and the deceleration parameter $ q $. We constrain
these two dependent parameters using the latest 77 points of the OHD data, 1048
points of the Pantheon data, and the joint data OHD+Pantheon and compare the
results with the $ \Lambda $CDM. Also, we speculate constraints using a
simulated data set for the future JDEM (Joint Dark Energy Mission)/Omega,
supernovae survey. We see that $ H_0 $ is in very close agreement with some of
the latest results from the Planck Collaboration that assume the $ \Lambda $CDM
model. Our work in power law cosmology better fits the Pantheon data than the
earlier analysis \cite{Kumar:2011sw, Rani:2014sia}. However, the constraints
obtained on $ H $ average, $ <H_0> $ and $ q $ average, $ <q> $ using the
simulated data set for the future JDEM/Omega, supernovae survey are found to be
inconsistent with the values obtained from the OHD and the Pantheon data.
Additionally, we discuss statefinder diagnostics and see that the power law
models approach the standard $\Lambda $CDM model ($ q\rightarrow -1 $). This
model satisfies the Generalized Second Law of Thermodynamics. Finally, we
conclude that the power law cosmology in $ f(R, G) $ gravity explains most of
the distinguished attributes of evolution in cosmology. | J. K. Singh, Shaily, Anirudh Pradhan, Aroonkumar Beesham | 2023-04-19T18:44:20Z | http://arxiv.org/abs/2304.09917v2 | # Power law cosmology in modified theory with higher order curvature term
###### Abstract
In this paper, we consider a cosmological model in \(f(R,G)\) gravity in flat space-time, where \(R\) is the Ricci scalar and \(G\) is the Gauss-Bonnet invariant. Here, the function \(f(R,G)\) is taken as a linear combination of \(R\) and an exponential function of \(G\). We analyze the observational constraints under a power law cosmology which depends on two parameters, viz., the Hubble constant \(H_{0}\) and the deceleration parameter \(q\). We examine the three sets of constraints \(H_{0}=68.119^{+0.028}_{-0.12}\ kmS^{-1}Mpc^{-1}\), \(q=-0.109^{+0.014}_{-0.014}\); \(H_{0}=70.5^{+1.3}_{-0.98}\ kmS^{-1}Mpc^{-1}\), \(q=-0.25^{+0.15}_{-0.15}\) and \(H_{0}=69.103^{+0.019}_{-0.10}\)\(KmS^{-1}Mpc^{-1}\), \(q=-0.132^{+0.014}_{-0.014}\), obtained by using the latest 77 points of the \(H(z)\) data, 1048 points of the \(Pantheon\) data and the joint data of \(H(z)+Pantheon\) at the \(1\sigma\) level, respectively, We compare our results with the results of the \(\Lambda\)CDM model. We find that our estimate of \(H_{0}\) is in very close agreement with some of the latest results from the Planck Collaboration that assume the \(\Lambda\)CDM model. Our work in power law cosmology provides a better fit to the \(Pantheon\) data than the earlier analysis. We also discuss statefinder diagnostics and see that the power law models approach the standard \(\Lambda\)CDM model (\(q\rightarrow-0.5\)). Finally, we conclude that in \(f(R,G)\) gravity, power law cosmology explains most of the distinguished attributes of evolution in cosmology.
Keywords: FLRW universe, Power law, Cosmological parameters, MCMC method, Om diagnostic.
## I Introduction
Current standard observations including type Ia Supernovae (\(SNeIa\)), the cosmic microwave background (\(CMB\)) radiation, large scale structure (\(LSS\)), the Planck satellite, baryon acoustic oscillations (\(BAO\)) and the Wilkinson microwave anisotropy probe (\(WMAP\)) provide strong evidence about the accelerated expansion of the universe. It is noticed that modified gravity may describe the accelerated expansion of the universe in a better way. As we know, the model of modified gravity is a simple gravitational alternative to the dark energy model. The idea behind these approaches to dark energy consists of adding additional gravitational terms to the Einstein-Hilbert action. This results in changing the evolution of the universe at early or late times. Many examples of such models in modified gravity abound in the literature [1; 2; 3]. During the inflationary era, the Universe expanded at an extremely rapid rate. The inflationary era came to light during the late \(70^{\prime}\)sin the early \(80^{\prime}s\), which solved some of the problems of the big bang model. Bouncing cosmological models may be an acceptable an acceptable description of the universe at early and late times and fit observations. This can be described by modified gravity in a unified way.To explain the accelerated expansion in standard general relativity, a phantom fluid or field is required. This phantom field leads eventually to a big rip, i.e., to a crushing type singularity. [4].
The late-time acceleration of the universe can also be described by modified gravity. If we replace the scalar curvature \(R\) in the Einstein-Hilbert action by \(f(R)\), where \(f(R)\) is arbitrary, then we get \(f(R)\) gravity. This theory is simple, viable and quite successful. There are many modifications of general relativity. If the Lagrangian is a function of both \(R\) and the trace \(T\) of the energy momentum tensor, then we get \(f(R,T)\) theory [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. The \(T\) term is introduced to take into account quantum effects, viscosity and heat conduction. The late time cosmic acceleration can also be explained. \(f(R,T)\) gravity has been subjected to observational constraints. On the other hand an interesting alternative to \(f(R)\) gravity is \(f(R,G)\). Here \(G\) is the Gauss-Bonnet invariant constructed from the invariants \(R_{\mu\nu}R^{\mu\nu}\) and \(R_{\mu\nu\alpha\zeta}R^{\mu\nu\alpha\zeta}\), where \(R^{\mu\nu}\) is the Ricci tensor, and \(R_{\mu\nu\alpha\zeta}\) is the Riemann tensor. In the literature, there are several works have been done that show that \(f(R,G)\) gravity is capable of describing inflation and late-time acceleration [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. Here, our main interest is to analyse the physical parameters of universe in \(f(R,G)\) gravity.
The standard cosmological model plays a big role to understand the inflationary phase of the universe. Nevertheless, in the literature there are many different models to explain the main features of the universe. Models based on a power-law of the scale factor are quite successful to solve the age, horizon and flatness problems whihc occur in the standard model [32; 33; 34; 35; 36; 37; 38; 39]. Sethi _et al._ discussed an open linear coasting cosmological model based on a power law model [40]. Shafer used observation data and studied a robust model using a power law [41]. Some remarkable works on other types of modified theories have been carried out by several authors [42; 43; 44; 45].
The present work is organised as follows: In Sec.II, we evaluate the Einstein field equations for \(f(R,G)\) gravity. Using a power law for the scale factor, we have calculated the pressure and energy density in terms of the deceleration parameter \(q\). In Sec. III, we constrain the best fit values of \(H_{0}\) and \(q\) using MCMC simulation. In Sec. IV, we discuss the cosmological parameters one by one, and also observe the viability of the energy conditions. In the same section, the statefinder and \(Om\) diagnostics are studied. Finally, we summarize the outcomes of the obtained model.
## II The action and cosmological solutions
### Field Equations
The action of modified Gauss-Bonnet gravity in four dimension space-time is: [31; 4]
\[S=\int\left[\frac{f(R,G)}{2\kappa}\right]\sqrt{-g}d^{4}x+S_{m}, \tag{1}\]
where \(\kappa=8\pi G\), and \(S_{m}\) is the matter Lagrangian, which depends upon \(g_{\mu\nu}\) and matter fields. The Gauss-Bonnet invariant \(G\) is defined as \(G=R^{2}+R_{\mu\nu\alpha\zeta}R^{\mu\nu\alpha\zeta}-4R_{\mu\nu}R^{\mu\nu}\). The Gauss-Bonnet invariant is obtained from \(R_{\mu\nu\alpha\zeta}\), \(R_{\mu\nu}=R_{\mu\zeta\nu}^{\zeta}\) and \(R=g^{\alpha\zeta}R_{\alpha\zeta}\). From the equation of action (1), the gravitational field equations are derived as
\[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}F(G)+(2RR_{\mu\nu}-4R_{\mu\alpha }R_{\nu}^{\alpha}+2R_{\mu}^{\alpha\zeta\tau}R_{\nu\alpha\zeta\tau}-4g^{i\alpha }g^{j\zeta}R_{\mu i\nu j}R_{\alpha\zeta})F^{\prime}(G)+4[\nabla_{\alpha} \nabla_{\nu}F^{\prime}(G)]R_{\mu}^{\alpha}\\ -4g_{\mu\nu}[\nabla_{\alpha}\nabla_{\zeta}F^{\prime}(G)]R^{ \alpha\zeta}+4[\nabla_{\alpha}\nabla_{\zeta}F^{\prime}(G)]g^{i\alpha}g^{j \zeta}R_{\mu i\nu j}+2g_{\mu\nu}[\square F^{\prime}(G)]R-2[\nabla_{\mu}\nabla_ {\nu}F^{\prime}(G)]R\\ -4[\square F^{\prime}(G)]R_{\mu\nu}+4[\nabla_{\mu}\nabla_{\alpha }F^{\prime}(G)]R_{\nu}^{\alpha}=\kappa T_{\mu\nu}^{m}, \tag{2}\]
where \(T_{ij}^{m}\) is the energy momentum tensor arising from \(S_{m}\). The flat FLRW space-time metric is:
\[ds^{2}=-dt^{2}+a^{2}(t)(dx^{2}+dy^{2}+dz^{2}), \tag{3}\]
where the symbols have their usual meanings. Now, we calculate the Einstein field equations using Eqs. (2) and (3) as:
\[F(G)+6H^{2}-GF^{\prime}(G)+24H^{3}\dot{G}F^{\prime\prime}(G)=2\kappa\rho, \tag{4}\]
\[6H^{2}+4\dot{H}+F(G)+16H\dot{G}(\dot{H}+H^{2})F^{\prime\prime}(G)-GF^{\prime}(G)+8H^{2}\ddot{G}F^{\prime\prime}(G)+8H^{2}\dot{G}^{2}F^{\prime\prime\prime}(G)=-2\kappa p, \tag{5}\]
Here \(H=\frac{\dot{a}(t)}{a(t)}\) is the Hubble parameter and \(\dot{a}(t)\equiv\frac{da}{dt}\). Also, we have
\[R=6(2H^{2}+\dot{H}), \tag{6}\]
\[G=24H^{2}(H^{2}+\dot{H}). \tag{7}\]
In the present model, we take \(f(R,G)=R+\alpha e^{-G}\), i.e., \(F(G)=\alpha e^{-G}\); the exponential term encodes the departure from general relativity. Here \(\alpha\) is an arbitrary positive constant.
### Power law cosmology
To implement power law cosmology, we take the scale factor \(a(t)\) as [46; 47]
\[a(t)=a_{0}(\frac{t}{t_{0}})^{\zeta}, \tag{8}\]
where \(a_{0}\) is the value of \(a(t)\) at present, and \(\zeta\) is a parameter that is dimensionless and parameter. Now, the Hubble parameter can be described by means of the scale factor as
\[H\equiv\frac{\dot{a}}{a}=\frac{\zeta}{t}. \tag{9}\]
Also
\[H_{0}=\frac{\zeta}{t_{0}}. \tag{10}\]
Also, since we know the relation between the scale factor and redshift, i.e. \(a(t)=\frac{a_{0}}{1+z}\), where \(z\) is the redshift, \(H\) can be written in terms of \(z\) as
\[H(z)=H_{0}(1+z)^{\frac{1}{\zeta}}. \tag{11}\]
To understand the history of the universe, we consider the cosmological parameters like the pressure, energy density, EoS parameter, Hubble parameter, deceleration parameter, etc. The acceleration or deceleration phase of the universe can be measured by a dimensionless quantity which is known as the deceleration parameter. The deceleration parameter \(q\) is defined as:
\[q=-\frac{\ddot{a}}{aH^{2}}. \tag{12}\]
Now if \(q>0\), we have a decelerating universe, if \(q<0\), then it indicates acceleration and if \(q=0\), then we have expansion at a constant rate. Eqs. (8), (9) and (12) yield
\[q=\frac{1}{\zeta}-1. \tag{13}\]
Thus, we represent the Hubble parameter in terms of the deceleration parameter \(q\) and the redshift as
\[H(z)=H_{0}(1+z)^{(1+q)}. \tag{14}\]
The energy density and the pressure can be obtained by solving Eqs. (4) and (5) which are given as
\[\rho=\frac{\alpha e^{24H_{0}^{4}q(z+1)^{4q+4}}\left(24H_{0}^{4}q(z+1)^{4q+4} \left(96H_{0}^{4}(q+1)(z+1)^{4q+4}-1\right)+1\right)+6H_{0}^{2}(z+1)^{2q+2}}{ 2\kappa}, \tag{15}\]
\[p=\frac{\alpha e^{24H_{0}^{4}q(z+1)^{4q+4}}\left(24H_{0}^{4}q(z+1)^{4q+4} \left(3072H_{0}^{8}q(q+1)^{2}(z+1)^{8q+8}+16H_{0}^{4}(q+1)(9q+5)(z+1)^{4q+4}+1 \right)-1\right)}{2\kappa}\\ +\frac{2H_{0}^{2}(2q-1)(z+1)^{2q+2}}{2\kappa}, \tag{16}\]
\[\omega=\frac{p}{\rho}. \tag{17}\]
For further analysis, we take \(\alpha\) and \(\kappa\) equal to unity and constrain the model parameters \(H_{0}\) and \(q\) using recent observational data sets.
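For concreteness, Eqs. (14)-(17) can be transcribed directly into a minimal Python sketch, shown below. It uses \(\alpha=\kappa=1\) as stated above and, purely for illustration, inserts the best-fit \((H_{0},q)\) values quoted later with \(H_{0}\) in km s\({}^{-1}\) Mpc\({}^{-1}\); at these numerical values the exponential Gauss-Bonnet term underflows to zero, so the present-day EoS reduces to \(\omega\simeq(2q-1)/3\). The sketch is illustrative only and does not reproduce the normalization of the figures.

```python
# Minimal sketch of Eqs.(14)-(17) with alpha = kappa = 1 (as stated above).
# H0 is inserted in km/s/Mpc purely for illustration; at these values the
# exponential Gauss-Bonnet correction exp(24*H0^4*q*(1+z)^(4q+4)) underflows
# to zero, so omega(z=0) reduces to (2q-1)/3.
import numpy as np

def hubble(z, H0, q):
    return H0 * (1 + z)**(1 + q)                                   # Eq.(14)

def rho(z, H0, q, alpha=1.0, kappa=1.0):
    A = 24 * H0**4 * q * (1 + z)**(4*q + 4)
    B = 96 * H0**4 * (q + 1) * (1 + z)**(4*q + 4)
    return (alpha*np.exp(A)*(A*(B - 1) + 1)
            + 6*H0**2*(1 + z)**(2*q + 2)) / (2*kappa)              # Eq.(15)

def pressure(z, H0, q, alpha=1.0, kappa=1.0):
    A = 24 * H0**4 * q * (1 + z)**(4*q + 4)
    X = (3072*H0**8*q*(q + 1)**2*(1 + z)**(8*q + 8)
         + 16*H0**4*(q + 1)*(9*q + 5)*(1 + z)**(4*q + 4) + 1)
    return (alpha*np.exp(A)*(A*X - 1)
            + 2*H0**2*(2*q - 1)*(1 + z)**(2*q + 2)) / (2*kappa)    # Eq.(16)

for label, H0, q in [("OHD", 68.119, -0.109), ("Pantheon", 70.5, -0.25),
                     ("OHD+Pantheon", 69.103, -0.132)]:
    w0 = pressure(0.0, H0, q) / rho(0.0, H0, q)                    # Eq.(17) at z = 0
    print(f"{label:13s} H0 = {H0:6.3f}  q = {q:+.3f}  w(0) = {w0:+.3f}")
```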
## III Observational constraints
In this section, observational data sets are used to constrain the values of \(H_{0}\) and \(q\) which appear in the power law Hubble parametrization (14). In the present model, we use the \(H(z)\) and \(Pantheon\) data sets, and their joint combination.
### H(z) Data set
Here, we use the OHD (77 points) as compiled by Shaily [48]. Now, the best fit values of \(H_{0}\) and \(q\) are obtained from the usual chi-square test. The chi-square is given by:
\[\chi^{2}_{HD}(H_{0},q)=\sum_{i=1}^{77}\frac{[H(H_{0},q,z_{i})-H_{obs}(z_{i})]^{ 2}}{\sigma^{2}_{z_{i}}}, \tag{18}\]
where \(H_{obs}(z_{i})\) and \(H(H_{0},q,z_{i})\) are the observed and theoretical values, respectively, and \(\sigma_{z_{i}}\) is the standard deviation of the measured \(H(z_{i})\).
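A minimal Python sketch of Eq. (18) is given below. The actual 77-point OHD compilation of Ref. [48] is not reproduced here; the `ohd` array holds a few placeholder \((z,H_{obs},\sigma)\) rows purely to show the computation.

```python
# Minimal sketch of the chi-square of Eq.(18) for the OHD sample. The `ohd`
# rows below are placeholders, NOT the 77-point compilation of Ref.[48].
import numpy as np

ohd = np.array([            # placeholder (z, H_obs, sigma) rows
    (0.07, 69.0, 19.6),
    (0.40, 95.0, 17.0),
    (1.30, 168.0, 17.0),
])

def H_model(z, H0, q):
    return H0 * (1 + z)**(1 + q)                             # Eq.(14)

def chi2_HD(H0, q, data=ohd):
    z, Hobs, sig = data.T
    return np.sum(((H_model(z, H0, q) - Hobs) / sig)**2)     # Eq.(18)

print(chi2_HD(68.119, -0.109))
```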
### Pantheon Data set
We use the Pantheon compilation, which consists of 1048 points in the redshift range \(0.01<z<2.26\). The data are collected from different supernovae surveys, e.g. \(CfA1-4\), \(CSP\), \(SDSS\), \(SNLS\), \(PS1\) and \(high-z\), which contribute 147, 25, 335, 236, 279 and 26 SNeIa in the ranges \(0.01<z<0.07\), \(0.01<z<0.06\), \(0.03<z<0.40\), \(0.12<z<1.06\), \(0.02<z<0.63\) and \(0.73<z<2.26\), respectively [49; 50; 51; 52; 53; 54]. SNeIa play a key role in investigating the expansion rate. Therefore, to relate the theoretically predicted apparent magnitude (\(m\)) and absolute magnitude (\(M\)), corrected for colour and stretch, we compute the distance modulus \(\mu_{Th}(z_{i})\) as,
\[\mu(z)=-M+m=\mu_{0}+5LogD_{L}(z), \tag{19}\]
where \(D_{L}(z)\) is the luminosity distance, and \(\mu_{0}\) is the nuisance parameter. These are given by:
\[D_{L}(z)=cD_{n}(1+z)\int_{0}^{z}\frac{1}{H(z^{*})}dz^{*}, \tag{20}\]
where
\[D_{n}(z)=\begin{cases}\frac{\sinh(\sqrt{\Omega_{k}})}{H_{0}\sqrt{\Omega_{k}}},\text{for}&\Omega_{k}>0\\ 1,\text{for}&\Omega_{k}=0\\ \frac{\sin(\sqrt{\Omega_{k}})}{H_{0}\sqrt{\Omega_{k}}},\text{for}&\Omega_{k}< 0\end{cases} \tag{21}\]
and
\[\mu_{0}=5Log\Big{(}\frac{H_{0}^{-1}}{1Mpc}\Big{)}+25, \tag{22}\]
respectively.
Now, the minimum \(\chi^{2}\) function is given as
\[\chi^{2}_{Pan}(H_{0},q)=\sum_{i=1}^{1048}\left[\frac{\mu_{th}(H_{0},q,z_{i})- \mu_{obs}(z_{i})}{\sigma_{\mu(z_{i})}}\right]^{2}. \tag{23}\]
where Pan stands for the observational Pantheon data set, \(\sigma_{\mu(z_{i})}\) indicates the standard error of the observed value, \(\mu_{th}\) is the theoretical distance modulus, and \(\mu_{obs}\) is the observed distance modulus.
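Eqs. (19)-(23) can likewise be sketched in Python for the spatially flat case (\(D_{n}=1\)). The sketch below folds the nuisance parameter \(\mu_{0}\) into the standard combined form \(\mu=5\log_{10}(D_{L}/\mathrm{Mpc})+25\), with \(c\) in km s\({}^{-1}\) and \(H\) in km s\({}^{-1}\) Mpc\({}^{-1}\); the `sn` array is a placeholder, not the 1048-point Pantheon catalogue.

```python
# Minimal sketch of Eqs.(19)-(23) for the flat case (D_n = 1): theoretical
# distance modulus for the power-law H(z) and a Pantheon-style chi-square.
# The `sn` rows are placeholders, NOT the real Pantheon catalogue.
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458                           # speed of light, km/s

def H_model(z, H0, q):
    return H0 * (1 + z)**(1 + q)             # Eq.(14), H0 in km/s/Mpc

def D_L(z, H0, q):
    # Eq.(20) with D_n = 1 (flat universe); result in Mpc
    integral, _ = quad(lambda zz: 1.0 / H_model(zz, H0, q), 0.0, z)
    return C_KMS * (1 + z) * integral

def mu_th(z, H0, q):
    return 5 * np.log10(D_L(z, H0, q)) + 25   # Eqs.(19),(22) combined

def chi2_pan(H0, q, sn):
    z, mu_obs, sig = sn.T
    mu = np.array([mu_th(zi, H0, q) for zi in z])
    return np.sum(((mu - mu_obs) / sig)**2)   # Eq.(23)

sn = np.array([(0.1, 38.3, 0.15), (0.5, 42.3, 0.15), (1.0, 44.1, 0.15)])  # placeholder
print(chi2_pan(70.5, -0.25, sn))
```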
### Joint Data set (H(z)+Pantheon)
By performing joint statistical analysis using \(H(z)\) and Pantheon data sets, we can obtain stronger constraints. Therefore, the chi-square function for joint data sets can be written as
\[\chi^{2}_{joint}=\chi^{2}_{HD}+\chi^{2}_{PAN}. \tag{24}\]
Figure 1: The \(H_{0}\)-\(q\) likelihood contours for the \(H(z)\), \(Pantheon\) and \(H(z)+Pantheon\) data sets.
## IV Results
For a flat universe, we evaluate the best fit values of \(H_{0}\) and \(q\) for the \(H(z)\), \(Pantheon\), and joint data sets, respectively. For this purpose, we perform the coding in Python, where we use the Markov chain Monte Carlo (MCMC) method implemented in the Python module "emcee", and plot the 2-D posteriors with \(1\sigma\) and \(2\sigma\) likelihood contours. For the \(H(z)\) data set, the best fit values are \(H_{0}=68.119^{+0.028}_{-0.12}\) and \(q=-0.109^{+0.014}_{-0.014}\) (see Fig. 1a). In Fig. 1b, we can see that for the \(Pantheon\) data set, the best fit values are \(H_{0}=70.5^{+1.3}_{-0.8}\) and \(q=-0.25^{+0.15}_{-0.15}\). For the joint data set, we obtain the best fit values \(H_{0}=69.103^{+0.019}_{-0.10}\) and \(q=-0.132^{+0.014}_{-0.014}\), as observed in Fig. 1c. In our work, we notice that \(q_{0}\) is not very close to \(-0.5\), which is the approximate present-day value for the \(\Lambda\)CDM model. In the refs. [55; 56], it is pointed out that modified gravity theories could admit different values for \(q_{0}\). With these best fit values, we plot the error bar plots for the Hubble data set and the SNeIa data set. In Fig. 2, one can compare the present model with the \(\Lambda\)CDM model.
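A minimal, self-contained sketch of the sampling step with emcee is shown below. The flat prior ranges and the tiny placeholder data set are assumptions made for illustration; they are not the settings used to produce Fig. 1.

```python
# Minimal sketch of sampling (H0, q) with emcee from a Gaussian likelihood
# built on the chi-square of Eq.(18). Placeholder data and assumed flat priors.
import numpy as np
import emcee

ohd = np.array([(0.07, 69.0, 19.6), (0.40, 95.0, 17.0), (1.30, 168.0, 17.0)])  # placeholder

def chi2_HD(H0, q):
    z, Hobs, sig = ohd.T
    return np.sum(((H0 * (1 + z)**(1 + q) - Hobs) / sig)**2)

def log_prob(theta):
    H0, q = theta
    if not (50.0 < H0 < 90.0 and -1.0 < q < 1.0):   # assumed flat priors
        return -np.inf
    return -0.5 * chi2_HD(H0, q)

ndim, nwalkers = 2, 32
p0 = np.array([68.0, -0.1]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000)
flat = sampler.get_chain(discard=500, thin=10, flat=True)
print(np.percentile(flat, [16, 50, 84], axis=0))    # 1-sigma summary per parameter
```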
### Physical Parameters \(\rho\), \(p\) and \(\omega\)
In this section, we analyse the evolving behaviour of the energy density \(\rho\) and the pressure \(p\). From Eqs. (15) and (16), it is clear that the values of \(\rho\) and \(p\) are functions of \(z\), and the value of \(\omega\) is in terms of \(q\) only, _i.e._, a constant. So, to understand the evolution of these parameters, we plot the graphs.
Fig. 3a shows the evolution of the energy density against the redshift. For high redshift \(\rho\) is very large, and as \(z\) decreases, \(\rho\) also decreases for the whole range of \(z\). As \(z\rightarrow-1\), the energy density \(\rho\to 0\). Further, Fig. 3b shows the evolution of pressure \(p\) against redshift \(z\), and we observe that the pressure is negative, which corresponds to accelerated expansion.
From Eqs. (15), (16) and (17), we calculate the present value of the EoS parameter \(\omega\) as -0.406, -0.5, -0.421 for the \(H(z)\), \(Pantheon\) and their joint data sets, respectively. These values of the EoS parameter show that at present our model is in the quintessence region (see Fig. 3c).
Figure 2: The error bar plots for the \(OHD\) and SNeIa data sets show the similarity between our model and \(\Lambda\)CDM.
### Energy Conditions
Energy conditions (EC's) have relevance to singularities in general relativity. We wish to deduce the energy conditions for \(f(R,G)\) modified gravity [31]. Using the expressions for the energy density and pressure that we derived earlier, we find that the NEC (null energy condition), WEC (weak energy condition), SEC (strong energy condition), and DEC (dominant energy condition) are given by:
* NEC \(\Leftrightarrow\)\(\rho+p\geq 0\),
* WEC \(\Leftrightarrow\)\(\rho\geq 0\), \(\rho+p\geq 0\),
* SEC \(\Leftrightarrow\)\(\rho+3p\geq 0\), \(\rho+p\geq 0\),
* DEC \(\Leftrightarrow\)\(\rho\geq 0\), \(\rho\pm p\geq 0\),
These conditions are illustrated in fig. 4. The NEC and DEC are satisfied, but the SEC is violated, in keeping with the idea of accelerated expansion of the universe.
Figure 3: The plots of \(\rho\), \(p\) and \(\omega\) against redshift \(z\)
### Cosmographic Parameters
To understand the universe's expansion history, many cosmological parameters are studied, which are expressed in the form of higher order derivatives of the scale factor. Therefore, to explore the dynamics of the universe, these parameters are very helpful. For example, the Hubble parameter \(H\) shows the expansion rate of the universe, the deceleration parameter \(q\) tells about the phase transition of the universe, and the jerk parameter \(j\), snap parameter \(s\), lerk parameter \(l\) and the parameter \(m\) help to investigate dark energy models and their dynamics. These are defined as:
\[H=\frac{\dot{a}}{a};\ \ q=-\frac{\ddot{a}}{aH^{2}};\ \ j=\frac{\dddot{a}}{aH^{3}};\ \ s=\frac{\ddddot{a}}{aH^{4}};\ \ l=\frac{a^{(5)}}{aH^{5}};\ \ m=\frac{a^{(6)}}{aH^{6}}. \tag{25}\]
These parameters may also be written in terms of \(q\) as
\[j=q(1+2q);\ \ s=-q(1+2q)(2+3q);\ \ l=q(2+3q)(1+2q)(3+4q);\ \ m=-q(2+3q)(1+2q)(3+4 q)(4+5q). \tag{26}\]
Figure 4: Plots of NEC, WEC, DEC and SEC
Here \(j\), \(s\), \(l\) and \(m\) are known as the cosmographic parameters. Using the obtained best fit values of \(q\), we find that the present values of \(j\) are \(-0.085238,\ -0.125,\ -0.097152\) for the \(H(z)\), \(Pantheon\) and \(H(z)+Pantheon\) data sets, respectively.
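The quoted present values of \(j\) can be checked directly from Eq. (26); the short Python sketch below evaluates all four cosmographic parameters at the three best-fit values of \(q\) quoted above.

```python
# Minimal sketch of Eq.(26): the cosmographic parameters written purely in
# terms of the constant deceleration parameter q of the power-law model.
def cosmographic(q):
    j = q * (1 + 2*q)
    s = -q * (1 + 2*q) * (2 + 3*q)
    l = q * (2 + 3*q) * (1 + 2*q) * (3 + 4*q)
    m = -q * (2 + 3*q) * (1 + 2*q) * (3 + 4*q) * (4 + 5*q)
    return j, s, l, m

for label, q in [("OHD", -0.109), ("Pantheon", -0.25), ("OHD+Pantheon", -0.132)]:
    j, s, l, m = cosmographic(q)
    print(f"{label:13s} q = {q:+.3f}  j = {j:+.6f}  s = {s:+.6f}  l = {l:+.6f}  m = {m:+.6f}")
# j reproduces the values quoted above: -0.085238, -0.125, -0.097152
```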
### Statefinder Diagnostic
In the literature, we find that geometric parameters play a very important role in understanding the universe's dynamics. When we study the deceleration parameter, we get information about the phase transition of the universe from deceleration to acceleration or vice versa. Therefore, it is required to study an additional higher order derivative of \(a\), viz. \(r\). To study dark energy models, a statefinder diagnostic (SFD) technique is available [57; 58]; the pair is denoted by \(\{r,s\}\). This pair, written in terms of \(q\), is:
\[r=\frac{\dddot{a}}{H^{3}a}=2q^{2}+q,\ \ \ s=\frac{-1+r}{3(-\frac{1}{2}+q)}, \tag{27}\]
where \(q\neq\frac{1}{2}\).
Now, by using the statefinder diagnostic approach, we comment upon the behaviour of the model and also on the diverging or converging behaviour of our model with respect to the \(SCDM\) and \(\Lambda CDM\) models. From the constrained values of \(q\), we can calculate the values of the \(r\) and \(s\) parameters. We notice that our model does not fit the \(H(z)\) and \(Pantheon\) data sets equally well, and therefore we obtain notable changes in the best-fit values of \(r\) and \(s\) (see Fig. 5a). Here, we see that our model approaches the \(\Lambda CDM\) model as \(q\rightarrow-1\).
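Since Eq. (27) gives the statefinder pair as a function of \(q\) alone, the best-fit values can be converted directly; the short sketch below evaluates \(\{r,s\}\) at the constrained values of \(q\) and at \(q=-1\), where the pair reaches the \(\Lambda\)CDM point \(\{1,0\}\).

```python
# Minimal sketch of Eq.(27): the statefinder pair {r, s} as a function of q
# for the power-law model. q = 1/2 (SCDM) is excluded by the s definition;
# at q = -1 the pair reaches {1, 0}, the LCDM point.
def statefinder(q):
    r = 2*q**2 + q
    s = (r - 1) / (3 * (q - 0.5))
    return r, s

for q in (-0.109, -0.25, -0.132, -1.0):
    r, s = statefinder(q)
    print(f"q = {q:+.3f}  r = {r:+.4f}  s = {s:+.4f}")
```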
### Om Diagnostic
Here, we use the \(Om\) diagnostic, a well-known technique, to compare our model with the \(\Lambda CDM\) model. This helps us to distinguish various models of dark energy without calculating the energy density and the EoS parameter. The pattern of the trajectories in the \(Om\) diagnostic plot gives an indication of the various dark energy models. The definition of the \(Om\) diagnostic in terms of \(z\) is:
Figure 5: The \(s-r\) and \(q-r\) plots.
\[Om(z)=\frac{\{\frac{H(z)}{H_{0}}\}^{2}-1}{z^{3}+3z^{2}+3z}. \tag{28}\]
The plots of the \(Om\) diagnostic help us to explain the nature of dark energy models. We know that if the curvature of \(Om(z)\) is positive with respect to \(z\), then the model is a ghost (phantom) dark energy model; if the curvature is negative with respect to \(z\), we have quintessence; and if it has zero curvature, then the model represents the \(\Lambda CDM\) model. Fig. 6 shows that at late times we have quintessence, since the curvature is negative _w.r.t._ \(z\) [59; 60].
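For the power-law \(H(z)\) of Eq. (14), Eq. (28) reduces to a function of \(q\) alone, \(Om(z)=\left[(1+z)^{2(1+q)}-1\right]/\left[(1+z)^{3}-1\right]\); a short sketch evaluating it at the best-fit values of \(q\) follows.

```python
# Minimal sketch of Eq.(28) for the power-law H(z) of Eq.(14); Om then depends
# only on q: Om(z) = ((1+z)^(2+2q) - 1) / ((1+z)^3 - 1), for z != 0.
import numpy as np

def om(z, q):
    return ((1 + z)**(2 + 2*q) - 1) / ((1 + z)**3 - 1)

z = np.array([0.1, 0.5, 1.0, 2.0])
for label, q in [("OHD", -0.109), ("Pantheon", -0.25), ("OHD+Pantheon", -0.132)]:
    print(f"{label:13s}", np.round(om(z, q), 4))
```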
## V Conclusions
The late-time behaviour of our flat FLRW model in \(f(R,G)\) gravity has been studied, where \(f(R,G)=R+\alpha e^{-G}\) and the invariant \(G\) is \(G=R^{2}-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\alpha\zeta}R^{\mu\nu\alpha\zeta}\). Since the field equations are difficult to solve in general, we assume a power law for the scale factor \(a\). We constrain \(H_{0}\) and \(q\) using recent observational data sets with the \(MCMC\) methodology and then proceed to study the behaviour of the obtained model and the universe's evolution. The model exhibits a point-type singularity. The volume increases as \(t\) increases. The Hubble parameter monotonically decreases as \(z\rightarrow-1\). The model is expanding with constant acceleration. The energy density of the model monotonically decreases with an increase in time, starting from infinity, and at late times it tends to zero. Fig. 3 shows that our model is a quintessence dark energy model for all observational data sets.
The deceleration parameter is considered as a free model parameter, and its present values are constrained as \(q=-0.109^{+0.014}_{-0.014}\), \(q=-0.25^{+0.15}_{-0.15}\), and \(q=-0.132^{+0.014}_{-0.014}\) using the \(H(z)\), \(Pantheon\), and \(H(z)+Pantheon\) data sets, respectively, which deviate from the present-day best fit value of the \(\Lambda\)CDM model (see Fig. 1). The NEC and DEC energy conditions are satisfied, but the SEC does not hold for all the observational data sets, which supports quintessence. The stability of this model is verified by the parameters, as illustrated in Fig. 4. The late time acceleration of the universe is supported by the violation of the SEC. The present value of the jerk cosmographic parameter is \(j=-0.085238,\ -0.125,\ -0.097152\) for the data sets \(H(z)\), \(Pantheon\) and \(H(z)+Pantheon\), respectively. This deviates from that of the \(\Lambda\)CDM model. Our model has a quintessence type behaviour at late times. This is shown in the figure of \(Om(z)\), which is convex with respect to the \(z\)-axis and shows stability during the evolution of the universe up to late times except at present (see Fig. 6). Now, by using the statefinder diagnostic approach, we investigated the behaviour of the model and also checked the divergence and convergence of our model with respect to the \(SCDM\) and \(\Lambda CDM\) models. From the observed values of \(q\), we can calculate the values of the \(s\) and \(r\) parameters. It
Figure 6: The plot for om diagnostic.
is noticed that the power law scale factor does not fit the \(H(z)\) and \(Pantheon\) data equally well, and therefore we get notable changes in the best-fit values of \(s\) and \(r\) (see Fig. 5a). Here, it can also be seen that our model approaches the \(\Lambda CDM\) model as \(q\rightarrow-1\) (see Fig. 5).
Thus, after reviewing the obtained results of our model, we see that our model starts with a point-like singularity and behaves like an expanding, accelerating dark energy model that is of the quintessence type now but tends to the \(\Lambda\)CDM model at late times.
**Acknowledgements** The authors express their thanks to Prof. Sushant G. Ghosh, CTP, Jamia Millia Islamia, New Delhi, India for fruitful discussions and suggestions.
|
2302.04165 | IRTCI: Item Response Theory for Categorical Imputation | Most datasets suffer from partial or complete missing values, which has
downstream limitations on the available models on which to test the data and on
any statistical inferences that can be made from the data. Several imputation
techniques have been designed to replace missing data with stand in values. The
various approaches have implications for calculating clinical scores, model
building and model testing. The work showcased here offers a novel means for
categorical imputation based on item response theory (IRT) and compares it
against several methodologies currently used in the machine learning field
including k-nearest neighbors (kNN), multiple imputed chained equations (MICE)
and Amazon Web Services (AWS) deep learning method, Datawig. Analyses comparing
these techniques were performed on three different datasets that represented
ordinal, nominal and binary categories. The data were modified so that they
also varied on both the proportion of data missing and the systematization of
the missing data. Two different assessments of performance were conducted:
accuracy in reproducing the missing values, and predictive performance using
the imputed data. Results demonstrated that the new method, Item Response
Theory for Categorical Imputation (IRTCI), fared quite well compared to
currently used methods, outperforming several of them in many conditions. Given
the theoretical basis for the new approach, and the unique generation of
probabilistic terms for determining category belonging for missing cells, IRTCI
offers a viable alternative to current approaches. | Adrienne Kline, Yuan Luo | 2023-02-08T16:17:20Z | http://arxiv.org/abs/2302.04165v1 | # IRTCI: Item Response Theory for Categorical Imputation
###### Abstract
Most datasets suffer from partial or complete missing values, which has downstream limitations on the available models on which to test the data and on any statistical inferences that can be made from the data. Several imputation techniques have been designed to replace missing data with stand in values. The various approaches have implications for calculating clinical scores, model building and model testing. The work showcased here offers a novel means for categorical imputation based on item response theory (IRT) and compares it against several methodologies currently used in the machine learning field including k-nearest neighbors (kNN), multiple imputed chained equations (MICE) and Amazon Web Services (AWS) deep learning method, Datawig. Analyses comparing these techniques were performed on three different datasets that represented ordinal, nominal and binary categories. The data were modified so that they also varied on both the proportion of data missing and the systematization of the missing data. Two different assessments of performance were conducted: accuracy in reproducing the missing values, and predictive performance using the imputed data. Results demonstrated that the new method, Item Response Theory for Categorical Imputation (IRTCI), fared quite well compared to currently used methods, outperforming several of them in many conditions. Given the theoretical basis for the new approach, and the unique generation of probabilistic terms for determining category belonging for missing cells, IRTCI offers a viable alternative to current approaches.
**Keywords:** categorical imputation, item response theory (IRT), missing completely at random (MCAR), missing at random (MAR)
## 1 Introduction
The purpose of this investigation was to introduce a new approach to imputing missing data for categorical variables - Item Response Theory for Categorical Imputation (IRTCI). Imputing missing values for categorical data has proven problematic, much more so than for continuous, normally distributed data [1]. When data include large numbers of categorical data, multiple imputation techniques are challenging, as the space of potential models is enormous [2]. Several attempts to deal with this problem have been introduced, including multinomial and log-linear models [3], clustering [4], [5] and a variety of multiple imputation methods such as expectation-maximization with bootstrapping, correspondence, latent class analysis, hot deck, and chained equations [6]. Borrowing from psychometric theory, Item Response Theory (IRT) offers a family of models that have been designed specifically to handle categorical data. The process results in a series of probabilities to determine whether the missing value belongs to a particular category. Demonstrating how to leverage these models for use in imputing missing data is the purpose of the current study.
### Missing Data
Many datasets suffer from being incomplete, in that they have missing data points in some or all variables. Missing data can occur for many reasons including, but not limited to: hardware limitations (i.e. sensor drop-out), subject loss at follow-up (e.g. a patient who did not return or died), data entry errors, rare events, non-response (i.e. surveys), or the data were intentionally not collected for a case-specific reason. How best to handle missing data can be difficult to resolve, especially when the causal reason for it remains unknown. Some statistical procedures cannot function with missing values and automatically eliminate cases with missing data, such as factor analysis, Cronbach's alpha and many feed-forward neural networks. Even if only a few data points are missing from each variable, the effect of dropout, if performed case-wise, may result in a reduction of power of the statistical test, not having enough data to perform the analysis, or misleading findings if the remaining cohort is not a random sample of all cases. Similarly, many machine learning (ML) models cannot handle missing values, such as support vector machines, GLMnet, and neural networks. The few models that are able to tolerate missing values are Naive Bayes and some tree-based models under the CART methodology [7].
Missing data can be classified into three categories [8]; missing completely at random (MCAR), missing at random (MAR) and missing not at random (MNAR). MCAR data follows the logic that the probability of an observation being missing does not depend on observed or unobserved measurements.
MAR missing data are conditional on one or more covariates, where the probability of an observation being missing depends only on observed variables in the dataset. Because of the characteristics of MCAR and MAR data, they are amenable to data-driven approaches to handling them. However, when observations are neither MCAR nor MAR, they are classified as MNAR, meaning the probability of an observation being missing depends on unobserved variables/information not available in the analysis. The missingness mechanism, then, needs to be theoretically justified and incorporated into the data. Because of the 'top-down' nature of handling MNAR data, this type of missing data will not be discussed in the current study.
### Traditional Imputation Techniques
When more than 40% of data from important variables are missing, it is recommended that any inferences drawn should be exploratory; conversely, with less than 5% of the data missing, dropping cases or simple scalar substitutions are warranted [9]. However, if only smaller portions of data are missing, preserving as much of the information as possible becomes important, particularly with smaller data sets, leading to the need to impute values to substitute into the missing cells. Several imputation techniques have been developed to do so. Some common examples are forward fill, backward fill, mean or most frequent, and Bidirectional Recurrent Imputation for Time Series (BRITS) [10]. Forward and backward fill work by carrying the most recent value forward or backward, respectively, filling in where appropriate. Imputing with the mean, median or mode works by computing the value of the mean, median or mode of the column and filling this value in where missing. BRITS substitution is specific to time series data.
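For illustration, a minimal pandas sketch of these simple fills is given below; the toy data frame and its column names are purely illustrative and are not drawn from the study's datasets.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"color": ["E", None, "G", None, "J"],
                   "carat": [0.3, 0.7, np.nan, 1.1, 0.9]})

filled_fwd  = df["color"].ffill()                          # carry last observed value forward
filled_bwd  = df["color"].bfill()                          # carry next observed value backward
filled_mode = df["color"].fillna(df["color"].mode()[0])    # most frequent category
filled_mean = df["carat"].fillna(df["carat"].mean())       # mean substitution for a numeric column
```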
### Item Response Theory (IRT)
The concept of IRT for imputation was introduced by [11]. However, that work did not include a comparison with current state-of-the-art (SOTA) methods, an assessment of how the approach impacts downstream predictive tasks, or the algorithmic adaptations required for ordinal, nominal and binary imputation. The purpose of this study is to demonstrate how this technique can be used for imputation, to compare its effectiveness with three of the more traditional imputation methods, and to examine how this impacts downstream machine learning tasks. IRT is a family of mathematical models that link underlying unobserved (latent) traits of individuals/cases to the pattern of their responses to a series of observed variables (i.e., items or features) [12]. This linkage is manifested as one or more logistic functions that specify the probability of obtaining a specific value on any feature as a function of a case's underlying trait value. These logistic functions are generated using a maximum likelihood iterative approach that analyzes the entire pattern of all feature values for all cases simultaneously. IRT assumes
that the latent trait is organized along a continuum called theta (\(\theta\)) and all individual cases are placed along that continuum. Higher values of \(\theta\) are associated with higher levels of the underlying trait. It is assumed that higher values on the features are also associated with higher values of \(\theta\).
As part of the analysis process, characteristics of the features, such as their difficulty and discrimination, are estimated as well as an estimate of each case's standing along the underlying trait - their theta (\(\theta\)) score. Because IRT is an individual measurement theory, it was developed [13][14] and is currently used primarily in the psychological and educational literatures to assess the psychometric properties of items and tests. However, IRT has been used in the machine learning literature to assess the utility of features [15], natural language processing systems [16]; and classifiers [17]; [18].
The current study assesses how well IRT performs as a mechanism for imputation of missing feature data. IRT focuses on the pattern of all the available observed feature values to generate each case's overall \(\theta\) score. Then the imputed missing values are based on each individual case's \(\theta\) score. Because IRT uses all the feature information available for all cases, it is possible to impute valid values for those cases with missing data. One important result, then, of IRT imputed values is that they do not incorporate the outcome variable values in the protocol, as do many other imputation methods. In doing so, IRT avoids the circularity of using the classification outcome to impute missing values. This avoids the problem of overly optimistic findings in predictive modeling studies, when using the outcome to set values for a predictor that is then used to predict that same outcome. Such outcome information would not be available to classify/predict prospective new cases.
Three members of the family of IRT models will be used in the current study. One is the 2-parameter logistic model (2-PL) [19] used when the features are coded in a binary (0, 1) way. Another is the Graded Response Model (GRM) [20] used when features have ordinal-level values. Since IRT analyses do not handle continuous interval level data, such data can be converted into multiple ordinal level categories and run using the GRM. The third IRT model is the Nominal Response Model (NRM) used when feature values are nominal/categorical [21]. Salient attributes of these imputation methods are listed in Table 1.
## 2 Methods
### Datasets
Three different data sets were selected for this study: Diamonds [22]; Housing [23] and Heart Disease [24]. These were selected because they: 1) use different types of categorical data to be imputed (ordinal, binary and nominal), 2) have an outcome to allow for predictive utility assessment, and 3) are complete (no missing values), so the ground truth for the missing cases was available to compare different imputation methods. Thus, they provided a broad comparative field regarding how IRT performs relative to other imputation methods. Within each data set, a single predictor variable was selected to be missing. Null values were substituted in each of these specified predictor variables in four different amounts (missing 5, 10, 30 and 50%), each following two different structures (MCAR vs MAR). Therefore, each dataset gave rise to eight unique datasets for imputation. To generate the MCAR type data sets, values were randomly replaced with null values. Generating MAR data was performed on a per-dataset basis by first identifying a conditional variable on which to generate the MAR data sets. The files were then sorted on the conditional variable and 5, 10, 30 and 50% of the target missing variable was removed from the top of the dataset. To verify MCAR versus MAR missing data structures, Little's test was used [25]. Little's test is a modified chi-square test to determine if one or more systematic relationships between the missing data and other variables exist; it is expected to be significant in MAR data sets and non-significant in MCAR data sets.
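As a rough illustration of these two missingness mechanisms, the sketch below blanks out a fraction of a synthetic target column either completely at random (MCAR) or from the top of a file sorted on a conditional variable (MAR); the data frame is simulated and Little's test itself is not implemented here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"carat": rng.gamma(2.0, 0.4, n),                 # conditional variable
                   "color": rng.integers(0, 8, n).astype(float)})   # target variable to blank

frac = 0.30                                                         # proportion of 'color' blanked out

# MCAR: remove values completely at random.
mcar = df.copy()
mcar.loc[rng.choice(n, int(frac * n), replace=False), "color"] = np.nan

# MAR: sort on the conditional variable and blank the top of the file.
mar = df.sort_values("carat", ascending=False).copy()
mar.iloc[: int(frac * n), mar.columns.get_loc("color")] = np.nan
```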
One issue that arose was that since IRT does not accommodate continuous data, such features had to be re-coded into ordinal-level categories, as this is required for use in the GRM analyses. To do so, histograms of the data were generated for each continuous feature and cut points made to preserve the original shape of the distribution, as many of the feature variables were non-normally distributed. Data that were affected in such a way were split into quartiles, providing four-level ordinal variables. This conversion was only done when running the IRT imputations.
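A small sketch of that quartile re-coding with pandas is shown below, assuming a skewed synthetic feature; simple quartile cut points stand in for the histogram-based cuts described above.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
carat = pd.Series(rng.gamma(2.0, 0.4, 500), name="carat")   # skewed synthetic feature

# Four-level ordinal re-coding (quartiles labelled 0-3) for use in a GRM analysis.
carat_ordinal = pd.qcut(carat, q=4, labels=False)
print(carat_ordinal.value_counts().sort_index())
```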
#### Ordinal Imputation Data
The diamonds data is a set of 53,920 diamond cases with a continuous outcome (price). The eight features are a combination of ordinal (e.g., clarity)
| Attribute | KNN | MICE | Datawig | IRT |
| --- | --- | --- | --- | --- |
| Categorical imputation | ✓ᵃ | ✓ᵃ | ✓ | ✓ |
| Scalable | ✗ | ✓ᵇ | ✓ | ✓ |
| Uses outcome | ✗ | ✗ | ✓ | ✗ |
| Works for time series | ✗ᶜ | ✓ | ✓ | ✓ |
| Works for small datasets | ✓ | ✓ | ✓ᵈ | ✓ |

Table 1: Imputation Types and Attribute Comparison

ᵃ Both KNN and MICE require categorical variables to be ordinal or to be transformed into one-hot encodings if nominal.
ᵇ Depending on the length of the dataset.
ᶜ Theoretically possible; however, computationally difficult.
ᵈ Theoretically possible; however, likely unreliable.
and continuous (e.g., dimensions along x, y, z). The feature that was selected to be missing for purposes of this study was color (an ordinal variable with 8 different levels). Other variables included price in US dollars (\$326–\$18,823), carat weight of the diamond (0.2–5.01), cut (quality of the cut: Fair, Good, Very Good, Premium, Ideal), color (from J, worst, to D, best), clarity (I1 (worst), SI2, SI1, VS2, VS1, VVS2, VVS1, IF (best)), x length in mm (0–10.74), y width in mm (0–58.9), z depth in mm (0–31.8), depth (total depth percentage = z / mean(x, y) = 2 * z / (x + y); 43–79) and table (width of top of diamond relative to the widest point; 43–95). A list of the variables and their codes is shown in Table 2. The conditional variable to generate the MAR data sets was 'carat size' in this data set.
#### Binary Imputation Data
The heart disease data is a set of 253,680 responses from Behavioral Risk Factor Surveillance System (BRFSS) 2015, generated by the CDC to be used for the binary classification of heart disease/attack (no heart disease - coded 0, heart disease - coded 1). 23,893 of the cases had heart disease. An equivalent number were randomly selected from the non-heart disease cases, producing a final and balanced data set of 47,786 cases. A list of the variables and their codes are shown in Table 3. The feature that was selected to be missing for purposes of this study was high blood pressure (a binary variable). The conditional variable to generate the MAR data sets was 'age' in this data set.
#### Nominal Imputation Data
The housing data set is made up of 10,692 unique rental units and their features with a continuous outcome (rental price). Other features included whether the space was furnished or not, number of rooms, square footage, number of bathrooms and the city in which it was located. The feature that was selected to be missing for purposes of this study was city (a nominal categorical variable with 5 unique values). The conditional variable to generate the MAR data sets was 'number of rooms' in this data set.
| Feature | Feature Type |
| --- | --- |
| Carat | numeric - continuous |
| Cut | continuous |
| Color | ordinal |
| Depth | numeric - continuous |
| X (mm) | numeric - continuous |
| Y (mm) | numeric - continuous |
| Z (mm) | numeric - continuous |
| Price (outcome) | numeric - continuous |

Table 2: Diamond dataset
### Imputation Methods
#### 2.2.1 Existing methods
Three commonly used, robust imputation methods were employed in this study: k-NN, MICE, and a deep learning method called DataWig. k-NN imputation works very much like the k-NN algorithm for classification: the substituted value is based on a specified number 'k' of the closest point estimates in an n-dimensional space. MICE, also known as Sequential Regression Imputation, was developed by Rubin [26] and leverages a series (chain) of regression equations to obtain imputation values. MICE starts with a simple imputation method such as mean substitution; the process is then repeated several times on different portions of the data and regressed on other variables, where the final imputed value is one that converges to stability. DataWig is a deep learning imputation method developed by Amazon Web Services (AWS) [27] that uses a Long Short-Term Memory network (LSTM). It follows a similar approach to that of MICE but can be extended to allow for different types of data (categorical, numerical, text) to be used when imputing missing values. For categorical variable imputation, an EmbeddingFeaturizer is used, where the training data are comprised of rows of complete data and the remaining structured dataset supplies the inputs for training. The predicted outcome is the value to be imputed and is subsequently substituted into the final dataset.
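For orientation, the sketch below uses scikit-learn's KNNImputer and IterativeImputer as approximate stand-ins for the k-NN and MICE procedures just described (DataWig is omitted); note that, as Table 1 indicates, categorical columns would first need ordinal or one-hot encoding before these imputers can be applied.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import KNNImputer, IterativeImputer

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 4))
X[rng.random(X.shape) < 0.10] = np.nan        # blank out roughly 10% of the cells

X_knn = KNNImputer(n_neighbors=5).fit_transform(X)                       # k-NN style imputation
X_mice = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)  # MICE-style chained equations
```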
| Feature | Feature Type |
| --- | --- |
| City | categorical - nominal |
| Area | numeric - continuous |
| Rooms | numeric - continuous |
| Bathrooms | numeric - continuous |
| Furniture | numeric - binary |
| Rent price (outcome) | numeric - continuous |

Table 4: Housing Dataset
| Feature | Feature Type |
| --- | --- |
| BMI | numeric - continuous |
| Age | numeric - continuous |
| Smoker | 0,1 - binary |
| Stroke | 0,1 - binary |
| Diabetes | 0,1 - binary |
| No Physical activity | 0,1 - binary |
| No Vegetables | 0,1 - binary |
| Difficulty Walking | 0,1 - binary |
| High Cholesterol | 0,1 - binary |
| High Blood pressure | 0,1 - binary |
| Heart Disease or Attack (outcome) | 0,1 - binary |

Table 3: Heart Disease Dataset
#### IRT Imputation
IRT provides an alternative approach to imputation, as described earlier. The IRTPRO (Vector Psychometric Group, 2021) program was used to estimate the IRT feature and case parameters in all data sets. Data can be imported into the program from a number of different file formats, including the type used in this study (.csv). All missing data cells were coded with -1. The interface allows for a mixture of different types of features within the same analysis (i.e., a mix of binary, ordinal, or categorical features can be used in the same analysis). Each model was specified to be based on one group of cases using a unidimensional set of features. IRTPRO uses maximum likelihood to estimate feature parameters and expected a posteriori (EAP) to generate a \(\theta\) score for each case. Parameters are estimated in the logistic metric. Some programs have historically rescaled the parameters to approximate the normal ogive function, but this is not done in IRTPRO, as has been suggested more recently [28]. Each feature was specified to follow a 2-PL, GRM or NRM.
If the response by examinee \(j\) to item \(i\) is denoted by a random variable \(U_{ij}\), it is convenient to code the two possible scores as \(U_{ij}=1\) (correct) and \(U_{ij}=0\) (incorrect). To model the distribution of this variable, or, equivalently, the probability of a correct response, the ability of the examinee is represented by a parameter \(\theta\in\) (-\(\infty\),+\(\infty\)), and it is assumed in a two-parameter model that the properties of item \(i\) that have an effect on the probability of a success are its difficulty and discriminating power, which are represented by parameters \(b_{i}\in\) (-\(\infty\),+\(\infty\)) and \(a_{i}\in\) (0,+\(\infty\)), respectively. The probability of success on a given item \(i\) is usually represented as \(P_{i}(\theta)\). The 2-PL generates a threshold (\(b\)) and slope (\(a\)) for each feature and a \(\theta\) for each case. Using these estimated parameters, the linking function between the underlying trait and a particular feature can be described as follows:
\[P_{i}(U_{ij}=1\mid\theta)=\frac{e^{a_{i}(\theta-b_{i})}}{1+e^{a_{i}(\theta-b_{i})}} \tag{1}\]
Using the representation in Eq. 1, the joint likelihood function estimation for simultaneously computing ability \(\theta\) and parameters \(a_{i}\) and \(b_{i}\) in the case of N examinees and n items associated with the 2-PL model can be written as:
\[L(\theta,a,b;\;u)=\prod_{i}\prod_{j}P_{i}(\theta_{j};\;a_{i},b_{i})^{u_{ij}}[ 1-P_{i}(\theta_{j};\;a_{i},b_{i})]^{1-u_{ij}} \tag{2}\]
where \(\theta\equiv(\theta_{1},\ldots,\theta_{N})\), \(a\equiv(a_{1},\ldots,a_{n})\), \(b\equiv(b_{1},\ldots,b_{n})\), \(u\equiv(u_{ij})\), and \(u_{i}\) and \(u_{j}\) are the marginal sums of the data matrix, based on the response pattern \(x\) across items (variables) and across examinees (row cases) [29]. Maximizing the logarithm of the likelihood function results in the following set of estimation equations:
\[\sum_{i}a_{i}\left(u_{ij}-P_{i}(\theta_{j};\,a_{i},b_{i})\right)=0,\quad j=1,\ldots,N \tag{3}\]
\[\sum_{j}a_{i}\left(u_{ij}-P_{i}(\theta_{j};\,a_{i},b_{i})\right)=0,\quad i=1,\ldots,n\]
\[\sum_{j}\left(u_{ij}-P_{i}(\theta_{j};\,a_{i},b_{i})\right)(\theta_{j}-b_{i})=0,\quad i=1,\ldots,n\]
The binary model in equation 1 has the simple interpretation of the probability of success being equal to the value of the person parameter \(\theta\) relative to the value of the item parameters. The probability of being in the "1" category on a particular item \(i\) can be ascertained for any case with a specific \(\theta\)-value. Using this model, a missing binary variable can be imputed - cases with probabilities below 50% are imputed as 0 and those with probabilities above 50% are imputed as 1. Figure 1a) showcases the curve for this model, where ability \(\theta\) is a row/case characteristic and parameter values are associated with the variable (item).
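A minimal sketch of that 2-PL imputation rule is given below; the slope, threshold and \(\theta\) values are hypothetical stand-ins for the IRTPRO estimates, not values from this study.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2-PL probability of a '1' response (Eq. 1)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item slope a, threshold b, and EAP theta scores for three cases
# with a missing value on a binary feature.
a, b = 1.4, 0.2
theta = np.array([-1.0, 0.1, 1.3])

prob = p_2pl(theta, a, b)
imputed = (prob > 0.5).astype(int)   # above 50% -> impute 1, otherwise 0
print(prob.round(3), imputed)
```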
The graded response model (GRM) represents a family of mathematical models that deals with ordered polytomous categories, seen in Figure 1B. It uses a two-step process to link the trait to features [20]. In the first step, a series of 2-PL functions for each of the category option boundaries are generated. For example, if one has a 5-option feature (coded 0, 1, 2, 3, 4), there would be 4 boundary functions: above 0 but less than 1, above 1 but less than 2, above 2 but less than 3, and above 3 but less than 4. In this first step, threshold parameters for each of the features' option boundaries and an overall slope parameter for the feature are generated. If \(\theta\) is the latent ability, \(U_{i}\) is a random variable denoting the graded item response to item \(i\), and \(u_{i}\in(0,1,\ldots,m_{i})\) denotes the actual responses, then the category response function, \(P_{ui}(\theta)\), is the probability with which an examinee with ability \(\theta\) receives a score \(u_{i}\):
\[P_{ui}(\theta)\equiv P[U_{i}=u_{i}\ |\ \theta] \tag{4}\]
Figure 1: a) 2 Parameter Logistic Model b) Graded response model c) Nominal response model
Probabilities based on the other combinations, given \(\theta\), are computed by subtracting the adjacent \(P_{ik}^{*}(\theta)\):
\[P_{ik}(\theta)=P_{ik}^{*}(\theta)-P_{i,k+1}^{*}(\theta) \tag{5}\]
Therefore, in expanding Eq. 5 for a 5 category GRM, we would get:
\[\begin{split}\text{Option 0:}\quad P_{i0}(\theta)&=1.0-P_{i1}^{*}(\theta)\\ \text{Option 1:}\quad P_{i1}(\theta)&=P_{i1}^{*}(\theta)-P_{i2}^{*}(\theta)\\ \text{Option 2:}\quad P_{i2}(\theta)&=P_{i2}^{*}(\theta)-P_{i3}^{*}(\theta)\\ \text{Option 3:}\quad P_{i3}(\theta)&=P_{i3}^{*}(\theta)-P_{i4}^{*}(\theta)\\ \text{Option 4:}\quad P_{i4}(\theta)&=P_{i4}^{*}(\theta)-0.0\end{split} \tag{6}\]
And the marginal likelihood solution (used for the GRM) can be written as:
\[L(\theta,a,b;\;u)=\prod_{j=1}^{N}P_{x_{j}} \tag{7}\]
Where \(x_{j}\) is the response pattern obtained by an examinee \(j\), \(P_{xj}(\theta)\) for an examinee \(i\) equals the joint likelihood function, \(L(\theta_{j},a_{i},b_{ui})\). In this equation, the probability of being coded "1" on a particular category \(j\) of a feature \(i\) can be ascertained for any case with a specific \(\theta\)-value. These functions generate probabilities associated with \(m\) dichotomies. Continuing with the example of 5 categories the dichotomies would refer to the probability of being coded 1: 1) in category 0 contrasted with categories 1, 2, 3, and 4; 2) in categories 0, 1 contrasted with categories 2, 3, and 4; 3) in categories 0, 1, 2 contrasted with categories 3 and 4; 4) in categories 0, 1, 2, 3 contrasted with category 4. The second step of the process uses subtraction between the probabilities for each option boundary of that feature to estimate the probabilities for each option. The probability of responding at the lowest option or above is 1.0 and the probability of responding above the highest alternate is 0.0. The option probabilities are generated for each alternative in the 5-point scale as above: Using this model, missing ordinal cells are imputed and categories assigned for each case based on the category with the highest probability.
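A small sketch of the GRM two-step calculation is shown below, with hypothetical slope and boundary thresholds for a five-category ordinal feature; the values are illustrative only.

```python
import numpy as np

def boundary(theta, a, b):
    """2-PL boundary curve P*_ik(theta) = P(score >= k | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def grm_category_probs(theta, a, thresholds):
    """Option probabilities for a graded item (Eqs. 5-6); 'thresholds' are the
    m ordered boundary locations of an item with m+1 ordinal categories."""
    stars = np.array([boundary(theta, a, b) for b in thresholds])
    stars = np.concatenate(([1.0], stars, [0.0]))   # P* at the two extremes
    return stars[:-1] - stars[1:]                   # adjacent differences

# Hypothetical parameters for a 5-level ordinal feature and one case's theta score.
probs = grm_category_probs(theta=0.4, a=1.2, thresholds=[-1.5, -0.5, 0.5, 1.5])
imputed_level = int(np.argmax(probs))               # most probable ordinal level
print(probs.round(3), imputed_level)
```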
The nominal response model (NRM) also uses a two-step process (divide-by-total) to link the ability with features [30]. In a typical nominal response model, each of \(N\) persons responds to each of \(n\) items, where item \(i\) admits responses in \(m_{i}\) mutually exclusive categories, as in the case of a multiple-choice exam, seen in Figure 1C. In the first step, functions for each of the category options are generated by estimating the slopes \(a\) and intercepts \(c\) for each option. Based on a case's score, the probability of being coded "1" on a particular category \(j\) of a feature \(i\) is calculated as the ratio of the probability
of being in that category divided by the sum of the probabilities of falling into any of the categories on that feature, see Eq. 8.
\[P_{ij}(\theta)=\frac{\exp(a_{ij}\theta+c_{ij})}{\sum_{h=0}^{m_{i}}\exp(a_{ih}\theta+c_{ih})} \tag{8}\]
The probability of the response pattern \(U_{\ell}=[U_{1\ell},U_{2\ell},\ldots,U_{n\ell}]\) as a function of \(\theta\) can be represented by:
\[L_{\ell}(\theta)=P(U_{\ell}\mid\theta)=\prod_{i=1}^{n}P_{ih}(\theta) \tag{9}\]
where \(h=U_{i\ell}\) is the item score designating the category to which the response to item \(i\) in pattern \(\ell\) corresponds. \(L_{\ell}\) is called the likelihood of \(\theta\), given the pattern \(U_{\ell}\), and \(P_{\ell}\) is called the marginal probability of \(U_{\ell}\). To ensure model identification in NRM, one of two constraints must be set for parameter estimation. Either the sum across feature slopes and feature intercepts must be set to zero (\(\sum_{j}a_{ij}=\sum_{j}c_{ij}=0\)), or the lowest response category for each feature must be set to zero (\(a_{i1}=c_{i1}=0\)). The IRTPRO program opts for the latter of these two constraint options, as has been suggested to be more plausible [31]. As with the GRM, this analysis estimates the category into which the case is most likely to fall. For imputation, each category is calculated based on parameters \(a_{i}\) and \(c_{i}\) and ability \(\theta\), and the category with the highest probability becomes the corresponding imputed value for nominal level data.
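A minimal sketch of the NRM category probabilities under the IRTPRO-style constraint that the lowest category's slope and intercept are fixed at zero; all parameter values are hypothetical.

```python
import numpy as np

def nrm_category_probs(theta, a, c):
    """NRM category probabilities (Eq. 8), fixing a_0 = c_0 = 0 for the
    lowest category for identification."""
    a = np.concatenate(([0.0], np.asarray(a, dtype=float)))
    c = np.concatenate(([0.0], np.asarray(c, dtype=float)))
    z = a * theta + c
    ez = np.exp(z - z.max())          # numerically stable divide-by-total
    return ez / ez.sum()

# Hypothetical slopes/intercepts for the three non-reference levels of a
# four-category nominal feature, and one case's theta score.
probs = nrm_category_probs(theta=0.7, a=[0.8, 1.5, -0.4], c=[0.3, -0.2, 0.1])
imputed_category = int(np.argmax(probs))
print(probs.round(3), imputed_category)
```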
#### 2.2.3 Imputation Assessments
Factorial Analyses of Variance (4-levels of methodology * 2-levels of missing type) were conducted on the assessments. Follow-up tests using a Bonferroni correction were used to assess any differences between imputation methodologies and MAR and MCAR findings. Only significant results are reported. To assess the imputations relative to the complete datasets, F1 scores (Eq. 10) were calculated for the cells that had been imputed.
\[F1=2*\frac{precision*recall}{precision+recall} \tag{10}\]
Additionally, machine learning models were trained to compare relative predictive utility between the different imputation methods and with the original complete data (ground truth). Several machine learning methods were trialed on the original data sets which included Linear Regression, Bayesian Ridge regression, Random Forest Regressor and XGBoostRegressor for the regression outcome data sets (Diamonds and Housing). Random Forest, neural network (NN), support vector machine (SVM) and XGBoost algorithms were
used for classification outcome (Heart Disease Data set). Hyperparameters were determined using a random search within the various algorithms. The best model for each dataset was determined using the original dataset and then used with the imputed datasets to allow for a consistent comparison.
Root Mean Square Error (RMSE) summary values for the Diamond and Housing outcome predictions were used to assess fit of the expected to observed values, where lower values are better. Area Under the Curve (AUC) was used to assess the models' capability of distinguishing between classifications for the Heart Disease outcome predictions, where higher values are better.
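A compact sketch of the three assessment metrics on synthetic arrays is shown below: F1 restricted to the imputed cells, RMSE for the regression outcomes, and AUC for the classification outcome; the arrays are random placeholders, not results from the study.

```python
import numpy as np
from sklearn.metrics import f1_score, mean_squared_error, roc_auc_score

rng = np.random.default_rng(3)

# Accuracy of the imputation itself: F1 computed only on the blanked-out cells.
true_cells = rng.integers(0, 2, 200)
imputed_cells = np.where(rng.random(200) < 0.8, true_cells, 1 - true_cells)
f1 = f1_score(true_cells, imputed_cells, average="weighted")

# Downstream predictive utility: RMSE (regression) and AUC (classification).
y_true, y_pred = rng.normal(size=100), rng.normal(size=100)
rmse = mean_squared_error(y_true, y_pred) ** 0.5
labels, scores = rng.integers(0, 2, 100), rng.random(100)
auc = roc_auc_score(labels, scores)
print(round(f1, 3), round(rmse, 3), round(auc, 3))
```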
## 3 Results
### Tests of the MCAR and MAR data set assumptions
Table 5 showcases the results after performing Little's test to ensure each dataset was created in a MAR and MCAR fashion and in varying amounts (5, 10, 30, and 50%). As can be seen in the last column of Table 5, the test statistics are significant when MAR data were created conditional on another column and not significant when values were removed at random. These results are in accordance with the ultimate goal of comparing how data being either MAR or MCAR influences the imputation methodologies.
### Testing Imputed Values Accuracy (F1)
Tables 6, 7, and 8 show the F1 values across imputed missing cells.
In the Diamond dataset, the ordinal variable (5-levels) of 'color' category was imputed. There was a significant main effect of methodologies collapsed across MAR and MCAR data sets (F(3,24)=13.3, p \(<.001\)). Follow-up tests showed that KNN, DataWig, and IRT all performed significantly better than MICE in reproducing the missing values. Overall, the F1 value for this data set across all methodologies was 0.20, indicating that this is a difficult imputation task.
In the Housing dataset, where the imputed variable was a nominal categorical variable, there was also a main effect of methodologies collapsed across MAR and MCAR data sets (F(3,24)=243.35, p \(<.001\)). Follow-up tests showed that MICE performed significantly poorer than KNN, DataWig and IRT; KNN and MICE performed significantly poorer than DataWig and IRT. DataWig and IRT performed similarly. The same pattern across methodologies emerged in the follow-up tests. Overall, the F1 value for this data set across all methodologies was 0.38, indicating that this is not as difficult a task as an ordinal categorical imputation, but is still difficult.
In the Heart Disease dataset, where the imputed variable was a binary category, there was a marginal effect of missing type of data collapsed across
methodology (F(1,24)=9.38, p\(<\).001). Overall, the F1 value for this data set across all methodologies was 0.70, indicating that this imputation task is a relatively easy one. Performing a visual inspection of tables 6-8, increasing
| Missing Type | Num. missing | KNN | MICE¹ ± Std. err | Datawig¹ ± Std. err | IRT |
| --- | --- | --- | --- | --- | --- |
| MAR | 2,696 | 0.192 | 0.161 ± 0.003 | 0.192 ± 0.007 | 0.289 |
| MAR | 5,392 | 0.201 | 0.159 ± 0.001 | 0.207 ± 0.002 | 0.238 |
| MAR | 16,176 | 0.199 | 0.162 ± 0.001 | 0.202 ± 0.003 | 0.166 |
| MAR | 26,960 | 0.185 | 0.153 ± 0.0005 | 0.191 ± 0.008 | 0.185 |
| Marginal means | | 0.194 | 0.159 | 0.198 | 0.212 |
| MCAR | 2,696 | 0.231 | 0.164 ± 0.003 | 0.222 ± 0.002 | 0.208 |
| MCAR | 5,392 | 0.223 | 0.158 ± 0.002 | 0.218 ± 0.001 | 0.213 |
| MCAR | 16,176 | 0.219 | 0.159 ± 0.001 | 0.217 ± 0.002 | 0.212 |
| MCAR | 26,960 | 0.219 | 0.157 ± 0.001 | 0.221 ± 0.001 | 0.209 |
| Marginal means | | 0.223 | 0.160 | 0.219 | 0.211 |

Table 6: F1 values following imputation of diamond dataset, stratified by type and amount missing

¹ Both MICE and Datawig have inherent randomness as part of the imputation algorithms, and thus repeated imputed datasets (5 for each) and their standard errors have been created for these methodologies throughout the text.
| Dataset | Missing type | % missing | Num. instances missing | Test stat. (p-value) |
| --- | --- | --- | --- | --- |
| Diamond | MAR | 5 | 2696 | 10,693 (0.000) |
| Diamond | MAR | 10 | 5392 | 18,065.77 (0.000) |
| Diamond | MAR | 30 | 16176 | 41,009.07 (0.000) |
| Diamond | MAR | 50 | 26960 | 41,409.31 (0.000) |
| Diamond | MCAR | 5 | 2696 | 1.899 (0.984) |
| Diamond | MCAR | 10 | 5392 | 6.222 (0.622) |
| Diamond | MCAR | 30 | 16176 | 7.472 (0.487) |
| Diamond | MCAR | 50 | 26960 | 5.258 (0.730) |
| Housing | MAR | 5 | 535 | 964.022 (0.000) |
| Housing | MAR | 10 | 1069 | 2017.706 (0.000) |
| Housing | MAR | 30 | 3208 | 5451.812 (0.000) |
| Housing | MAR | 50 | 5346 | 7284.074 (0.000) |
| Housing | MCAR | 5 | 535 | 1.307 (0.934) |
| Housing | MCAR | 10 | 1069 | 1.087 (0.955) |
| Housing | MCAR | 30 | 3208 | 1.867 (0.867) |
| Housing | MCAR | 50 | 5346 | 1.833 (0.872) |
| Heart Disease | MAR | 5 | 2389 | 14376.354 (0.000) |
| Heart Disease | MAR | 10 | 4779 | 22067.449 (0.000) |
| Heart Disease | MAR | 30 | 14336 | 31409.706 (0.000) |
| Heart Disease | MAR | 50 | 23893 | 30324.046 (0.000) |
| Heart Disease | MCAR | 5 | 2389 | 16.019 (0.099) |
| Heart Disease | MCAR | 10 | 4779 | 14.994 (0.132) |
| Heart Disease | MCAR | 30 | 14336 | 15.230 (0.124) |
| Heart Disease | MCAR | 50 | 23893 | 5.566 (0.850) |

Table 5: Little's Test Results for Datasets
the percentage of missing items most prominently impacts the F1 scores when the items are missing at random (MAR).
### Effects on Machine Learning Outcomes
Gradient Boosting Regressor and XGBoost machine learning algorithms outperformed the others tested, the results of which are reported for each dataset in Tables 9-11. At the bottom of each table is the recorded performance in root-mean-square error (RMSE) in the regression tasks and AUC in the classification task. In a regression and categorical ordinal model (Table 9), MICE and KNN algorithms minimized RMSE the most in both MCAR and MAR data, while Datawig performed the poorest in MCAR data and IRT the worst in MAR data. During a regression task where the imputed variable was a nominal
| Missing Type | Num. missing | KNN | MICE¹ ± Std. err | Datawig¹ ± Std. err | IRT |
| --- | --- | --- | --- | --- | --- |
| MAR | 535 | 0.328 | 0.203 ± 0.008 | 0.523 ± 0.0004 | 0.523 |
| MAR | 1069 | 0.332 | 0.206 ± 0.008 | 0.526 ± 0.0002 | 0.527 |
| MAR | 3208 | 0.298 | 0.204 ± 0.004 | 0.515 ± 0.005 | 0.522 |
| MAR | 5346 | 0.187 | 0.198 ± 0.002 | 0.438 ± 0.004 | 0.456 |
| Marginal means | | 0.286 | 0.203 | 0.501 | 0.507 |
| MCAR | 535 | 0.263 | 0.197 ± 0.005 | 0.566 ± 0.003 | 0.563 |
| MCAR | 1069 | 0.264 | 0.193 ± 0.005 | 0.545 ± 0.0004 | 0.544 |
| MCAR | 3208 | 0.255 | 0.194 ± 0.003 | 0.549 ± 0.0008 | 0.550 |
| MCAR | 5346 | 0.258 | 0.194 ± 0.002 | 0.546 ± 0.0006 | 0.553 |
| Marginal means | | 0.260 | 0.195 | 0.551 | 0.553 |

Table 7: F1 values following imputation of housing dataset, stratified by type and amount missing

¹ Both MICE and Datawig have inherent randomness as part of the imputation algorithms, and thus repeated imputed datasets (5 for each) and their standard errors have been created for these methodologies throughout the text.
| Missing Type | Num. missing | KNN | MICE¹ ± Std. err | Datawig¹ ± Std. err | IRT |
| --- | --- | --- | --- | --- | --- |
| MAR | 2,389 | 0.852 | 0.662 ± 0.005 | 0.866 ± 0.001 | 0.842 |
| MAR | 4,779 | 0.798 | 0.632 ± 0.003 | 0.836 ± 0.0003 | 0.798 |
| MAR | 14,336 | 0.664 | 0.579 ± 0.002 | 0.753 ± 0.001 | 0.720 |
| MAR | 23,893 | 0.655 | 0.565 ± 0.0003 | 0.564 ± 0.015 | 0.691 |
| Marginal means | | 0.742 | 0.610 | 0.755 | 0.763 |
| MCAR | 2,389 | 0.699 | 0.570 ± 0.008 | 0.734 ± 0.001 | 0.709 |
| MCAR | 4,779 | 0.686 | 0.575 ± 0.003 | 0.732 ± 0.0008 | 0.719 |
| MCAR | 14,336 | 0.691 | 0.574 ± 0.002 | 0.729 ± 0.0005 | 0.716 |
| MCAR | 23,893 | 0.697 | 0.574 ± 0.001 | 0.729 ± 0.0003 | 0.713 |
| Marginal means | | 0.693 | 0.726 | 0.730 | 0.714 |

Table 8: F1 values following imputation of heart disease dataset, stratified by type and amount missing

¹ Both MICE and Datawig have inherent randomness as part of the imputation algorithms, and thus repeated imputed datasets (5 for each) and their standard errors have been created for these methodologies throughout the text.
categorical one (Table 10), IRT minimized RMSE the most in both MCAR and MAR datasets, the benefits of which were most exaggerated in MCAR data. In a classification and binary imputation task (Table 11), nearly all methods performed equally well.
At the bottom of each table is also the recorded performance of the full (no missing) data sets. For the Diamond data set, there were significant main effects of imputation methodology collapsed across MAR and MCAR data sets (F(3,24)=9.19, p\(<\).001) (DataWig was significantly worse than all other methodologies), missing data type collapsed across methodology (F(1,24)=16.68, p\(<\).001) (MAR was better than MCAR, 0.23 versus 0.34, respectively), and an interaction effect was also significant (F(3,24)=11.69, p\(<\).001). Post-hoc tests of the interaction showed that IRT was worse than the other methodologies in the MAR condition, and that DataWig was worse than the other methodologies in the MCAR condition.
For the House data set, there were significant main effects of imputation methodology collapsed across MAR and MCAR data sets (F(3,24)=12.40, p \(<\).001) (DataWig and IRT were superior to the other methodologies), and an interaction (F(3,24)=9.48, p\(<\).001). The interaction showed that the effect of methodology was present with the MCAR data and not the MAR data.
There were no effects of methodology or missing data type in the Heart Disease data sets. There was generally no notable consistent change, using a visual inspection of the tables, when increasing the percentage of missing data in these results. The one exception to this pattern was that of the DataWig MCAR for the Diamond data set. These rather large values are much higher than the other imputation methodologies or the original data set value of 0.2241. As the amount of missing data increased, the RMSE values increased quite substantially (0.37, 0.47, 0.71 to 0.84).
| Missing Type | Num. missing | KNN | MICE¹ ± Std. err. | Datawig¹ ± Std. err. | IRT |
| --- | --- | --- | --- | --- | --- |
| MAR | 2,389 | 0.832 | 0.831 ± 0.0001 | 0.831 ± 0.0001 | 0.830 |
| MAR | 4,779 | 0.830 | 0.830 ± 0.0002 | 0.831 ± 0.0002 | 0.832 |
| MAR | 14,336 | 0.827 | 0.828 ± 0.0001 | 0.829 ± 0.0003 | 0.827 |
| MAR | 23,893 | 0.827 | 0.827 ± 0.0003 | 0.828 ± 0.0002 | 0.829 |
| MCAR | 2,389 | 0.829 | 0.831 ± 0.0003 | 0.832 ± 0.0004 | 0.832 |
| MCAR | 4,779 | 0.831 | 0.830 ± 0.0005 | 0.831 ± 0.0002 | 0.831 |
| MCAR | 14,336 | 0.829 | 0.828 ± 0.0005 | 0.829 ± 0.0002 | 0.828 |
| MCAR | 23,893 | 0.828 | 0.826 ± 0.0003 | 0.828 ± 0.0002 | 0.829 |

Original data AUC: 0.8313

Table 11: AUC for Heart Disease Dataset
## 4 Discussion
The results suggest that IRT-based imputation is a viable alternative to some of the more established methods for categorical imputation. It returned more accurate values than DataWig for the Diamond data (ordinal) and more accurate values than KNN and MICE for the Housing data (nominal).
In terms of the predictive utility of these substitutions, IRT was superior to DataWig for the Diamond data and superior to KNN and MICE for the Housing data. More nuanced findings indicated that IRT missing value replacement resulted in poorer predictive utility for the Diamond MAR data than other methodologies, but was better than DataWig in predictive utility for the Diamond MCAR data.
There were somewhat mixed findings regarding the effect of and underlying structure to the'missingness' (i.e., whether the data were MCAR versus MAR). For the housing data, MCAR data were more accurately reproduced by MICE, DataWig and IRT than were MAR data. However, in the Heart Disease data, MAR missing data were more accurately reproduced than MCAR missing data for all imputation methodologies. In the Diamond and Housing data sets, there was better predictive utility for the MAR than MCAR data. Thus, on balance, there seems to be a somewhat better result when the substitutions are based on some sort of structure within the data. This makes sense insofar as the methodologies are utilizing other information in the data sets to impute missing values. If the missingness is completely random, then there is very little for the methodologies to capitalize on when converging on a specific value for the missing cells.
While the amount of missing data was manipulated, there did not seem to be a very large effect of this variable on the results, and it was not tested empirically. The one exception was the predictive utility of the Diamond data when MCAR missing data were imputed by DataWig; as the proportion of missing data increased, there was obviously an impediment to the features' overall predictive utility.
One notable finding was that the ordinal categorical data were most difficult for all imputation techniques, followed by the nominal imputations, with the binary imputations most easily addressed. This intriguing finding is quite possibly a result of the one-hot encoding limitation required by algorithms such as KNN and MICE, and distributional effects of ordinal categories. Binary imputation with two distinct classes leaves fewer available options, and thus being correct by chance is higher as a result.
There were no effects of imputation on high blood pressure (binary) from the heart disease data (binary missing data), indicating that none of the methods were superior or inferior with this type of data, in accuracy or predictive utility. This may be due to blood pressure existing in two distinct states. On closer examination of the heart disease data where 23,893 blood pressure values were missing (MCAR: 1–13,694, 0–10,209; MAR: 1–16,660, 0–7,233), where '1' denotes high blood pressure and '0' does not, there exists significant imbalance in the two data sets. However, results were very similar.
Although DataWig is often described as being superior to other imputation methods in that it handles different types of data, it did not perform as well as the others in this study on some data sets - more poorly on the ordinal data than all the others, no better than IRT on the nominal data, and no better than any of the others on the binary data. There is also the circularity issue in using DataWig as it uses the outcome variable when estimating missing values. As per DataWig's documentation, it requires at least 10 times more rows than the unique number of categories to impute missing values for categorical variables. In the current study, it had difficulty imputing a category that appeared infrequently within a categorical variable.
Although not shown here, a strength of IRTCI is when continuous feature values have a non-linear relationship to the outcome, or are highly skewed, modifying the variable to be a categorical estimate may be a very useful alternative. For instance, many lab values in healthcare data are associated with poor health outcomes if they are 'out of range' - abnormally high or abnormally low. Hypo- and hypernatremia are examples of this. These pose a unique challenge for linear imputation methods. Employing IRTCI, cut points could be made that delineate the normal range (135-145 mmol/L) from abnormally high or abnormally low. Missing values could be imputed under GRM or NRM methodology in IRT. In addition, IRTCI methods can be used with supervised or unsupervised data sets.
The IRTCI method was tested on multiple data sets, but as with any novel methodology, there are limitations to the work. One was that the data from a single variable was missing; this was done to control the effects. It is possible that if the structure of the missing data was modified, the results might change. This remains an open invitation for researchers in other disciplines to explore, as our attempt was to control as many variables as possible to ensure the internal validity of the findings. Second, IRTCI requires movement between two different software platforms, and moving between them can be a deterrent. Lastly, IRTCI is useful primarily for categorical imputations (binary, nominal, and ordinal), as demonstrated in this study. Another opportunity for future research into this method includes adapting it for use with continuous features. IRT protocols allow for categorization of continuous data into many ordinal-level groups (e.g., 10-15). Such a set would be a 'near continuous' approximation of the data. Imputed missing values could be mapped back to the distribution from whence they came, allowing for a point estimate of the data. Such an approach would require large data sets to ensure adequate numbers of cases in each group. Additional future work is warranted to demonstrate how this method would perform.
## 5 Conclusion
Our findings showcased a novel categorical imputation method - Item Response Theory for Categorical Imputation (IRTCI). Categorical imputation poses
some unique problems: unlike multiple imputation for continuous, normally distributed data, categorical multiple imputation with many variables results in large numbers of higher-order interactions [1]. Most imputation methods used in machine learning require transformation to one-hot encoded values and do not have native methods for handling nominal categories. The IRTCI technique uses a theoretically justified probabilistic approach to imputing the most likely value for a categorical variable. As outlined in this study, IRTCI presents a viable alternative to existing methods.
## Acknowledgments
This work was supported by grants U01TR003528 and R01LM013337.
## Conflicts of Interest
All authors declare that they have no conflicts of interest.
|
2306.16096 | Generative Causal Inference | In this paper we propose the use of the generative AI methods in
Econometrics. Generative methods avoid the use of densities as done by MCMC.
They directly simulate large samples of observables and unobservables
(parameters, latent variables) and then use a high-dimensional deep learner to
learn a nonlinear transport map from data to parameter inferences. Our methods
apply to a wide variety of econometrics problems, including those where the
latent variables are updated in a deterministic fashion. Further, in this paper we
illustrate our methodology in the field of causal inference and show how
generative AI provides generalization of propensity scores. Our approach can
also handle nonlinearity and heterogeneity. Finally, we conclude with the
directions for future research. | Maria Nareklishvili, Nicholas Polson, Vadim Sokolov | 2023-06-28T10:56:25Z | http://arxiv.org/abs/2306.16096v1 | # Generative Causal Inference
###### Abstract
In this paper we propose the use of generative AI methods in econometrics. Generative methods avoid the use of densities as done by MCMC. They directly simulate large samples of observables and unobservables (parameters, latent variables) and then use a high-dimensional deep learner to learn a nonlinear transport map from data to parameter inferences. Our methods apply to a wide variety of econometrics problems, including those where the latent variables are updated in a deterministic fashion. Further, in this paper we illustrate our methodology in the field of causal inference and show how generative AI provides a generalization of propensity scores. Our approach can also handle nonlinearity and heterogeneity. Finally, we conclude with directions for future research.
Keywords. Generative AI, Causal inference, deep learning, neural networks,
## 1 Introduction
Generative AI methods are proposed to solve problems of inference and prediction in econometrics. Generative methods require the use of large simulated training datasets, which are prevalent in econometrics. The goal of such methods is to use deep neural networks to find a stochastic mapping between the parameters and data. Causal inference provides a natural testing ground for these methods. We develop NN architectures for these types of problems, and future research is required for other problems such as DSGE models, auction models, IO models and others. There are a number of advantages over simulation-based techniques such as MCMC. First, generative AI avoids using densities. Second, it can be applied to high-dimensional problems. Third, it can easily be extended to solve decision-making problems using reinforcement learning methods.
### Causal AI
In studies of causality and treatment effects, each unit from \(U\) (sample) has one of \(k\) possible treatments. Thus a single treatment is assigned to each unit. In a controlled experiment, the treatment is assigned randomly. However, we study the case of observational data, where the treatment is not assigned randomly and an apparent treatment effect may occur due to confounding (a.k.a. selection bias). Selection bias is simply the dependency between the treatment assignment and the outcome \(y\). The goal is to estimate the treatment effect. The treatment effect is defined as the difference between the outcome under treatment and the outcome under control. The outcome is a random variable \(Y\) and the treatment is a random variable \(Z\). The treatment effect is a random variable \(\tau=Y(1)-Y(0)\). We assume that we observe the confounding predictors \(x\), meaning that the potential outcomes and \(z\) are conditionally independent, given \(x\). The observational study can be represented as a dataset with missing values as shown in the table below
| \(u\) | \(z=1\) | \(z=2\) |
| --- | --- | --- |
| \(u_{1}\) | 3.1 | ? |
| \(u_{2}\) | ? | 4.3 |
| \(u_{3}\) | ? | 6.4 |
| \(u_{4}\) | 4.5 | ? |
One way to approach the problem of estimating the treatment effect is to construct a counterfactual sample. The counterfactual sample is a hypothetical sample, where each unit has all possible treatments. More generally, the counterfactual process is then \(Y(x,z)\) for all possible combinations of units \(x\) and treatments \(z\). The realized (observed) sample, on the other hand, has only one observation per unit-treatment pair. In other words, the counterfactual process allows us to compute the conditional distribution of the response for the same unit under different treatments. The observed sample only allows us to compute the conditional distribution of the response for the same unit under the same treatment. One approach to causal inference is to estimate the counterfactual process from the observed sample. However, not everybody is enthusiastic about the approach of designing a counterfactual sample McCullagh (2022). For example, Dawid (2000) argues that the counterfactual framework adds much to the vocabulary but brings nothing of substance to the conversation regarding observables. Dawid (2000) presents an alternative approach, based on Bayesian decision analysis. The main criticism is that there are multiple ways to construct the counterfactual samples and none of them are checkable.
The propensity score can be used to fill in some of the missing counterfactual values. Propensity Score is a summary statistic function \(\pi(x)\to R\). A typical approach is to
estimate this function by running a logistic regression of \(Z\) on \(x\).
\[\pi(x)=g\left(p(z=1\mid x)\right),\]
where \(g\) is a logit function. This approach guarantees that when the \(x\)'s from the treatment group and the \(x\)'s from the control group are similar distributionally (histograms are similar), the propensity scores are close. Other approaches include inverse weighting, stratification, matching, and subclassification.
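A minimal sketch of this estimation step on simulated, confounded data is given below; note that the outcome \(y\) plays no role in fitting the propensity model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n, d = 2000, 3
x = rng.normal(size=(n, d))
# Confounded assignment: the chance of z = 1 depends on the covariates.
p_true = 1.0 / (1.0 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
z = rng.binomial(1, p_true)

ps_model = LogisticRegression().fit(x, z)          # logistic regression of Z on x only
e_hat = ps_model.predict_proba(x)[:, 1]            # estimated P(z = 1 | x)
pi_hat = np.log(e_hat / (1.0 - e_hat))             # propensity score on the logit scale
```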
In regression setting, the propensity score is a function of the conditional probability of treatment given the covariates. The propensity score is a sufficient statistic for the treatment assignment. Then to estimate the treatment effect, we find "similar" units in the control group and compare their outcomes to the treatment group. The similarity is defined by the difference in propensity scores. In the controlled experiments the distributions over \(x\mid z=1\) and \(x\mid z=0\) should be the same. In observational studies, this is not the case. The main difference between a traditional predictive model and the propensity score model \(\pi(x)\) is that observed \(y\)'s are not used for training the propensity score model.
Let \(Y\) denote a scalar response and \(Z\) denote a binary treatment, and let \(x\in R^{d}\) be the covariates. We observe a sample \((Y_{i},Z_{i},x_{i})\), for \(i=1,\ldots,n\). We use \(Y_{i}(0)\) and \(Y_{i}(1)\) to denote the (hypothetical) outcome with treatment zero or one. The observed outcome is given by
\[Y_{i}=Y_{i}(0)+Z_{i}(Y_{i}(1)-Y_{i}(0)).\]
We assume that the outcome is conditionally independent of the assigned treatment given the covariates, i.e., \(Y_{i}(0)\) and \(Y_{i}(1)\) are independent of \(Z_{i}\) given \(x_{i}\). We also assume that
\[P(Z_{i}=1\mid x_{i})>0.\]
The first condition assumes we have no unmeasured confounders. Given the two assumptions above, we can write the conditional mean of the outcome as
\[\tau(x_{i})=E[Y_{i}(1)-Y_{i}(0)\mid x_{i}].\]
The goal is to build a predictive model
\[Y_{i}=f(x_{i},Z_{i},\pi(x_{i}))+\epsilon_{i},\ \epsilon_{i}\sim N(0,\sigma_{i}^{2}),\]
where \(\pi(x_{i})\) is the propensity score function. Then
\[\tau(x_{i})=f(x_{i},1,\pi(x_{i}))-f(x_{i},0,\pi(x_{i})).\]
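A rough sketch of this construction on simulated data is given below, using a gradient boosting regressor as a stand-in for the nonlinear \(f\) and a logistic regression for \(\pi(x)\); it illustrates the estimand above rather than the paper's deep-learning implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=(n, 2))
e = 1.0 / (1.0 + np.exp(-x[:, 0]))                 # true propensity: confounding via x1
z = rng.binomial(1, e)
tau_true = 1.0 + x[:, 1]                           # heterogeneous treatment effect
y = x[:, 0] + z * tau_true + rng.normal(scale=0.5, size=n)

pi_hat = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]
design = np.column_stack([x, z, pi_hat])           # (x_i, Z_i, pi(x_i))
f_hat = GradientBoostingRegressor().fit(design, y)

# tau(x_i) = f(x_i, 1, pi(x_i)) - f(x_i, 0, pi(x_i))
tau_hat = (f_hat.predict(np.column_stack([x, np.ones(n), pi_hat]))
           - f_hat.predict(np.column_stack([x, np.zeros(n), pi_hat])))
print("estimated ATE:", tau_hat.mean(), "true ATE:", tau_true.mean())
```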
In frequentist approaches, adjustment is conducted by estimating parameters independently in the propensity score model \(\pi\) and the outcome model \(f,\epsilon\). However, this two-step analysis leads to inefficiencies. Instead, it is more intuitive to develop a single joint model that encompasses both the treatment and outcome variables. As a result, there has been a discussion regarding the applicability of Bayesian methods in causal analysis. The literature on advanced techniques for conducting Bayesian causal analysis is expanding, but certain aspects of these methodologies appear unconventional.
Further, a common feature of real-life problems is that the response function \(f\) and the propensity score function \(\pi\) are highly non-linear, which makes many Bayesian methods inapplicable. For example, propensity score matching is a popular method for causal inference. However, it is not clear how to apply this method in the Bayesian setting. Propensity score matching is a non-parametric method, which means that it does not require any assumptions about the functional form of the propensity score. However, the Bayesian approach requires a parametric model for the propensity score. Yet another complicating factor can be deterministic relationships between the covariates and the treatment/outcome. In this case, sample-based Bayesian methods are not applicable.
We address these issues in this paper. Section 1.2 provides a brief overview of the existing literature. Section 2 describes the proposed method. Section 3 provides theoretical justification for our approach. Section 4 presents the results of the simulation study. Section 5 concludes.
### Connection to Existing Literature
Our work builds on ideas from earlier papers that proposed using Bayesian non-linear models to analyze treatment effects and causality Hill and McCulloch (2007); Hahn et al. (2020) and work on using deep learning for instrumental variables Nareklishvili et al. (2022a). We study the implicit quantile neural networks of Polson and Sokolov (2023). Further, we investigate a long-standing debate in causal inference on whether the propensity score is necessary for estimating the treatment effect.
While some researchers Banerjee et al. (2020); Duflo et al. (2007) argue that randomized experiments can and should be used to estimate the treatment effect, it is the case that randomised experiments are not always possible and that observational studies can be used to estimate the treatment effect. Rubin (1974) provides a good discussion of the difference in the estimation procedures for randomized and non-randomized studies.
The intersection of Bayesian methods, machine learning (Bhadra et al., 2021; Xu et al., 2018) and causal inference in the context of observational data is a relatively new area of research. Both Bayesian and machine learning techniques provide intuitive and flexible tools for analyzing complex data. Specifically, non-linearities and heterogeneous effects can be modeled using both Bayes and ML techniques. Some authors propose a "marriage" between frequentist and Bayesian methods; for example, Antonelli et al. (2022) consider using Bayesian methods to estimate both a propensity score and a response surface in high-dimensional settings, and then using a doubly-robust estimator by averaging over draws from the posterior distribution of the parameters of these models. Stephens et al. (2023) argue that pure Bayesian methods are more suitable for causal inference.
It is contended that the propensity score is not needed to estimate treatment effects Hahn (1998a); Hill and McCulloch (2007). On the other hand, Rubin and Waterman (2006) argue that without estimating the propensity score, it is hard to distinguish the treatment effect from the change-over-time effect. Another debate is whether Bayesian techniques or traditional frequentist approaches are more suitable for econometrics applications Stephens et al. (2023).
The overview and importance of the propensity score is discussed by Rosenbaum and Rubin (1983). The case of binary treatments (Splawa-Neyman et al., 1990) and the propensity score approach have been thoroughly studied Rubin (1974); Holland (1986). The counterfactual approach due to Rubin (1974) is similar to the do-operator Pearl (2009); in fact, the two approaches are identical when \(Z\) is independent of \(x\).
Bayesian techniques for causal inference in this spirit are considered by Xu et al. (2018).
Machine learning techniques provide flexible approaches to more complex data generating processes, for example when networks are involved Puelz et al. (2022). Tree based techniques are popular Wager and Athey (2018).
Deep learning approaches are discussed by Vasilescu (2022).
## 2 Generative AI
Let \((y,\theta)\in\Re^{n+k}\) denote observable data and parameters. The goal is to compute the posterior distribution \(p(\theta\mid y)\). The underlying assumption is that \(\theta\sim p(\theta)\), a prior distribution. Our framework allows for many forms of stochastic data generating processes; the only requirement is that it is straightforward to simulate from a so-called forward model or traditional stochastic model, namely
\[y=f(\theta)\;\;\text{or}\;\;y|\theta\sim p(y|\theta). \tag{1}\]
The idea is quite straightforward: if we can perform high-dimensional non-parametric regression, we can simulate a large training dataset of parameter-data pairs, denoted by \((y^{(i)},\theta^{(i)})_{i=1}^{N}\), and then use neural networks to estimate this large joint distribution.
The inverse Bayes map is then given by
\[\theta\overset{D}{=}H(S(y),\tau), \tag{2}\]
where \(\tau\) is a vector whose elements are drawn from a baseline distribution—here a vector of standard uniforms, although a Gaussian baseline can also be used—and \(S:\Re^{N}\rightarrow\Re^{k}\) is a \(k\)-dimensional sufficient statistic. The function \(H:\Re^{k}\times\Re^{D}\rightarrow\Re^{k}\) is a deep neural network, trained on the simulated data \((y^{(i)},\theta^{(i)})_{i=1}^{N}\) via \(\ell_{2}\) regression
\[\theta^{(i)}=H(S(y^{(i)}),\tau^{(i)}),\;i=1,\ldots,N.\]
Having fitted the deep neural network, we can evaluate the estimated inverse map at new \(y\) and \(\tau\) to obtain a set of posterior samples for any new \(y\) using (2). The caveat is how to choose \(N\) and how well the deep neural network interpolates for new inputs. We also have flexibility in choosing the distribution of \(\tau\); for example, we can allow \(\tau\) to be a high-dimensional vector of Gaussians, essentially providing a mixture-Gaussian approximation to the posterior. MCMC, in comparison, is computationally expensive and needs to be re-run for any new data point. Gen-AI, put simply, uses pattern matching to provide a look-up table for the map from \(y\) to \(\theta\); Bayesian computation is then replaced by the optimisation performed by stochastic gradient descent (SGD). In our examples, we discuss choices of architectures for \(H\) and \(S\); specifically, we propose a cosine embedding for transforming \(\tau\).
Mixture Gaussians Generative Model. A flexible choice of generative model is a mixture-of-Gaussians representation, namely
\[\theta^{(i)}=H(WS(y^{(i)})+\sum_{j=1}^{J}\tau_{j}\epsilon^{(i)}_{j})=H([W,\tau]^{T}[S(y^{(i)}),\epsilon^{(i)}])\]
where \(\epsilon=(\epsilon_{1},\ldots,\epsilon_{J})\) is a vector of standard normals.
Another fundamental property of our method is that we can estimate \([W,\tau]\) simply using \(L^{2}\) methods such as OLS or PLS; by Brillinger's result, the resulting estimator is proportional to the true values. This holds independently of the NN architecture assumed for \(H\). The accuracy depends on \(J\), which can be chosen to be very large.
The coefficients on the \(\epsilon\) term can be unidentified, but the important point is that we estimate the true number of normals in the mixture. This gives an OLS version of the mixture of Dirichlet processes. We are appealing to the asymptotics available for the training sample. We argue that every form of estimation is some form of clever conditional averaging.
Gen-AI Bayes Algorithm: The idea is straightforward. A necessary condition is the ability to simulate from the parameters, latent variables, and data process. This generates a (potentially large) triple
\[\left\{y^{(i)},\theta^{(i)},\tau^{(i)}\right\}_{i=1}^{N},\]
where \(N\) is typically of order \(10^{6}\) or more.
By construction, the posterior distribution can be characterized by the von Neumann inverse CDF map
\[\theta\overset{D}{=}F_{\theta|y}^{-1}(\tau),\ \ \text{where}\ \ \tau\sim U(0,1).\]
Hence we train a summary statistic, \(S\), and a deep learner, \(H\), using the training data
\[\theta^{(i)}=H(S(y^{(i)}),\tau)\ \ \text{where}\ \ F_{\theta|y}^{-1}=H\circ S\]
Given the observed data \(y_{\text{obs}}\), we then provide the following posterior map
\[\theta\overset{D}{=}H(S(y_{\text{obs}}),\tau)\]
where \(\tau\) is uniform. This characterizes \(p(\theta|y_{\text{obs}})\). Hence, we are modeling the CDF as a composition of two functions, \(S\) and \(H\), both are deep learners.
Notice that we can replace the random variable \(\tau\) with a different distribution that we can easily sample from; one example is a multivariate Gaussian, as proposed for diffusion models [20]. The dimensionality of the normal can be large. The main insight is that a high-dimensional non-linear least squares problem can be solved with stochastic gradient descent. Deep quantile NNs provide a natural candidate of deep learners; other popular architectures are ReLU and tanh networks.
Folklore Theorem of Deep Learning: _Shallow Deep Learners provide good representations of multivariate functions and are good interpolators._
Hence, even if \(y_{\text{obs}}\) is not in the simulated input-output dataset \(y_{N}\), we can still learn the posterior map of interest. The Kolmogorov-Arnold theorem says any multivariate function can be expressed this way, so in principle, if \(N\) is large enough, we can learn the manifold structure in the parameters for any arbitrary nonlinearity. As the dimension of the data \(y\) is large, in practice this requires an efficient architecture, which is the main question of interest. We recommend quantile neural networks; ReLU and tanh networks are also natural candidates.
Jiang et al. (2017) proposes the following architecture for the summary statistic neural network
\[H^{(1)}=\tanh\left(W^{(0)}H^{(0)}+b^{(0)}\right)\] \[H^{(2)}=\tanh\left(W^{(1)}H^{(1)}+b^{(1)}\right)\] \[\vdots\] \[H^{(L)}=\tanh\left(W^{(L-1)}H^{(L-1)}+b^{(L-1)}\right)\] \[\hat{\theta}= W^{(L)}H^{(L)}+b^{(L)},\]
where \(H^{(0)}=y\) is the input data and \(\hat{\theta}\) is the summary statistic output. A ReLU activation function can be used instead of tanh.
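A minimal PyTorch sketch of such a summary-statistic network is given below; the layer widths and depth are illustrative assumptions rather than the values used by Jiang et al. (2017).

```python
import torch
import torch.nn as nn

class SummaryNet(nn.Module):
    """Tanh MLP mapping raw data y to a k-dimensional summary statistic S(y)."""
    def __init__(self, n_inputs: int, k: int, width: int = 128, depth: int = 3):
        super().__init__()
        layers, d = [], n_inputs
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Tanh()]  # H^(l) = tanh(W^(l-1) H^(l-1) + b^(l-1))
            d = width
        layers.append(nn.Linear(d, k))                  # linear read-out layer
        self.net = nn.Sequential(*layers)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.net(y)
```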
The following algorithms summarize our approach
```
Simulate \(\theta^{(i)}\sim p(\theta)\).
Simulate \(y^{(i)}\mid\theta^{(i)}\sim p(y\mid\theta)\), \(i=1,\ldots,N\), or set \(y^{(i)}=s(\theta^{(i)})\).
Train \(H\) and \(S\) using \(\theta^{(i)}=H(S(y^{(i)}),\epsilon^{(i)})\), where \(\epsilon^{(i)}\sim N(0,\sigma_{\epsilon})\).
For a given \(y\), draw a sample from \(p(\theta\mid y)\) via \(\theta\stackrel{{ D}}{{=}}H(S(y),\tau)\), where \(\tau\sim U(0,1)\).
```
**Algorithm 1** Gen-AI for Bayesian Computation (GenAI-Bayes)
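A sketch of the full simulate-then-regress loop is shown below, reusing the SummaryNet above. The forward simulators `simulate_theta` and `simulate_y` are user-supplied stand-ins for \(p(\theta)\) and \(p(y\mid\theta)\), and the pinball (quantile) loss is used so that \(H(S(y),\tau)\) learns the \(\tau\)-th conditional quantile, in line with the deep quantile NNs recommended above; all architecture sizes are assumptions.

```python
import torch
import torch.nn as nn

def train_genai_bayes(simulate_theta, simulate_y, n_sims=100_000, k=8, epochs=200, lr=1e-3):
    theta = simulate_theta(n_sims)                 # (N, d_theta) draws from the prior
    y = simulate_y(theta)                          # (N, d_y) draws from the forward model
    tau = torch.rand(n_sims, 1)                    # standard uniforms

    S = SummaryNet(y.shape[1], k)
    H = nn.Sequential(nn.Linear(k + 1, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, theta.shape[1]))
    opt = torch.optim.Adam(list(S.parameters()) + list(H.parameters()), lr=lr)

    for _ in range(epochs):                        # SGD replaces MCMC here
        opt.zero_grad()
        pred = H(torch.cat([S(y), tau], dim=1))
        e = theta - pred
        loss = torch.maximum(tau * e, (tau - 1) * e).mean()   # pinball / quantile loss
        loss.backward()
        opt.step()
    return S, H

def posterior_samples(S, H, y_obs, n_draws=1000):
    """Evaluate the fitted inverse map at the observed data: theta =_D H(S(y_obs), tau)."""
    tau = torch.rand(n_draws, 1)
    s = S(y_obs.unsqueeze(0)).expand(n_draws, -1)
    return H(torch.cat([s, tau], dim=1)).detach()
```

Once trained, `posterior_samples` can be re-evaluated for any new observation without re-running the optimisation, which is the practical advantage over MCMC noted above.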
## 3 Distributional Causal Inference
The first fundamental element of our approach is to re-write the average treatment effect as an integral of quantiles. The key identity in this context is the Lorenz curve
\[E(Y)=\int_{-\infty}^{\infty}ydF(y)=\int_{0}^{1}F^{-1}(u)du.\]
If we set \(\tau=Y(1)-Y(0)\), we need the quantiles of \(\tau\) to calculate the treatment effect. We start by simulating \((\theta_{i},y_{i})\) pairs from the prior and the forward model, and then reverse the map. The propensity score is
\[\pi(x_{i})=P(Z_{i}=1\mid x_{i})=E(Z_{i}\mid x_{i})\]
So we have sufficient statistics and can replace the dataset with \((\pi(x_{i}),y_{i})\). We can then use quantile regression to estimate the quantiles of \(Y_{i}(1)-Y_{i}(0)\) given \(x_{i}\), exploiting the factorization
\[p(y,z\mid x)=p(y\mid x,z)p(z\mid x)\]
\[y(x,1)-y(x,0)=H(y,x,\pi(x)),\ \pi(x)=E(Z\mid x)\]
The average treatment effect is the expectation of
\[y(x,1)-y(x,0)\]
We can calculate it from quantiles.
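A sketch of this quantile-averaging step is given below, assuming a fitted helper `quantile_net(x, z, tau)` that returns the conditional \(\tau\)-quantile of the outcome (a hypothetical stand-in for the model above); the Lorenz-curve identity lets us average quantiles over a grid of \(\tau\) values to obtain the average treatment effect.

```python
import torch

def average_treatment_effect(quantile_net, x, n_grid=100):
    """Approximate E[Y(1) - Y(0)] by averaging conditional quantiles over a tau grid."""
    taus = (torch.arange(n_grid, dtype=torch.float32) + 0.5) / n_grid  # midpoint rule on (0,1)
    ones, zeros = torch.ones(x.shape[0], 1), torch.zeros(x.shape[0], 1)
    effects = []
    for tau in taus:
        t = torch.full((x.shape[0], 1), float(tau))
        q1 = quantile_net(x, ones, t)    # tau-quantile of Y given x and z = 1
        q0 = quantile_net(x, zeros, t)   # tau-quantile of Y given x and z = 0
        effects.append((q1 - q0).mean())
    return torch.stack(effects).mean()   # E(Y) = integral_0^1 F^{-1}(u) du applied to each arm
```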
## 4 Applications
In this section, we provide empirical examples and compare our approach with various alternatives. Specifically, we compare our method with generalized random forests Athey et al. (2019); Wager and Athey (2018) and more traditional propensity score-based methods Imbens and Rubin (2015). Our synthetic data is generated using heterogeneous treatment effects and a nonlinear conditional expectation function (response surface), with a sample size \(n\) of \(1000\). We use a three-dimensional \((p=3)\) covariate vector with all three components drawn from a standard normal distribution
\[x_{ij}\sim N(0,1),\ x\in R^{n\times p}\] \[\mu_{i}= -6+I(x_{i1}>x_{i2})+6|x_{i2}-1|\ (\text{Nonlinear effect})\] \[\pi_{i}= \sigma(\mu_{i})\ \ (\text{Sigmoid propensity})\] \[z_{i}\sim \text{Bernoulli}(\pi_{i})\] \[\tau_{i}= 1-2x_{i2}x_{i3}\ (\text{Nonlinear treatment effect})\] \[E(y_{i})= \mu_{i}+\tau_{i}z_{i}\] \[y_{i}\sim N(E(y_{i}),\sigma^{2})\]
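A sketch of this data-generating process in numpy, with the noise scale \(\sigma=1\) taken as an assumption:

```python
import numpy as np

def simulate_synthetic(n=1000, p=3, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, p))
    mu = -6 + (x[:, 0] > x[:, 1]) + 6 * np.abs(x[:, 1] - 1)   # nonlinear response surface
    pi = 1 / (1 + np.exp(-mu))                                 # sigmoid propensity
    z = rng.binomial(1, pi)                                    # treatment assignment
    tau = 1 - 2 * x[:, 1] * x[:, 2]                            # heterogeneous treatment effect
    y = mu + tau * z + sigma * rng.standard_normal(n)          # observed outcome
    return x, z, y, mu, tau, pi
```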
Figure 1 below shows the histograms of the generated \(y\), \(\mu\), and \(\tau\). Notice that we standardized \(\tau\) to have mean zero and variance one.
We calculate several metrics to evaluate and benchmark our method. We consider the average treatment effect (ATE) calculated from the sample and compute its mean squared error (MSE) as well as coverage and average interval length. Further, we consider the conditional average treatment effect (CATE), averaged over the sample.
First, we show some plots that demonstrate the quality of our fit of the response, shown in Figure 2.
We use neural networks as building blocks of our model. Each layer of a neural network is a function of the form
\[f(x;W,l)=h(Wx),\ x\in\Re^{d},\ W\in\Re^{d\times l},\]
where \(h\) is a nonlinear univariate function, such as ReLU, applied element-wise to \(x\), and \(l\) is the number of neurons in the layer. We use the following architecture for the response surface.
Figure 1: Synthetic data histograms
Figure 2: The middle plot compares fitted responses \(\hat{y}\) and simulated ones \(y\). The left plot compares the simulated \(y\) vs the noiseless values \(E(y)\). The right plot shows \(y\) vs \(\mu\).
We start by calculating a cosine embedding of the quantile \(q\)
\[s_{i}=\cos(i\pi q),\ i=1,\ldots,32\]
\[s= f(s;W_{1},32)\] \[\tilde{\pi}= f(x;W_{2},8)\] \[\hat{\pi}= f(\tilde{\pi};W_{3},32)\] \[\hat{z}= \sigma(\hat{\pi})\] \[\mu= q\circ f([x,\tilde{\pi}];W_{4},32)\] \[\tau= q\circ f(x;W_{5},32)\] \[\hat{y}= W_{6}(\mu+\tau\circ\hat{z}),\ W_{6}\in\Re^{32\times 2}.\]
Here \(\circ\) stands for element-wise multiplication. Our model generates a two-dimensional output \(\hat{y}\), first element is the mean response and the second is the quantile response. We use the following loss function to jointly estimate the components of our model
\[q\sim U(0,1)\] \[l_{z}= -(1/n)\sum_{i=1}^{n}\left[z_{i}\log\hat{z}_{i}+(1-z_{i})\log(1-\hat{z}_{i})\right]\] \[e_{i}= y_{i}-\hat{y}_{i}\] \[l_{\text{MSE}}= (1/n)\sum_{i=1}^{n}e_{i1}^{2}\] \[l_{q}= (1/n)\sum_{i=1}^{n}\max(qe_{i2},(q-1)e_{i2})\] \[l= w_{1}l_{z}+w_{2}l_{q}+w_{3}l_{\text{MSE}}\]
We add a constraint to the loss function to prevent the quantiles from crossing; specifically, our constraints are
\[\begin{cases}\hat{y}(\tau)<y,&\text{when $\tau<0.5$}\\ \hat{y}(\tau)>y,&\text{when $\tau>0.5$.}\end{cases}\]
We add this constraint as a penalty term to the loss function.
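A sketch of the combined loss is shown below, assuming the network returns the propensity logit, the mean-response head and the \(\tau\)-quantile head for each observation; the non-crossing constraint is written as a hinge penalty, and the weights are placeholders rather than the tuned values.

```python
import torch
import torch.nn.functional as F

def joint_loss(y, z, y_mean_hat, y_quant_hat, z_logit, q,
               w=(1.0, 1.0, 1.0), penalty_weight=1.0):
    """All inputs are (n, 1) float tensors; q holds the sampled quantile level per observation."""
    l_z = F.binary_cross_entropy_with_logits(z_logit, z)        # propensity head
    l_mse = ((y - y_mean_hat) ** 2).mean()                      # mean-response head
    e = y - y_quant_hat
    l_q = torch.maximum(q * e, (q - 1) * e).mean()              # pinball loss for the quantile head
    # Hinge penalty: the fitted quantile should lie below y when q < 0.5 and above y when q > 0.5.
    low = torch.relu(y_quant_hat - y) * (q < 0.5).float()
    high = torch.relu(y - y_quant_hat) * (q > 0.5).float()
    l_pen = (low + high).mean()
    return w[0] * l_z + w[1] * l_q + w[2] * l_mse + penalty_weight * l_pen
```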
Figure 3: Histogram of fitted propensity scores \(\hat{\pi}(x)\)
Figure 4 shows the posterior distribution of the treatment effect \(\tau\) for randomly selected units that were assigned no treatment (\(z=0\)). The vertical red line is the true value of the treatment effect. The posterior distribution of \(\tau\) is also very tight, which is consistent with the fact that the control group is large.
Figure 4: Histogram of posterior values of treatment effect \(\tau\) for randomly selected units that were assigned no treatment (\(z=0\)).
## 5 Discussion
Generative methods differ from traditional simulation-based tools in that they use a large training dataset to infer predictive mappings rather than densities. The main tool is high-dimensional nonlinear nonparametric regression using deep neural networks. Inference for the observed data is then the evaluation of the network and is therefore an interpolation approach to inference. There are many avenues for future research; given the wide applicability of simulation in econometric models, designing architectures for specific problems is of paramount interest.
|
2305.10438 | IMAGINATOR: Pre-Trained Image+Text Joint Embeddings using Word-Level
Grounding of Images | Word embeddings, i.e., semantically meaningful vector representation of
words, are largely influenced by the distributional hypothesis "You shall know
a word by the company it keeps" (Harris, 1954), whereas modern prediction-based
neural network embeddings rely on design choices and hyperparameter
optimization. Word embeddings like Word2Vec, GloVe etc. well capture the
contextuality and real-world analogies but contemporary convolution-based image
embeddings such as VGGNet, AlexNet, etc. do not capture contextual knowledge.
The popular king-queen analogy does not hold true for most commonly used vision
embeddings.
In this paper, we introduce a pre-trained joint embedding (JE), named
IMAGINATOR, trained on 21K distinct image objects level from 1M image+text
pairs. JE is a way to encode multimodal data into a vector space where the text
modality serves as the grounding key, which the complementary modality (in
this case, the image) is anchored with. IMAGINATOR encapsulates three
individual representations: (i) object-object co-location, (ii) word-object
co-location, and (iii) word-object correlation. These three ways capture
complementary aspects of the two modalities which are further combined to
obtain the final JEs.
Generated JEs are intrinsically evaluated to assess how well they capture the
contextuality and real-world analogies. We also evaluate pre-trained IMAGINATOR
JEs on three downstream tasks: (i) image captioning, (ii) Image2Tweet, and
(iii) text-based image retrieval. IMAGINATOR establishes a new standard on the
aforementioned downstream tasks by outperforming the current SoTA on all the
selected tasks. IMAGINATOR will be made publicly available. The codes are
available at https://github.com/varunakk/IMAGINATOR | Varuna Krishna, S Suryavardan, Shreyash Mishra, Sathyanarayanan Ramamoorthy, Parth Patwa, Megha Chakraborty, Aman Chadha, Amitava Das, Amit Sheth | 2023-05-12T05:34:52Z | http://arxiv.org/abs/2305.10438v1 | # IMAGINATOR: Pre-Trained Image+Text Joint Embeddings using Word-Level Grounding of Images
###### Abstract
Word embeddings, i.e., semantically meaningful vector representation of words, are largely influenced by the distributional hypothesis _"You shall know a word by the company it keeps"_Harris (1954), whereas modern prediction-based neural network embeddings rely on design choices and hyperparameter optimization. Word embeddings like Word2Vec, GloVe etc. well capture the contextuality and real-world analogies but contemporary convolution-based image embeddings such as VGGNet, AlexNet, etc. do not capture contextual knowledge. The popular _king-queen_ analogy does not hold true for most commonly used vision embeddings.
In this paper, we introduce a pre-trained joint embedding (JE), named IMAGINATOR, trained on 21K distinct image objects level from 1M image+text pairs. JE is a way to encode multimodal data into a vector space where the text modality serves as the grounding key, which the complementary modality (in this case, the image) is anchored with. IMAGINATOR encapsulates three individual representations: _(i) object-object co-location, (ii) word-object co-location, and (iii) word-object correlation_. These three ways capture complementary aspects of the two modalities which are further combined to obtain the final JEs.
Generated JEs are intrinsically evaluated to assess how well they capture the contextuality and real-world analogies. We also evaluate pre-trained IMAGINATOR JEs on three downstream tasks: (i) image captioning, (ii) Image2Tweet, and (iii) text-based image retrieval. IMAGINATOR establishes a new standard on the aforementioned downstream tasks by outperforming the current SoTA on all the selected tasks. IMAGINATOR will be made publicly available. The codes are available at [https://github.com/varunakk/IMAGINATOR](https://github.com/varunakk/IMAGINATOR)
## 1 Joint Modality and Contextuality
Word embeddings are learned representations such that words with similar meanings are represented similarly. Distribution-based compositional word embeddings like Word2vec Mikolov et al. (2013) and GloVe Pennington et al. (2014) are popular in modern NLP. These are used to extract the notion of relatedness across different words, and capture the overall semantic meaning of a text. Consider the _king-queen_Mikolov et al. (2013) word vector analogy (figure 1), which shows how good these word embeddings are at capturing syntactic and semantic regularities in language.
The notion of contextual similarity (i.e., words occurring together) is used in learning the representations, because of which vector arithmetic like King - Man + Woman = Queen are possible. See figure 1 Mikolov et al. (2013). Deriving an analogous representation using images is a challenging task since the concept of relatedness among images is not well-defined. Motivated by this argument, we propose creating joint embeddings (JEs) that can represent real-world analogies, which can aid in solving several multimodal tasks owing to their distributional semantics.
## 2 Contemporary Joint Embedding Methods
Canonical Correlation Analysis (CCA) based methods use similarities to project two inputs onto a
vector space. CLIP (Radford et al., 2021) utilizes contrastive pre-training and encodes aligned image and text embeddings with the help of text and visual modality encoders. Stanford's Joint Embedding (Kolluru, 2019) uses VGG-19 (Simonyan and Zisserman, 2014) and GLoVe (Pennington et al., 2014) to generate the image and text encodings using a triplet loss. Chen et al. (2020) proposed UNITER, trained on a large dataset, which uses an image and text encoder and a transformer to generate the final embeddings. Jia et al. (2021) use a noisy dataset of 1 billion (image, alt-text) pairs and propose a dual architecture for aligning and generating the visual and textual embeddings. This architecture uses contrastive loss for learning. Tan and Bansal (2019) proposed a framework to create a relation between visual and language modalities. This architecture consists of three encoders, one object relation encoder, a language encoder and a cross-modal encoder. Compared to the aforementioned prior works, illustrated in appendix figure 8, the unique differentiating factor with IMAGINATOR is that we focus on the word-level grounding (Gunti et al., 2022) of images while prior works perform embedding generation at the sentence level. Our belief is that this will help us learn rich relational features, i.e., features that are rich encapsulations of words and the corresponding objects they represent via images.
## 3 IMAGINATOR - Learning Joint Embeddings
Off-the-shelf word embeddings like Word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) and the embeddings generated by BERT (Devlin et al., 2018) and GPT (Radford et al., 2021) are used for tackling several downstream NLP tasks. The motivation behind creating IMAGINATOR is to have similar pre-trained embeddings for vision-language tasks: researchers can download pre-trained JEs and utilize them for any vision-language task at hand. Existing techniques have only explored JEs from the sentence-level perspective, which makes them less flexible to re-purpose for other tasks and, most importantly, demands a lot more data for the model to understand and derive meaningful relationships. We thus operate at the word level rather than the sentence level, to help improve the "sharpness" of the data, with the hope that this would, in turn, help synthesize higher relational features that can offer optimal performance on downstream tasks. To that end, we make some simple assumptions and posit arguments on their choice as better alternatives.
### Object vs. Word - a Unit Hypothesis
The smallest meaningful unit of text is a word, which we assume signifies a visual object embedded in an image. Although the common trend is to train end-to-end networks at the sentence level, such a system may not be able to learn fine-grained contextual relations like the _king-queen_ analogy. This design choice also aligns with our motivation to generate general-purpose JEs suited for a wide variety of downstream tasks (refer section 5).
#### 3.1.1 Number of Objects
The number of objects in available datasets like Flickr30k (Young et al., 2014) and COCO (Lin et al., 2014) is limited to only a few hundred. However, if we are interested in learning real-world analogies like the _king-queen_ analogy, we require far more real-life objects to be detected by the system. Detic (Zhou et al., 2022), a recent object detection technique, provides 21K object classes and thus seems the most pertinent. Results shown in table 6 indicate that an increase in the number of objects leads to a corresponding increase in accuracy.
Based on the unit hypothesis, we capture three aspects of the input data while generating joint embeddings: Object-object co-location: \(v_{oo}\), Word-object co-location: \(v_{wo}\), Word-object correlation: \(v_{wor}\).
### Learning \(v_{oo}\) and \(v_{wo}\)
Figure 2 offers a visual summary of the process of generating object-object co-location embeddings \(v_{oo}\) and word-object co-location embeddings \(v_{wo}\). \(v_{oo}\) and \(v_{wo}\) are learned using an object co-location matrix, where objects refer to the entities detected
Figure 1: CNN-based image embeddings are unable to capture contextuality like existing word embeddings. The _king-queen_ vs. _man-woman_ analogy has been popularized by (Mikolov et al., 2013), whereas drawing a similar analogy in image vector space is rather difficult. We argue joint embedding is the alternative.
using an object detection model. Object co-location matrix is a matrix where the rows and columns correspond to objects detected in our images and each cell represents the co-occurrences of the respective two objects. We then take the rows and apply dimensionality reduction techniques like SVD along with Eigenvalue weighting. The vector obtained is then used as the embedding. This yields _object-object co-location_, which encodes how frequently a detected object co-appears with other detected objects in the dataset. On the other hand, _word-object co-location_ is built using the objects from object detection on images and the words from the associated text given in the datasets. This might seem similar to object-object co-location at first glance, but a major difference is that the value in each cell represents the number of image captions having the corresponding object and word pair. With this co-location matrix, we get information on how frequently every object co-appears with other words in the dataset.
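A sketch of how these co-location counts might be assembled is given below, assuming each training example is a pair (caption words, detected object labels) for one image; object-object pairs are counted once per image, and word-object pairs once per caption.

```python
from collections import defaultdict

def colocation_counts(samples):
    """samples: iterable of (words, objects) pairs, one per image+caption."""
    obj_obj = defaultdict(int)   # object-object co-location counts
    word_obj = defaultdict(int)  # word-object co-location counts
    for words, objects in samples:
        words, objects = sorted(set(words)), sorted(set(objects))
        for i, o1 in enumerate(objects):
            for o2 in objects[i + 1:]:
                obj_obj[(o1, o2)] += 1
                obj_obj[(o2, o1)] += 1
        for w in words:
            for o in objects:
                word_obj[(w, o)] += 1
    return obj_obj, word_obj
```

The rows of the resulting matrices are then reweighted with PPMI and reduced with SVD, as described in Section 4.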
### Learning \(v_{wor}\)
Figure 2 illustrates the process of generating the word-object correlation embeddings \(v_{wor}\). \(v_{wor}\) is learnt using a different approach when compared with the other two embeddings. Co-location can be defined using the co-occurrence of two entities but correlation calls for a deeper understanding of the two entities. Therefore, we get joint embeddings for word-object correlation using word and object vectors.
We generate object embeddings by passing all detected crops of the object from the dataset to VGG19 (Simonyan and Zisserman, 2014). An average of these embeddings across all instances gives us the final embedding for the object, encoded as a mean representation. The word embeddings are acquired by creating a _word-word co-location_ matrix for the text in the dataset, similar to the aforementioned co-location matrices, where each cell represents the number of co-occurrences of the corresponding word pair.
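A sketch of the mean-crop object representation, where `vgg_embed` is a hypothetical helper returning the VGG19 feature vector of an image crop:

```python
import numpy as np

def object_embeddings(crops_by_object, vgg_embed):
    """Average the VGG19 features of every detected crop of each object class."""
    return {obj: np.mean([vgg_embed(crop) for crop in crops], axis=0)
            for obj, crops in crops_by_object.items()}
```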
To obtain the final joint embedding from these two vectors, we project the object embedding into the word embedding space instead of projecting both embeddings into a common space (Kolluru, 2019; Radford et al., 2021). The motivation behind this is to maintain the contextuality captured in word embeddings and thus enforce the object embeddings
Figure 2: Architecture for creating text embeddings and \(v_{oo}\) and \(v_{wo}\): the rows and columns in the co-location matrix are the words from the text or objects detected from the images from dataset. Each cell of this matrix represents the occurrence count of each row-column pair in the dataset. The two final vectors are generated using PPMI and eigenvalue weighting over the vectors from co-location matrices. (Bottom) Architecture for learning \(v_{wor}\): (left) the averaged VGG19 representation of a particular object across the whole dataset is passed; (right) word2vec representation of the word (i.e., the name of the visual object; for e.g., _horse_ in this case).
to learn the correlations. Once they learn a correlated vector space, we get the JEs from a weighted average of the projected word and object embeddings. We perform experiments to compare several projection methods (such as CCA Thompson (2000), Kernel CCA Hardoon et al. (2004), Deep CCA Andrew et al. (2013), etc.) and loss functions (InfoNCE Oord et al. (2018), contrastive loss Hadsell et al. (2006), and triplet loss Schroff et al. (2015)). Empirically, we find that orthogonal projection and triplet loss give the best JE results. We believe CCA overfits on our data while orthogonal projection Artetxe et al. (2018) uses the features based on the dataset size. Please refer to table 6 in Appendix for more on these experiments and their results.
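One way to realise the orthogonal projection is the closed-form Procrustes solution sketched below, assuming paired object and word vectors that have already been brought to a common dimensionality; this is an illustrative reading of the projection step rather than the exact training recipe.

```python
import numpy as np

def orthogonal_projection(obj_vecs, word_vecs):
    """Orthogonal map sending object vectors into the word-embedding space.

    obj_vecs, word_vecs: (n, d) arrays whose i-th rows describe the same concept.
    """
    u, _, vt = np.linalg.svd(obj_vecs.T @ word_vecs)   # Procrustes: argmin ||obj @ W - word||_F
    w = u @ vt                                          # orthogonal matrix
    return obj_vecs @ w
```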
## 4 Lessons Learnt from NLP
Levy et al. (2015) argue that the performance gains of neural network based word embeddings are due to certain system design choices and hyperparameter optimizations, rather than the embedding algorithms themselves. Furthermore, they show that these modifications can be transferred to traditional distributional models, yielding similar gains. In contrast to prior reports, they show mostly local or insignificant performance differences between the methods, with no global advantage to any single approach over the others. Therefore, we remain grounded in count-based distributional semantics methods. Raw counts or normalized counts are not useful; rather, we choose alternatives like PMI and SVD.
### PPMI and Context Distribution Smoothing
The PPMI (Positive Pointwise Mutual Information) between a word and its context is well known to be an effective association measure in the word similarity literature. Levy et al. (2015) show that the skip-gram with negative-sampling training method (SGNS) is implicitly factorizing a word-context matrix whose cell values are essentially shifted PMI. Following their analysis, we present two variations of the PMI (and implicitly PPMI) association metric, which we adopt from SGNS. In this section, \(w\) and \(c\) represent the word and context matrix.
**Shifted PMI.** The shift caused by \(k>1\) (the number of negative samples in the optimization of \((w,c)\) pairs: \(PMI(w,c)-\log(k)\)) can be applied to distributional methods through shifted PPMI Levy and Goldberg (2014):
The \(k\) here, firstly, estimates negative sample distribution and secondly, acts as a prior on the probability of an occurrence of \((w,c)\) in the corpus vs. a negative sample. Shifted PPMI captures the
Figure 3: Similar images and the vector space distance between them. Word pairs taken from the Flickr30k dataset. The IMAGINATOR JE vector space captures real-world analogies well.
latter, i.e., the prior aspect of \(k\).
\[SPPMI(w,c)=\max(PMI(w,c)-\log(k),\ 0) \tag{1}\]
**Context Distribution Smoothing (CDS).** Word2Vec Mikolov et al. (2013) draws negative samples according to a smoothed unigram distribution. This smoothing variation has an analog when calculating PMI directly:
\[PMI_{\alpha}(w,c)=\log\frac{\hat{P}(w,c)}{\hat{P}(w)\cdot\hat{P}_{\alpha}(c)} \tag{2}\]
\[\hat{P}_{\alpha}(c)=\frac{\#(c)^{\alpha}}{\Sigma_{c}\#(c)^{\alpha}} \tag{3}\]
By enlarging the probability of sampling a rare context (since \(\hat{P}_{\alpha}(c)>\hat{P}(c)\) when \(c\) is infrequent), CDS reduces the PMI of \((w,c)\) for a rare context \(c\) - thus removing PMI's bias towards rare words.
### SVD and Eigenvalue Weighting
Word and context vectors derived using SVD of co-location matrices can be represented by:
\[W^{SVD}=U_{d}\cdot\Sigma_{d}\ \ \ C^{SVD}=V_{d} \tag{4}\]
However, in this case, \(C^{SVD}\) is orthonormal while \(W^{SVD}\) is not. The factorization achieved by SGNS is much more symmetric, and a similar symmetry can be obtained using the following factorization:
\[W=U_{d}\cdot\sqrt{\Sigma_{d}}\ \ \ C=V_{d}\cdot\sqrt{\Sigma_{d}} \tag{5}\]
Levy et al. (2015) state that while it is not theoretically clear why a symmetric approach performs better for semantic tasks, it works empirically.
For our vector-deriving implementation, we use this as a dimensionality reduction technique. It is similar to plain SVD, but instead of the usual representation \(W=U_{d}\cdot\Sigma_{d}\) and \(C=V_{d}\), eigenvalue weighting uses \(W=U_{d}\cdot\Sigma_{d}^{0.5}\) and \(C=V_{d}\). To summarize, after creating the co-location matrix, we derive vectors by first applying SPPMI with CDS, followed by SVD of the matrices with eigenvalue weighting.
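A sketch of this pipeline on a dense count matrix, with rows indexing words (or objects) and columns indexing contexts; the shift \(k\), smoothing exponent \(\alpha=0.75\) and output dimensionality are assumptions.

```python
import numpy as np

def sppmi_svd_vectors(counts, k=1.0, alpha=0.75, dim=300):
    """Counts -> shifted PPMI with context distribution smoothing -> SVD with eigenvalue weighting."""
    counts = counts.astype(float)
    total = counts.sum()
    p_wc = counts / total
    p_w = counts.sum(axis=1, keepdims=True) / total
    ctx = counts.sum(axis=0) ** alpha
    p_c_alpha = ctx / ctx.sum()                       # smoothed context distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c_alpha))
    sppmi = np.maximum(pmi - np.log(k), 0.0)          # SPPMI(w, c) = max(PMI - log k, 0)
    sppmi = np.nan_to_num(sppmi, nan=0.0, posinf=0.0, neginf=0.0)
    u, s, _ = np.linalg.svd(sppmi, full_matrices=False)
    dim = min(dim, len(s))
    return u[:, :dim] * np.sqrt(s[:dim])              # W = U_d * Sigma_d^0.5
```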
### Merging \(v_{oo}\), \(v_{wo}\), and \(v_{wor}\)
The three vectors can be merged using approaches such as concatenation, averaging, or autoencoding. An autoencoder could learn the merging automatically, combining complementary information from the source embeddings. In the interest of simplifying this aspect of our design, for our experiments we use a weighted average to combine the embeddings. The weights are decided empirically; the best weights we find are 10, 10, and 80 for \(v_{oo}\), \(v_{wo}\), and \(v_{wor}\) respectively.
## 5 Intrinsic Evaluation of IMAGINATOR
Being able to perform vector arithmetic like King - Man + Woman = Queen in a generated word vector space is well known as the intrinsic evaluation paradigm. Contemporary image embeddings are devoid of contextuality, whereas text embeddings are much more meaningful, as shown in figure 1. With joint embeddings, we aim to add a contextual component to improve the semantic richness of the joint embedding vector space. We use two kinds of intrinsic evaluation setups to evaluate IMAGINATOR: (i) word contextuality, and (ii) image similarity.
### Word Contextuality
We use all \(10\) datasets mentioned in Jastrzebski et al. (2017) to evaluate the generated word embeddings intrinsically. Intrinsic means that only basic arithmetic functions are performed on the embeddings and no other models are trained. The datasets cover three tasks: (i) word similarity, (ii) word analogy, and (iii) word categorization. First, the word embeddings for a given pair of similar words from the datasets are computed. Then, we use the average euclidean distance to derive the final results (as shown in table 1) for embeddings from GloVe Pennington et al. (2014), CLIP Radford et al. (2021),
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Dataset** & GloVe & CLIP & SJE & Ours \\ \hline WS353 Finkelstein et al. (2001) & 2.65 & 0.35 & 0.19 & 0.14 \\ MTurk Halawi et al. (2012) & 1.99 & 0.26 & 0.28 & 0.20 \\ RG65 Rubenstein et al. (1965) & 0.75 & 0.38 & 0.20 & 0.13 \\ RW Pilebury et al. (2018) & 0.96 & 0.19 & 0.17 & 0.25 \\ SimLex999 Hill et al. (2015) & 2.31 & 0.18 & 0.22 & 0.12 \\ MEN Bruni et al. (2014) & 0.51 & 0.22 & 0.13 & 0.11 \\ Google Analogy Mikolov et al. (2013) & 2.09 & 0.18 & 0.12 & 0.15 \\ MSR Analogy Mikolov et al. (2013) & 0.63 & 0.30 & 0.09 & 0.22 \\ SemEval2012 Jurgens et al. (2012) & 1.2 & 0.21 & 0.32 & 0.26 \\ BLESS Baroni and Lenci (2011) & 2.77 & 0.22 & 0.19 & 0.11 \\ \hline Average & 1.5 & 0.25 & 0.19 & **0.16** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Results (average euclidean distance) for intrinsic evaluation of our JEs based on notable word contextually datasets. Lower is better. Our model outperforms the baselines on most of the datasets and has the lowest overall average distance.
and IMAGINATOR. We can see that IMAGINATOR performs better than the baselines.
### Image Similarity
Analogy-making on images is relatively challenging. Our hypothesis is that vectors of the same/similar objects must be nearby in the IMAGINATOR vector space. We evaluate IMAGINATOR intrinsically on the image similarity task using objects from five datasets - Caltech 101 (Li et al., 2022a), Flickr 30k, MS COCO, Google CC (Sharma et al., 2018), and Visual Genome (Krishna et al., 2017). We extract the list of similar objects from the datasets, obtain features from VGG19 and then orthogonally (Artetxe et al., 2018) project those objects into the IMAGINATOR vector space. We then calculate the pairwise euclidean distance between such vectors and average them for the entire dataset. Table 2 shows the object similarity performance of SJE, CLIP and IMAGINATOR on a variety of datasets. Our method comprehensively outperforms the baselines on all the datasets. Figure 3 shows some examples of the relation between projected JEs of these objects. From the examples we can see that IMAGINATOR captures the nuances of the images.
## 6 IMAGINATOR for Downstream Tasks
The downstream vision-language (VL) tasks chosen to test our pre-trained JEs are: (i) image captioning, (ii) Image2Tweet (Jha et al., 2021), and (iii) text-based image retrieval.
### Image Captioning
Image captioning is a common multimodal task which involves the generation of a textual description for an image. Describing the contents of an image requires visual understanding at an object level. We use JEs from IMAGINATOR to generate captions on datasets such as Flickr30k (Young et al., 2014) and COCO (Lin et al., 2014).
For an input image, we start by obtaining an image embedding using VGG19 (Simonyan and Zisserman, 2014), which is then orthogonally projected in IMAGINATOR embedding space. We use the JE of the image to find \(k\) nearest objects in the vector space. For our experiments we used \(k=10\), giving us 10 objects associated with the input image. These objects are then passed to a sequence-to-sequence module, namely, the T5 transformer (Bhatia, 2021), which generates the final caption. We use a pre-trained T5 model, fine-tuned on Flickr30k and COCO. Figure 4 describes the captioning pipeline while Figure 5 shows some output examples.
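A sketch of this retrieve-then-generate captioning step is given below; `vgg_embed`, `project_to_je` and `t5_generate` are hypothetical helpers standing in for the components described above.

```python
import numpy as np

def caption_image(image, object_words, object_vectors,
                  vgg_embed, project_to_je, t5_generate, k=10):
    """Project the image into the joint space, pick k nearest object words, caption them."""
    img_vec = project_to_je(vgg_embed(image))                      # image -> JE space
    dists = np.linalg.norm(object_vectors - img_vec, axis=1)       # distance to every object JE
    nearest = [object_words[i] for i in np.argsort(dists)[:k]]     # k nearest objects/words
    return t5_generate(" ".join(nearest))                          # seq2seq caption generation
```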
Table 3 shows the quantitative results of baseline models and IMAGINATOR. We can see that our model outperforms all the baselines in terms of BLEU score and BERTScore (Zhang et al., 2020) on both the datasets.
### Image2Tweet
Image2Tweet (Jha et al., 2021) is a task which is a notch above traditional image captioning in terms of complexity. Given an input image, the
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & SJE & CLIP & IMAGINATOR \\ \hline Caltech 101 & 1.9 & 1.5 & 0.13 \\ Flickr 30K & 0.8 & 0.4 & 0.06 \\ MS COCO & 0.9 & 1.3 & 0.2 \\ Google CC & 0.2 & 0.4 & 0.08 \\ Visual Genome & 1.1 & 1.4 & 0.1 \\ \hline Average & 0.98 & 1.00 & **0.11** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Average pairwise euclidean distance between similar objects from each dataset. Lower is better.
Figure 4: Architecture of Image captioning using IMAGINATOR. We use VGG19 for embedding and then we use IMAGINATOR to project into the joint embedding space. We pick K nearest objects/words and pass to seq2seq model to generate caption.
task involves generating a tweet like a human news reporter. Figure 22 shows some examples from the dataset.
The tweet is generated using a method similar to image captioning. The joint embedding of the input image is used to find the \(k\) nearest neighbouring embeddings in the projections space. These neighbours are then used to generate the tweet using a sequence-to-sequence model.
The results are based on the CIDEr metric (refer table 4). We found that using other datasets for training SoTA models yielded abysmal results, indicating that Image2Tweet is a fairly complex problem. However, IMAGINATOR performs reasonably well on the task and surpasses comparable contemporary SoTA captioning methods, namely UVLP Zhou et al. (2020) and OSCAR Li et al. (2020).
### Text-based Image Retrieval
The fundamental question that we are seeking an answer to is whether, using IMAGINATOR word-object level embeddings, we can achieve compositionality and obtain a vector representation at the sentence-image level. For example, by passing word vectors in a sequence to a language model we can obtain a sentence-level vector representation. To verify the compositionality of joint modality embeddings, we test our approach on the task of text-based image retrieval on the Flickr30K dataset Young et al. (2014). The main challenge of this task is to find the appropriate content in the visual space while the input is in the text space. Another reason for introducing compositionality is
\begin{table}
\begin{tabular}{l l} \hline \hline Method & CIDEr \\ \hline Baseline of Image2tweet Jia et al. (2021) & 0.0003 \\ UVLP Zhou et al. (2020) [SoTA on Flickr] & 0.003 \\ OSCAR Li et al. (2020) [SoTA on COCO] & 0.004 \\ CLIP Radford et al. (2021) & 0.006 \\ Stanford joint embedding Kolluru (2019) & 0.007 \\
5 ensemble Luo et al. (2018) & 0.0090 \\ IMAGINATOR + \(k\) nearest objects + T5 & **0.0095** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results (CIDEr score) of various multi-modal models on the Image2Tweet task. Higher is better. Our model outperforms all other models.
Figure 5: Examples of some image captioning outputs generated by IMAGINATOR along with the original caption and the caption generated by OSCAR Li et al. (2020). IMAGINATOR gives richer and more detailed captions than OSCAR. For more examples please refer the appendix.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline method & Flickr30k & \multicolumn{2}{c}{COCO} \\ \hline & BLEU & BERTScore & BLEU & BERTScore \\ \hline UVLP & 30.1 & - & - & - \\ OSCAR & - & - & 41.7 & - \\ SJE & 30.5 & 0.78 & 35.6 & 0.8 \\ CLIP & 31.3 & 0.83 & 36.3 & 0.85 \\ BLIP & - & - & 40.4 & - \\ IMAGINATOR & **33.2** & **0.87** & **43.1** & **0.88** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of different models on the image captioning task. Higher is better. Unavailable scores are left blank.
Figure 6: Image retrieved for the query - _“several climbers climbing rock together”_ - it is evident that ALBEF Li et al. (2021) wrongly emphasized on _”rock together”_, whereas XVLM Zeng et al. (2021) is unable to comprehend plurality in the query here, while BERT\({}_{IMAGINATOR}\) can do the job well.
that each word is usually associated with multiple images. Hence, there is a need for us to learn a single image representation for a given text. Though we explore contrastive methods in section 7, to solve the above-mentioned challenges, we introduce an approach using BERT and evaluate it on text-based image retrieval.
#### 6.3.1 Compositionality of Joint Embeddings - \(BERT_{IMAGINATOR}\)
BERT is arguably one of the most successful modelling architectures in NLP. It accepts token embeddings as input and produces contextualized embeddings as output. In contrast, we propose \(BERT_{IMAGINATOR}\), which is trained to take image+text as input and output a compositional vector representation for both modalities.
We utilize BERT (Devlin et al., 2018) and CLIP (Radford et al., 2021) as our backbones to generate JEs. Instead of feeding the BERT model tokenized words obtained via a tokenizer, we use IMAGINATOR (refer section 3.3) word-object embeddings as input to the model. We process necessary tokenization, position encoding, and segment embeddings accordingly, per the BERT architecture.
We utilize CLIP (Radford et al., 2021) for generating another JE from an image-sentence pair by obtaining the image and text embeddings from the CLIP encoders and concatenating them. We refer to this as the _sentence JE_. Both these embeddings, viz., the _sentence JE_ and the projected \(BERT_{IMAGINATOR}\) embedding, are projected to a common space using orthogonal projection (Artetxe et al., 2018), on which we compute our loss. Figure 7 visually depicts our training process while table 5 shows BERT\({}_{IMAGINATOR}\) outperforming SoTA information retrieval (IR) baselines, namely ALBEF (Li et al., 2021) and XVLM (Zeng et al., 2021), on Recall@{1, 5, 10}. Some output examples are shown in figure 6.
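A sketch of the batch-level objective from Figure 7 — cosine similarity is maximised for matched image-sentence pairs and minimised for the rest of the batch; an InfoNCE-style symmetric formulation is one reasonable reading, and the temperature is an assumption.

```python
import torch
import torch.nn.functional as F

def batch_retrieval_loss(bert_je, sentence_je, temperature=0.07):
    """bert_je, sentence_je: (B, d) projected embeddings; row i of each describes the same pair."""
    a = F.normalize(bert_je, dim=-1)
    b = F.normalize(sentence_je, dim=-1)
    logits = a @ b.t() / temperature                 # pairwise cosine similarities
    targets = torch.arange(a.shape[0], device=a.device)
    # Matched pairs sit on the diagonal; off-diagonal pairs are pushed apart.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```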
## 7 Conclusion and Future Work
We proposed a new pre-trained joint embedding IMAGINATOR. Our major contribution is on adopting count-based methods for joint modality, echoing the philosophy from Levy et al. (2015). We present an in-depth intrinsic evaluation along with a new architecture \(BERT_{IMAGINATOR}\). IMAGINATOR outperformed SoTA on three tasks: _(i) image captioning, (ii) Image2Tweet, and (iii) text-based image retrieval_. In the future, we would like to explore other multimodal tasks such as VQA.
## Discussion and Limitations
While IMAGINATOR pushes the boundaries of the state-of-the-art in tasks that involve language and vision joint modelling, there are some limitations.
#### Object Detection - Limited Number of Classes
IMAGINATOR utilizes the atomic units of multimodal data - individual words for text representation and individual objects for image representation. Typically, the number of unique words (i.e., the vocabulary) is quite large in a given text relative to the number of objects in images. As such, IMAGINATOR being a joint learning technique is bottlenecked by the capabilities of existing object detection techniques since they only typically deal
Figure 7: \(BERT_{IMAGINATOR}\) - Training approach for Image Retrieval. Training happens in batches and cosine similarity between corresponding image-sentence pair is maximised while for other pairs it is minimized.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & R@1 & R@5 & R@10 \\ \hline ALBEF (Li et al., 2021) & 85.6 & 97.5 & 98.9 \\ XVLM (Zeng et al., 2021) & 86.1 & 97.3 & 98.7 \\ BLIP (Li et al., 2022b) & 87.6 & 97.7 & 99.0 \\ BERT\({}_{IMAGINATOR}\) & **89.48** & **98.1** & **99.2** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on image retrieval: Recall@{1, 5, 10} Score for the Flickr30K dataset. Higher is better.
with a limited repertoire of objects. To enhance the richness and expressivity of JEs, object detection models that can identify the wide gamut of objects in the world would be critical.
#### Contrastive Learning
Contrastive learning is a task-independent technique that focuses on learning the similarities and differences between samples in a dataset. The objective is to learn an embedding space where similar inputs, say samples belonging to the same class, are embedded as similar representations, while samples from dissimilar classes are separated in the embedding space. IMAGINATOR performs well on several tasks despite our object representation being a simple average of image embeddings. However, contrastive learning might be able to learn even better vectors that capture the relations between images and their objects.
#### Vision Transformer and Positional Encoding
A Vision Transformer (ViT) is a transformer targeted at vision processing tasks, such as object recognition, and is much more robust than CNNs. It divides an image into fixed-size patches, embeds each of them, and includes a positional embedding along with the patch embedding as an input to the transformer encoder. In our case, if we could draw meaningful cross-modal connections between sections of text and the corresponding parts of images, a significant performance uptick could potentially be reached. This can be implemented using the various positional encoding schemes in ViT.
|
2307.06423 | Bi-Touch: Bimanual Tactile Manipulation with Sim-to-Real Deep
Reinforcement Learning | Bimanual manipulation with tactile feedback will be key to human-level robot
dexterity. However, this topic is less explored than single-arm settings,
partly due to the availability of suitable hardware along with the complexity
of designing effective controllers for tasks with relatively large state-action
spaces. Here we introduce a dual-arm tactile robotic system (Bi-Touch) based on
the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot
arms with low-cost high-resolution tactile sensors (TacTips). We present a
suite of bimanual manipulation tasks tailored towards tactile feedback:
bi-pushing, bi-reorienting and bi-gathering. To learn effective policies, we
introduce appropriate reward functions for these tasks and propose a novel
goal-update mechanism with deep reinforcement learning. We also apply these
policies to real-world settings with a tactile sim-to-real approach. Our
analysis highlights and addresses some challenges met during the sim-to-real
application, e.g. the learned policy tended to squeeze an object in the
bi-reorienting task due to the sim-to-real gap. Finally, we demonstrate the
generalizability and robustness of this system by experimenting with different
unseen objects with applied perturbations in the real world. Code and videos
are available at https://sites.google.com/view/bi-touch/. | Yijiong Lin, Alex Church, Max Yang, Haoran Li, John Lloyd, Dandan Zhang, Nathan F. Lepora | 2023-07-12T19:29:37Z | http://arxiv.org/abs/2307.06423v1 | # Bi-Touch: Bimanual Tactile Manipulation with Sim-to-Real Deep Reinforcement Learning
###### Abstract
Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. However, this topic is less explored than single-arm settings, partly due to the availability of suitable hardware along with the complexity of designing effective controllers for tasks with relatively large state-action spaces. Here we introduce a dual-arm tactile robotic system (Bi-Touch) based on the Tactile Gym 2.0 setup that integrates two affordable industrial-level robot arms with low-cost high-resolution tactile sensors (TacTips). We present a suite of bimanual manipulation tasks tailored towards tactile feedback: bi-pushing, bi-reorienting and bi-gathering. To learn effective policies, we introduce appropriate reward functions for these tasks and propose a novel goal-update mechanism with deep reinforcement learning. We also apply these policies to real-world settings with a tactile sim-to-real approach. Our analysis highlights and addresses some challenges met during the sim-to-real application, e.g. the learned policy tended to squeeze an object in the bi-reorienting task due to the sim-to-real gap. Finally, we demonstrate the generalizability and robustness of this system by experimenting with different unseen objects with applied perturbations in the real world. Code and videos are available at [https://sites.google.com/view/bi-touch/](https://sites.google.com/view/bi-touch/).
Force and Tactile Sensing; Reinforcement Learning
## I Introduction
Bimanual robotic manipulation is a useful and natural way of manipulating large, unwieldy or coupled objects due to the better manoeuvrability, flexibility and a larger workspace compared to single-arm settings [1]. Furthermore, the higher dimensionality of the dual-arm state-action space can enable more realistic tasks for real-world application, particularly when tactile sensing is leveraged to complement vision [2]. However, there are challenges in applying bimanual touch: (1) dual-arm systems often introduce more complexity in terms of system integration and controller design [3]; and (2) the high cost of existing dual-arm systems makes them less accessible to the research community. Moreover, while vision is commonly used as the primary sensing modality for bimanual manipulation, tactile sensing complements those aspects where vision is limited, such as enabling physically-interactive control with soft contacts and ensuring robustness in scenarios where visual occlusion may occur.
Thus, tactile feedback is needed for precise, safe and reliable dual-arm manipulation, since careful regulation of the applied force on the local contact during robot-object interaction is needed for stable control while avoiding damage [3]. However, it remains an open challenge to design effective controllers for dual-arm robots using high-resolution tactile sensing to manipulate unknown objects in uncertain environments [4].
Recent advances in deep reinforcement learning (RL) for robotics indicate a plausible route to acquire sophisticated control policies for manipulation tasks with high-dimensional state-action space [5, 6, 7]. Nevertheless, progress in data-driven methods for bimanual robotics is challenging because of the sim-to-real gap, research reproducibility and inaccessible robotic hardware. Also, learning tactile dual-arm manipulation in the real world is difficult, such as preventing the arms from colliding, which is especially difficult at the start of training. Moreover, RL requires frequent manual resetting of tasks which is too laborious in the real world. Lastly, large-scale training in the real world risks damage to sensors. Therefore, the present study aims to advance recent progress with sim-to-real tactile deep RL methods applied to low-cost high-resolution tactile sensors with an affordable dual-arm manipulation system.
The main contributions of this work are as follows:
1) We adapt and extend the Tactile Gym 2.0 [8] to a low-cost dual-arm tactile robot setting with three new contact-rich bimanual tasks: bi-pushing, bi-reorienting, and bi-gathering.
2) We introduce appropriate reward functions for these tasks and show that deep RL reaches satisfactory performance using only proprioceptive and tactile feedback. To improve the robustness of the policies for real-world applications, we improve the sim-to-real transfer for bi-reorienting and propose a novel goal-update mechanism (GUM) for bi-gathering.
3) We demonstrate that the bimanual policies learned in simulation can be transferred well to the physical dual-arm robot. We further demonstrate the generalizability and robustness of the learned policies by testing the system on unseen objects.
To the best of our knowledge, this is the first framework for sim-to-real deep RL tailored to bimanual tactile manipulation.
## II Related Work
### _Bimanual Robot System with Deep RL_
Designing an effective controller for dual-arm manipulation is a long-standing challenge [9, 10], and the recent success of deep RL incentivizes robotics researchers to apply it to this problem [11, 12]. Previous work [13] successfully applied an RL policy learned in simulation to a real-world task (connecting magnets) with a dual-arm platform, but it relied on a marker-based visual tracking system and did not show any generalization ability to unseen objects. Grannen
et al. [14] proposed an RL framework to learn bimanual scooping policies for food acquisition and demonstrated its generalizability on various unseen food.
Since deep RL relies on large amounts of training, in recent years several bimanual manipulation simulators have been developed. Fan et al. developed a distributed RL framework called SURREAL for a set of dual-arm manipulation tasks [15]. Similarly, Zhu et al. [16] designed a suite of single-arm and dual-arm manipulation environments with Mujoco for algorithm development and evaluation. Chen et al. [17] developed a simulation benchmark for bimanual dexterous hands with a suite of bimanual manipulation tasks and tried solving these tasks with different RL methods. However, these benchmarks have not considered tactile sensing, which hinders their application to tasks that require direct, detailed information about the local contact for fine-grained manipulation.
### _Bimanual Robot System with Tactile Sensing_
Tactile sensing enables dual-arm robots to perceive the local contact features with a granularity that cannot be achieved exclusively with vision. Nevertheless, this topic is not widely studied. Sommer et al. [18] leveraged a dual-arm tactile robot for object exploration with a Gaussian-Process-based filter and grasp-pose selection using a Gaussian mixture model; the method required human demonstrations for particular objects and only one arm had tactile sensing. Hogan et al. [19] developed dual-arm controllers with tactile palms for pusher-slider manipulation that can explicitly control the object's trajectory. A drawback of this approach was that it relied on a set of pre-designed motion skills and assumed full knowledge of the environment, hindering its ability to generalize to uncertain environments where unseen objects and unpredictable perturbations may be encountered. Here we aim to facilitate research on bimanual tactile robotics by developing a dual-arm tactile robot platform as a benchmark to design sophisticated controllers for complex bimanual tasks.
## III Methods
### _Accessible Dual-arm Tactile Robotic System_
#### Iii-A1 Desktop Dual-arm Platform
To facilitate affordable automation and lower the entry barrier, we develop a low-cost dual-arm tactile robotic system while keeping high accuracy, which is comprised of two industry-capable desktop robotic arms (Dobot MG400) with vision-based tactile sensors mounted at the wrists as end-effectors (Fig. 1c). The proposed platform is developed with the Tactile Gym 2.0 [8] simulation (Fig. 1a) for deep RL-based policy training (see Sec. III-C).
Although the Dobot MG400 has a 4-DoF workspace with only Cartesian positions and rotations around the \(z\)-axis of the end-effector actuated, its accuracy is the same as larger industrial robot arms such as the UR5 at a considerably improved cost and convenience [8]. To maximally leverage the workspace of two robots while having one fixed configuration suitable for all the bimanual tactile robotic tasks considered here, we introduce a table (120 mm height) into the physical dual-arm platform to support objects, placed between the two arms (shown in blue in Fig. 1a,c). The two arm bases are set centrally below the board separated by a distance of 700 mm.
#### Iii-A2 High-resolution optical tactile sensing
To endow the dual-arm robot with tactile sensing, we equip it with two low-cost, high-resolution biomimetic optical tactile sensors used in previous tactile robot research with the Dobot MG400 (the TacTip) [20] as the end-effectors. The sensor features an internal array of biomimetic markers on protruding pins inside the soft tactile skin, and the sensing principle of this tactile sensor is to capture marker-based movement that amplifies the skin deformation induced by physical contact against external stimuli. We refer to references [21, 22] for more details.
### _Sim-to-Real Deep RL Framework for Bimanual Tactile Robotic Manipulation_
To apply the deep RL policies learned in simulation to the physical dual-arm tactile robotic system, we take a sim-to-real approach [7] consisting of three parts (shown in Fig. 1): **1)** An online agent training in simulation (Fig. 1a), where
Figure 1: Overview of the proposed dual-arm tactile robotic system (Bi-Touch) with sim-to-real deep RL. a) Deep RL is applied to learn policies for three simulated bimanual tactile manipulation tasks (red arrows show desired displacements) using Tactile Gym. b) Real-to-sim tactile image generator learnt for the surface feature. c) The real-world evaluation feeds real tactile images through the generator into the RL policy concatenated with proprioceptive information.
deep RL policies are learned in the Tactile Gym for three bimanual tactile robotic tasks (bi-pushing, bi-reorienting and bi-gathering) with observations comprising simulated tactile images and proprioceptive feedback. **2)** A real-to-sim domain adaption process where a translation model is learned to transfer real to simulated tactile images. **3)** A sim-to-real application with networks trained in the previous two parts, for transferring deep RL policies to the physical system.
To apply this approach to the Bi-Touch, we make several changes. First, we develop a simulated dual-arm tactile robot in the configuration described above (Sec. III-A) in Tactile Gym that is suited to the three bimanual robotic tasks. In the simulation learning phase, we concatenate two simulated tactile images (depicted in Fig. 1a) to be used as part of the observation of an RL agent. The simulated tactile images are captured by synthetic cameras embedded within simulated tactile sensors built from CAD models of the real sensors.
In the real-to-sim domain adaption phase, a real tactile image dataset paired with a simulated tactile image dataset is required for training. All three bimanual tasks considered here need to model a specific contact feature: the shape of a flat (or relatively flat) contact surface. We adopt the procedure proposed in [7] for dataset collection. However, a larger sensing space is necessary, as the bimanual tasks require the robot to learn more difficult control than for the previous single-arm tasks [7, 8]. Thus we collect surface-feature data with depth range \([0.5,8]\,\mathrm{mm}\) and rotation range \([-30^{\circ},30^{\circ}]\), broader than in [7, 8]. The training dataset comprises 5000 tactile images and the validation dataset has 2000 tactile images collected both in simulation and with a desktop tactile robot on paired random contacts. Samples are labelled with the relative poses between the sensor and a known flat surface. Finally, we use an image-to-image translation Generative Adversarial Network (GAN) [23] to learn the real-to-sim tactile image translation, with hyperparameters from [7].
Since the RL policies take concatenated tactile images as part of the observation, in the sim-to-real application phase (Fig. 1c) we first transfer the real tactile images from both TacTips separately into simulated ones. These tactile images are then concatenated as observation input for the RL agent.
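As a rough illustration, the per-step inference loop in this phase might look like the sketch below; the `gan` and `policy` objects, their method names, and the dictionary observation layout are placeholders rather than the released implementation.

```python
import numpy as np

def sim_to_real_step(policy, gan, tactile_left, tactile_right, proprio):
    """One control step: map each real tactile image into the simulated
    domain, concatenate the pair, append proprioception, and query the
    trained RL policy for the next dual-arm action."""
    sim_left = gan.translate(tactile_left)    # real -> simulated tactile image
    sim_right = gan.translate(tactile_right)
    tactile_obs = np.concatenate([sim_left, sim_right], axis=-1)
    obs = {"tactile": tactile_obs, "proprio": np.asarray(proprio)}
    action, _ = policy.predict(obs, deterministic=True)
    return action  # e.g. TCP pose increments for both arms
```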
### _Bimanual Tactile Manipulation Tasks_
In this study, we propose three bimanual tactile control tasks to benchmark the aforementioned dual-arm tactile system: bi-pushing, bi-gathering and bi-reorienting. The action space of the dual-arm robot in the bi-reorienting task is 6-dimensional comprising the \(x\)-, \(y\)-position and \(Rz\)-rotation angle of each robot's tool-centre-point (TCP), while the ones in the bi-pushing task (\(x\) and \(Rz\)) and the bi-gathering task (\(y\) and \(Rz\)) are 4-dimensional. Note that each end-effector (TacTip) is controlled and moves in its own TCP frame.
#### III-C1 Bi-Pushing
An advantage of dual-arm robots over single-arm robots is that they can move relatively large and unwieldy objects. The goal of this bi-pushing task is to move a large-size object on a planar surface collaboratively with two robot arms with end-effectors (TacTips) to achieve a sequence of goals along a given trajectory. Each goal comprises the target object's position \(p^{\text{g}}(x,y)\) and orientation \(\theta^{\text{g}}\). We express the reward \(R_{t}^{\text{BP}}\) at time step \(t\) for the bi-pushing task as:
\[R_{t}^{\text{BP}}=-w_{1}\left\|p_{t}^{\text{g}}-p_{t}^{o}\right\|_{2}-w_{2}S (\theta_{t}^{\text{g}},\theta_{t}^{o})-w_{3}\sum_{i=1}^{2}S(\theta_{t}^{e_{i }},\theta_{t}^{o}), \tag{1}\]
where \(w_{j}>0\) (\(j\in\{1,2,3\}\)) are reward weights; \(p_{t}^{\text{g}}\) and \(\theta_{t}^{\text{g}}\) are the position and orientation of the current goal respectively; \(p_{t}^{o}\) and \(\theta_{t}^{o}\) are the current position and orientation of the object; \(\theta_{t}^{e_{i}}\) is the current orientation of the TCP \(\mathrm{e_{i}}\) for robot arm \(i\in\{1,2\}\); \(S(\phi,\psi)=1-\cos(\phi-\psi)\) is the cosine distance between the angles \(\phi\) and \(\psi\). All notation is illustrated in Fig. 2a. We interpret the first and second terms of Eq. (1) as encouraging the robot to push the object towards the goal, while the third term is to encourage the robot to maintain the TacTips normal to the contact surface for stable pushing.
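A minimal sketch of this reward in Python (the weight values, function names and array conventions are illustrative assumptions, not the ones used in training) is:

```python
import numpy as np

def cosine_distance(phi, psi):
    """S(phi, psi) = 1 - cos(phi - psi), the angular distance used above."""
    return 1.0 - np.cos(phi - psi)

def bipushing_reward(p_goal, th_goal, p_obj, th_obj, th_tcps, w=(1.0, 1.0, 1.0)):
    """Reward of Eq. (1): track the goal pose while keeping both TacTips
    normal to the contact surface. th_tcps holds the two TCP orientations."""
    w1, w2, w3 = w
    reward = -w1 * np.linalg.norm(np.asarray(p_goal) - np.asarray(p_obj))
    reward -= w2 * cosine_distance(th_goal, th_obj)
    reward -= w3 * sum(cosine_distance(th_e, th_obj) for th_e in th_tcps)
    return reward
```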
#### III-C2 Bi-Reorienting
Reorienting an object with two arms is necessary when the object size exceeds the limit of what can be held by a gripper or a robot hand. The goal of this bi-reorienting task is for two robotic arms to reorient an object located at the workspace centre to a given target angle \(\theta^{\text{g}}\) while keeping the object centre fixed in place. The dual-arm robot should reorient the object with gentle contact while keeping the end-effectors (TacTips) normal to the contact surface. We express the reward \(R_{t}^{\text{BR}}\) at step \(t\) for the bi-reorienting task as
\[R_{t}^{\text{BR}}=-w_{1}\left\|p_{0}^{o}-p_{t}^{o}\right\|_{2}-w_ {2}S(\theta^{\text{g}},\theta_{t}^{o}) \tag{2}\] \[-w_{3}\sum_{i=1}^{2}S(\theta_{t}^{e_{i}},(-1)^{i}(\pi/2+\theta_{t }^{o}))-w_{4}\sum_{i=1}^{2}\left\|p_{\text{ctrl},i}^{o}-p_{t}^{e_{i}}\right\|_{2}\]
where \(w_{j}>0\) (\(j\in\{1,2,3,4\}\)) are reward weights; \(p_{0}^{o}\), \(p_{t}^{o}\) and \(\theta_{t}^{o}\) are the initial/current positions and orientation of the object respectively; \(p_{t}^{e_{i}}\) and \(\theta_{t}^{e_{i}}\) are the current position and orientation of each TCP \(\mathrm{e_{i}}\) (\(i\in\{1,2\}\)) respectively; \(p_{\text{ctrl},i}^{o}\) (\(i\in\{1,2\}\)) are the desired contact points to either side of an object that are used to specify the desired positions of the TCPs when in contact with the object. All notation is illustrated in
Fig. 2: Illustration of the reward functions for all three proposed bimanual tasks: (a) bi-pushing, (b) bi-reorienting, and (c) bi-gathering. The notations in red are the goals (or subgoals) and those in orange are the proprioceptive information that is part of the observation.
Fig. 2b. The second term in Eq. (2) encourages the robot to reorient the object to the target angle \(\theta^{\text{g}}\) while maintaining the object at its original position with the first term. The third term encourages the robot to maintain the TCP normal to the contact surface for stable reorienting. The final term encourages the dual-arm robot to keep both TCPs close to the desired contact points, which helps maintain the contact, especially the contact depth, of the TCP to avoid losing contact with the object.
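For concreteness, a sketch of Eq. (2) as a Python function (again with placeholder weights and data layout, not the trained values) could be:

```python
import numpy as np

def bireorienting_reward(p0_obj, p_obj, th_goal, th_obj,
                         p_tcps, th_tcps, p_contacts,
                         w=(1.0, 1.0, 1.0, 1.0)):
    """Reward of Eq. (2): rotate the object to th_goal while keeping its
    centre at its initial position p0_obj, the TCPs normal to the contact
    faces, and each TCP near its desired contact point."""
    S = lambda a, b: 1.0 - np.cos(a - b)
    w1, w2, w3, w4 = w
    reward = -w1 * np.linalg.norm(np.asarray(p0_obj) - np.asarray(p_obj))
    reward -= w2 * S(th_goal, th_obj)
    for i in (1, 2):
        reward -= w3 * S(th_tcps[i - 1], (-1) ** i * (np.pi / 2 + th_obj))
        reward -= w4 * np.linalg.norm(
            np.asarray(p_contacts[i - 1]) - np.asarray(p_tcps[i - 1]))
    return reward
```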
However, a challenge we met during the sim-to-real application for this task is that a policy trained using Eq. (2) with the original simulated TacTip dynamics (stiffness, damping, etc.) tended to squeeze the object in the real world when the goal angle was large, and this phenomenon worsened as the object length increased (see accompanying video). We attribute this to a sim-to-real gap: to finish the task quickly for maximum reward, the dual-arm robot learned in simulation to rotate the object by squeezing it. However, in the real world the TacTips may break if over-deformed and the objects are not always of high stiffness, so this learned policy is not suitable.
To solve this problem, we made several changes for policy learning in the simulation: 1) we tuned the simulated TacTip skin stiffness and damping coefficients to make it more elastic; 2) the dual-arm robot needed to maintain the object in the same pose for 10 time steps to achieve the goal orientation; 3) we increased the penalty coefficient if the dual-arm robot over-squeezes the object (large contact depth); 4) we trained the policy with a higher probability of larger goal angles. After conducting an ablation study for the above changes, we found that change 1) contributes most to the successful sim-to-real application, as the higher stiffness and lower damping make the TacTip skin more elastic against external contact, which enables the policy to be more sensitive to the contact depth as the tactile images are more responsive to depth in simulation.
#### III-C3 Bi-Gathering
Gathering objects together is a common behaviour in our daily life, from tidying our desks to moving and sorting packages in warehouses. The goal of this bi-gathering task is for the dual-arm robot to gather two objects together by pushing them towards each other on a planar surface. Thus, each end-effector of the dual-arm robot has to push an object towards a dynamically changing goal (as the goal position of each object is the other object's current position), which makes it a harder exploration problem for RL as compared to single-arm pushing [8] or bi-pushing (above) that have static goals. The task is considered completed when the distance \(d\) between the two objects falls below a distance threshold \(\epsilon\) (related to the size of the objects), i.e. \(d<\epsilon\). We express the reward \(R_{t}^{\text{BG}}\) at \(t\) of the bi-gathering task as
\[R_{t}^{\text{BG}}=-w_{1}\left\|p_{t}^{o_{1}}-p_{t}^{o_{2}} \right\|_{2}-w_{2}\sum_{i=1}^{2}S(\theta_{t}^{e_{i}},\theta_{t}^{o_{i}})\] \[-w_{3}\sum_{i=1}^{2}\left\|p_{\text{ctrl}}^{o_{i}}-p_{t}^{e_{i}} \right\|_{2}, \tag{3}\]
where \(w_{j}>0\) (\(j\in\{1,2,3\}\)) are reward weights; \(p_{t}^{o_{i}}\) and \(\theta_{t}^{o_{i}}\) are the current position and orientation of the object \(o_{i}\); \(p_{t}^{e_{i}}\) and \(\theta_{t}^{e_{i}}\) are the current position and orientation of the TCP e\({}_{i}\); \(p_{\text{ctrl}}^{o_{i}}\) specifies the desired contact point position on the object \(o_{i}\) for controlling the contact depth of the TCP e\({}_{i}\) (\(i\in\{1,2\}\)). All notation is illustrated in Fig. 2c. The first term of Eq. (3) pushes the two objects closer while the second term tries to maintain the TCP normal to the contact surface of the object for stable pushing. The third term encourages the robot to keep the TCPs close to the contact points as much as possible to avoid losing contact.
To further explore the limit of the dual-arm tactile robot, we also introduce random perturbations to the objects during the gathering. Specifically, a random force is applied to an object's centre of mass at a random time step when training.
Although the most obvious formulation would be to place the goal of each object at the other object, we found that this did not work well as a moving sparse goal, both with and without perturbations, resulting in suboptimal performance as shown in Fig. 3c (green and purple plots). Thus, we propose a novel goal-update mechanism (GUM) that can generate subgoals to help the robot learn the task with auxiliary reward signals. Specifically, we generate \(n\) static points at equal intervals along a target line as subgoals for the two objects (as shown by the blue line in Fig. 2c). We experimented with \(n\in\{5,10,20\}\) and found \(n=10\) performed slightly better than the others. The target line is updated every \(h\) time steps, and the subgoal location for each robot arm is set to the generated point nearest to the object that is closer to it. We express the reward \(R_{t}^{\text{BG-GUM}}\) with GUM as
\[R_{t}^{\text{BG-GUM}}=R_{t}^{\text{BG}}-w_{4}\sum_{i=1}^{2}\left\|p_{t}^{g_{i}}-p_{t}^{o_{i}}\right\|_{2}\\ -w_{5}\sum_{i=1}^{2}S(\theta_{t}^{o_{i}},(-1)^{i}\theta_{t}^{c}), \tag{4}\]
where \(w_{j}>0\) (\(j\in\{4,5\}\)) are reward weights; \(p_{t}^{g_{i}}\) is the position of the subgoal \(g_{i}\) assigned to object \(o_{i}\) at time step \(t\), and \(\theta_{t}^{c}\) is the orientation of the target line. All notation is illustrated in Fig. 2c. The second term of Eq. (4) guides the objects towards their nearest subgoals on the target line, providing denser auxiliary rewards for \(R_{t}^{\text{BG}}\). The final term encourages the objects to be pushed along the target-line direction. The hyperparameter \(h\) controls the target-line update rate, which should be frequent enough to guide the robot to move the objects in the desired direction, but not so frequent that the robot fails to reach the subgoals within the available time steps. We experimented with \(h\in\{25,50,75,100\}\) and found \(h=75\) performed the best.
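A possible sketch of the subgoal generation and of the auxiliary terms of Eq. (4) is given below; the function names, weights and the target-line angle input are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def make_target_line(p_a, p_b, n=10):
    """n equally spaced subgoal points on the segment joining p_a and p_b
    (object centres in simulation, TCP positions on the real robot);
    the line is refreshed every h time steps."""
    return np.linspace(np.asarray(p_a, float), np.asarray(p_b, float), n)

def nearest_subgoal(line_pts, p_obj):
    """Subgoal for one object: the generated point closest to it."""
    d = np.linalg.norm(line_pts - np.asarray(p_obj, float), axis=1)
    return line_pts[np.argmin(d)]

def gum_auxiliary_reward(line_pts, th_line, p_objs, th_objs, w4=1.0, w5=1.0):
    """Auxiliary terms of Eq. (4): pull each object towards its nearest
    subgoal and align its orientation with the target-line direction."""
    S = lambda a, b: 1.0 - np.cos(a - b)
    reward = 0.0
    for i, (p_o, th_o) in enumerate(zip(p_objs, th_objs), start=1):
        reward -= w4 * np.linalg.norm(nearest_subgoal(line_pts, p_o)
                                      - np.asarray(p_o, float))
        reward -= w5 * S(th_o, (-1) ** i * th_line)
    return reward
```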
For the sim-to-real application in this task, since the object pose is unknown in real-world experiments, we instead select the current tool-centre-points (TCPs) for constructing the target line. However, during training in simulation, we found that training from scratch with a line between TCP positions failed to learn. This is because the initial random policy cannot maintain contact with objects and accordingly, this line cannot provide useful information for learning. To circumvent this, we devised a 2-step curriculum in the simulation where we first use objects' centres to construct the target line to train a policy from scratch, and then switch to TCPs for extra training so that the policy can be attuned to the real-world setting. We
found that this simple curriculum was critical for successful training in simulation and sim-to-real application. During the curriculum, we also increased the probability of perturbation and its magnitude to further improve the policy's robustness when applied in the real world.
## IV Experiments and Results
### _Evaluations on the Bi-Touch in Simulation_
An on-policy model-free deep-RL algorithm called Proximal Policy Optimization (PPO) [24] is used to train policies in simulation for all three bimanual tactile robotic tasks described above. Specifically, we use the Stable-Baselines-3 [25] implementation of PPO for learning the policies.
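To indicate how such training might be launched with Stable-Baselines3, a minimal sketch follows; the environment id and the number of timesteps are hypothetical, as the actual Tactile Gym registration names and hyperparameters may differ.

```python
import gym
from stable_baselines3 import PPO

# "BiPushing-v0" is a placeholder id for one of the simulated bimanual tasks.
env = gym.make("BiPushing-v0")

# Dict observation (tactile images + proprioception) -> MultiInputPolicy.
model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=2_000_000)
model.save("ppo_bipushing")
```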
We obtained successfully trained policies in simulation for all three tasks (training curves in Figs 3a-c respectively). The bi-pushing is the easiest task to learn with a smooth learning curve and convergence at an early time step. The other two tasks have subtleties in learning that we describe below.
#### IV-A1 Bi-Pushing
In each episode, we selected a sequence of goals from a sampled linear or sinusoidal path as a desired object trajectory (parameterised by \(y=kx\) and \(y=a\sin(x/50)\) respectively, where \(k\in[-0.28,0.28]\), \(a\in[-20,20]\) and \(x\in[-280,50]\) mm). The simulated dual-arm tactile robot can learn to push a large object through a trajectory with a training curve shown in Fig. 3a. A successful example is shown in Fig. 5a. The accuracy from 20 simulated tests (10 for each type of trajectory) is 12.3 \(\pm\) 4.8 mm (Table I, top row).
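A small sketch of how such goal trajectories could be sampled (positions only; goal orientations and the subsequent goal spacing are omitted in this illustration) is:

```python
import numpy as np

def sample_goal_trajectory(n_goals=20, rng=None):
    """Sample (x, y) goal positions along either a linear path y = k*x or a
    sinusoidal path y = a*sin(x/50), with the parameter ranges quoted above
    (units: mm)."""
    rng = rng or np.random.default_rng()
    x = np.linspace(-280.0, 50.0, n_goals)
    if rng.random() < 0.5:
        y = rng.uniform(-0.28, 0.28) * x            # linear path
    else:
        y = rng.uniform(-20.0, 20.0) * np.sin(x / 50.0)   # sinusoidal path
    return np.stack([x, y], axis=1)
```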
#### IV-A2 Bi-Reorienting
In each episode, the object length \(l\) is uniformly sampled from \([50,200]\) mm and a goal angle \(\theta^{\text{g}}\) is uniformly sampled from \([30^{\circ},90^{\circ}]\). Note that the goal angle is evenly divided into 10 subgoals as a form of curriculum learning. The simulated dual-arm tactile robot can learn to reorient objects of various lengths to goal angles with the proposed reward design described in Sec. III-C, with a training curve shown in Fig. 3b. A successful example is shown in Fig. 6a. Note that the translation error is calculated by subtracting the final position of the object from the starting position \(p_{t=0}^{\text{o}}\). The average translation and orientation errors from 10 simulated tests are 10.2 \(\pm\) 4.8 mm and \(3.4\pm 1.8^{\circ}\) respectively (Table II).
#### IV-A3 Bi-Gathering
In each episode, the initial positions \(p_{0}^{o_{i}}=(x^{i},y^{i})\) of the two objects \(o_{i}\) (\(i\in\{1,2\}\)) are uniformly sampled from the ranges \(x^{i}\in(-1)^{i}[50,200]\) mm and \(y^{i}\in(-1)^{i}[0,100]\) mm respectively. The other hyperparameters are the termination distance (\(\epsilon=90\) mm; approximate object size), the number of points for subgoals (\(n=10\)) and updating the subgoal target line every \(h=75\) steps. The results (without perturbations) demonstrate that the proposed GUM improves the performance of the simulated dual-arm tactile robot in achieving this task more robustly in shorter episode times (Fig. 3c, blue plot), as compared to the one without this mechanism (green plot). This indicates its ability to learn and adapt more effectively: even though the exact object poses are unknown to the robot, it learns to bring together the object locations with only tactile and proprioceptive feedback.
When perturbations are also considered, these are uniformly sampled from \([1,5]\) N. In that situation, without GUM, the dual-arm tactile robot cannot learn to achieve this task, with longer episode times compared to the one without perturbations (Fig. 3c, green and purple plots respectively). With the use of GUM, the task is learned successfully even in the presence of perturbations, reaching a performance similar to that achieved without perturbations (Fig. 3c, red and blue plots respectively). A successful example with perturbations is shown in Fig. 7a. Upon testing the policy trained under
Figure 4: The objects used in the three proposed dual-arm manipulation tasks for real-world testing: a) a tripod box, a shuttles tube, and a loudspeaker for the bi-pushing task; b) a plastic cube, a cracker box (red), a shoe box (yellow), a cubes box (transparent), a mustard bottle, a goblet, a plastic ball, a soft brain toy, a plastic triangle prism, and a set of bricks for the bi-reorienting task; c) a blue cylinder, a spam can, a ceramic mug, an apple, a foam toy, a triangle prism, and two cubes for the bi-gathering task. These objects are unseen during training and are chosen to vary in size, weight, shape and stiffness.
Figure 5: Object trajectories (in red) for the bi-pushing task with (a) a cuboid (simulation), (b) a tripod box, (c) a shuttles tube and (d) a loudspeaker. The TacTips’ trajectories are indicated with green arrows.
Figure 3: Task completion times during training for learned policies averaged over 10 trials (a) bi-pushing, (b) bi-reorienting, and (c) bi-gathering tasks. Note that less task completion time means better performance. Our proposed goal-update mechanism (GUM) in the bi-gathering task resulted in improved performance: the blue and red plots (with GUM) show reduced numbers of time steps required to achieve the task in one episode, under both perturbed and unperturbed conditions, compared to the green and purple plots (without GUM).
perturbations with the goal-update mechanism, the success rates are 100% with different perturbation times in each of 5 simulated tests (Table III, top row).
### _The Performance of the Bi-Touch in Reality_
#### IV-B1 Bi-Pushing
We set up the real bi-pushing task with the same configuration as the simulated one, testing it with three unseen objects (Fig. 4a) varying in weight and contact shape. The real dual-arm tactile robot can push a large object through a desired trajectory (examples of sinusoidal trajectories in Fig. 5b-d). The accuracy from 20 real-world tests (10 for each type of trajectory) for the tripod box, the shuttle tube, and the loudspeaker are 14.2 \(\pm\) 6.4 mm, 16.6 \(\pm\) 7.7 mm, and 17.4 \(\pm\) 8.1 mm respectively (Table I), compared to an overall distance travelled of 300-420 mm. The performance on all objects is similar despite notable differences in their contact shapes (e.g. flat, curved, and sloping surfaces), showing the generalization ability of the learned policy. Videos for these trajectories are provided in the supplementary materials.
Comparing the simulated and real trajectories of the end effectors (TacTips), while the box and tube behaved similarly (Fig. 5, three left plots), for the loudspeaker the end-effectors travelled a greater distance (right-most plot). We attribute this behaviour to the larger frictional force due to the loudspeaker's heavier weight (879 g) compared with the box (244 g) and tube (127 g). The dual-arm robot needs to move its end-effector closer to the end of the loudspeaker to better counteract the larger frictional force and maintain the object's centre on the goal path. These behaviours show that the dual-arm robot learns to interact with objects of various weights using its tactile feedback, demonstrating the generalization ability and robustness of the learned policy.
#### IV-B2 Bi-Reorienting
The real bi-reorienting task is considered with the same configuration as the simulated task, testing with a set of new objects that vary in size, shape, weight, and stiffness (Fig. 4b). We trained different policies for different rotation directions. Note that the object position is defined as the top centre position where the ArUco marker is attached. The robot is considered to achieve the task when the object angle does not change more than \(1^{\circ}\) in 10 time steps for the final subgoal. The robot achieved this task with most of the selected objects, with translation error from 12.5\(\pm\)5.3 mm to 19.5\(\pm\)7.0 mm and orientation error from 7.5\(\pm\)3.9\({}^{\circ}\) to 13.4\(\pm\)6.5\({}^{\circ}\) (Table II), except the triangular prism where there was a problem with the sharp edge. Trajectories for a set of real-world objects are shown in Fig. 6 and videos are provided in the supplementary material.
Specifically, the real dual-arm robot can achieve higher accuracy with cube-shaped objects (plastic cube, shoe box, cracker box, mustard bottle, cubes box) compared to round-shaped objects (goblet, plastic ball, soft brain toy). This is because the policy is only trained with cube-shaped objects in simulation, so the policy does not have knowledge of the object geometry. We also noticed that the larger the manipulated object, the larger the error. We attribute this to the increased difficulty of coordinating both arms when manipulating larger objects, since the end-effectors must travel farther during the task. Even so, the robot was able to successfully manipulate various unseen round objects of various stiffness, which demonstrates the learned policy's generalization ability to rounder objects. On the other hand, we attribute the difficulties with the prism to slippage caused by unstable contact on its sharp edge, leading to an unrecoverable situation that was never experienced in training.
To further examine the generalization capability, we tried an additional experiment on a set of bricks. Specifically, the
| **Object** | **Translation Err.** | **Orientation Err.** | **Size** |
| --- | --- | --- | --- |
| Cube (Sim.) | 10.2 \(\pm\) 4.8 mm | 3.4 \(\pm\) 1.8\({}^{\circ}\) | 100 mm |
| Plastic Cube | 12.5 \(\pm\) 5.3 mm | 7.5 \(\pm\) 3.9\({}^{\circ}\) | 100 mm |
| Shoe Box | 16.2 \(\pm\) 6.8 mm | 11.5 \(\pm\) 5.2\({}^{\circ}\) | 193 mm |
| Cracker Box | 13.9 \(\pm\) 5.6 mm | 8.4 \(\pm\) 4.7\({}^{\circ}\) | 172 mm |
| Cubes Box | 15.5 \(\pm\) 5.5 mm | 9.2 \(\pm\) 5.6\({}^{\circ}\) | 186 mm |
| Mustard Bottle | 13.6 \(\pm\) 5.4 mm | 7.6 \(\pm\) 4.1\({}^{\circ}\) | 122 mm |
| Goblet | 18.1 \(\pm\) 6.1 mm | 12.2 \(\pm\) 5.8\({}^{\circ}\) | 113 mm |
| Soft Brain Toy | 17.2 \(\pm\) 6.7 mm | 9.7 \(\pm\) 5.3\({}^{\circ}\) | 85 mm |
| Red Ball | 19.5 \(\pm\) 7.0 mm | 13.4 \(\pm\) 6.5\({}^{\circ}\) | 70 mm |
| Triangular Prism | - | - | 57 mm |
Table II: Mean errors and standard deviations of the reorientation angles and the positions of the objects from the ground truth for the bi-reorienting task.
Figure 6: Object trajectories for the bi-reorienting task for the (a) cuboid (simulation), (b) cubes box, (c) plastic ball, (d) mustard bottle, (e) soft brain toy and (f) triangular prism. End-effector positions during the trajectories (green arrows) are shown, and the start, goal and final object orientations (lines).
bricks are loosely placed together initially, then the robot needs to reorient and gather them together by pushing and rotating the bricks it is contacting. The robot is able to successfully complete this task and, moreover, during the reorientation we can remove the middle brick to demonstrate the robustness to further perturbation. The robot can achieve this (see the video in supplementary material) with a 95% success rate in 20 tests. Overall, this demonstrates that the policy can lead to a new emergent behaviour not experienced during training.
#### IV-B3 Bi-Gathering
The real bi-gathering task is considered with the same configuration as in the simulation, with testing on a set of new objects (Fig. 4c) varying in shape, weight and stiffness. Here we applied the policy trained under perturbations in simulation with the target-line goal-update mechanism. We ran 10 tests for each object pair with random force perturbations applied at random times. The robot is considered to achieve the task when the centres of the objects come closer than 7 cm (detected using ArUco markers) within 300 time steps. Note that the object lengths are about 6 cm.
The tactile robot successfully completed the bi-gathering task without perturbation in all sets of 10 trials for each object. Regarding the effect of perturbations, the success rates of achieving the task are summarized in Table III. The robot completes the task at 100% success rate when the perturbation is applied twice or fewer times. The success rate decreases when the number of applied perturbations is increased for all pairs of objects, decreasing most for the irregular items (mug, triangular prism, foam toy and spam can). We show example trajectories of successful trials in Fig. 7.
Examining the results more closely, the dual-arm tactile robot can complete the task with objects of different weights and shapes to those seen during training. Comparing the foam toy to the cube in simulation (Fig. 7e, a), the TacTip deforms far less in reality than in simulation, due to the foam toy's light weight and soft material. Even so, the robot can achieve the task with generated tactile images that are notably different from those in training, both in terms of depression and in the contact shapes (e.g. the mug handle in Fig. 7d). These results demonstrate the strong generalization ability of the policy trained with our goal-update mechanism across different stiffnesses, shapes and object weights.
We observed that the failure cases are caused by the robot running out of workspace as it tried to push a perturbed object back together through a large turnaround. This happened mostly with the spam can and mug since they present additional challenges for pushing due to their larger weights. Even so, the dual-arm tactile robot still demonstrated a capability to push perturbed objects together (Fig. 7, third row).
## V Discussion and Future Work
In this paper, we developed a low-cost dual-arm tactile robot system called Bi-Touch for sim-to-real deep reinforcement learning based on Tactile Gym 2.0 [8]. The hardware includes two industry-capable desktop robot arms (Dobot MG400), each equipped with a low-cost, high-resolution optical tactile sensor (TacTip) as its end-effector. We also designed a workspace configuration suited to the three proposed bimanual tasks tailored towards tactile feedback, and integrated it into the Tactile Gym simulation as new methods and environments.
The performance of our low-cost sim-to-real deep RL dual-arm tactile robot system was evaluated in these three bimanual tasks in the real world. We introduced appropriate reward functions for these tasks in simulation, then investigated how
| **Object Pairs / No. Perturb.** | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Cube & Cube (Simulation) | 100% | 100% | 100% | 100% | 100% | 100% |
| Cube & Cube | 100% | 100% | 100% | 100% | 90% | 90% |
| Apple & Can | 100% | 100% | 100% | 100% | 90% | 70% |
| Mug & Triangular Prism | 100% | 100% | 80% | 70% | 40% | 10% |
| Foam Toy & Spam Can | 100% | 100% | 70% | 50% | 20% | 10% |
Table III: Success rates for bi-gathering under different numbers of perturbations.
Figure 7: Bi-gathering on one simulated pair (a) and four real pairs of objects, corresponding to the (b) cubes, (c) plastic apple and can, (d) ceramic mug and triangular prism and (e) soft foam toy and spam can. The first row shows the initial configurations of the objects and the end effectors (TacTips), along with examples of real and simulated tactile images. The second and third rows show the end-effector (green arrows) and object trajectories. The red arrows in the third row show applied force perturbations. Note that ArUco markers are just for quantitative evaluation, as our policy does not use visual feedback.
these policies apply to the real world. The experimental results show that the developed dual-arm tactile system is effective for all tasks on real objects unseen in the simulation learning.
For bi-gathering, we proposed a goal-update mechanism using a linear set of subgoals. These subgoals provide dense auxiliary rewards, enabling the dual-arm robot to learn from a high-dimensional state-action space. While this idea shares similarities with Hierarchical Visual Foresight (HVF) [26], we faced challenges in directly applying it to our tactile manipulation tasks due to the limited goal state information provided by tactile images. Instead, we simplified the problem by generating subgoals based on the current object positions in simulation or TCP positions in the real world. This mechanism, which preserves the distribution of initial environmental states, can be seen as a form of implicit curriculum learning [27]. It periodically generates goals that are easier to achieve compared to sparse goal settings where only object positions serve as goals. Unlike Hindsight Experience Replay [27], which is applicable only to off-policy RL methods, our mechanism also applies to on-policy algorithms such as the PPO used in this study.
While here we demonstrate the feasibility of our work with 4-DoFs desktop robot arms and TacTips [21], the proposed framework should also work with different types of optical tactile sensors, e.g. the GelSight [28] and robots with more DoFs, as the Tactile Gym has demonstrated its scalability in both situations [7, 8]. A limitation of the present work is that shear deformation of the tactile sensor has not yet been considered in the simulation and may be needed for more complex tasks relying on the control of frictional forces. A future direction is to approximate the shear effect in simulation with reliable methods to further close the sim-to-real gap.
A future direction of this work is to apply our Bi-Touch framework and tactile robot platform to fine manipulation of held objects without a supporting table. To show the promise of this approach, we additionally developed a bimanual lifting (bi-lifting) task along with a reward function based on principles similar to the other three tasks: a) manipulating the object to achieve a goal state with tactile feedback, b) while maintaining stable contact with the object. Preliminary experiments for the bi-lifting task, both in simulation and the real world, demonstrate that this approach to dexterous manipulation can be effective, which we have included in a supplementary video that accompanies this paper.
|
2306.09439 | Minimal invariant subspaces for an affine composition operator | The composition operator $C_{\phi_a}f=f\circ\phi_a$ on the Hardy-Hilbert
space $H^2(\mathbb{D})$ with affine symbol $\phi_a(z)=az+1-a$ and $0<a<1$ has
the property that the Invariant Subspace Problem for complex separable Hilbert
spaces holds if and only if every minimal invariant subspace for $C_{\phi_a}$
is one-dimensional. These minimal invariant subspaces are always
singly-generated $ K_f := \overline{\mathrm{span} \{f, C_{\phi_a}f,
C^2_{\phi_a}f, \ldots \}}$ for some $f\in H^2(\mathbb{D})$. In this article we
characterize the minimal $K_f$ when $f$ has a nonzero limit at the point $1$ or
if its derivative $f'$ is bounded near $1$. We also consider the role of the
zero set of $f$ in determining $K_f$. Finally we prove a result linking
universality in the sense of Rota with cyclicity. | João R. Carmo, Ben Hur Eidt, S. Waleed Noor | 2023-06-15T18:42:49Z | http://arxiv.org/abs/2306.09439v2 | # Minimal invariant subspaces for an affine composition operator
###### Abstract
The composition operator \(C_{\phi_{a}}f=f\circ\phi_{a}\) on the Hardy-Hilbert space \(H^{2}(\mathbb{D})\) with affine symbol \(\phi_{a}(z)=az+1-a\) and \(0<a<1\) has the property that every bounded linear operator on a separable complex Hilbert space has a proper closed invariant subspace if and only if every minimal invariant subspace for \(C_{\phi_{a}}\) is one-dimensional. These minimal invariant subspaces are necessarily singly-generated \(K_{f}:=\overline{\text{span}\{f,C_{\phi_{a}}f,C_{\phi_{a}}^{2}f,\ldots\}}\) for some \(f\in H^{2}(\mathbb{D})\). In this article we characterize the minimal \(K_{f}\) when \(f\) has a nonzero limit at the point \(1\) or if its derivative \(f^{\prime}\) is bounded near \(1\). We also consider the role of the zero set of \(f\) in determining \(K_{f}\). Finally we prove a result linking universality in the sense of Rota with cyclicity.
## 1 Introduction
The _Invariant Subspace Problem_ (ISP) is one of the major open problems in operator theory. It belongs to a class of problems that can be stated in relatively simple terms but the complete solution remains a mystery. The current version of the ISP is the following: let \(H\) be a complex, separable and infinite dimensional Hilbert space. If \(T\in B(H)\) (a bounded linear operator on \(H\)), is it true that \(T\) has a non-trivial invariant subspace?
By a non-trivial invariant subspace we mean a closed subspace \(M\subseteq H\) such that \(M\neq\{0\}\), \(M\neq H\) and \(T(M)\subseteq M\). We note that the only open case is when \(H\) is separable. There are a lot of possible approaches to this problem; we refer to the detailed monograph of Partington and Chalendar [5] for some of the modern approaches. One of these methods is based on the concept of a _universal operator_ introduced by Rota [16].
**Definition 1.1**.: _Let \(H\) be a complex, separable and infinite dimensional Hilbert space. An operator \(U\in B(H)\) is said to be **universal** for \(H\) if for any operator \(T\in B(H)\) there exist an invariant subspace \(M\) of \(U\), a nonzero complex scalar \(\alpha\) and an isomorphism \(S:M\to H\) such that \(\alpha T=SU_{|M}S^{-1}\)._
There is a simple connection between universal operators and the ISP: the ISP is true for \(H\), i.e, every operator \(T\in B(H)\) has a non-trivial invariant subspace if and only if each minimal invariant subspace \(M\) of \(U\) is one-dimensional. Here, the minimality of \(M\) implies that it contains no proper \(U\)-invariant subspace. Until recently the main method for identifying a universal operator has been the Caradus criterion (see [3]). More recently Pozzi [15] generalized this classical result and obtained the following theorem that we will call the _Caradus-Pozzi criterion_.
**Theorem 1.2**.: _Suppose that \(U\in B(H)\) satisfies:_
1. \(Ker\) \(U\) _is infinite-dimensional._
2. \(U(H)\) _has finite codimension (and is hence closed)._
_Then \(U\) is universal._
In this paper, we will deal with the space \(H^{2}:=H^{2}(\mathbb{D})\) called the Hardy-Hilbert space of the unit disk \(\mathbb{D}\). This is the Hilbert space of holomorphic functions \(f:\mathbb{D}\to\mathbb{C}\) such that
\[\|f\|^{2}=\sup_{0<r<1}\frac{1}{2\pi}\int\limits_{0}^{2\pi}|f(re^{i\theta})|^{2} d\theta<\infty.\]
Let \(H^{\infty}\) denote the space bounded holomorphic functions on \(\mathbb{D}\). If \(\phi:\mathbb{D}\to\mathbb{D}\) is a holomorphic self-map of \(\mathbb{D}\), then the _composition operator_ with _symbol_\(\phi\) is defined by \(C_{\phi}(f)=f\circ\phi\). By Littlewood's Subordination Theorem \(C_{\phi}\) is always bounded on \(H^{2}\). The study of composition operators centers around the interaction between the function-theoretic properties of the symbol \(\phi\) and the operator-theoretic properties of \(C_{\phi}\).
Let \(LFT(\mathbb{D})\) denote the set of all linear fractional self-maps of \(\mathbb{D}\). If \(\phi\in LFT(\mathbb{D})\), we say that \(\phi\) is _hyperbolic_ if it has two distinct fixed points outside \(\mathbb{D}\). Due to the remarkable work of Nordgren, Rosenthal and Wintrobe (see [14]) we know that if \(\phi\in LFT(\mathbb{D})\) is a _hyperbolic automorphism_ (i.e., the two fixed points belong to the unit circle \(S^{1}\)) then \(C_{\phi}-\lambda\) is universal for every \(\lambda\) in the interior of the spectrum of \(C_{\phi}\). Recently, this result was extended by Carmo and Noor [4, Theorem 3.1] to non-automorphic hyperbolic self-maps.
**Theorem 1.3**.: _Let \(\phi\in LFT(\mathbb{D})\). Then \(C_{\phi}-\lambda\) is universal on \(H^{2}(\mathbb{D})\) for some \(\lambda\in\mathbb{C}\) if and only if \(\phi\) is hyperbolic._
The composition operator \(C_{\Phi}\) induced by the hyperbolic automorphism has been studied extensively during the last thirty years, where
\[\Phi(z)=\frac{z+b}{bz+1}\]
with \(0<b<1\) and having fixed points at \(1\) and \(-1\) (see for instance [6],[9],[11],[12],[13]). Our focus here is on the non-automorphic case where \(\phi\) has a fixed point in \(S^{1}\) and another one outside \(\overline{\mathbb{D}}\) (possibly at \(\infty\)). In that case, Hurst [10, Theorem 8] proved that \(C_{\phi}\) is similar to \(C_{\phi_{a}}\) where
\[\phi_{a}(z)=az+1-a,\quad a\in(0,1)\]
and \(1,\infty\) are the fixed point of \(\phi_{a}\). By Theorem 1.3 one can approach the ISP using the symbol \(\phi_{a}\). Noting that \(\operatorname{Lat}(C_{\phi_{a}})=\operatorname{Lat}(C_{\phi_{a}}-\lambda)\) for any \(\lambda\in\mathbb{C}\), where \(\operatorname{Lat}(T)\) is the collection of all invariant subspaces of an operator \(T\), the following was shown in [4].
**Theorem 1.4**.: _For any \(a\in(0,1)\), the \(ISP\) has a positive solution if and only if every minimal invariant subspace of \(C_{\phi_{a}}\) has dimension 1._
If \(M\) is a minimal invariant subspace of \(C_{\phi_{a}}\), then we necessarily have
\[M=K_{f}:=\overline{span\{f,C_{\phi_{a}}f,C_{\phi_{a}}^{2}f,\ldots\}}\]
for every nonzero \(f\in M\). So to study the minimal invariant subspaces we need to understand the so-called _cyclic_ subspaces \(K_{f}\) for \(f\in H^{2}\). In fact, we need only consider \(f\) that are analytic across each point of the unit circle \(S^{1}\) except 1 (see [4, Prop. 5.1]).
The plan of the paper is the following. In Section 2 we will introduce some preliminary results and definitions. In particular the Nevanlinna counting function and the notion of an _eventually bounded function_ (simply **EB**) are introduced. When \(f\in H^{2}\) is **EB**, then \(K_{f}\) always contains an \(H^{\infty}\) function (see Proposition 2.3). Also functions with **EB** derivatives can never be hypercyclic vectors for \(C_{\phi_{a}}\) (see Proposition 2.4). In Section 3, we prove the main results regarding the minimal invariant subspaces of \(C_{\phi_{a}}\). In particular, we prove the veracity of the equivalence
\[K_{f}\text{ is minimal }\Longleftrightarrow\text{dim }K_{f}=1.\]
for a variety of classes of \(f\in H^{2}\) including those with non-zero boundary limits at 1 (Theorem 3.1), when \(f^{\prime}\) is **EB** and \((f(1-a^{n}))_{n\in\mathbb{N}}\) is bounded away from 0 (Corollary 3.5), and when \(f\) is analytic at 1 (Theorem 3.8). In Section 4 we prove results that connect the zero set of \(f\) with properties of \(K_{f}\). Finally in Section 5 we provide a sufficient condition for a cyclic operator on a Hilbert space to be universal (see Theorem 5.1). This is relevant since the best known examples of universal operators are similar to coanalytic Toeplitz operators, and these are always cyclic (see [19]).
## 2 Preliminaries
### The Nevanlinna counting function
In the Shapiro's seminal work [17] the essential norm of a composition operator was determined using the Nevanlinna counting function. It will play an important role here also. For a holomorphic map \(\phi:\mathbb{D}\rightarrow\mathbb{D}\) and each \(w\in\mathbb{D}\setminus\{\phi(0)\}\), define
\[N_{\phi}(w)=\begin{cases}\sum\limits_{z\in\phi^{-1}\{w\}}\log\frac{1}{|z|}& \text{if }w\in\phi(\mathbb{D})\setminus\{\phi(0)\}.\\ 0&\text{if }w\notin\phi(\mathbb{D}).\end{cases}\]
We shall need the following two results.
**Theorem 2.1** ([18], section 10.1 ).: _If \(\phi:\mathbb{D}\rightarrow\mathbb{D}\) is analytic then \(\forall f\in H^{2}(\mathbb{D})\) we have_
\[\|C_{\phi}f\|_{H^{2}}^{2}=2\int\limits_{\mathbb{D}}|f^{\prime}(w)|^{2}N_{\phi }(w)dA(w)+|f(\phi(0))|^{2}.\]
**Theorem 2.2** ([18], section 10.3).: _If \(f\) is a non-negative measurable function in \(\mathbb{D}\) and \(\phi\) is a holomorphic self-map of \(\mathbb{D}\) then_
\[\int\limits_{\mathbb{D}}f(w)N_{\phi}(w)dA(w)=\int\limits_{\mathbb{D}}f(\phi(z) )|\phi^{\prime}(z)|^{2}\log\frac{1}{|z|}dA(z)\]
We will specialize these formulas for our symbol \(\phi_{a}\) in the following sections.
### Eventually bounded functions
For each \(a\in(0,1)\) and \(n\in\mathbb{N}\), note that \(\underbrace{\phi_{a}\circ\ldots\circ\phi_{a}}_{n\text{ times}}=\phi_{a^{n}}\) and hence
\[C_{\phi_{a^{n}}}=C_{\phi_{a}}^{n}.\]
For each \(n\in\mathbb{N}\), define the disk \(D_{n}:=\phi_{a^{n}}(\mathbb{D})=a^{n}\mathbb{D}+1-a^{n}\) with center \(1-a^{n}\) and radius \(a^{n}\) in \(\mathbb{C}\). We call a holomorphic function \(g\) defined on \(\mathbb{D}\)_eventually bounded at \(1\)_ (or simply **EB**) if \(g\) is bounded on \(D_{n}\) for some \(n\in\mathbb{N}\) (and consequently bounded on \(D_{m}\) for each \(m\geq n\)). Note that every \(H^{\infty}\) function is trivially **EB**. In order to employ Theorems 2.1 and 2.2, we will need \(f^{\prime}\) to be **EB**. This in fact implies that \(f\) is also **EB** since if \(f^{\prime}\) is bounded on some \(D_{n}\), then
\[|f(z)|\leq|z-z_{0}|\sup_{w\in D_{n}}|f^{\prime}(w)|+|f(z_{0})|\]
for any \(z\in D_{n}\) and some fixed \(z_{0}\in D_{n}\).
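These identities are elementary but easy to sanity-check numerically; the following short script (purely illustrative, not part of the argument, for one arbitrary choice of \(a\) and \(n\)) verifies that iterating \(\phi_{a}\) \(n\) times agrees with \(\phi_{a^{n}}\) and that the image of \(\mathbb{D}\) lies in the disk of centre \(1-a^{n}\) and radius \(a^{n}\):

```python
import numpy as np

a, n = 0.6, 5
phi = lambda z, c: c * z + 1 - c          # phi_c(z) = cz + 1 - c

rng = np.random.default_rng(0)
z = rng.uniform(-1, 1, 400) + 1j * rng.uniform(-1, 1, 400)
z = z[np.abs(z) < 1]                      # random points of the unit disk

w = z.copy()
for _ in range(n):                        # n-fold iterate of phi_a ...
    w = phi(w, a)
print(np.allclose(w, phi(z, a**n)))       # ... equals phi_{a^n}: True

# D_n = phi_{a^n}(D) is contained in the disk of centre 1 - a^n, radius a^n
print(np.all(np.abs(phi(z, a**n) - (1 - a**n)) < a**n))
```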
The disks \(D_{n}=\phi_{a^{n}}(\mathbb{D})\) shrinking to the point \(1\).
Note that if \(f\) is **EB** then \(C_{\phi_{a}n}f\in H^{\infty}\) for some \(n\in\mathbb{N}\). Hence we immediately get the following result about minimal invariant subspaces of \(C_{\phi_{a}}\).
**Proposition 2.3**.: _Let \(M\in\operatorname{Lat}(C_{\phi_{a}})\). Then \(M\) contains an_ **EB** _function if and only if it contains an \(H^{\infty}\) function. Moreover if \(M\) is minimal then \(M=K_{f}\) for some \(f\in H^{\infty}\)._
It is known that the operator \(C_{\phi_{a}}\) is a hypercyclic operator, i.e., there exists \(f\in H^{2}\) such that the orbit \((C^{n}_{\phi_{a}}f)_{n\geq 0}\) is dense in \(H^{2}\). Such \(f\) are called hypercyclic vectors for \(C_{\phi_{a}}\) and they form a dense subset of \(H^{2}\) (see [1] or [2]). The next result shows that hypercyclicity of \(f\) and the eventual boundedness of its derivatives are incompatible.
**Proposition 2.4**.: _If any derivative of \(f\) is_ **EB**_, then \(f\) is not a hypercyclic vector._
Proof.: Let \(f^{(n)}\) be **EB** for some \(n\geq 1\) and consider \(e_{n}(z)=z^{n}\). Suppose on the contrary that \(f\) is a hypercyclic vector. Then there exists a subsequence \((C^{k_{l}}_{\phi_{a}}f)_{l\in\mathbb{N}}\) such that \(C^{k_{l}}_{\phi_{a}}f\to e_{n}\) as \(l\to\infty\). Note that for every \(g\in H^{2}\) we have \(\langle g,e_{n}\rangle=\frac{g^{(n)}(0)}{n!}\). So
\[\langle C^{k_{l}}_{\phi_{a}}f,e_{n}\rangle=\frac{\left(C^{k_{l}}_{\phi_{a}}f \right)^{(n)}(0)}{n!}=\frac{(a^{k_{l}})^{n}f^{(n)}\circ\phi_{a^{k_{l}}}(0)}{n! }\xrightarrow{l\to\infty}0\]
because \(f^{(n)}\) is **EB** and \((a^{k_{l}})^{n}=(a^{n})^{k_{l}}\to 0\) as \(l\to\infty\). On the other hand
\[\langle C^{k_{l}}_{\phi_{a}}f,e_{n}\rangle\to\langle e_{n},e_{n}\rangle=\|e_ {n}\|^{2}.\]
Thus \(e_{n}=0\) which is absurd. This contradiction establishes the result.
However, there exist cyclic vectors for \(C_{\phi_{a}}\) (whose orbits are complete in \(H^{2}\)) with derivatives all \(\mathbf{EB}\). Recall that for each \(\alpha\in\mathbb{D}\), the reproducing kernel at \(\alpha\) is defined by \(\kappa_{\alpha}(z)=\frac{1}{1-\overline{\alpha}z}\) and each \(\kappa_{\alpha}^{(n)}\) is \(\mathbf{EB}\) for \(n\in\mathbb{N}\) since \(\kappa_{\alpha}\) is analytic across \(S^{1}\).
**Proposition 2.5**.: _Let \(\kappa_{\alpha}\in H^{2}\) be a reproducing kernel. Then \(\kappa_{\alpha}\) is a cyclic vector for \(C_{\phi_{a}}\) if and only if \(\alpha\neq 0\)._
Proof.: If \(\kappa_{\alpha}\) is cyclic then \(\alpha\neq 0\), otherwise \(\kappa_{\alpha}=\kappa_{0}=1\) and \(K_{\kappa_{\alpha}}\) is the space of constants functions. For the other direction, let \(\kappa_{\alpha}\) with \(\alpha\neq 0\). Note that \(1-\overline{\alpha}+\overline{\alpha}a\neq 0\), otherwise \(1=\overline{\alpha}(1-a)\) and this is not possible because \(\overline{\alpha},(1-a)\in\mathbb{D}\). Thus
\[\kappa_{\alpha}\circ\phi_{a}(z)=\frac{1}{1-\overline{\alpha}(az+1-a)}=\frac{1 }{1-\overline{\alpha}+\overline{\alpha}a-\overline{\alpha}az}=\frac{1}{1- \overline{\alpha}+\overline{\alpha}a}\left(\frac{1}{1-\frac{\overline{\alpha} az}{1-\overline{\alpha}+\overline{\alpha}a}}\right).\]
We observe that a function of the form \(h(z)=\frac{1}{1-\overline{y}z}\) where \(y\in\mathbb{C}\) belongs to \(H^{2}\) if, and only if, \(y\in\mathbb{D}\); so \(\frac{\overline{\alpha}a}{1-\overline{\alpha}+\overline{\alpha}a}\in\mathbb{D}\). Consequently, for every \(a\in(0,1)\) we have
\[\kappa_{\alpha}\circ\phi_{a}=\frac{1}{1-\overline{\alpha}+\overline{\alpha}a }\kappa_{\frac{\alpha a}{1-\alpha+\alpha a}}.\]
Now let \(f\in H^{2}\) such that \(\langle f,\kappa_{\alpha}\circ\phi_{a^{n}}\rangle=0.\) Then \(f(\frac{\alpha a^{n}}{1-\alpha+\alpha a^{n}})=0\) for every \(n\in\mathbb{N}\). As the sequence \(\{\frac{\alpha a^{n}}{1-\alpha+\alpha a^{n}}\}_{n\in\mathbb{N}}\) is a sequence of distinct points (because \(\alpha\neq 0\)), \(\frac{\alpha a^{n}}{1-\alpha+\alpha a^{n}}\to 0\) and \(f\) is analytic at \(0\) we conclude that \(f=0\). Thus \(K_{\kappa_{\alpha}}=H^{2}\).
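The kernel identity used in this proof can be verified numerically; the following snippet (purely a sanity check, with arbitrary choices of \(a\) and \(\alpha\)) confirms \(\kappa_{\alpha}\circ\phi_{a}=\frac{1}{1-\overline{\alpha}+\overline{\alpha}a}\,\kappa_{\alpha a/(1-\alpha+\alpha a)}\) at a sample point of the disk:

```python
import numpy as np

a, alpha = 0.7, 0.3 + 0.2j
kappa = lambda z, al: 1.0 / (1.0 - np.conj(al) * z)   # reproducing kernel
phi = lambda z: a * z + 1 - a

z = 0.4 - 0.1j                                        # a point of the unit disk
c = 1.0 / (1.0 - np.conj(alpha) + np.conj(alpha) * a)
beta = alpha * a / (1.0 - alpha + alpha * a)

print(np.isclose(kappa(phi(z), alpha), c * kappa(z, beta)))   # True
```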
## 3 Minimal invariant subspaces of \(C_{\phi_{a}}\)
From now on, consider \(a\in(0,1)\) fixed. Recall that the ISP has a positive solution if and only if every minimal invariant subspace of the operator \(C_{\phi_{a}}\) is one-dimensional. Also every minimal invariant subspace is a cyclic susbspace of the form \(K_{f}\) where \(f\in H^{2}\) and
\[K_{f}:=\overline{\operatorname{span}\{f,C_{\phi_{a}}f,C_{\phi_{a}}^{2}f,\ldots \}}.\]
Indeed, if \(M\) is a minimal invariant subspace of \(C_{\phi_{a}}\) then \(M=K_{f}\) for every \(f\neq 0\) such that \(f\in M\). In this section we will present a class of functions \(f\) for which we know that the equivalence
\[K_{f}\text{ is minimal}\Longleftrightarrow\dim\,K_{f}=1\]
is true. Our first main result is the following.
**Theorem 3.1**.: _If \(g\in H^{2}(\mathbb{D})\) with \(\lim\limits_{z\to 1}g(z)=L\neq 0\) (\(z\to 1\) within \(\mathbb{D}\)), then \(K_{g}\) contains the constants. In particular if \(K_{g}\) is minimal, then \(g\equiv L\) and \(\dim\,K_{g}=1\)._
Proof.: There exists a \(\delta>0\) such that if \(z\in\mathbb{D}\) and \(|z-1|<\delta\) then \(|g(z)-L|<1\). In particular, for every \(z\in B(1,\delta)\cap\mathbb{D}\) we obtain
\[|g(z)|\leq|g(z)-L|+|L|<1+|L|=:K\]
As \(\phi_{a^{n}}(\mathbb{D})=a^{n}\mathbb{D}+1-a^{n}\) are circles with center \(1-a^{n}\) (converging to \(1\)) and radius \(a^{n}\) (converging to \(0\)) there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) we have \(\phi_{a^{n}}(\mathbb{D})\subseteq B(1,\delta)\cap\mathbb{D}\). So, given \(re^{i\theta}\in\mathbb{D}\) we conclude that \(a^{n}re^{i\theta}+1-a^{n}\in B(1,\delta)\cap\mathbb{D}\) and so
\[|g\circ\phi_{a^{n}}(re^{i\theta})|^{2}=|g(a^{n}re^{i\theta}+1-a^{n})|^{2}\leq K ^{2}\quad\forall n\geq n_{0}.\]
Consequently we get
\[\|C_{\phi_{a^{n}}}g\|^{2}=\|g\circ\phi_{a^{n}}\|^{2}=\sup_{0<r<1}\frac{1}{2\pi} \int\limits_{0}^{2\pi}|g\circ\phi_{a^{n}}(re^{i\theta})|^{2}d\theta\leq K^{2} \quad\forall n\geq n_{0}.\]
This shows that the sequence \(\left(C_{\phi_{a^{n}}}g\right)_{n\in\mathbb{N}}\) is bounded. Moreover, for each \(z\in\mathbb{D}\) we have \(a^{n}z+1-a^{n}\to 1\) as \(n\to\infty\) and then
\[g\circ\phi_{a^{n}}(z)=g(a^{n}z+1-a^{n})\to L.\]
So \(\left(C_{\phi_{a^{n}}}g\right)_{n\in\mathbb{N}}\) converges pointwise to \(L\). By ([7], Corollary 1.3) \(\left(C_{\phi_{a^{n}}}g\right)_{n\in\mathbb{N}}\) converges weakly to the constant function \(L\). So \(L\) belongs to the weak closure of the convex set \(\operatorname{span}_{n\geq 0}(C_{\phi_{a}}^{n}g)\) which is equal to the norm closure by Mazur's Theorem. So \(L\in K_{g}\) and hence \(K_{g}=K_{L}\) since \(L\neq 0\). \(\blacksquare\)
Every function \(g\in A(\mathbb{D})\) (where \(A(\mathbb{D})\) denotes the disk algebra) such that \(g(1)\neq 0\) satisfies the above hypothesis. So for such functions \(K_{g}\) is minimal if and only if \(g\) is a constant. Notice that the theorem above does not cover polynomials \(g\) that vanish at \(1\). However the polynomial case in general is solved easily by the following remark.
**Remark 3.2**.: _For every polynomial \(p\) and \(a\in(0,1)\), the function \(p\circ\phi_{a}\) is again a polynomial with degree equal to that of \(p\). In particular, \(K_{p}\subseteq P(n)\) where \(P(n)\) denotes the space of polynomials of degree at most \(n=\deg(p)\). This implies that \(dim\ K_{p}<\infty\). Thus \(K_{p}\) is minimal precisely when \(\dim\ K_{p}=1\) since it will contain an eigenvector._
Following Theorem 3.1, it is clear that the boundary behaviour of \(g\in H^{2}\) at \(1\) plays a key role in characterizing \(K_{g}\). Hence we propose the following three cases:
* \(g(1-a^{n})\) converges to a number \(L\neq 0\).
* \(g(1-a^{n})\) converges to \(0\).
* \(g(1-a^{n})\) does not converges.
These three cases cover all possibilities. Recall that \(g^{\prime}\) is \(\mathbf{EB}\) if \(g^{\prime}\) is bounded on \(D_{n}\) for some \(n\in\mathbb{N}\). We begin our analysis with a lemma.
**Lemma 3.3**.: _If \(g\in H^{2}\) is such that \(g^{\prime}\) is \(\mathbf{EB}\), then_
\[\int_{\mathbb{D}}\left|g^{\prime}(w)\right|^{2}N_{\phi_{a^{n}}}(w)dA(w)\to 0 \quad\text{as $n\to\infty$}.\]
Proof.: Let \(n_{0}\in\mathbb{N}\) such that \(g^{\prime}\) is bounded in \(D_{n_{0}}\). Applying Theorem 2.2 for \(f=|g^{\prime}|^{2}\) and \(\phi=\phi_{a^{n}}\) we obtain:
\[\int\limits_{\mathbb{D}}|g^{\prime}(w)|^{2}N_{\phi_{a^{n}}}(w)dA(w) =\int\limits_{\mathbb{D}}|g^{\prime}(a^{n}z+1-a^{n})|^{2}|\phi^{ \prime}_{a^{n}}(z)|^{2}\log\frac{1}{|z|}dA(z)\] \[\leq\int\limits_{\mathbb{D}}Ca^{2n}\log\frac{1}{|z|}dA(z)=a^{2n}K \quad\text{ ($n\geq n_{0}$)}\]
where \(C\) is a constant such that \(|g^{\prime}(a^{n}z+1-a^{n})|^{2}\leq C\) for all \(n\geq n_{0}\) and \(K\) is the constant given by \(C\) times the integral of \(\log\frac{1}{|z|}\) over \(\mathbb{D}\). So, if \(n\to\infty\) then \(a^{2n}\to 0\) and we conclude the proof. \(\blacksquare\)
For each \(s\in\mathbb{C}\) Hurst showed in ([10, Lemma 7]) that the functions \(f_{s}(z)=(1-z)^{s}\) belong to \(H^{2}\) if and only if \(\Re(s)>-\frac{1}{2}\). For \(\Re(s)>-\frac{1}{2}\) these functions are eigenvectors
\[C_{\phi_{a}}f_{s}(z)=f_{s}\circ\phi_{a}(z)=(1-az-1+a)^{s}=a^{s}f_{s}(z).\]
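A quick numerical check of this eigenvalue relation (not part of the argument; one arbitrary choice of \(a\), \(s\) and a test point, using the principal branch of the power) is:

```python
import numpy as np

a, s = 0.5, 0.75 + 1.3j
f = lambda z: (1 - z) ** s            # f_s, principal branch of the power
phi = lambda z: a * z + 1 - a

z = 0.2 + 0.3j                        # any point of the unit disk
print(np.isclose(f(phi(z)), a ** s * f(z)))   # True: C_{phi_a} f_s = a^s f_s
```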
We arrive at the first central result of this section.
**Theorem 3.4**.: _Suppose that \(f=f_{s}g\) for some \(g\in H^{2}\) and \(Re(s)\geq 0\). If \(g^{\prime}\) is_ **EB** _and there exists a subsequence of \((g(1-a^{n}))_{n\in\mathbb{N}}\) which is bounded away from zero, then \(f_{s}\in K_{f}\). So \(K_{f}\) is minimal if and only if \(\dim K_{f}=1\)._
Proof.: By hypothesis \(f\in H^{2}\) (because \(f_{s}\in H^{\infty}\)) and we can choose a subsequence \((g(1-a^{n_{k}}))_{k\in\mathbb{N}}\) which is bounded away from zero, i.e, there exists a constant \(M\) such that \(|g(1-a^{n_{k}})|\geq M>0\). As \(f_{s}\) is an eigenvector we obtain the following relation
\[C_{\phi_{a}}^{n_{k}}f=C_{\phi_{a}}^{n_{k}}f_{s}g=a^{n_{k}s}f_{s}C_{\phi_{a}}^{n_{k}}g.\]
Using the expression above, Theorem 2.1 and Lemma 3.3 we obtain
\[\left\|\frac{C_{\phi_{a}}^{n_{k}}f}{a^{n_{k}s}g(1-a^{n_{k}})}-f_{ s}\right\|_{2}^{2} =\left\|\frac{f_{s}C_{\phi_{a}}^{n_{k}}g}{g(1-a^{n_{k}})}-f_{s} \right\|_{2}^{2}\] \[\leq\|f_{s}\|_{\infty}^{2}\left\|C_{\phi_{a^{n_{k}}}}\left(\frac {g}{g(1-a^{n_{k}})}-1\right)\right\|_{2}^{2}\] \[=2\|f_{s}\|_{\infty}^{2}\int_{\mathbb{D}}\left|\frac{g^{\prime}(w )}{g(1-a^{n_{k}})}\right|^{2}N_{\phi_{a^{n_{k}}}}(w)dA(w)+\left|\frac{g\left(1 -a^{n_{k}}\right)}{g\left(1-a^{n_{k}}\right)}-1\right|^{2}\] \[\leq\frac{2\|f_{s}\|_{\infty}^{2}}{M^{2}}\int_{\mathbb{D}}\left| g^{\prime}(w)\right|^{2}N_{\phi_{a^{n_{k}}}}(w)dA(w)\xrightarrow{k\to\infty}0.\]
This means that \(f_{s}\) is the norm-limit of a sequence of elements in \(\operatorname{span}\{f,C_{\phi_{a}}f,\ldots\}\). Hence \(f_{s}\in K_{f}\) and by minimality we have \(K_{f}=K_{f_{s}}\) which is one dimensional because \(f_{s}\) is an eigenvector.
**Corollary 3.5**.: _Suppose that \(g\in H^{2}\) is such that \(g^{\prime}\) is_ **EB** _and there exists a subsequence of \((g(1-a^{n}))_{n\in\mathbb{N}}\) which is bounded away from zero, then \(1\in K_{g}\). So \(K_{g}\) is minimal if and only if \(\dim\,K_{g}=1\)._
Proof.: Apply the above theorem to \(s=0\).
The result above corrects an error in [4, Theorem 4.2] which rendered the conclusion incorrect. The hypothesis of \(g^{\prime}\) being **EB** is in fact necessary, as the following example demonstrates.
**Example 3.6**.: _Choose \(g(z)=f_{t}(z)=(1-z)^{\frac{2\pi i}{\log a}}\) where \(t=\frac{2\pi i}{\log a}\) in Corollary 3.5. Then_
\[g(1-a^{n})=(a^{n})^{\frac{2\pi i}{\log a}}=e^{\frac{2\pi i}{\log a}\log a^{n} }=e^{2\pi in}=1\]
_for all \(n\in\mathbb{N}\) and in particular \((g(1-a^{n}))_{n\in\mathbb{N}}\) is bounded away from zero. But for \(r\in(0,1)\)_
\[|g^{\prime}(r)|=\left|\frac{2\pi i}{\log a}\frac{1}{r-1}e^{\frac{2\pi i}{\log a }\log(1-r)}\right|=\left|\frac{2\pi}{\log a}\right|\left|\frac{1}{r-1}\right| \xrightarrow{r\to 1}\infty\]
_shows that \(g^{\prime}\) is not_ **EB**_. Then \(K_{g}=\mathbb{C}g\) because \(g\) is a \(C_{\phi_{a}}\)-eigenvector and \(1\notin K_{g}\)._
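The two computations in this example can also be illustrated numerically (again purely as a sanity check, here with \(a=1/2\)):

```python
import numpy as np

a = 0.5
t = 2j * np.pi / np.log(a)
g = lambda z: (1 - z) ** t            # the eigenvector f_t, principal branch

# g(1 - a^n) = (a^n)^t = exp(2*pi*i*n) = 1 for every n
print(np.allclose([g(1 - a**n) for n in range(1, 8)], 1.0))

# but |g'(r)| = |2*pi/log a| / (1 - r) is unbounded as r -> 1^-
for r in (0.9, 0.99, 0.999):
    print(abs(2 * np.pi / np.log(a)) / (1 - r))
```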
As a consequence of the last corollary, we resolve cases \((A)\) and \((C)\):
**Theorem 3.7**.: _If \(g\in H^{2}\) belongs to case \((A)\) or \((C)\) defined above and \(g^{\prime}\) is_ **EB**_, then \(K_{g}\) is minimal if and only if \(dim\;\;K_{g}=1\)._
Proof.: If \(g\) belongs to case \((A)\) then \((g(1-a^{n}))_{n\in\mathbb{N}}\) converges to a non-zero number, say \(L\). Then, there exists \(n_{0}\in\mathbb{N}\) such that for every \(n\geq n_{0}\), \(|g(1-a^{n})-L|<\frac{|L|}{2}\) and then \(|g(1-a^{n})|\geq\frac{|L|}{2}>0\) for \(n\geq n_{0}\). If \(g\) belongs to case \((C)\) then the sequence \((g(1-a^{n}))_{n\in\mathbb{N}}\) does not converge, and in particular does not converge to \(0\). So we can find an \(\epsilon>0\) and a subsequence \((g(1-a^{n_{k}}))_{k\in\mathbb{N}}\) such that \(|g(1-a^{n_{k}})|\geq\epsilon>0\). The result follows by Corollary 3.5.
As the reader may have noticed, the case \((B)\) remains open. However \((B)\) can be solved if \(g\) is analytic at \(1\). This was one of the main results (proved incorrectly) in [4].
**Theorem 3.8**.: _Suppose that \(g\in H^{2}\) is analytic at \(1\). Then \(K_{g}\) is minimal if and only if \(\dim\;K_{g}=1\)._
Proof.: Notice that as \(g\) is analytic at \(1\), then so is \(g^{\prime}\). In particular \(g^{\prime}\) is **EB**. If \(g(1)\neq 0\) then case \((A)\) of the previous theorem gives the result. Suppose that \(g(1)=0\), then \(g(z)=(1-z)^{K}h(z)\) on some neighborhood \(V\) of \(1\), where \(K\) is the multiplicity of the zero \(1\), \(h\) is analytic at \(V\) and \(h(1)\neq 0\). Choose \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\), \(D_{n}\subset V\). Note that \(h^{\prime}\) is **EB** because \(h^{\prime}\) is also analytic at \(V\). Moreover,
\[g\circ\phi_{a^{n}}(z)=a^{nK}(1-z)^{K}h\circ\phi_{a^{n}}(z).\]
So \(h\circ\phi_{a^{n}}\in H^{\infty}\) for \(n\geq n_{0}\) because \(h\) is bounded on \(D_{n}\) (in particular, \(h\circ\phi_{a^{n}}\in H^{2}\)). As the function \(f=f_{K}\,h\circ\phi_{a^{n_{0}}}=\frac{g\circ\phi_{a^{n_{0}}}}{a^{n_{0}K}}\) satisfies the hypothesis of Theorem 3.4, we conclude that \(f_{K}\in K_{f_{K}h\circ\phi_{a^{n_{0}}}}=K_{g\circ\phi_{a^{n_{0}}}}\subseteq K_{g}\). Thus \(K_{g}=K_{f_{K}}\) by minimality and we are done.
We conclude this section by considering case \((B)\) more carefully. Even with the **EB** hypothesis over \(g^{\prime}\), case \((B)\) appears to be delicate. However if \(g(1-a^{n})\) has a subsequence that converges to \(0\) at a _sufficiently_ slow rate, then we can still obtain a positive result.
**Theorem 3.9**.: _Suppose that \(g\in H^{2}\) is such that \(g^{\prime}\) is_ **EB** _and there exists \(0<\epsilon<1\) and a constant \(L>0\) such that \(|g(1-a^{n_{k}})|\geq La^{n_{k}(1-\epsilon)}\) for some subsequence \((n_{k})_{k\in\mathbb{N}}\). Then \(1\in K_{g}\), and \(K_{g}\) is minimal if and only if \(\dim\;K_{g}=1\)._
Proof.: The computations are very similar to the last result:
\[\left\|\frac{C_{\phi_{a}}^{n_{k}}g}{g(1-a^{n_{k}})}-1\right\|^{2} = \left\|C_{\phi_{a^{n_{k}}}}\left(\frac{g}{g(1-a^{n_{k}})}-1 \right)\right\|^{2}\] \[= 2\int_{\mathbb{D}}\left|\frac{g^{\prime}(w)}{g(1-a^{n_{k}})} \right|^{2}N_{\phi_{a^{n_{k}}}}(w)dA(w)+\left|\frac{g\left(1-a^{n_{k}}\right) }{g\left(1-a^{n_{k}}\right)}-1\right|^{2}\] \[= 2\int_{\mathbb{D}}\frac{|g^{\prime}(w)|^{2}}{|g(1-a^{n_{k}})|^{2 }}N_{\phi_{a^{n_{k}}}}(w)dA(w)\] \[= 2\int_{\mathbb{D}}\frac{|g^{\prime}(a^{n_{k}}w+1-a^{n_{k}})|^{2 }}{|g(1-a^{n_{k}})|^{2}}|\phi^{\prime}_{a^{n_{k}}}(w)|^{2}\log\frac{1}{|w|}dA(w)\] \[\leq 2\int_{\mathbb{D}}C^{2}\frac{1}{L^{2}a^{2n_{k}}a^{-2n_{k} \epsilon}}a^{2n_{k}}\log\frac{1}{|w|}dA(w)\] \[= a^{2n_{k}\epsilon}M\xrightarrow{k\to\infty}0\]
where \(C\) is an upper bound for the values of \(g^{\prime}\) in some open ball \(D_{n_{k_{0}}}\) and \(M\) is a constant. This proves the result.
To obtain a complete solution for case \((B)\) under the \(\mathbf{EB}\) hypothesis over \(g^{\prime}\), it is natural to ask what happens if \(g(1-a^{n})\to 0\) faster than required by Theorem 3.9. For instance if
\[|g(1-a^{n})|\leq a^{\frac{n}{2}}\]
then the next result shows that the series \(\sum\limits_{n=1}^{\infty}C_{\phi_{a^{n}}}g\) converges in \(H^{2}\).
**Proposition 3.10**.: _Let \(g\in H^{2}\) with \(g^{\prime}\) an \(\mathbf{EB}\) function. Then \(h:=\sum\limits_{n=1}^{\infty}C_{\phi_{a^{n}}}g\in H^{2}\) if and only if \(\sum\limits_{n=1}^{\infty}|g(1-a^{n})|\) converges in \(\mathbb{C}\). Moreover \(h\in K_{g}\)._
Proof.: Suppose that \(\sum\limits_{n=1}^{\infty}|g(1-a^{n})|\) converges in \(\mathbb{C}\). Using Theorems 2.1 and 2.2 we conclude that for all \(n\geq n_{0}\) (where \(D_{n_{0}}\) is a ball in which \(g^{\prime}\) is bounded)
\[\|C_{\phi_{a^{n}}}g\|^{2} =\int\limits_{\mathbb{D}}|g^{\prime}(w)|^{2}N_{\phi_{a^{n}}}(w) dA(w)+|g(1-a^{n})|^{2}\] \[=\int\limits_{\mathbb{D}}|g^{\prime}(a^{n}z+1-a^{n})|^{2}|\phi^{ \prime}_{a^{n}}(z)|^{2}\log\frac{1}{|z|}dA(z)+|g(1-a^{n})|^{2}\] \[\leq a^{2n}L+|g(1-a^{n})|^{2}\]
where \(L\) is a positive constant. So
\[\|C_{\phi_{a^{n}}}g\|\leq\sqrt{a^{2n}L+|g(1-a^{n})|^{2}}\leq a^{n}\sqrt{L}+|g( 1-a^{n})|\quad\text{ for }n\geq n_{0}.\]
Since \(a<1\) and \(\sum\limits_{n=1}^{\infty}|g(1-a^{n})|<\infty\), the comparison test implies
\[\sum\limits_{n=1}^{\infty}\|C_{\phi_{a^{n}}}g\|<\infty.\]
The converse follows from
\[\|C_{\phi_{a^{n}}}g\|^{2}=\int\limits_{\mathbb{D}}|g^{\prime}(w)|^{2}N_{\phi_ {a^{n}}}(w)dA(w)+|g(1-a^{n})|^{2}\geq|g(1-a^{n})|^{2}\]
and by the comparison test again.
Under the hypothesis of Proposition 3.10 if we define \(h_{k}:=\sum\limits_{n=k}^{\infty}C_{\phi_{a^{n}}}g\), then \(h\in H^{2}\) obviously implies all \(h_{k}\in H^{2}\) for \(k\geq 1\). In this case, if \(K_{g}\) is a minimal invariant subspace for \(C_{\phi_{a}}\) then we must have \(K_{g}=K_{h_{k}}\) for all \(k\in\mathbb{N}\). Notice that if some \(h_{l}\) were independent of the rest (say \(h_{l}\notin\overline{\operatorname{span}}(h_{k})_{k>l}\)), then \(K_{h_{l}}\) would properly contain \(K_{h_{k}}\) for \(k>l\) and in particular \(K_{g}\) cannot be minimal. This leads us to conjecture that
_If \(g\in H^{2}\) and \(\sum\limits_{n=1}^{\infty}C_{\phi_{a^{n}}}g\in H^{2}\), then \(K_{g}\) is minimal if and only if \(g\) is a \(C_{\phi_{a}}\)-eigenvector._
## 4 The role of the zero set
In this section we present some results showing how the cardinality of the zero set of \(f\) affects the dimension of \(K_{f}\). We will use the notation \(Z(f)\) to denote the set of zeros of \(f\) in \(\mathbb{D}\).
**Proposition 4.1**.: _If \(0<|Z(f)|<\infty\) then \(dim\;\;K_{f}\geq 2\)._
Proof.: By the hypothesis of finite zeros, we can choose \(0<K<1\) such that \(f\) is zero-free in the annulus \(\mathbb{D}-\overline{B(0,K)}\). Moreover, we can choose \(n_{0}\in\mathbb{N}\) such that for every \(n\geq n_{0}\), \(f\circ\phi_{a^{n}}(\mathbb{D})\subseteq\mathbb{D}-\overline{B(0,K)}\).
_(Figure: choosing \(K\) and \(n_{0}\).)_
If \(n\geq n_{0}\) we claim that \(\{f,f\circ\phi_{a^{n}}\}\) is a linearly independent set. In fact, consider any scalars \(\alpha,\beta\in\mathbb{C}\) such that \(\alpha f+\beta f\circ\phi_{a^{n}}=0\). If \(z_{0}\in Z(f)\) we have
\[\beta f\circ\phi_{a^{n}}(z_{0})=\alpha f(z_{0})+\beta f\circ\phi_{a^{n}}(z_{0 })=0\]
and since \(f\circ\phi_{a^{n}}\) is zero-free, we conclude that \(\beta=0\) and, consequently, \(\alpha=0\).
This result has the following interesting consequence: If the ISP is true, then every \(K_{f}\) with \(f\) satisfying the hypothesis above is necessarily non-minimal. On the other hand, if \(K_{f}\) is minimal for such an \(f\), then the ISP is false. Next we show that when \(|Z(f)|=0\) or \(|Z(f)|=\infty\), then \(K_{f}\) can be either minimal or not. The first case is simpler:
**Example 4.2**.: _Consider \(g_{1}(z)=z^{2}+1\) and \(g_{2}(z)=1-z\). Both functions are zero-free in the disk, but \(K_{g_{2}}\) is minimal because \(g_{2}\) is an eigenvector whereas \(K_{g_{1}}\) is not minimal by Theorem 3.1._
For the case when \(|Z(f)|=\infty\) we have the following result.
**Proposition 4.3**.: _Let \(w\in\mathbb{D}\). If \(f(w)\neq 0\) but there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\) we have \(f(a^{n}w+1-a^{n})=0\) then \(K_{f}\) is not minimal._
Proof.: In fact, if \(K_{f}\) is minimal then \(K_{f}=K_{f\circ\phi_{a^{n_{0}}}}\) and consequently \((K_{f})^{\perp}=(K_{f\circ\phi_{a^{n_{0}}}})^{\perp}\). Considering the reproducing kernel \(\kappa_{w}\) we observe that \(\kappa_{w}\in(K_{f\circ\phi_{a^{n_{0}}}})^{\perp}\) since \(f(a^{n}w+1-a^{n})=0\) (\(n\geq n_{0}\)) but \(\kappa_{w}\notin(K_{f})^{\perp}\) because \(f(w)\neq 0\). This is a contradiction.
**Example 4.4**.: _The sequence \(\{\phi_{a^{n}}(0)\}_{n\geq 2}=(1-a^{n})_{n\geq 2}\) is a Blaschke sequence because \(\sum\limits_{n=2}^{\infty}|1-(1-a^{n})|=\sum\limits_{n=2}^{\infty}a^{n}<\infty\). Consider \(B\) the Blaschke product associated to this sequence (in particular \(B\) has infinitely many zeros). We can write \(B\) as_
\[B(z)=\prod\limits_{n=2}^{\infty}\left(\frac{1-a^{n}-z}{1-\overline{(1-a^{n})}z }\right).\]
_Note that \(B(0)\neq 0\) but \(B(a^{n}\cdot 0+1-a^{n})=B(1-a^{n})=0\) for \(n\geq 2\), so by the previous proposition \(K_{B}\) is not minimal._
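A truncated version of this product is easy to inspect numerically; the sketch below is an illustration only (the choices \(a=1/2\) and \(60\) factors are arbitrary) showing \(B(0)\neq 0\) while \(B(1-a^{3})=0\):

```python
import numpy as np

a, N = 0.5, 60                      # a in (0, 1) and the truncation level are demo choices

def B(z, N=N):
    """Truncated Blaschke product with (real) zeros 1 - a^n, n = 2..N."""
    val = 1.0 + 0.0j
    for n in range(2, N + 1):
        zn = 1 - a ** n             # zero of the n-th factor (real, so its conjugate is itself)
        val *= (zn - z) / (1 - zn * z)
    return val

print(abs(B(0.0)))                  # bounded away from 0: product of the positive numbers 1 - a^n
print(abs(B(1 - a ** 3)))           # exactly 0: the n = 3 factor vanishes
```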
The only remaining case is an example of some \(f\in H^{2}\) such that \(K_{f}\) is minimal and \(|Z(f)|=\infty\).
**Example 4.5**.: _Let \(a=\frac{1}{2}\). Consider the function \(f=e_{0}+f_{s}\) where \(s=\frac{2\pi i}{\log a}\) and \(e_{0}\) is the constant function \(1\). Note that \(e_{0}+f_{s}\) is a \(C_{\phi_{a}}\)-eigenvector:_
\[C_{\phi_{a}}(e_{0}+f_{s})=e_{0}+a^{s}f_{s}=e_{0}+a^{\frac{2\pi i}{\log a}}f_{s}=e_{0}+f_{s}.\]
_So \(K_{f}\) is minimal. Moreover, note that_
\[(e_{0}+f_{s})(1-\sqrt{2})=1+f_{s}(1-\sqrt{2})=1+e^{\frac{2\pi i}{-2\log\sqrt{2 }}\log\sqrt{2}}=1+e^{-\pi i}=0\]
_Now, consider the sequence \(\{\phi_{a^{n}}(1-\sqrt{2})\}_{n\in\mathbb{N}}\subseteq\mathbb{D}\). Then_
\[(e_{0}+f_{s})(\phi_{a^{n}}(1-\sqrt{2}))=C^{n}_{\phi_{a}}(e_{0}+f_{s})(1-\sqrt{ 2})=(e_{0}+f_{s})(1-\sqrt{2})=0\]
_and thus we conclude that \(f\) has infinitely many zeros in \(\mathbb{D}\)._
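This, too, can be verified numerically; a brief sketch (illustration only, again with \(a=1/2\)):

```python
import numpy as np

a = 0.5
s = 2j * np.pi / np.log(a)
f = lambda z: 1 + (1 - z) ** s                 # f = e_0 + f_s

z0  = 1 - np.sqrt(2)                           # a zero of f inside the disk
phi = lambda z, n: a ** n * z + 1 - a ** n     # phi_{a^n}(z)

print(abs(f(z0)))                              # ~ 0
for n in range(1, 5):
    print(n, abs(f(phi(z0, n))))               # ~ 0: infinitely many zeros along this orbit
```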
The final result of this section strengthens Proposition 4.1 and shows we can always find a function in \(K_{f}\) that is orthogonal to \(f\).
**Proposition 4.6**.: _Suppose that \(0<|Z(f)|<\infty\) and let \(z_{0}\in Z(f)\). There exists a non-zero \(g\in K_{f}\) such that \(\langle g,h\rangle=0\) for every \(h\in K_{f}\) such that \(h(z_{0})=0\)._
Proof.: Let \(z_{0}\in\mathbb{D}\) be a zero of \(f\) and consider the continuous map \(E_{z_{0}}:K_{f}\rightarrow\mathbb{C}\), which is exactly the restriction to \(K_{f}\) of the evaluation map at \(z_{0}\) defined on \(H^{2}\). Note that \(E_{z_{0}}\) is surjective because there exists \(n\in\mathbb{N}\) such that \(C^{n}_{\phi_{a}}f(z_{0})\neq 0\). So,
\[K_{f}\ominus Ker_{E_{z_{0}}}\simeq\frac{K_{f}}{Ker_{E_{z_{0}}}}\simeq\mathbb{C}.\]
which implies that \(K_{f}\ominus Ker_{E_{z_{0}}}\) is a one-dimensional space. Let \(g\in K_{f}\ominus Ker_{E_{z_{0}}}\) be a non-null element. Thus \(\langle g,h\rangle=0\) whenever \(h\in K_{f}\) is such that \(h(z_{0})=0\).
## 5 Cyclicity and universality
We end this article with a result that highlights a connection between cyclicity and universality. We note that the best known examples of universal operators are adjoints of analytic Toeplitz operators \(T_{\phi}f=\phi f\) for \(\phi\in H^{\infty}\), \(f\in H^{2}\) or those that are similar to them (see [8],[9]), and all such coanalytic Toeplitz operators are cyclic (see [19]).
**Theorem 5.1**.: _If \(T\) is a closed range cyclic operator with infinite dimensional kernel on a Hilbert space \(H\), then \(T\) is universal._
Proof.: By the Caradus-Pozzi criterion (Theorem 1.2) it is enough to prove that \(T(H)\) has finite codimension. In fact for any cyclic \(T\) the dimension of \(T(H)^{\perp}\) is either \(0\) or \(1\). Although this is known to experts, we provide a proof for the sake of completeness. Let \(f\) be a cyclic vector for \(T\in B(H)\) and define \(N:=\overline{\operatorname{span}_{n\geq 1}\{T^{n}f\}}\). Let \(P:H\to N^{\perp}\) be the orthogonal projection onto \(N^{\perp}\). Now let \(g\) be any element in \(T(H)^{\perp}\). Then we have \(\langle g,T^{n}f\rangle=0\) for \(n\geq 1\) and consequently \(g\in N^{\perp}\). Since \(f\) is a cyclic vector, we can find a sequence \(g_{n}\in\overline{\operatorname{span}_{n\geq 0}\{T^{n}f\}}\) such that \(g_{n}\to g\). If we write \(g_{n}=\alpha_{n}f+t_{n}\) where the \(\alpha_{n}\) are scalars and \(t_{n}\in N\), then we obtain \(\alpha_{n}f+t_{n}\to g\). Applying \(P\) to this we conclude that \(\alpha_{n}Pf\to Pg=g\) and hence \(g=\alpha Pf\) for some \(\alpha\in\mathbb{C}\). Since \(g\) is an arbitrary element of \(T(H)^{\perp}\), the latter space is at most one-dimensional. Finally \(\operatorname{codim}\,T(H)=\dim\,T(H)^{\perp}\leq 1\) because \(T\) has closed range and the result follows.
Recall that if \(\phi\in LFT(\mathbb{D})\) is hyperbolic then \(C_{\phi}-\lambda\) is universal for all eigenvalues \(\lambda\). The classical proof of the automorphic case (see [14]) and that of the recent non-automorphic case (see [4]) are both elaborate and involved. An elegant proof of the automorphic case was found recently by Cowen and Gallardo-Gutierrez [8]. Theorem 5.1 suggests a possible approach to simplify both proofs: this approach is based on showing that the range of \(C_{\phi}-\lambda\) is closed for some eigenvalue \(\lambda\), rather than proving surjectivity. It is known that when \(\phi\in LFT(\mathbb{D})\) is a hyperbolic map then the composition operator \(C_{\phi}\) is hypercyclic, and in particular cyclic (see [1, Theorem 1.47]). Also for every \(\lambda\in\mathbb{C}\) and bounded linear operator \(T\) we have
\[\operatorname{span}\{f,Tf,T^{2}f,\ldots\}=\operatorname{span}\{f,(T-\lambda) f,(T-\lambda)^{2}f,\ldots\}.\]
It follows that \(T\) is cyclic if and only if \(T-\lambda\) is cyclic. Thus for all \(\lambda\in\mathbb{C}\) the operator \(C_{\phi}-\lambda\) is cyclic whenever \(\phi\) is hyperbolic. Moreover, for \(\lambda\) in the point spectrum of \(C_{\phi}\), the kernel of \(C_{\phi}-\lambda\) is infinite dimensional ([7], Lemma 7.24 and Theorem 7.4). Therefore, establishing that the range of \(C_{\phi}-\lambda\) is closed would give universality by Theorem 5.1.
As a final remark, we note that every known example of a universal operator has closed range, besides the necessary conditions of infinite dimensional kernel and range. This leads us to conjecture that
_An operator \(U\) is universal in the sense of Rota if and only if \(U\) has infinite dimensional closed range and infinite dimensional kernel._
## Acknowledgements
This work constitutes a part of the doctoral thesis of the second author, partially supported by the Conselho Nacional de Desenvolvimento Cientifico e Tecnologico - CNPq Brasil, under the supervision of the third named author. |
2305.08277 | Local Convergence of Gradient Descent-Ascent for Training Generative
Adversarial Networks | Generative Adversarial Networks (GANs) are a popular formulation to train
generative models for complex high dimensional data. The standard method for
training GANs involves a gradient descent-ascent (GDA) procedure on a minimax
optimization problem. This procedure is hard to analyze in general due to the
nonlinear nature of the dynamics. We study the local dynamics of GDA for
training a GAN with a kernel-based discriminator. This convergence analysis is
based on a linearization of a non-linear dynamical system that describes the
GDA iterations, under an \textit{isolated points model} assumption from [Becker
et al. 2022]. Our analysis brings out the effect of the learning rates,
regularization, and the bandwidth of the kernel discriminator, on the local
convergence rate of GDA. Importantly, we show phase transitions that indicate
when the system converges, oscillates, or diverges. We also provide numerical
simulations that verify our claims. | Evan Becker, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher | 2023-05-14T23:23:08Z | http://arxiv.org/abs/2305.08277v2 | # Local Convergence of Gradient Descent-Ascent for Training Generative Adversarial Networks
###### Abstract
Generative Adversarial Networks (GANs) are a popular formulation to train generative models for complex high dimensional data. The standard method for training GANs involves a gradient descent-ascent (GDA) procedure on a minimax optimization problem. This procedure is hard to analyze in general due to the nonlinear nature of the dynamics. We study the local dynamics of GDA for training a GAN with a kernel-based discriminator. This convergence analysis is based on a linearization of a non-linear dynamical system that describes the GDA iterations, under an _isolated points model_ assumption from [2]. Our analysis brings out the effect of the learning rates, regularization, and the bandwidth of the kernel discriminator, on the local convergence rate of GDA. Importantly, we show phase transitions that indicate when the system converges, oscillates, or diverges. We also provide numerical simulations that verify our claims.
## I Introduction
Modelling complex signals such as images, speech, and text is of broad interest in machine learning and signal processing. Generative models for such data can enable many engineering and scientific applications such as sampling, inference, and understanding structural properties of complex data. With the increasing access to computational resources, the recent focus of generative modelling has been on data-driven techniques.
Generative Adversarial Networks (GANs) are a class of probabilistic generative models that avoid expensive likelihood computations while still providing good sample quality [3]. In order to fit complex data distributions, two models (typically deep neural networks) are trained in an alternating manner: a _generator_\(\mathcal{G}\) learns a deterministic map from a latent space \(\mathcal{Z}\) to the data space \(\mathcal{X}\), while a _discriminator_ or _critic_ model \(\mathcal{D}\) attempts to discern whether a sample belongs to the training dataset or the generated dataset.
The discriminator plays an important, yet poorly understood, role in the training of a GAN. It is well known from [3] that if the discriminator is trained to minimize the cross-entropy between true and generated samples, the generator would minimize the Jensen-Shannon divergence between the true distribution and the distribution of the generated samples. Similarly different choices for discriminator loss functions lead to a variety of \(f\)-divergences [12] and probability metrics [4] between the generated and true distributions.
In practice, however, we apply GDA for training GANs, whereby the discriminator is not allowed to converge, making analysis of the iterative training extremely difficult. Furthermore, training commonly suffers from empirical breakdowns such as mode collapse, in which the entire generated distribution converges to a small portion of the target distribution [13]. The generator may even fail to converge entirely when gradients from the discriminator are too small for the generator to proceed during training. Without an understanding of when and how these phenomena occur, practitioners have to rely on heuristics and extensive hyperparameter tuning based on trial and error procedures [6, 9, 13].
In this work, we characterize the local convergence rates of GAN training when the discriminator is kernel-based. This choice of the discriminator model is motivated by the recently discovered equivalence between wide neural networks and kernel methods via the Neural Tangent Kernel framework [5]. While the discriminator's problem is simplified due to the kernel-based discriminator, the overall dynamics of the generated samples remain non-linear and complex, and hence retain many of the properties exhibited by GANs in practice such as mode collapse and divergence [2].
### _Prior Work on Linear GANs and Main Contributions_
Stability analysis for GANs under stylized settings goes back to the Dirac-GAN framework from [8], which looked at the local stability of a two-point system using a linear discriminator to demonstrate examples of catastrophic forgetting. Other GAN works use a similar linearization analysis, such as [10, 11]. The isolated points model proposed by [2] allowed for a more complex model while remaining analytically tractable, by letting the generated probability mass differ from the true mass in various isolated regions. We provide new insight into the framework proposed by [2] by going beyond stability analysis and characterizing rates of convergence.
We analyze the local convergence of the non-linear dynamical system that describes the GDA iterates, in settings when the equilibrium is stable. Our analysis is based on a linearization of these non-linear dynamics. We show how changing the kernel width can improve the rate of convergence, while also highlighting a phase transition under which the convergence remains unaffected by changes in the kernel width.
## II Model
We investigate the training dynamics of a GAN where the target distribution and the generated distribution are discrete point masses, following the framework of [2]. We highlight key elements of our model below.
### _Target and Generated Distributions_
We assume that the target and generated distributions consist of point masses with density functions over \(x\in\mathbb{R}^{d}\) given by
\[\mathbb{P}_{r}(x)=\sum_{i=1}^{N_{r}}p_{i}\delta(x-x_{i}),\quad\mathbb{P}_{g}(x)= \sum_{j=1}^{N_{g}}\widetilde{p}_{j}\delta(x-\widetilde{x}_{j}), \tag{1}\]
where \(\delta\) is the Dirac delta function, \(X=\{x_{i}\}_{i=1}^{N_{r}}\) and \(\widetilde{X}=\{\widetilde{x}_{j}\}_{j=1}^{N_{g}}\) are the true and generated points, and \(\{p_{i}\}_{i=1}^{N_{r}}\) and \(\{\widetilde{p}_{j}\}_{j=1}^{N_{g}}\) are their (fixed) probability masses. The problem we consider is learning the locations \(\widetilde{X}\) so that the generated and true distributions match. Thus the decision variable of the generator model is \(\widetilde{X}\). This simplification is justified since we wish to study the role of the discriminator in this work.
### _Kernel Discriminator_
The GAN discriminator is a function \(f:\mathcal{X}\rightarrow\mathbb{R}\) which predicts whether a sample \(x\) is real or fake based on \(\mathrm{sign}(f(x))\). In this paper we assume that the discriminator belongs to a Reproducing Kernel Hilbert Space \(\mathcal{H}\) corresponding to a bivariate positive definite kernel function \(K:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\). The discriminator defines a maximum mean discrepancy (MMD) metric
\[\mathsf{MMD}(\mathbb{P}_{r},\mathbb{P}_{g}):=\max_{\begin{subarray}{c}f\in \mathcal{H}\\ \|f\|\leq 1\end{subarray}}\ \sum_{i=1}^{N_{r}}p_{i}f(x_{i})-\sum_{j=1}^{N_{g}} \widetilde{p}_{j}f(\widetilde{x}_{j}) \tag{2}\]
between \(\mathbb{P}_{r}\) and \(\mathbb{P}_{g}\), which the generator tries to minimize.
### _Minimax Optimization Formulation for Training a GAN_
We assume a mini-max loss function similar to [1, 2, 15] of the form:
\[\min_{\widetilde{X}}\ \max_{f\in\mathcal{H}}\ \mathsf{L}(f, \widetilde{X}) \tag{3a}\] \[\mathsf{L}(f,\widetilde{X}):=\sum_{i=1}^{N_{r}}p_{i}f(x_{i})-\sum_{j=1}^{N_{g}} \widetilde{p}_{j}f(\widetilde{x}_{j})-\tfrac{\lambda}{2}\left\|f\right\|_{ \mathcal{H}}^{2}. \tag{3b}\]
The regularization parameter \(\lambda>0\) is some constant, that acts as a Lagrange multiplier for the optimization problem in equation (2). The loss is a function of the discriminator \(f\) and generated samples \(\widetilde{X}\).
**Notation:** For matrices \(X\in\mathbb{R}^{n\times d}\), \(Z\in\mathbb{R}^{p\times d}\) with rows \(x_{i},z_{j}\in\mathbb{R}^{d}\), and for vectors \(u,v,x,z\in\mathbb{R}^{d}\), and a kernel function \(K:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\), by \(K(X,Z)\) we denote the \(n\times p\) matrix with \(ij^{\mathrm{th}}\) entry \(K(x_{i},z_{j})\). By \(\nabla_{1}K(x,z)\) we denote the map \(\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\) given by \((u,v)\mapsto\tfrac{\partial}{\partial x}K(x,z)\big{|}_{x=u,z=v}\). Similarly, by \(\nabla_{1}K(X,Z)\mathbf{p}\) we denote the \(n\times d\) matrix with \(i^{\mathrm{th}}\) row \(\sum_{j=1}^{p}\nabla_{1}K(x_{i},z_{j})p_{j}\). Furthermore, for a vector \(\mathbf{v}\in\mathbb{R}^{n}\) and matrix \(\mathbf{M}\in\mathbb{R}^{n\times d}\), by \(\mathbf{v}\odot\mathbf{M}\) we denote the Hadamard product, which yields an \(n\times d\) matrix with \(ij^{\mathrm{th}}\) entry \(v_{i}\mathbf{M}_{ij}\). For example, \(\mathbf{v}\odot\nabla_{1}K(X,Z)\mathbf{p}\) is an \(n\times d\) matrix with \(i^{\mathrm{th}}\) row \(v_{i}\tfrac{\partial}{\partial x}\left(\sum_{j=1}^{p}K(x,z_{j})p_{j}\right)\big{|}_{x=x_{i}}\).
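To make the notation concrete, the following short sketch (our own illustration; it assumes the Gaussian kernel introduced later in equation (9) and toy array sizes) evaluates \(K(X,Z)\), \(\nabla_{1}K(X,Z)\mathbf{p}\), and the Hadamard product \(\mathbf{v}\odot\nabla_{1}K(X,Z)\mathbf{p}\):

```python
import numpy as np

def rbf(X, Z, sigma):
    """K(X, Z): n x p kernel matrix for the Gaussian kernel with bandwidth sigma."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)     # n x p squared distances
    return np.exp(-d2 / (2 * sigma ** 2))

def grad1_rbf_p(X, Z, p, sigma):
    """nabla_1 K(X, Z) p: n x d matrix whose i-th row is sum_j d/dx K(x, z_j)|_{x=x_i} * p_j."""
    K = rbf(X, Z, sigma)                                     # n x p
    diff = X[:, None, :] - Z[None, :, :]                     # n x p x d
    return np.einsum('np,npd,p->nd', K, -diff / sigma ** 2, p)

rng = np.random.default_rng(0)
n, m, d, sigma = 4, 3, 2, 1.0                                # toy sizes (assumptions)
X, Z = rng.normal(size=(n, d)), rng.normal(size=(m, d))
p = np.full(m, 1 / m)
v = rng.normal(size=n)

G = grad1_rbf_p(X, Z, p, sigma)                              # n x d
print((v[:, None] * G).shape)                                # Hadamard product v ⊙ G -> (4, 2)
```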
### _Training Dynamics of Gradient Descent Ascent_
We assume the generator performs gradient descent on the above minimax optimization problem with a step size \(\eta_{g}\) and the discriminator performs gradient ascent with step size \(\eta_{d}\). We let \((f^{t},\widetilde{X}^{t})\) denote the discriminator and generated samples in step \(t\).
\[f^{t+1} =f^{t}+\eta_{d}\frac{\partial}{\partial f}\mathsf{L}(f^{t}, \widetilde{X}^{t}) \tag{4a}\] \[\widetilde{X}^{t+1} =\widetilde{X}^{t}-\eta_{g}\frac{\partial}{\partial\widetilde{X}} \mathsf{L}(f^{t},\widetilde{X}^{t}) \tag{4b}\]
where the first equation uses the Fréchet derivative with respect to \(f\). This can be simplified since the loss function \(\mathsf{L}(f,\widetilde{X})\) only consists of linear and quadratic terms of \(f\). Observe that for any \(u\) in \(\mathbb{R}^{d}\), the linear term \(f(u)=\langle f,K(u,\cdot)\rangle_{\mathcal{H}}\) due to the reproducing property of the kernel, whereby \(\tfrac{\partial}{\partial f}f(u)\) is the function \(v\mapsto K(u,v)\), denoted \(K(u,\cdot)\).
Using the loss function in equation (3), we get the updates
\[f^{t+1} =(1-\lambda\eta_{d})f^{t}+\eta_{d}\left(K(\cdot,X)\mathbf{p}-K(\cdot, \widetilde{X}^{t})\widetilde{\mathbf{p}}\right) \tag{5a}\] \[\widetilde{x}_{j}^{t+1} =\widetilde{x}_{j}^{t}+\eta_{g}\widetilde{p}_{j}\nabla f^{t}( \widetilde{x}_{j}^{t}),\qquad\forall\ j=1,2,\ldots,N_{g} \tag{5b}\]
Notice that (5a) is linear in \(f\), whereby we can simplify these equations further. The following lemma simplifies equation (5) by eliminating the discriminator \(f\).
**Lemma 1** (Training Dynamics).: _Assume \(f_{0}=0,\) the zero function in the RKHS \(\mathcal{H}\). Then the following deterministic dynamical system describes the evolution of the samples generated using equation (5)._
\[\widetilde{X}^{t+1}= \widetilde{X}^{t}+\eta_{d}\eta_{g}\sum_{s=0}^{t}(1-\lambda\eta_{d} )^{t-s}\times\] \[\widetilde{\mathbf{p}}\odot\left(\nabla_{1}K(\widetilde{X}^{t},X)\mathbf{p }-\nabla_{1}K(\widetilde{X}^{t},\widetilde{X}^{s})\widetilde{\mathbf{p}}\right) \tag{6}\]
Note that the above dynamical system is nonlinear in \(\widetilde{X}\), and is non-Markovian due to dependence of \(\widetilde{X}^{t+1}\) on \(\left\{\widetilde{X}^{s}\right\}_{s\leq t}\). The term \(\widetilde{\mathbf{p}}\odot\nabla_{1}K(\widetilde{X},X)\mathbf{p}\) can be thought of as a _drift_, whereas \(\widetilde{\mathbf{p}}\odot\nabla_{1}K(\widetilde{X},\widetilde{X})\widetilde{\mathbf{p}}\) can be thought of as a _diffusion_.
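The dynamics are straightforward to simulate by maintaining the kernel expansion of the discriminator from equation (5) instead of the unrolled sum in equation (6). The sketch below is our own illustration, not the paper's experimental setup; the Gaussian kernel, one-dimensional point masses and hyperparameter values are all assumptions:

```python
import numpy as np

sigma, lam, eta_d, eta_g = 0.5, 1.0, 0.05, 0.05             # illustrative hyperparameters
X,  p  = np.array([[-1.0], [1.0]]), np.array([0.5, 0.5])    # true point masses
Xg, pg = np.array([[-0.3], [0.4]]), np.array([0.5, 0.5])    # generated points (decision variables)

K      = lambda x, z: np.exp(-((x - z) ** 2).sum(-1) / (2 * sigma ** 2))
gradK1 = lambda x, z: K(x, z)[..., None] * (-(x - z) / sigma ** 2)   # d/dx K(x, z)

centers, coefs = np.zeros((0, 1)), np.zeros(0)               # f^0 = 0, stored as a kernel expansion
for t in range(2000):
    # gradient of the current discriminator f^t at the generated points
    grad_f = (np.einsum('k,jkd->jd', coefs, gradK1(Xg[:, None, :], centers[None, :, :]))
              if len(coefs) else np.zeros_like(Xg))
    # discriminator ascent (5a): old coefficients decay by (1 - lam*eta_d), new centers are appended
    coefs   = np.concatenate([(1 - lam * eta_d) * coefs, eta_d * p, -eta_d * pg])
    centers = np.concatenate([centers, X, Xg])
    # generator descent (5b): each generated point follows grad f^t
    Xg = Xg + eta_g * pg[:, None] * grad_f

print(Xg.ravel())   # the generated points should end up near the true points at -1 and +1
```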
From equation (6) we can immediately infer a condition for a set of generated points \(\widetilde{X}^{*}\) to be in equilibrium
**Lemma 2**.: _A set of points \(\widetilde{X}^{*}\) is in equilibrium for the dynamics equation (6) if and only if_
\[\nabla_{1}K(\widetilde{X}^{*},X)\mathbf{p}=\nabla_{1}K(\widetilde{X}^{*},\widetilde{ X}^{*})\widetilde{\mathbf{p}}. \tag{7}\]
**Remark 1**.: The set of equilibrium points depends only on the kernel \(K\) and is invariant to the hyperparameters \(\eta_{d},\eta_{g},\lambda\). However, the dynamics and convergence properties of these equilibria depend on \(\eta_{d},\eta_{g},\lambda\) as well as \(\sigma\).
### _Model and Optimization Hyperparameters_
Our analysis characterizes the effect of four hyperparameters in total which can be categorized as modelling and optimization hyperparameters.
This setting has two model hyperparameters that control the smoothness of the discriminator. The regularization \(\lambda\) controls the \(\mathcal{H}\)-norm of the discriminator, which is a measure of global smoothness. In contrast, the kernel bandwidth \(\sigma\) is a measure of the local smoothness. We also have two optimization hyperparameters, which are the learning rate of the generator \(\eta_{g}\) and the learning rate of the discriminator \(\eta_{d}\). In practice often \(\eta_{g}\ll\eta_{d}\).
## III Local Convergence around True Samples
### _Assumptions on the kernel function_
We assume that the kernel \(K(x,x^{\prime})\) is smooth and, at each true point \(x_{i}\):
\[\nabla_{1}K(x_{i},x_{i})=\left.\frac{\partial K(x,x_{i})}{ \partial x}\right|_{x=x_{i}}=\mathbf{0}, \tag{8a}\] \[-\frac{\partial^{2}K(x,x_{i})}{\partial x^{2}}\bigg{|}_{x=x_{i}}= \left.\frac{\partial^{2}K(x,x^{\prime})}{\partial x\partial x^{\prime}} \right|_{x=x^{\prime}=x_{i}}=\frac{1}{\sigma^{2}}\boldsymbol{I}_{d} \tag{8b}\]
for some \(\sigma>0\) that we call the _kernel width_ and represents the curvature of the kernel around \(x=x_{i}\). Note that (8a) and (8b) are satisfied for the standard RBF kernel:
\[K(x,x^{\prime})=\exp\left(-\tfrac{1}{2\sigma^{2}}\|x-x^{\prime}\|^{2}\right). \tag{9}\]
**Proposition 1**.: _Under the above assumption, \(\widetilde{X}^{*}\) such that \(\widetilde{x}^{*}_{j}=x_{i}\) for some \(i\), is an equilibrium._
This follows immediately from the observation in equation (7). When the assumptions on the kernel equation (8) are satisfied, both sides of equation (7) vanish.
The results in [2] analyzed the stability of this equilibrium under an isolated points model described below, which localizes the analysis around each true point.
### _Isolated Points Model_
We assume the true samples are separated far enough so that there exists a non-empty _isolated neighborhood_\(V_{i}\) around each sample \(x_{i}\) such that,
\[K(x,x^{\prime})=0\text{ for all }x\in V_{i}\text{ and }x^{\prime}\in V_{j}\text{ for all }i\neq j. \tag{10}\]
In other words, the generated points are separated sufficiently far apart such that they are outside the width of the kernel evaluated at another sample. We let \(\mathcal{N}_{i}\) be the set of indices \(j\) such that the generated points \(\widetilde{x}^{t}_{j}\in V_{i}\), for all \(t\).
Thus the dynamics we study can be written as
\[\widetilde{X}^{t+1}_{i}= \widetilde{X}^{t}_{i}+\eta_{d}\eta_{g}\sum_{s=0}^{t}(1-\lambda \eta_{d})^{t-s}\times\] \[\widetilde{\boldsymbol{p}}_{i}\odot\left(\nabla_{1}K(\widetilde{ X}^{t}_{i},x_{i})p_{i}-\nabla_{1}K(\widetilde{X}^{t}_{i},\widetilde{X}^{s}_{i}) \widetilde{\boldsymbol{p}}_{i}\right) \tag{11}\]
where \(\widetilde{X}^{t}_{i}\) are points generated inside the region \(V_{i}\), and \(\widetilde{\boldsymbol{p}}_{i}\) is the length \(|\mathcal{N}_{i}|\) subvector of \(\widetilde{p}\) corresponding to these points.
Under this assumption, if \(j\in\mathcal{N}_{i}\) and \(k\notin\mathcal{N}_{i}\), then equation (11) ignores interaction terms between \((\widetilde{x}_{k},x_{i})\) and \((\widetilde{x}_{j},\widetilde{x}_{k})\), compared to equation (6). Note that equation (6) tracks \(N_{r}N_{g}+N_{g}^{2}\) interaction terms whereas equation (11) only tracks \(|\mathcal{N}_{i}|+|\mathcal{N}_{i}|^{2}\) terms, where \(|\mathcal{N}_{i}|\) is the number of generated points inside \(V_{i}\).
We will call the updates in equation (11) the _dynamical system in the region_ \(V_{i}\). For the purpose of analysis it is beneficial to write the dynamics involving both the discriminator and the generator as below:
\[f^{t+1}_{i} =(1-\lambda\eta_{d})f^{t}_{i}+\eta_{d}\left(K(\cdot,x_{i})p_{i}-\sum_{j\in\mathcal{N}_{i}}K(\cdot,\widetilde{x}_{j})\widetilde{p}_{j}\right) \tag{12a}\] \[\widetilde{x}^{t+1}_{j} =\widetilde{x}^{t}_{j}+\eta_{g}\widetilde{p}_{j}\nabla f^{t}_{i}(\widetilde{x}^{t}_{j})\qquad\widetilde{x}_{j}\in V_{i} \tag{12b}\]
Under the isolated points model, the discriminator satisfies
\[f^{t}(u) =\sum_{i=1}^{N_{r}}f^{t}_{i}(u),\qquad\forall\,u \tag{13a}\] \[f^{t}(x) =f^{t}_{i}(x)\qquad\forall\,x\in V_{i}. \tag{13b}\]
### _Main result_
Given a local region \(V_{i}\), we wish to study the dynamics of the local system given by equation (12) where \(\widetilde{x}_{j}\) are close to \(x_{i}\) for all \(j\in\mathcal{N}_{i}\). That is, all the generated points are close to the true point in that region. To this end, we write the local updates (12) as a mapping
\[(f^{t+1}_{i},\widetilde{X}^{t+1}_{i})=\Phi_{i}(f^{t}_{i},\widetilde{X}^{t}_{i}), \tag{14}\]
where \(\Phi_{i}(\cdot)\) represents the update function in (12). Also, let
\[\widetilde{X}^{*}_{i}=\{\widetilde{x}^{*}_{j},\ j\in\mathcal{N}_{i}\},\quad \widetilde{x}^{*}_{j}=x_{i}. \tag{15}\]
It is shown in [2] that there exists a parameter vector \(f^{*}_{i}\) such that \((f^{*}_{i},\widetilde{X}^{*}_{i})\) is an _equilibrium point_ of \(\Phi_{i}(\cdot)\) in that
\[(f^{*}_{i},\widetilde{X}^{*}_{i})=\Phi_{i}(f^{*}_{i},\widetilde{X}^{*}_{i}). \tag{16}\]
The condition (16) implies that if \((f^{t}_{i},\widetilde{X}^{t}_{i})=(f^{*}_{i},\widetilde{X}^{*}_{i})\) for some \(t\), then \((f^{s}_{i},\widetilde{X}^{s}_{i})\) will remain at \((f^{*}_{i},\widetilde{X}^{*}_{i})\) for all subsequent times \(s\geq t\). Let \(\boldsymbol{J}^{*}_{i}\) denote the Jacobian of the update mapping \(\Phi_{i}(\cdot)\) at the equilibrium point \((f^{*}_{i},\widetilde{X}^{*}_{i})\) and define the spectral radius of the Jacobian
\[\rho_{\text{max}}:=\rho_{\text{max}}(\boldsymbol{J}^{*}_{i})=\text{max}\left\{ \left|\rho\right|\mid\rho\in\operatorname{spec}(\boldsymbol{J}^{*}_{i})\right\}, \tag{17}\]
where \(\operatorname{spec}(\boldsymbol{J}^{*}_{i})\) is the spectrum of \(\boldsymbol{J}^{*}_{i}\), i.e., its eigenvalues.
A well-known result of non-linear systems theory [16] is that the equilibrium point \((f^{*}_{i},\widetilde{X}^{*}_{i})\) is _locally asymptotically stable_ if \(\rho_{\text{max}}(\boldsymbol{J}^{*}_{i})<1\). Conversely, if \(\rho_{\text{max}}(\boldsymbol{J}^{*}_{i})>1\), the system can be shown to be _locally unstable_ - see [16] for precise definitions. Hence, \(\rho_{\text{max}}(\boldsymbol{J}^{*}_{i})\) can provide necessary and sufficient conditions for local stability. Also, if \(\rho_{\text{max}}<1\) and the system is initialized at \((f^{0}_{i},\widetilde{X}^{0}_{i})\) sufficiently close to \((f^{*}_{i},\widetilde{X}^{*}_{i})\) then, the components will converge geometrically as
\[\|\widetilde{x}^{t}_{j}-x_{i}\|\leq C\rho^{t}_{\text{max}}\|\widetilde{x}^{0}_{j}-x_{i}\|, \tag{18}\]
for some constant \(C\). Hence, the spectral radius \(\rho_{\text{max}}\) also provides a measure of the convergence rate of the system. Our main theorem below applies this result to obtain an exact characterization of the convergence rate of the local dynamics by studying the spectrum of \(\boldsymbol{J}^{*}\) in terms of the model and
optimization hyperparameters.
Recall the hyperparameters: \(\sigma\) - discriminator kernel width, \(\lambda\) - IPM regularization, \(\eta_{d}\) - learning rate of the discriminator and \(\eta_{g}\) - learning rate of the generator.
**Theorem 1**.: _Consider the isolated neighborhood training dynamics in (12) under the assumptions in Section II in some region \(V_{i}\). Additionally, assume that the weights of the generated points are equal so that \(\widetilde{p}_{j}=\widetilde{p}\) for some \(\widetilde{p}>0\) and all \(j\in N_{i}\). Define_
\[a:=\lambda,\quad b:=\frac{\mu\widetilde{p}\Delta_{i}}{\lambda\sigma^{2}},\quad c :=\frac{\mu\widetilde{p}p_{i}}{\sigma^{2}},\quad\mu:=\frac{\eta_{g}}{\eta_{d}}, \tag{19}\]
_and_
\[\Delta_{i}:=p_{i}-\sum_{j\in N_{i}}\widetilde{p}_{j}=p_{i}-|\mathcal{N}_{i}| \widetilde{p}. \tag{20}\]
_Then, the eigenvalues of \(\mathbf{J}^{*}_{i}\) are of the form_
\[\rho=1-\eta_{d}\nu, \tag{21}\]
_where \(\nu\) is from the set:_
\[\nu\in\begin{cases}\left\{a,b,m\pm\sqrt{m^{2}-c}\right\}&\text{ if }|\mathcal{N}_{i}|>1\\ \left\{a,m\pm\sqrt{m^{2}-c}\right\}&\text{ if }|\mathcal{N}_{i}|=1.\end{cases} \tag{22}\]
_where \(m=(a+b)/2\)._
The proof of the result is given in Appendix A and builds on the linear analysis in [2]. The theorem above gives an exact characterization of the eigenvalues of the linear system in terms of the key parameters including the step sizes and kernel width.
### _Selecting the step size_
An immediate consequence of Theorem 1 is that it guides the selection of the step-sizes that ensure local stability. As described above, for local stability, we wish that \(|\rho|<1\) for all \(\rho\) in (21). The following provides necessary and sufficient conditions on \(\eta_{d}\) for this stability condition to occur.
**Corollary 1**.: _Under the conditions in Theorem 1, the spectral radius of the Jacobian satisfies \(\rho_{\text{max}}(\mathbf{J}^{*}_{i})<1\) if and only if:_
\[0<\eta_{d}<\begin{cases}\text{min}\left\{\frac{2}{a},\;\frac{2}{b},\;\frac{a+ b}{c}\right\}&\text{if }|\mathcal{N}_{i}|>1\\ \text{min}\left\{\frac{2}{a},\;\frac{a+b}{c}\right\}&\text{if }|\mathcal{N}_{i}|=1. \end{cases} \tag{23}\]
In particular, by choosing \(\eta_{d}\) small enough, we can always guarantee the system is locally stable when \(\lambda>0\) and \(\Delta_{i}>0\), meaning that there is at least some regularization and the true point mass exceeds the generated point mass. We can also derive a simple sufficient condition:
**Proposition 2** (Sufficient condition for stability).: _The training dynamics equation (12) are stable around equilibrium \(\widetilde{X}^{*}_{i}\) from equation (15) for all \(\Delta_{i}\in(0,p_{i})\) if,_
\[\eta_{d}<\frac{2}{\lambda},\qquad\text{and}\qquad\eta_{g}<\lambda\sigma^{2}. \tag{24}\]
The rest of the paper assumes equation (24) holds and derives convergence rates based on the choice of kernel width \(\sigma\).
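For reference, the eigenvalues of Theorem 1 and the step-size bound of Corollary 1 are easy to evaluate numerically. The helper below is our own sketch with arbitrary illustrative values (and assumes \(\Delta_{i}>0\)):

```python
import numpy as np

def local_spectrum(sigma, lam, eta_d, eta_g, p_i, p_tilde, n_gen):
    """Eigenvalues rho = 1 - eta_d * nu from Theorem 1 for one isolated region V_i."""
    mu      = eta_g / eta_d
    delta_i = p_i - n_gen * p_tilde                               # equation (20)
    a, b, c = lam, mu * p_tilde * delta_i / (lam * sigma ** 2), mu * p_tilde * p_i / sigma ** 2
    m       = (a + b) / 2
    nus     = [a, m + np.sqrt(complex(m ** 2 - c)), m - np.sqrt(complex(m ** 2 - c))]
    if n_gen > 1:
        nus.append(b)
    rhos = [1 - eta_d * nu for nu in nus]
    eta_d_max = min(2 / a, (a + b) / c) if n_gen == 1 else min(2 / a, 2 / b, (a + b) / c)
    return rhos, max(abs(r) for r in rhos), eta_d_max             # eigenvalues, rho_max, Cor. 1 bound

rhos, rho_max, eta_bound = local_spectrum(sigma=0.3, lam=1.0, eta_d=0.05, eta_g=0.05,
                                          p_i=0.5, p_tilde=0.25, n_gen=1)
print(rho_max < 1, eta_bound)   # locally stable iff 0 < eta_d < eta_bound (Corollary 1)
```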
## IV Convergence Rate and Kernel Width
Theorem 1 can also provide insights into the relation of the convergence rate to the system parameters. Specifically, recall from equation (18) that the spectral radius, \(\rho_{\text{max}}(\mathbf{J}^{*}_{i})\) defined in equation (17), determines the convergence rate of the local dynamics, i.e., \(\rho_{\text{max}}\) closer to 1 indicating slower convergence and \(\rho_{\text{max}}\) closer to 0 indicates faster convergence. Now, among the values in (21), the \(\rho\) that maximizes \(|\rho|\) will be one of three values:
\[\rho_{a}=1-\eta_{d}a,\;\rho_{b}=1-\eta_{d}b,\;\rho_{c}=1-\eta_{d}(m-\sqrt{m^{2 }-c}), \tag{25}\]
where \(m=(a+b)/2\). The cases when the different values dominate are shown in Figure 1.
It is clear from equation (25) that controlling the dominant eigenvalue by adjusting the relevant hyperparameters can improve the rate of convergence.
**Remark 2** (Saturation with Kernel Width).: We now share a phase of the dynamical system where changing \(\sigma\) does not affect the convergence rate. Consider the dynamics (12) with fixed \(\widetilde{p}\), and \(|\mathcal{N}_{i}|\). Furthermore, assume \(\eta_{d}\) is fixed such that the condition from Corollary 1 is satisfied. Then changing the kernel width parameter \(\sigma^{2}\) cannot improve the convergence rate in the following settings:
* When all eigenvalues are real and \(\rho_{a}\) dominates. This condition is equivalent to \(c<\frac{1}{4}(a+b)^{2}\) and \(a<\min\;\left(b,m-\sqrt{m^{2}-c}\right)\).
* When two eigenvalues are complex and \(\rho_{a}\) dominates. Equivalently \(c>\frac{1}{4}(a+b)^{2}\) and \(a<\text{min}\left\{b,2m-\eta_{d}c\right\}\).
### _Diminishing learning rate_
One example regime in which this saturation can clearly be understood is when the learning rate is small, \(\Delta_{i}\) is positive, and \(\rho_{c}\) is complex. When the learning rate is small, the magnitude for any eigenvalue of the form \(\rho_{\nu}=1-\eta_{d}\nu\) can be approximated by \(|\rho_{\nu}|^{2}\approx 1-2\eta_{d}\text{Re}(\nu)+\mathcal{O}(\eta_{d}^{2})\). This means that we have eigenvalues with approximate magnitudes:
\[|\rho_{a}|^{2} \approx 1-2\eta_{d}\lambda \tag{26a}\] \[|\rho_{b}|^{2} \approx 1-2\eta_{d}\frac{\mu\widetilde{p}\Delta_{i}}{\lambda\sigma^{2}}\] (26b) \[|\rho_{c}|^{2} \approx 1-\eta_{d}(\lambda+\frac{\mu\widetilde{p}\Delta_{i}}{ \lambda\sigma^{2}}) \tag{26c}\]
This yields that the largest squared eigenvalue magnitude is approximately
\[1-2\eta_{d}\cdot\text{min}\left\{\lambda,\frac{\mu\widetilde{p}\Delta_{i}}{ \lambda\sigma^{2}}\right\} \tag{27}\]
Thus, reducing the kernel width \(\sigma\) below \(\sqrt{\mu\widetilde{p}\Delta_{i}}/\lambda\) does not lead to changes in the convergence rate \(1-2\eta_{d}\lambda\).
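The saturation is easy to see by tabulating the approximate rate in equation (27) over a range of kernel widths; the small sketch below uses arbitrary illustrative values:

```python
lam, eta_d, eta_g, p_tilde, delta_i = 1.0, 1e-2, 1e-2, 0.25, 0.25   # illustrative values
mu = eta_g / eta_d
for sigma in [2.0, 1.0, 0.5, 0.25, 0.1, 0.05]:
    rate = 1 - 2 * eta_d * min(lam, mu * p_tilde * delta_i / (lam * sigma ** 2))
    # the rate stops improving once sigma is at or below sqrt(mu * p_tilde * delta_i) / lam = 0.25
    print(sigma, rate)
```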
## V Numerical Results
In this section, we demonstrate the accuracy of our linearized dynamics by comparing predicted convergence to actual GAN training behavior around local equilibrium.
_Phase transitions._ In Figure 2, we plot the heatmap of the dominating eigenvalue magnitude for a range of regularization and kernel width settings. Note that in this figure we use a small learning rate (\(\eta_{d}=\eta_{g}=10^{-2}\)), meaning firstly that the system is stable for almost all choices of hyperparameters (Figure 2a). In the middle plot (Figure 2b), it can be observed that the majority of fast convergence behaviors occur when \(\rho_{c}\) has imaginary components. In order to analytically find this region, the condition \(m^{2}<c\) provides a quadratic inequality in terms of \(\gamma=1/\sigma^{2}\), whose roots tell us the exact ranges of kernel widths. When \(\Delta=0\), the condition is \(\gamma>\frac{\lambda^{2}}{4\mu\widetilde{p}p}\), meaning a small enough kernel width will always result in oscillatory behavior. When \(\Delta\neq 0\), \(\gamma\) must lie between the two roots \(\frac{\lambda^{2}}{\mu\widetilde{p}\Delta^{2}}\left(2p-\Delta\pm 2p\sqrt{1-\Delta/p}\right)\), meaning extremely small or extremely large kernel widths will have no oscillations. Lastly, for the right plot (Figure 2c), we highlight the range of kernel widths that, for a given regularization strength, do not affect the convergence rate (Remark 2). For positive \(\Delta\) (more target mass than generated), this region intuitively begins where \(\rho_{a}=\rho_{b}\): as the kernel width shrinks further, the magnitudes of both \(\rho_{b}\) and \(\rho_{c}\) shrink, leaving \(\rho_{a}\) fixed and dominating.
In Figure 3, we observe that our approximation matches the true training dynamics very precisely when the learning rate is small. Note that in the small learning rate regime, Corollary 1 correctly predicts stability for all simulations. In this setting, decreasing the kernel width and increasing regularization can speed up the convergence of the generated point. However, when regularization is small, the effect of kernel width is negligible, as predicted by the large saturation region in Figure 2.
## VI Conclusion
In this paper we consider a stylized analysis of GAN training using gradient descent ascent. We assumed that the generator was unconstrained and the discriminator was a kernel model (or equivalently a wide neural network in the kernel regime). The analysis uncovers the role of (i) kernel width
(or equivalently network depth), (ii) regularization, and (iii) learning rates of generator and discriminator, on the stability and local convergence rate of the dynamics. |
2302.03385 | NeuronsGym: A Hybrid Framework and Benchmark for Robot Tasks with
Sim2Real Policy Learning | The rise of embodied AI has greatly improved the possibility of general
mobile agent systems. At present, many evaluation platforms with rich scenes,
high visual fidelity and various application scenarios have been developed. In
this paper, we present a hybrid framework named NeuronsGym that can be used for
policy learning of robot tasks, covering a simulation platform for training
policy, and a physical system for studying sim2real problems. Unlike most
current single-task, slow-moving robotic platforms, our framework provides
agile physical robots with a wider range of speeds, and can be employed to
train robotic navigation and confrontation policies. At the same time, in order
to evaluate the safety of robot navigation, we propose a safety-weighted path
length (SFPL) to improve the safety evaluation in the current mobile robot
navigation. Based on this platform, we build a new benchmark for navigation and
confrontation tasks under this platform by comparing the current mainstream
sim2real methods, and hold the 2022 IEEE Conference on Games (CoG) RoboMaster
sim2real challenge. We release the codes of this
framework\footnote{\url{https://github.com/DRL-CASIA/NeuronsGym}} and hope that
this platform can promote the development of more flexible and agile general
mobile agent algorithms. | Li Haoran, Liu Shasha, Ma Mingjun, Hu Guangzheng, Chen Yaran, Zhao Dongbin | 2023-02-07T10:45:20Z | http://arxiv.org/abs/2302.03385v1 | # NeuronsGym: A Hybrid Framework and Benchmark for Robot Tasks with Sim2Real Policy Learning
###### Abstract
The rise of embodied AI has greatly improved the possibility of general mobile agent systems. At present, many evaluation platforms with rich scenes, high visual fidelity and various application scenarios have been developed. In this paper, we present a hybrid framework named NeuronsGym that can be used for policy learning of robot tasks, covering a simulation platform for training policies, and a physical system for studying sim2real problems. Unlike most current single-task, slow-moving robotic platforms, our framework provides agile physical robots with a wider range of speeds, and can be employed to train robotic navigation and confrontation policies. At the same time, in order to evaluate the safety of robot navigation, we propose a safety-weighted path length (SFPL) to improve the safety evaluation in current mobile robot navigation. Based on this platform, we build a new benchmark for navigation and confrontation tasks by comparing the current mainstream sim2real methods, and hold the 2022 IEEE Conference on Games (CoG) RoboMaster sim2real challenge1. We release the codes of this framework2 and hope that this platform can promote the development of more flexible and agile general mobile agent algorithms.
Footnote 1: [https://ieee-cog.org/2022/cog_sim2real/index.html](https://ieee-cog.org/2022/cog_sim2real/index.html)
Footnote 2: [https://github.com/DRL-CASIA/NeuronsGym](https://github.com/DRL-CASIA/NeuronsGym)
## I Introduction
Navigation and decision-making are basic abilities of general mobile intelligent systems in the physical world. Although this field has a long history of research, classic navigation algorithms and SLAM-based decision methods are still vulnerable to perception noise and environment changes. In recent years, with the rise of lightweight deep neural networks[1][2] and embodied AI[3][4][5], learning-based navigation and decision-making algorithms have become more robust in complex environments[6][7][8]. Since these methods usually require a large amount of trial-and-error data, a high-efficiency simulator is necessary for training the algorithms, due to the safety risk and execution cost of the physical system.
In the past few years, we have seen a lot of work on building more visually realistic environments and more flexible robot models. At present, many benchmarks and platforms for evaluating algorithms have also been developed. For the navigation task, there are simulation platforms such as MINOS[9], Habitat[4], Gibson Env[10] and AI2-THOR[11], and hybrid frameworks combining a simulator and a physical robot such as RoboTHOR[12], Duckietown[13] and DeepRacer[14]. There are also platforms for other tasks, such as Robosuite[15] for robot manipulation, the VSS-RL platform[16] for robot soccer, etc. Currently, most platforms target a single specific task, which limits the expansion ability of agents to some extent, while general mobile intelligent systems usually need to complete different tasks on the same physical platform. Moreover, since most popular mobile robots use differential wheels that limit the maximum speed to \(0.5m/s\)[17], it is difficult to fully study the sim2real gap caused by dynamic differences with such slow robots. In addition, as one of the most popular robot competitions, the RoboMaster competitions[18] lack a suitable simulation environment for training intelligent algorithms, so the current competitive strategies are still limited to classic robotics. Therefore, we need a more flexible and agile platform to expand the learning and transfer capabilities of agents.
Reasonable evaluation is the key to improving the effective ability of agents. Success Rate (SR)[17][19] and Success weighted by (normalized inverse) Path Length (SPL)[3][12] are popular evaluation metrics for robot navigation. Although these indicators can effectively evaluate the navigation efficiency of the robot, they ignore safety during the navigation process. Especially for a physical system, safety is more important than navigation efficiency. The number of collisions is used as the crucial metric to evaluate safety in previous works[20][21]. However, when a policy has a specified collision probability, the count of collisions is larger for a long-path task, and vice versa. Therefore, for scenarios with different path lengths, it is difficult to accurately evaluate the navigation safety of an algorithm with the number of collisions. Moreover, when the robot slides along a wall, the number of collisions hardly reflects the real safety. However, this sliding phenomenon is very common in navigation policies[22][23].
Considering the above problems, we build a mobile robot training and evaluating framework including navigation and confrontation tasks. Unlike most current platforms, which only contain a single task, the designed framework NeuronsGym contains navigation tasks, confrontation tasks and the combination of navigation and confrontation tasks, which can expand the ability of agents to a greater extent. Furthermore, we enrich the mobile robot navigation performance evaluation system to highlight safer and more efficient robot policies. Specifically, the contributions of this paper are as follows:
* We build a mobile robot training and evaluating framework including navigation and confrontation tasks. In the simulation environment of this framework, we build detailed friction force models to provide more accurate
dynamics simulation, and provide multiple sensor models to better simulate real data for supporting the diverse policies. In the physical system of this framework, we build the similar robot and task environments as the simulation, which can directly deploy and verify the policy trained in the simulation environment.
* SFPL, which can be used to evaluate the safety of robot navigation. Compared with the current commonly used number of collisions, it can effectively compare the physical safety of algorithms under various path lengths by weighting collisions and path lengths.
* Based on this platform, we compose a new benchmark for navigation and confrontation tasks, and compare the performance of various sim2real algorithms. Moreover, we hold the 2022 IEEE CoG RoboMaster sim2real challenge to promote the development of flexible and agile agent algorithms.
## II Related Work
In this paper, we propose a mobile robot training and evaluation platform including navigation and confrontation tasks to study learning-based robot policies and sim2real problems, and propose a metric to evaluate the safety of robot navigation. In this section, we review the development of policy training platforms for navigation and confrontation tasks, and the related platforms used to study sim2real problems. In addition, we discuss the evaluation metrics of robot navigation algorithms.
### _Platforms for Navigation Task_
Learning-based mobile robot navigation has a long history of development. It removes the dependence on a map by directly establishing a mapping from states to actions. In recent years, with the rise of embodied AI, navigation methods based on visual sensors have emerged continuously[6][7][24][25], and corresponding platforms have emerged with them. For example, Gazebo[26] has high fidelity for robot simulation, but it is not suitable for trial-and-error training of agents due to its low simulation frame rate[9]. ViZDoom[27][28] and DeepMind Lab[29] also attract some attention in the field of visual navigation due to their excellent simulation efficiency. However, their stylized maze environments and simplified robot properties make them unsuitable for indoor mobile robot navigation. The MINOS[9], Habitat[4], Gibson Env[10], AI2-THOR[11] and other environments developed in recent years have high visual fidelity for both the environment and the simulated robot.
The above platforms are limited to research in the virtual environment, and do not configure a physical robot to study the transfer from simulation to reality. RoboTHOR[12] configures a LoCoBot[30] robot on the basis of AI2-THOR to facilitate the transfer of visual perception and navigation algorithms trained in the simulation environment to real robot tasks. The Sim2Real Challenge[31] held at the 2020 CVPR uses the iGibson simulator and a LoCoBot robot to study how visual navigation algorithms can be more efficiently transferred from the simulation to the real environment. In addition, DeepRacer[14] and Duckietown[13] provide open-source, low-cost, standardized hardware platforms. Combined with the tracking navigation task, they are used to study the impact of the transfer of visual perception and dynamics from the simulation to the real environment. Similar to these works, our platform also provides a visual sensor and dynamic parameter interface for studying the reality gap through joint perception and dynamics. Compared with the maximum moving speed limit of \(0.5m/s\) for most indoor mobile robots[30][17], we provide a wider speed interval to study the sim2real gap caused by different execution speeds.
Metrics are crucial for evaluating algorithm performance. In current works[9][17] studying learning-based navigation algorithms, SR is one of the most commonly used indicators to evaluate the performance of a navigation algorithm. There are other similar indicators, such as timeout[32], which are used to indirectly evaluate the robot's ability to reach the target. Since these indicators hardly reflect the navigation process, navigation efficiency can also be evaluated with indicators such as path length[12], average speed and consumed time. Moreover, there are also indicators that combine SR with navigation efficiency, such as SPL[3][21] and Success weighted Completion Time (SCT)[33]. In addition to navigation capability, navigation safety is a more important property in physical systems[34], since it directly affects the health of robot systems, and even affects the safety of navigation participants (such as pedestrians). In indoor mobile robot navigation systems, the number of collisions and the collision rate[32] are the main indicators used to evaluate the safety of the system. Similar to SR, they only summarize the overall safety outcome, and it is difficult for them to describe what happened during the navigation process. The number of collisions as an indicator of safety assessment also faces some problems. For example, when a policy has a specified collision probability, the count of collisions is larger for a long-path task and vice versa. Therefore, for scenarios with different path lengths, it is difficult to accurately evaluate the navigation safety of an algorithm with the number of collisions. Another dilemma is how to judge the occurrence of a collision. Commonly, one collision means that the robot contacts or approaches an obstacle and then leaves. However, in many robot navigation policies, the phenomenon of sliding against obstacles appears, whose safety is difficult to evaluate with the number of collisions. Therefore, we need a more reasonable navigation safety evaluation metric to build more secure navigation agents.
### _Platforms for Confrontation Task_
AI technology has achieved milestone development in the field of games[35][36][37]. The emergence of confrontation platforms and the development of game methods almost complement each other. For example, the emergence of real-time strategy game platforms such as StarCraft[35] and Honor of Kings[36] also greatly promotes the related research. Compared with virtual games and two-person sports[38], MuJoCo Soccer[39] and Google Research Football[40] are closer to real robot sports. However, they are not suitable for sim2real research, because they either do not consider important physical
and dynamic aspects, or represent very complex scenes that cannot be realized by current robot technology. In the robot confrontation field, with the promotion of DJI, the RoboMaster series[18], represented by the RoboMaster University Challenge (RMUC), the RoboMaster University AI Challenge (RMUA) and the RoboMaster Young Challenge (RMYC), have become some of the most popular robot competitions in recent years. Their unique confrontation form has inspired the rise of young engineers. However, the current RoboMaster series is still focused on classic robot technologies such as mechanical design, embedded development and control algorithm design, and rarely involves learning-based policies and sim2real problems. The proposed platform is redesigned on the basis of the RMUA and RMYC competitions to simplify the task and highlight the research on reinforcement learning algorithms and the sim2real problem. The VSSS-RL[16] framework proposed for the IEEE Very Small Size Soccer (VSSS) aims at the reinforcement learning algorithm and sim2real problem in robot soccer. It not only provides a simulation environment, but also provides a physical robot platform and a real environment. Compared with the clean small football field used as the arena in this platform, our confrontation platform contains fixed obstacles and randomly generated obstacles, which makes task execution more difficult due to the added uncertainty. Moreover, compared with ball control and shooting in soccer, our confrontation platform has a mechanism of damage and health, and the rich state space can elicit more diverse decision-making behaviors. In addition, in terms of confrontation robots, we provide abundant sensors including cameras and LiDAR (Light Detection and Ranging), and more complex dynamic models including moving and shooting.
### _Methods and Platforms for Sim2Real Problems_
Transferring a policy trained in the simulation environment to the real environment is a challenging task[41][42], since the gap between the simulation and the real world degrades the performance. There have been many studies on the sim2real gap. Some works mainly solve the problem of model generalization caused by the distribution shift between virtual data and real data[43][44][45]. This kind of method is not the focus of this paper. Other works mainly solve the problem of model performance degradation caused by the dynamic difference between the simulated robot and the real robot. The common methods for this kind of problem include system identification[46][47], domain randomization[48][49][50][51], action transformation[52][53], and so on. Due to the lack of a suitable physical platform, some research relies only on sim2sim methods within the simulation environment to indirectly study the sim2real problem[54][55]. This approach simplifies the complexity of the problem, while ignoring the influence of uncontrolled factors such as control delay, and observation and execution noise after transfer. In addition, there are many works combining the simulation environment with physical robots, for example, the grasping operation of a robot arm[12], manipulation of a dexterous hand[56], motion control of quadruped robots[57][58][59][60], etc. Moreover, many sim2real related competitions have been held, such as the aforementioned 2020 CVPR sim2real challenge, the 2021 NeurIPS AI Driving Olympics[61] based on the Duckietown platform, and the 2022 ICRA sim2real challenge[62] for mobile grasping robots. It should be pointed out that although a simulation environment and physical robots are used in the ICRA sim2real challenge, the motivation of this challenge is still classical robotics based on traditional planning and control algorithms. Therefore, it differs significantly from the sim2real problem discussed in this paper, which focuses on learning-based algorithms.
## III NeuronsGym
Unlike most current platforms, which contain only a single task, we present a hybrid framework named NeuronsGym, designed to train learning-based robot policies and to evaluate the transferability and generalization of algorithms. In this framework, we provide a simulator containing a robot and an arena, as well as a corresponding physical system. We design three tasks, robot navigation, robot confrontation and a combined task, together with corresponding metrics, for training and evaluating robot policies. In the simulation environment, we configure a variety of sensors and rich robot dynamics interfaces to support research on sim2real issues. An overview of the framework is shown in Fig. 1.
### _Tasks_
As shown in Fig. 1, we have designed a robot navigation task and a robot confrontation task in NeuronsGym. Users of the environment can use the navigation task or the confrontation task alone, or a combination of the two tasks.
#### Iii-A1 Robot Navigation Task
Similar to the PointGoal[3] navigation task, the robot starts from a randomly generated starting pose and navigates to the specified goal using only its sensor data. In our task setting, the position of the goal is also randomly generated. Different from the PointGoal task, a goal counts as reached only when the robot is within a certain distance and heading deviation of it, which we call "activation": in our environment settings, the distance from the goal to the agent must be less than 1.0\(m\), and the deviation angle between the robot orientation and the line to the goal must be less than 30 degrees. The agent needs to activate 5 goals in alphabetical order to complete the navigation task. Note that our goals are not virtual points but obstacle blocks with collision attributes. Therefore, in each navigation trial, the layout of the arena differs due to the random placement of the goal blocks.
#### Iii-A2 Robot Confrontation Task
Compared with the navigation task, the confrontation task is more like a robot version of "Battle City". It requires two robots: the red robot is controlled by the built-in agent, and the blue robot is controlled by the designed agent. Each robot has a health point (HP) attribute. By firing bullets, a robot can hit the sensing devices mounted on its opponent to reduce the opponent's HP. Similar to the navigation task, each robot starts from a randomly generated starting pose. The difference is that there is no explicit goal in the confrontation task; the agent needs to explore and find an appropriate pose and shooting time by itself. Moreover, in this task, in addition to its sensor data, the agent can also obtain the position, remaining HP and remaining bullets of the opponent. Each robot has 800 HP and 24 bullets at the initial moment, and loses 100 HP each time it is hit. When the HP of one robot decreases to 0, that robot "dies" and the surviving robot wins. When the confrontation lasts for more than 3 minutes and no robot is "killed", the robot with the higher remaining HP wins. When both robots are "killed" at the same time, the result is a draw.
#### Iii-A3 Robot Combined Task
In addition to being used alone, the navigation and confrontation tasks can be combined into a more complex combined task. In this combined task, the agent needs to complete the navigation task before the confrontation task is activated. While the agent carries out the navigation task, the red robot remains stationary in the arena, but if the blue robot collides with it, it slides due to the external force. The agent needs to complete both tasks within 3 minutes while avoiding collisions with obstacles as much as possible. The evaluation metric of this task is given later.
### _Simulation Platform_
In the NeuronsGym framework, we build a simulation platform based on Unity3D to train agent policies. This simulation platform includes the arena required by different tasks, the simulation robot models, the controllers, and a variety of sensors.
#### Iii-B1 Arena
The layout of the arena in the simulator is borrowed from the RMUA. The arena is a \(5m\times 8m\) rectangular area containing obstacles of two heights, \(10cm\) and \(20cm\). The \(20cm\) obstacles block both the camera and the LiDAR fields of view, while the \(10cm\) obstacles block only the LiDAR field of view. Around the obstacles, we arrange visual tags to provide richer visual features to the agents. In addition, we place 5 blocks with a height of \(20cm\) and a width of \(10cm\) in the field as the goals to be activated, and paste the letters A to E on them.
Considering that the rendering efficiency of the model can greatly affect the simulation efficiency, we construct two arena models. The first is a simple model based on Unity3D built-in geometry units, as shown in Fig. 2. Since only the basic models built into Unity3D are used, collision detection and image rendering are highly efficient in this model. We do not model the background outside the arena, so the images collected with this model differ from those collected by the physical robot. The second arena model is built with 3D reconstruction software: we pre-capture a large number of high-quality images of the physical arena, import them into RealityCapture[63], and reconstruct the arena by matching feature points across the images. In the 3D reconstruction, a large number of detailed meshes are used to restore the physical arena as faithfully as possible, which makes collision detection and rendering in Unity3D comparatively slow. Users can select an arena model according to the efficiency and characteristics of their algorithm.
#### Iii-B2 Robot
The simulation robot is based on the infantry form of RoboMaster EP. The overall architecture of the simulation robot is shown in Fig. 3. Since the RoboMaster EP is a commercial product and its mechanical drawings are not open
Fig. 1: Overview of the hybrid framework - NeuronsGym. The framework is composed of simulation and physical system. Agents can interact with simulation systems or physical systems through communication protocols to achieve agent training or evaluation. The agent policy can access the parameter manager to adjust parameters of the robot model or environment in the simulation system. In addition, the same scenario and task are set in each system to study sim2real of the robot policy.
Fig. 2: Simulation arenas. The left is the arena established with the built-in geometric modules in Unity3D, and the right is the arena built by RealityCapture from real arena images.
source, we model the core mechanisms of the robot, including the chassis with Mecanum wheels, the two-degree-of-freedom gimbal, and the firing unit. For the chassis motion simulation, considering simulation efficiency, we do not run a fully dynamical simulation in which the friction between the wheel rollers and the ground drives the robot; instead, we directly set the robot's linear and angular velocities. To keep the motion characteristics of the simulator consistent with the physical robot, we build a mathematical model for the simplified part of the speed calculation, so that factors such as friction parameters, motor characteristics and control parameters still influence the speed.
Specifically, for any of the wheels \(i\), when the current inputting to the drive motor is \(I_{i}(t)\), the wheel rotational speed \(\omega_{i}(t+1)\) at the next time can be calculated according to the current wheel rotational speed \(\omega_{i}(t)\)
\[\omega_{i}(t+1)=\omega_{i}(t)+\int_{t}^{t+1}\frac{C_{T}I_{i}(\tau)-4F_{f_{i}}( \tau)r_{w}/M}{\rho_{w}}d\tau. \tag{1}\]
Here \(r_{w}\) and \(\rho_{w}\) are the radius and the rotational inertia of the wheel, respectively. \(C_{T}\) is the motor characteristic parameter. \(M\) is the mass of the robot. Here \(F_{f_{i}}\) is the friction between the wheel \(i\) and the ground. It should be noted that when the wheel changes from motionless to rotating state, the friction will change from static friction to dynamic friction. The process is calculated as follows
\[F_{f_{i}}(\tau)=\begin{cases}C_{T}I_{i}(\tau),&|\omega_{i}|\leq\omega_{e}\\ f_{d_{i}}.&|\omega_{i}|>\omega_{e}\end{cases} \tag{2}\]
Here \(\omega_{e}\) is the threshold and \(f_{d_{i}}\) is the dynamic friction term, which encodes both the magnitude and the direction of the dynamic friction and is composed of the sliding friction \(f_{v}\) and the rolling friction \(f_{\perp}\) of the roller. When the velocity direction of the robot changes, the directions of the sliding and rolling friction also change. The robot consists of two groups of wheels whose roller orientations differ, so the friction directions differ between the groups. Specifically, when the velocity direction of the robot is \(\theta_{v}\), the friction on each wheel can be analyzed as shown in Fig. 4. For the right front wheel \(i=1\) and the left rear wheel \(i=3\), the friction is
\[f_{d_{i}}=\begin{cases}-f_{\perp}\sin(\theta_{v}-\frac{\pi}{4})+f_{v}\cos(\theta_{v}-\frac{\pi}{4}),&-\frac{\pi}{4}\leq\theta_{v}\leq\frac{\pi}{4}\\ f_{\perp}\sin(\theta_{v}-\frac{\pi}{4})+f_{v}\cos(\theta_{v}-\frac{\pi}{4}),&\frac{\pi}{4}\leq\theta_{v}\leq\frac{3\pi}{4}\\ f_{\perp}\sin(\theta_{v}-\frac{\pi}{4})-f_{v}\cos(\theta_{v}-\frac{\pi}{4}),&\frac{3\pi}{4}\leq\theta_{v}\leq\frac{5\pi}{4}\\ -f_{\perp}\sin(\theta_{v}-\frac{\pi}{4})-f_{v}\cos(\theta_{v}-\frac{\pi}{4}),&\frac{5\pi}{4}\leq\theta_{v}\leq\frac{7\pi}{4}\end{cases} \tag{3}\]
For the left front wheel \(i=2\) and the right rear wheel \(i=4\), the calculation process is slightly different since the arrangement direction of their roller is different from the previous wheels \(i=1,3\). Specifically, it can be calculated by the following
\[f_{d_{i}}=\begin{cases}f_{\perp}\cos(\theta_{v}-\frac{\pi}{4})-f_{v}\sin( \theta_{v}-\frac{\pi}{4}),&-\frac{\pi}{4}\leq\theta_{v}\leq\frac{\pi}{4}\\ f_{\perp}\cos(\theta_{v}-\frac{\pi}{4})+f_{v}\sin(\theta_{v}-\frac{\pi}{4}),& \frac{\pi}{4}\leq\theta_{v}\leq\frac{3\pi}{4}\\ -f_{\perp}\cos(\theta_{v}-\frac{\pi}{4})+f_{v}\sin(\theta_{v}-\frac{\pi}{4}),&\frac{3\pi}{4}\leq\theta_{v}\leq\frac{5\pi}{4}\\ -f_{\perp}\cos(\theta_{v}-\frac{\pi}{4})-f_{v}\sin(\theta_{v}-\frac{\pi}{4}),&\frac{5\pi}{4}\leq\theta_{v}\leq\frac{7\pi}{4}\\ \end{cases} \tag{4}\]
Fig. 4 shows the detailed force analysis of the two groups of wheels under the four conditions. In the above calculation we use the velocity direction of the robot, i.e., the direction of the linear velocity of the contact point between the wheel and the ground, which results from superposing the wheel rotation and the robot angular velocity. The actual velocity direction of each wheel can be obtained with the law of cosines; the calculation is straightforward, so we do not detail it here.
It should be noted that, although the above modeling simplifies some special cases, such as wheel slippage and the linear approximation of the motor characteristic curve, our experimental results show that the model simulates non-extreme motion well.
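The wheel dynamics of Eqs. (1)-(4) can be summarized in a short numerical sketch. The code below is only an illustration of the update rule, not the platform's implementation; all parameter values (\(C_{T}\), \(r_{w}\), \(\rho_{w}\), \(M\), \(f_{v}\), \(f_{\perp}\), \(\omega_{e}\)) are placeholders chosen for demonstration.

```python
import numpy as np

# Placeholder physical parameters (illustrative values, not the platform's).
C_T = 0.3        # motor characteristic (torque per unit current)
r_w = 0.05       # wheel radius [m]
rho_w = 2e-3     # wheel rotational inertia [kg*m^2]
M = 3.5          # robot mass [kg]
f_v, f_perp = 2.0, 0.5   # sliding / rolling friction magnitudes
omega_e = 1e-2   # static/dynamic friction switching threshold [rad/s]

def dynamic_friction(theta_v: float, wheel: int) -> float:
    """Signed dynamic friction of Eqs. (3)-(4) for wheel 1..4, given the
    robot velocity direction theta_v (rad)."""
    s = np.sin(theta_v - np.pi / 4)
    c = np.cos(theta_v - np.pi / 4)
    # Index of the quarter-turn interval [-pi/4, pi/4), [pi/4, 3pi/4), ...
    quadrant = int(((theta_v + np.pi / 4) % (2 * np.pi)) // (np.pi / 2))
    if wheel in (1, 3):   # right-front / left-rear roller orientation, Eq. (3)
        table = [-f_perp * s + f_v * c, f_perp * s + f_v * c,
                 f_perp * s - f_v * c, -f_perp * s - f_v * c]
    else:                 # left-front / right-rear roller orientation, Eq. (4)
        table = [f_perp * c - f_v * s, f_perp * c + f_v * s,
                 -f_perp * c + f_v * s, -f_perp * c - f_v * s]
    return table[quadrant]

def step_wheel(omega: float, current: float, theta_v: float, wheel: int,
               dt: float = 0.01) -> float:
    """One Euler step of Eq. (1) with the friction switching rule of Eq. (2)."""
    if abs(omega) <= omega_e:
        F_f = C_T * current                     # (quasi-)static regime, Eq. (2)
    else:
        F_f = dynamic_friction(theta_v, wheel)  # dynamic regime
    return omega + dt * (C_T * current - 4 * F_f * r_w / M) / rho_w

# Example: accelerate wheel 1 from rest while the robot moves straight ahead.
omega = 0.0
for _ in range(100):
    omega = step_wheel(omega, current=1.0, theta_v=0.0, wheel=1)
print(f"wheel speed after 1 s: {omega:.2f} rad/s")
```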
In the gimbal system, we can control the pitch and yaw angles; the modeling of this system follows previous work[64]. Note that, to reduce the difficulty of control during confrontation, the current interface does not expose gimbal angle control: the gimbal angle is frozen and kept aligned with the robot orientation.
In the firing unit simulation, the bullet firing mechanism and the strike sensing mechanism are mainly considered. Since
Fig. 4: Friction force analysis of the robot wheel.
Fig. 3: Robot system architecture.
laser beam firing is used in the physical system, we ignore the time-of-flight delay and the ballistic bending phenomenon during the simulation and consider only the actuation delay of the firing mechanism.
Similar to the physical robot strike sensing mechanism, we have arranged armors around the chassis of the simulated robot to determine whether it has been hit by constantly detecting whether it receives a collision with the laser beam emitted by the firing mechanism. This part is done with the help of the collision detection function of the Unity3D engine.
Accurate and reliable collision detection is a prerequisite for developing safe navigation and control algorithms. The collision detection in the environment relies mainly on the Unity3D engine. Meanwhile, in order to simplify the computational complexity of the collision detection, the mechanical model of the robot is simplified, and its collision detection model contains only two basic geometric units, a simple cylinder and a cube, which greatly improves the efficiency of the collision detection computation.
#### Iii-B3 Sensors
In the simulator, we mainly provide three kinds of sensors: LiDAR, odometer and camera. In addition, the speed, pose, collision and other information of the robot can be directly obtained through Unity3D.
**LiDAR** We build a virtual LiDAR with a single scan line. Its scanning angle range is \([-135,135]\) degrees with an angular resolution of 4.5 degrees, so each frame contains 61 scanning points. In the simulator, we can obtain the exact distance from the LiDAR mounting position to obstacles; however, real LiDAR measurements usually contain outliers and measurement noise. We therefore model the LiDAR noise with a Gaussian distribution. For any distance measurement noise \(n_{i}^{L}\), its probability density is
\[p(n_{i}^{L})=\frac{1}{\sqrt{2\pi}\sigma_{l}}\exp\left(-\frac{(n_{i}^{L}-\mu_{l})^{2}}{2\sigma_{l}^{2}}\right). \tag{5}\]
Here \(\mu_{l}\) and \(\sigma_{l}\) are the mean and standard deviation of the Gaussian distribution, respectively. Besides noisy returns, anomalous data can arise when no reflection is received, for example because of the material of the surrounding environment or special incidence angles between the laser and the object. Considering the randomness of anomalous data, we model LiDAR anomalies with a Poisson distribution. For any frame of LiDAR data, the probability of \(k\) abnormal points is
\[p(k)=\frac{\lambda^{k}}{k!}\exp(-\lambda),k=0,1,\cdots \tag{6}\]
where \(\lambda\) is the Poisson distribution parameter. After the number of abnormal data is determined by sampling, we randomly select data from the point cloud and process them as abnormal values.
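A minimal sketch of this sensor model is shown below, assuming NumPy. The parameter values (\(\mu_{l}\), \(\sigma_{l}\), \(\lambda\), the maximum range, and the choice of reporting outliers at maximum range) are our own illustrative assumptions rather than the platform's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lidar(true_ranges, mu_l=0.0, sigma_l=0.02, lam=1.5, max_range=12.0):
    """Corrupt ideal LiDAR ranges with Gaussian noise (Eq. 5) and a
    Poisson-distributed number of anomalous returns per frame (Eq. 6)."""
    ranges = np.asarray(true_ranges, dtype=float)
    # Per-beam Gaussian measurement noise.
    noisy = ranges + rng.normal(mu_l, sigma_l, size=ranges.shape)
    # Number of outliers in this frame ~ Poisson(lam), capped at the beam count.
    k = min(rng.poisson(lam), ranges.size)
    idx = rng.choice(ranges.size, size=k, replace=False)
    # Treat outliers as dropped returns reported at the maximum range.
    noisy[idx] = max_range
    return noisy

# Example frame: 61 beams over [-135, 135] degrees at 4.5-degree resolution,
# all pointing at a wall 3 m away.
ideal = np.full(61, 3.0)
print(simulate_lidar(ideal)[:10])
```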
**Odometer** As a common position measurement sensor on most mobile robots, encoders are commonly used to generate odometer data. In the previous robot motion modeling, we maintain the rotational speed of each wheel. Through the Mecanum wheel kinematics model, we can get the velocity \((\tilde{v}_{x}(t),\tilde{v}_{y}(t),\tilde{v}_{w}(t))\) of the robot according to the wheel speeds.
\[\tilde{v}_{x}(t) =\frac{(\tilde{\omega}_{3}(t)+\tilde{\omega}_{4}(t))r_{w}}{2} \tag{7}\] \[\tilde{v}_{y}(t) =\frac{(\tilde{\omega}_{3}(t)-\tilde{\omega}_{1}(t))r_{w}}{2}\] \[\tilde{v}_{w}(t) =\frac{(\tilde{\omega}_{2}(t)-\tilde{\omega}_{3}(t))r_{w}}{(h+w)}\]
In the above equation, \(\tilde{\omega}_{i}(t)\) is the observation of the wheel rotational speed. \(h\) and \(w\) are the wheel base and wheel tread of the robot, respectively. Here we simulate the measurement process of the encoder with Gaussian noise. The measurement is
\[\tilde{\omega}_{i}(t)=\omega_{i}(t)+n^{e},n^{e}\sim\mathcal{N}(\mu_{e},\sigma _{e}) \tag{8}\]
where \(\mu_{e}\) and \(\sigma_{e}\) are the mean and standard deviation of the measurement noise, respectively. In Eq. (7), the robot velocity is computed from the wheel encoder measurements. By integrating this velocity, we obtain the simulated odometer measurement of the robot.
\[x(t) =\int_{0}^{t}\cos(\theta(\tau))\tilde{v}_{x}(\tau)-\sin(\theta( \tau))\tilde{v}_{y}(\tau)d\tau\] \[y(t) =\int_{0}^{t}\sin(\theta(\tau))\tilde{v}_{x}(\tau)+\cos(\theta( \tau))\tilde{v}_{y}(\tau)d\tau\] \[\theta(t) =\int_{0}^{t}\tilde{v}_{w}(t)d\tau\]
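The sketch below puts Eqs. (7)-(8) and the pose integration together, assuming NumPy and a simple Euler integration step; the geometry values (\(r_{w}\), \(h\), \(w\)) and noise parameters are placeholders, not the robot's actual ones.

```python
import numpy as np

rng = np.random.default_rng(0)
r_w, h, w = 0.05, 0.20, 0.22     # wheel radius, wheel base, wheel tread (placeholders)
mu_e, sigma_e = 0.0, 0.05        # encoder noise parameters of Eq. (8)

def body_velocity(wheel_speeds):
    """Forward kinematics of Eq. (7): body-frame velocities from the four
    (noisy) wheel rotational speeds [w1, w2, w3, w4]."""
    w1, w2, w3, w4 = wheel_speeds
    vx = (w3 + w4) * r_w / 2.0
    vy = (w3 - w1) * r_w / 2.0
    vw = (w2 - w3) * r_w / (h + w)
    return vx, vy, vw

def integrate_odometry(true_wheel_speeds, dt=0.01):
    """Accumulate the simulated odometer pose (x, y, theta) by integrating the
    encoder-measured body velocities, as in the integral equations above."""
    x = y = theta = 0.0
    for speeds in true_wheel_speeds:
        measured = [s + rng.normal(mu_e, sigma_e) for s in speeds]   # Eq. (8)
        vx, vy, vw = body_velocity(measured)
        x += (np.cos(theta) * vx - np.sin(theta) * vy) * dt
        y += (np.sin(theta) * vx + np.cos(theta) * vy) * dt
        theta += vw * dt
    return x, y, theta

# Example: 1 s of pure forward motion (all wheels at the same speed).
trajectory = [[10.0, 10.0, 10.0, 10.0]] * 100
print(integrate_odometry(trajectory))
```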
**Camera** Images are becoming the core state input of more and more methods, so our simulator also provides a camera sensor with image output. We mainly rely on the camera sensor built into the Unity3D engine and set its field of view and focal length so that the view matches the camera carried by the physical robot as closely as possible. Regarding image rendering fidelity, the two arena models described above allow a trade-off between rendering efficiency and image realism.
#### Iii-B4 Controller
The controller computes the current required by each motor from the control input. In our system, the motion-related control input is the desired robot velocity. In the simulator, each wheel is driven by a motor, and the input current of each motor is computed from the desired and actual rotational speeds of that wheel using a Proportional-Integral-Derivative (PID) controller. First, the desired rotational speeds \(\hat{\omega}_{i}(t)\) are computed from the kinematic model of the robot and the control input \((u_{x}(t),u_{y}(t),u_{w}(t))\) at the current time.
\[\hat{\omega}_{1}(t) =(u_{x}(t)-u_{y}(t)-u_{w}(t)\frac{(h+w)}{2})\frac{1}{r_{w}}\] \[\hat{\omega}_{2}(t) =(u_{x}(t)+u_{y}(t)+u_{w}(t)\frac{(h+w)}{2})\frac{1}{r_{w}}\] \[\hat{\omega}_{3}(t) =(u_{x}(t)+u_{y}(t)-u_{w}(t)\frac{(h+w)}{2})\frac{1}{r_{w}}\] \[\hat{\omega}_{4}(t) =(u_{x}(t)-u_{y}(t)+u_{w}(t)\frac{(h+w)}{2})\frac{1}{r_{w}}\]
Then PID is used to calculate the current \(I_{i}(t)\) corresponding to each wheel.
\[I_{i}(t)=k_{p}(\hat{\omega}_{i}(t)-\omega_{i}(t))+k_{i}\int_{0}^{t}( \hat{\omega}_{i}(\tau)-\omega_{i}(\tau))d\tau\] \[+k_{d}\frac{d(\hat{\omega}_{i}(\tau)-\omega_{i}(\tau))}{d\tau}\]
where \(k_{p}\), \(k_{i}\) and \(k_{d}\) are the proportional, integral and differential parameters, respectively.
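A compact sketch of this controller stage is given below: the inverse kinematics that produce the desired wheel speeds, followed by a per-wheel discrete PID loop yielding the motor current. The gains and geometry values are placeholders and the class interface is our own; only the formulas mirror the equations above.

```python
import numpy as np

r_w, h, w = 0.05, 0.20, 0.22        # placeholders: wheel radius, wheel base, tread
k_p, k_i, k_d = 4.0, 0.5, 0.05      # placeholder PID gains

def target_wheel_speeds(u_x, u_y, u_w):
    """Inverse kinematics: desired wheel speeds from the commanded body velocity."""
    L = (h + w) / 2.0
    return np.array([
        (u_x - u_y - u_w * L) / r_w,   # wheel 1
        (u_x + u_y + u_w * L) / r_w,   # wheel 2
        (u_x + u_y - u_w * L) / r_w,   # wheel 3
        (u_x - u_y + u_w * L) / r_w,   # wheel 4
    ])

class WheelPID:
    """Discrete PID loop producing the motor current for one wheel."""
    def __init__(self):
        self.integral = 0.0
        self.prev_err = 0.0

    def current(self, omega_target, omega_actual, dt=0.01):
        err = omega_target - omega_actual
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return k_p * err + k_i * self.integral + k_d * deriv

# Example: command 0.5 m/s forward and compute the first current for each wheel.
controllers = [WheelPID() for _ in range(4)]
targets = target_wheel_speeds(0.5, 0.0, 0.0)
currents = [c.current(t, 0.0) for c, t in zip(controllers, targets)]
print(np.round(currents, 2))
```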
### _Physical Platform_
In the real environment, we have built an arena and a robot system that are the same as the simulation environment. Next, we will introduce these two parts respectively.
#### Iii-C1 Arena
The arena in the real environment is built at the same scale as the arena in the simulator, and the layout of the obstacles and visual tags is consistent with the simulation environment. Unlike in the simulation environment, where the goal blocks are fixed, the goal blocks in the real environment can be pushed by the robot.
#### Iii-C2 Robot and Sensors
We use the infantry configuration of the RoboMaster EP as the robot platform. Compared with other commonly used mobile robot platforms (TurtleBot, LoCoBot, etc.), the RoboMaster EP has a higher speed, a bullet-shooting function and hit-sensing devices, which better meet the task requirements of the framework. In addition to the cameras, odometers and other sensors already configured on the robot, an RPLiDAR S2 is mounted at the front of the robot, and an NVIDIA Jetson NX is mounted at the rear as the computing platform. Note that the pointcloud of the real LiDAR is distorted when the robot moves quickly, which does not happen in the simulation environment; we therefore correct the pointcloud with the odometer to remove the distortion. In addition, compared with the LiDAR in the simulation environment, the RPLiDAR S2 has a finer angular resolution and a larger field of view, so we subsample the real pointcloud to match the simulation. Finally, since the true pose of the real robot cannot be measured directly by a sensor, an Adaptive Monte Carlo Localization (AMCL) algorithm is used to estimate it.
### _Referee System_
To ensure communication between the robots and the calculation of scores during a game, we build a referee system that acts as a message relay node, responsible for sending data to and receiving data from the red and blue robots. The referee system determines the activation status of the goals from the poses of the blue robot and whether the goals are activated in sequence. It also collects the poses, remaining HP and remaining bullets of both robots, and computes whether the blue robot has collided with an obstacle based on its position and the distribution of obstacles and goals in the arena. Finally, the referee system computes the current score of the blue robot from the collected information and broadcasts it to both robots.
## IV Sim2Real Protocols and Baselines
Due to the low sampling efficiency and safety concerns of physical systems, most current robotic reinforcement learning algorithms are trained in simulators. A policy transferred directly from the simulation environment to the real environment suffers performance degradation because of the gap between the two. How to transfer trained agents to the real physical environment has therefore become an essential problem. To describe the problem more formally, many works model sim2real as a Markov Decision Process (MDP) with tunable environment parameters. The objective of sim2real is
\[\mathcal{J}(\theta)=\mathbb{E}_{\xi\sim p_{real}(\xi)}[\mathbb{E}_{\tau\sim p _{real}(\tau)}[\sum_{t=0}^{T-1}\gamma^{t}r_{\xi}(s_{t},a_{t})|\pi_{sim}^{*}, \xi]] \tag{9}\]
where \(\pi_{sim}^{*}\) is optimal policy trained in the simulator. We can obtain this policy by optimizing the following objective.
\[\pi_{sim}^{*}=\arg\max_{\pi}\mathbb{E}_{\xi\sim p_{sim}(\xi)}[\mathbb{E}_{ \tau\sim p_{sim}(\tau)}[\sum_{t=0}^{T-1}\gamma^{t}r_{\xi}(s_{t},a_{t})|\pi,\xi]] \tag{10}\]
Here the environment, also called the domain, is characterized by parameters \(\xi\), which are assumed to be random variables distributed according to an unknown probability distribution \(p_{real|sim}\). \(s_{t}\) and \(a_{t}\) are the state and action at time \(t\), respectively, \(r_{\xi}(\cdot)\) is the reward function, and \(\gamma\) is the discount factor. The action space and state space of the MDP are the same for environments with different parameters. The variability of the dynamics causes a distribution shift between the simulated and real data, which degrades the performance of the trained policy at test time. To make the sim2real problem easier to study in our environment and tasks, we specify the parameters of the environment that may cause variability of the state transition function, and describe each of them below.
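To make the role of \(\xi\) concrete, the sketch below shows one way a parameterized simulation environment could be exposed to a training loop. This is not the NeuronsGym API; the class, method names and nominal values are hypothetical, and only the parameter names mirror the ones listed in the following subsections.

```python
import random

# Nominal simulator parameters xi (names follow Sec. IV; values are placeholders).
NOMINAL_PARAMS = {
    "f_v": 2.0, "f_perp": 0.5,                 # friction coefficients
    "C_T": 0.3,                                 # motor characteristic
    "k_p": 4.0, "k_i": 0.5, "k_d": 0.05,        # controller parameters
    "M": 3.5, "rho_w": 2e-3,                    # mass and wheel inertia
    "zeta_vx": 0.02, "zeta_vy": 0.02, "zeta_vw": 0.02, "zeta_s": 0.05,  # latencies
    "sigma_l": 0.02, "lambda_l": 1.5,           # LiDAR noise / anomaly parameters
}

class ParameterizedSim:
    """Sketch of an environment whose transition dynamics depend on xi.
    Only the interface matters here; step() is a stub."""
    def __init__(self, params):
        self.params = dict(params)

    def set_params(self, params):
        self.params.update(params)

    def reset(self):
        return [0.0] * 69          # placeholder 69-dim navigation state

    def step(self, action):
        # ... dynamics depending on self.params would be computed here ...
        next_state, reward, done = [0.0] * 69, 0.0, False
        return next_state, reward, done

def sample_domain(scale=0.2):
    """Draw one domain xi ~ p_sim(xi) by perturbing each nominal parameter."""
    return {k: v * random.uniform(1 - scale, 1 + scale)
            for k, v in NOMINAL_PARAMS.items()}

env = ParameterizedSim(sample_domain())
state = env.reset()
```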
### _Robot Motion-related Parameters_
#### Iv-A1 Friction Coefficients
Friction directly affects the acceleration and deceleration response of the robot, and friction coefficients are considered in many domain randomization works. From the analysis of the mobile robot dynamics, the friction coefficients include two parameters: the sliding friction coefficient \(f_{v}\) and the rolling friction coefficient \(f_{\perp}\).
#### Iv-A2 Motor Character
The motor characteristic parameter \(C_{T}\) determines the ability to generate torque, which directly affects the robot's acceleration and braking capabilities. In general the motor characteristic is not constant and depends on operating time, load and voltage. In this paper, we approximate it as a constant under normal working conditions.
#### Iv-A3 Controller Parameters
We use PID controller for the robot in the simulator and the physical robot, and the parameters \((k_{p},k_{i},k_{d})\) are crucial to the controller. For a given robot, the PID control parameters are usually fixed. However, for
different robots, due to the differences in actuators, the PID parameters are usually adjusted in order to achieve excellent performance for each robot. Therefore, the PID parameters may vary slightly from robot to robot.
#### Iv-A4 Robot Mass
The mass \(M\) of the robot affects the magnitude of the inertia and thus the control system. Therefore, many domain randomization methods in the literature take this parameter into account.
#### Iv-A5 Rotational Inertia of the Wheel
As can be seen from the dynamics model of the robot, the rotational inertia \(\rho_{w}\) of the wheels affects the calculation of the velocity. Since the rotational inertia of the wheels on the physical robot is difficult to measure precisely, we also use it as an uncertain parameter.
#### Iv-A6 Control Response Latency
The response latency \(\zeta\) of the system can greatly affect the control of the robot, and some reinforcement learning work specifically addresses system latency[65][66]. The system response latency usually has several sources: node execution latency when the operating system runs multiple nodes or threads with limited computational resources, latency due to communication blockage between nodes and between the computer and the embedded system, and the execution latency of the actuator itself. Because these events occur by chance, the delay is usually not a constant value but a random variable. In this paper, we no longer distinguish the causes of delay, but divide the control delay into four parts according to its effect: the longitudinal velocity delay \(\zeta_{v_{x}}\), the lateral velocity delay \(\zeta_{v_{y}}\), the angular velocity delay \(\zeta_{v_{w}}\) and the shooting control delay \(\zeta_{s}\).
### _Sensor-related Parameters_
#### Iv-B1 LiDAR Noise Parameters
In the previous sections, we analyzed the sources of LiDAR noise and modeled it with a Gaussian distribution. We assume that the mean \(\mu_{l}\) of the Gaussian distribution is 0, while the standard deviation \(\sigma_{l}\) depends on the LiDAR measurement performance.
#### Iv-B2 LiDAR Anomaly Parameters
LiDAR data anomalies may be caused by sensor faults, but a more likely cause is that the laser reflectivity of particular object materials is too low, or that the laser hits objects at special incidence angles, so that no reflection is received. The latter phenomenon depends on the layout of the arena and the materials in the environment. In the previous sections, we modeled LiDAR anomalies with a Poisson distribution, so the parameter to be determined is the Poisson parameter \(\lambda\).
### _Baselines_
At present, there are many sim2real works targeting the dynamics difference between simulation and the real environment, such as system identification[46][47], domain randomization[48] and adaptive domain randomization[49][50]. In this paper, we choose 5 methods to build the baselines for our proposed tasks. In the navigation task, the state has 69 dimensions, including LiDAR data, robot pose, velocity and goal position. The action is continuous speed control, and the shooting command is kept at 0. The reward function is set as follows
\[r_{n}=\begin{cases}40,&\text{IF goal activated}\\ d_{t}-d_{t-1}+\frac{|\Delta\theta_{t}|-|\Delta\theta_{t-1}|}{2}\\ -\alpha\min(\exp(\frac{n_{s}}{4000}-5),1)-0.1.&\text{ELSE}\end{cases} \tag{11}\]
Here \(d_{t}\) is the distance from the robot to the goal at timestep \(t\), \(\Delta\theta_{t}\) is the angle difference between the robot orientation and the line to the goal, and \(n_{s}\) and \(\alpha\) are the training iteration and the collision weight, respectively.
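The sketch below transcribes Eq. (11) as a reward function. It reflects one reading of the equation in which the \(\alpha\) term is applied as a collision penalty whose weight ramps up with the training iteration \(n_{s}\); the `collided` flag, the value of \(\alpha\) and the argument names are our own assumptions.

```python
import math

alpha = 0.5   # collision weight (placeholder value)

def navigation_reward(d_t, d_prev, dtheta_t, dtheta_prev,
                      goal_activated, collided, n_s):
    """Step reward following Eq. (11).

    d_* are distances to the current goal, dtheta_* are heading errors (rad),
    and n_s is the training iteration used to schedule the collision penalty."""
    if goal_activated:
        return 40.0
    shaping = (d_t - d_prev) + (abs(dtheta_t) - abs(dtheta_prev)) / 2.0
    # Collision penalty ramped up over training, capped at alpha (assumption:
    # only applied on the steps where a collision actually occurs).
    penalty = alpha * min(math.exp(n_s / 4000.0 - 5.0), 1.0) if collided else 0.0
    return shaping - penalty - 0.1

# Example step: the robot moved 0.1 m closer without colliding.
print(navigation_reward(2.4, 2.5, 0.1, 0.2, False, False, n_s=10000))
```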
Fig. 5: Several different sim2real methods. In the training process, each trial starts by sampling the simulator parameters from the parameter generator, then trains the agent in the simulator, and finally transfers the policy to the real robot. The difference is that the parameter generators of \((a)\) and \((e)\) return the same parameters each time, whereas \((b)\)-\((d)\) sample parameters from a distribution and obtain different parameters each time. Here \((b)\) does not require real robot data, \((c)\) requires offline robot data, and \((d)\) requires online robot data. Unlike \((c)\) and \((d)\), \((e)\) uses offline data to learn an action transformer that corrects the state generated by the simulator.
In the confrontation task, we select 73 dimensional data including LiDAR data, robot pose, velocity, enemy robot pose, remaining HP and remaining bullets of both robots as the state. The action is continuous speed control and the probability of firing. The reward function is set as follows
\[r_{c}=\begin{cases}40+\frac{HP}{2},&\text{IF win}\\ d_{t}-d_{t-1}+\frac{|\Delta\theta_{t}|-|\Delta\theta_{t-1}|}{2}+\frac{D}{20}-0.1.&\text{ELSE}\end{cases} \tag{12}\]
Here \(HP\) and \(D\) are the remaining health points and the damage dealt, respectively. We use the Soft Actor-Critic (SAC)[67] algorithm to train the navigation and confrontation policies in simulation, and evaluate sim2real performance by transferring the trained policies to physical robots. To reduce the performance degradation, we adopt the following 5 sim2real methods in the training process.
#### Iv-A1 None
We implement a naive transfer method that does not rely on any real-robot priors or data: the policy is trained with the default parameters of the simulator and then transferred to the physical robot. We use it as the comparison baseline for the other sim2real methods. The method is shown in \((a)\) of Fig. 5.
#### Iv-A2 Uniform Domain Randomization(UDR)[48]
Following the literature[48], we reproduce the uniform domain randomization method. In contrast to using the default parameters, we set a reasonable random sampling interval for each parameter based on engineering experience. During training, the simulator acquires a set of parameters from the parameter generator before collecting robot trajectory data. The method is shown in \((b)\) of Fig. 5.
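A minimal sketch of such a UDR training loop is given below. The randomization intervals are placeholder ranges chosen for illustration, not the ones used on the platform, and the `env`/`agent` interfaces (`set_params`, `reset`, `step`, `act`, `observe`) are hypothetical.

```python
import random

# Illustrative randomization intervals for a few dynamics parameters.
PARAM_RANGES = {
    "f_v":     (1.0, 3.0),
    "f_perp":  (0.2, 1.0),
    "C_T":     (0.2, 0.4),
    "M":       (3.0, 4.0),
    "zeta_vx": (0.0, 0.05),
    "sigma_l": (0.0, 0.05),
}

def sample_udr_params():
    """Uniform domain randomization: draw each parameter independently from
    its interval at the start of every training episode."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def train_with_udr(env, agent, episodes=1000):
    """Generic training loop over randomized domains (interfaces assumed)."""
    for _ in range(episodes):
        env.set_params(sample_udr_params())   # new domain each episode
        state, done = env.reset(), False
        while not done:
            action = agent.act(state)
            next_state, reward, done = env.step(action)
            agent.observe(state, action, reward, next_state, done)
            state = next_state
```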
#### Iv-A3 Droid[68]
In contrast to UDR, this method draws on offline expert data to optimize the simulator parameters by reducing the error between the simulated and real trajectories. In the implementation of this paper, we minimize the \(l2\)-norm loss of the state including the pose and velocity by means of the CMA-ES[69]. The method is shown in \((c)\) of Figure 5.
#### Iv-A4 SimOpt[49]
SimOpt defines a discrepancy metric between real and simulated trajectories and uses relative entropy policy search[70] to optimize the simulator parameter distribution. Unlike DROID, this method requires interaction with the real robot to collect real data and continuously update the simulator parameters. We still consider only robot poses and velocities when reproducing the trajectory discrepancy metrics. The method is shown in \((d)\) of Fig. 5.
#### Iv-A5 Ground Action Transformation(GAT)[53]
Unlike the previous domain randomization methods, GAT does not modify the simulator parameters; instead, it constructs an action transformer outside the simulator so that, given the same current state and action, the next state in the simulation environment matches that in the real environment. Again, the method requires interacting with the real robot to collect data for building the action transformer. The method is shown in \((e)\) of Fig. 5.
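The core idea can be sketched as follows: a forward model of the real robot predicts the real next state, and an inverse model of the simulator finds the action that reproduces that state in simulation. Both models are assumed to be learned from offline data; the class, the linear toy models and their coefficients below are purely illustrative.

```python
import numpy as np

class GroundedActionTransformer:
    """Sketch of GAT: replace the policy action `a` with `a_hat` so that the
    simulator's next state matches the real robot's predicted next state."""
    def __init__(self, real_forward_model, sim_inverse_model):
        self.f_real = real_forward_model   # (s, a)  -> predicted real s'
        self.g_sim = sim_inverse_model     # (s, s') -> action reproducing s' in sim

    def transform(self, state, action):
        next_state_real = self.f_real(state, action)
        return self.g_sim(state, next_state_real)

# Toy illustration with linear stand-in models (shape checking only).
A_real, A_sim = 0.9, 1.1
f_real = lambda s, a: s + A_real * a
g_sim = lambda s, s_next: (s_next - s) / A_sim
gat = GroundedActionTransformer(f_real, g_sim)
print(gat.transform(np.zeros(3), np.ones(3)))   # actions scaled by 0.9 / 1.1
```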
## V Evaluation Scenarios and Metrics
In order to evaluate the performance of different methods, evaluation scenarios with different levels are defined. Moreover, we set corresponding evaluation metrics for different tasks.
### _Evaluation Scenarios_
To comprehensively evaluate the navigation ability of the agent, we design 3 levels of evaluation scenarios based on the location distribution of the goals. Specifically, we designate blocking zones in the arena, as shown in Fig. 6. These zones are usually blocked by obstacles, and the agent needs to go around the obstacles to activate the goals inside them. We define 3 levels according to the number of goals within the blocking zones.
* **Level 1**: All goals are outside the blocking zones;
* **Level 2**: There is 1 goal randomly generated within the blocking zones;
* **Level 3**: There are 3 goals randomly generated within the blocking zones.
### _Navigation Task_
The Success Rate (SR) is one of the most commonly used evaluation metrics for navigation tasks. It is the percentage of goals successfully reached over multiple trials; for the task in this paper, it is the percentage of successfully activated goals.
\[\text{SR}=\frac{1}{N}\sum_{i=1}^{N}\frac{\mathcal{N}_{i}^{a}}{\mathcal{N}_{i} ^{g}} \tag{13}\]
Here \(\mathcal{N}_{i}^{a}\) is the number of activated goals in trial \(i\), \(\mathcal{N}_{i}^{g}=5\) is the total number of goals in each trial, and \(N\) is the number of trials. Another common metric is the Success weighted by Path Length (SPL).
\[\text{SPL}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{S}_{i}\frac{l_{i}}{\max(p_{i},l_ {i})} \tag{14}\]
where \(l_{i}\) is the shortest path length from the starting point to the goal, \(p_{i}\) is the length of the path traveled by the agent, and \(\mathcal{S}_{i}\) indicates successful activation: \(\mathcal{S}_{i}=1\) means the goal is successfully activated in trial \(i\). However, for physical robots, the two metrics above do not assess the safety of the agent during navigation. We therefore define the SaFety weighted by Path Length (SFPL), which is calculated as follows
\[\text{SFPL}=\frac{1}{N}\sum_{i=1}^{N}\frac{\sum_{t=1}^{T}(1-\alpha_{i,t}) \delta_{i,t}}{p_{i}} \tag{15}\]
where \(\alpha_{i,t}\) is the collision identifier: if the robot collides at time \(t\) of trial \(i\), \(\alpha_{i,t}\) is 1, otherwise 0. \(\delta_{i,t}\) is the distance the robot moves at time \(t\). Compared with the number of collisions, this metric more reasonably evaluates the safety of a robot sliding along a wall. In addition, we calculate the average velocity of each trajectory during the
Fig. 6: Blocking zones and evaluation scenarios. From left to right: sample evaluation scenarios from Level 1, Level 2 and Level 3.
episode, which is used to measure the efficiency of the agent during navigation.
\[\text{Velocity}=\frac{1}{N}\sum_{i=1}^{N}\frac{1}{T}\sum_{t=1}^{T}v_{i,t} \tag{16}\]
where \(v_{i,t}\) is the velocity of the robot at time \(t\) of trial \(i\). Besides the metrics of navigation performance, in order to compare the simulated and real trajectories, we employ the Wasserstein distance and design a metric named TrajGap to evaluate the gap between the trajectory distributions of the policy in the simulated and real environments.
\[\text{TrajGap}=\frac{1}{N}\sum_{i=1}^{N}\inf_{\kappa\sim\Pi}\int_{x\in S_{sim} }\int_{y\in S_{real}}\kappa(x,y)||x-y||dxdy \tag{17}\]
Here \(\Pi\) is the set of joint distributions (couplings) of the simulated and real data, and \(S\) is the trajectory state, which includes the robot velocity and pose.
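For reference, the sketch below shows how SR (Eq. 13), SPL (Eq. 14) and SFPL (Eq. 15) could be computed from logged trial data. The per-trial input format (lists of collision flags and step distances) is our own assumption; TrajGap is omitted here since it requires an optimal-transport solver.

```python
def success_rate(activated_counts, goals_per_trial=5):
    """SR of Eq. (13): average fraction of activated goals over trials."""
    return sum(n / goals_per_trial for n in activated_counts) / len(activated_counts)

def spl(successes, shortest_lengths, path_lengths):
    """SPL of Eq. (14)."""
    terms = [s * l / max(p, l)
             for s, l, p in zip(successes, shortest_lengths, path_lengths)]
    return sum(terms) / len(terms)

def sfpl(collision_flags_per_trial, step_distances_per_trial, path_lengths):
    """SFPL of Eq. (15): fraction of each path travelled collision-free,
    averaged over trials."""
    scores = []
    for flags, deltas, p in zip(collision_flags_per_trial,
                                step_distances_per_trial, path_lengths):
        safe = sum((1 - a) * d for a, d in zip(flags, deltas))
        scores.append(safe / p)
    return sum(scores) / len(scores)

# Example with two short trials.
print(success_rate([5, 4]))
print(spl([1, 1], [3.0, 4.0], [3.5, 6.0]))
print(sfpl([[0, 0, 1], [0, 0, 0]], [[0.5, 0.5, 0.5], [0.4, 0.4, 0.4]], [1.5, 1.2]))
```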
### _Confrontation Task_
For confrontation task, we use the remaining health points (\(HP\)) and the damage (\(D\)) of the red robot, which is used to evaluate the robot's offensive and defensive capabilities during the confrontation.
### _Combined Task_
For this task, we design a comprehensive evaluation metric, and consider the navigation efficiency, confrontation ability and safety to comprehensively evaluate the performance of the agent.
\[\text{Score}=\frac{1}{N}\sum_{i=1}^{N}(60\mathcal{N}_{i}^{a}+\frac{1}{2} \mathcal{A}_{i}(D_{i}+HP_{i})-\mathcal{T}_{i}-20\mathcal{T}_{i}^{c}) \tag{18}\]
It includes 4 parts. The first part is the score for finding the goals: each activated goal is rewarded with 60 points. The second part is the score against the enemy; the precondition for obtaining it is to successfully activate the enemy robot, so that the indicator \(\mathcal{A}_{i}=1\), and this part is scored according to the damage \(D_{i}\) dealt to the enemy and the remaining health points \(HP_{i}\). The third part is the total time \(\mathcal{T}_{i}\) of the task. The fourth part is the collision punishment: points are deducted based on the collision time \(\mathcal{T}_{i}^{c}\). To rank the algorithms, we define a composite metric FS that takes both the simulation and real scores into account
\[\text{FS}=0.2\times\text{Score}_{sim}+0.8\times\text{Score}_{real} \tag{19}\]
In the 2022 IEEE CoG RoboMaster sim2real challenge, we rank the algorithms submitted by the participants through this score.
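The scoring of Eqs. (18)-(19) can be transcribed directly; the function and argument names below are our own, and the example numbers are purely illustrative.

```python
def combined_score(activated, activated_enemy, damage, remaining_hp,
                   elapsed_time, collision_time):
    """Per-trial score of Eq. (18) for the combined task."""
    confront = 0.5 * (damage + remaining_hp) if activated_enemy else 0.0
    return 60 * activated + confront - elapsed_time - 20 * collision_time

def final_score(score_sim, score_real):
    """Ranking metric FS of Eq. (19)."""
    return 0.2 * score_sim + 0.8 * score_real

# Example: 5 goals activated, enemy defeated with 300 damage dealt and 200 HP
# remaining, 150 s elapsed, 2 s of cumulative collision time.
s = combined_score(5, True, 300, 200, 150, 2)
print(s, final_score(s, 0.9 * s))
```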
## VI Experiments
In the previous sections, we constructed the robot dynamics model and the sensor simulation models. In this section, we first validate the robot model and the sensor models. We then analyze the safety metric used to evaluate navigation trajectories, and evaluate the baselines in the evaluation scenarios for the navigation and confrontation tasks, respectively. Finally, we evaluate the baselines on the combined task and compare them with the winning methods of the 2022 IEEE CoG RoboMaster sim2real challenge.
### _Robot Dynamic Model and Sensor Simulation Model_
In the previous sections, we construct the dynamics model of the robot, and build a complex friction model of the Mecanum wheel structure. In addition, we also build the noise model and the anomaly model for the LiDAR data. Therefore, in this section, we focus on answering two questions.
* **Q1**: Will it be better to use the friction model to simulate the dynamic response of the real robot?
* **Q2**: Can the noise model and the anomaly model simulate the real LiDAR data?
Vi-A1 Will it be better to use the friction model to simulate the dynamic response of the real robot?
We set up a control sequence on the real robot and collect the robot velocity at each moment. We apply the same sequence in the simulation environment and collect the simulated velocity. For comparison, we also construct a commonly used simplified friction model. The experimental results are shown in Fig. 7.
From the results in the figure, we can see that our friction model simulates the speed response of the robot better than the simplified friction model. In particular, when the speed direction changes, the combined effect of rolling and sliding friction makes it difficult for the simplified friction model to reproduce the non-linear change of speed.
#### Vi-A2 Can the noise model and the anomaly model simulate the real LiDAR data?
To verify the LiDAR anomaly data model, we collect 1000 frames of LiDAR data at different locations in the real environment. The left part of Fig. 8 shows the real LiDAR anomaly data distribution and the distribution of data sampled from the anomaly model. The real data distribution agrees very well with the simulated one, and the KL divergence between the two data distributions is very small \((-0.0051)\).
To verify the LiDAR noise model, we randomly select five locations in the real environment and collect 50 frames (5 seconds) of data at each location. We compute the LiDAR measurement noise by using the average of the 50 frames as the true measurement value. The right part of Fig. 8 shows the real LiDAR noise data distribution and the data distribution
Fig. 7: Input control and longitudinal speed response of the robot.
of the noise model. From the results in the figure, it can be seen that the real LiDAR noise lies within the range \([-0.05,0.05]\). Within this range, our model simulates the noise distribution well.
### _Comparison of Safety Metrics_
Reasonable safety metrics are crucial to evaluate the policy on the physical robot. The number of collisions is a common safety evaluation metric in the literature. In this section, we focus on the following question:
* **Q**: Is SFPL a more reasonable metric to evaluate trajectory safety than the number of collisions?
We analyze the collected navigation trajectories to compare the rationality of SFPL and collision number for evaluating safety. The experimental results are shown in Table I.
It can be seen that SFPL and the number of collisions assess the trajectories inconsistently. While the SFPL values of Trajectory1 and Trajectory2 are close, their collision numbers differ greatly; the collision numbers of Trajectory2 and Trajectory3 are close, but their SFPL values differ considerably. The two metrics agree for Trajectory3 and Trajectory4. We visualize the trajectories in Fig. 9. Compared with Trajectory2, Trajectory1 contains continuous collisions, which results in a large collision count; on the whole, however, the safety of the two trajectories is similar, and by SFPL Trajectory1 is even slightly safer. In Trajectory3 and Trajectory4, the agent slides along obstacles. Since the length of the sliding path does not affect the collision count, the collision numbers of Trajectory3 and Trajectory4 are similar to that of Trajectory2; from the perspective of the trajectory distribution, however, Trajectory2 is safer than Trajectory3 and Trajectory4. We therefore conclude that, compared with the collision number, SFPL evaluates trajectory safety more reasonably.
### _Comparison of Baselines on the Navigation Task_
In this section, we focus on evaluating the performance changes of the baselines transferring from the simulation to the real in different navigation scenarios. Specifically, we will answer 3 questions.
* **Q1**: Does the data augmentation commonly used during navigation training still work to improve the generalization of the method?
* **Q2**: How are the performance of the sim2real baselines in different evaluation scenarios?
* **Q3**: What will happen on the performance of sim2real baselines if the robot moving speed changes greatly?
#### Vi-C1 Effect of different data augmentation methods during training **(Q1)**
Data augmentation is an important way to improve the generalization of deep reinforcement learning algorithms. We consider 3 different data augmentation methods for the robot pose: pure noise, pure bias, and a noise-bias combination. The noise simulates the localization error of the real robot due to sensor noise, and the bias simulates the localization offset of the real robot when the wheels slip or it collides with obstacles.
Table II shows the results of the policy trained by SAC[67] after 1000 iterations. These results are the averages over 20 trials in each scenario. It can be seen from the experimental
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Scenarios & Methods & SR\(\uparrow\) & SPL\(\uparrow\) & SFPL\(\uparrow\) \\ \hline \multirow{4}{*}{Level 1} & None & 0.91 & 0.83 & 0.87 \\ & Noise & 0.95 & 0.87 & 0.88 \\ & Shift & 0.95 & 0.87 & 0.85 \\ & Noise+Shift & 0.93 & 0.84 & 0.84 \\ \hline \multirow{4}{*}{Level 2} & None & 0.83 & 0.79 & 0.85 \\ & Noise & 0.91 & 0.84 & 0.83 \\ & Shift & 0.74 & 0.77 & 0.82 \\ & Noise+Shift & 0.77 & 0.77 & 0.83 \\ \hline \multirow{4}{*}{Level 3} & None & 0.68 & 0.71 & 0.81 \\ & Noise & 0.87 & 0.80 & 0.83 \\ & Shift & 0.65 & 0.69 & 0.81 \\ & Noise+Shift & 0.63 & 0.72 & 0.79 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison of the Different Data Augmentation Methods on the Navigation Scenarios.
Fig. 8: Data distributions of simulation LiDAR and real LiDAR. The left is the distribution of the abnormal points per frame, and the right is the distribution of LiDAR ranging noise data.
\begin{table}
\begin{tabular}{c c c} \hline Trajectories & SFPL\(\uparrow\) & Collision Number\(\downarrow\) \\ \hline Trajectory1 & 0.854 & 20 \\ Trajectory2 & 0.842 & 13 \\ Trajectory3 & 0.770 & 14 \\ Trajectory4 & 0.726 & 12 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Comparison of the Different Safety Metrics.
Fig. 9: The collision visualization of the different trajectories. Blue indicates a safe trajectory and red indicates the collision between the agent and the obstacle. The cyan circle indicates that the robot continuously collides with the obstacle, which happens in Trajectory1(\(a\)). The green circle indicates that the robot slides against the obstacles, which happens in Trajectory2(\(b\)), Trajectory3(\(c\)) and Trajectory4(\(d\)).
results that the data augmentation methods improve the test performance of the policy in the Level 1 scenarios. For the Level 2 and Level 3 scenarios, pure bias and the noise-bias combination degrade performance, since the disturbance of the localization by noise and bias makes the task more difficult; navigating with a localization offset is challenging compared with localization noise. Overall, the results in the table show that appropriately increasing the difficulty of the navigation task can improve the generalization of the policy, while overly difficult tasks may degrade performance.
#### Iv-A2 The effect of evaluation scenarios on sim2real transfer performance (**Q2**)
In this section, we compare the performance of the baselines and discuss the impact of the different levels on sim2real in the 3 test scenarios. The evaluation results over all scenarios are shown in Table III. The online adaptive domain randomization method (SimOpt) achieves better SPL and SFPL both in simulation and in the real environment. UDR and DROID show the smallest sim2real gaps on SPL and SFPL, respectively. The velocity of the action transformation method (GAT) in the simulation environment shows the smallest difference from the real robot speed.
The performance of the baselines at the different levels of the navigation task is detailed in Table IV. The policies used for evaluation are trained for 2000 iterations, and their maximum speed is \(1.0m/s\). We carry out 2 trials at each level of the navigation task for each baseline and average the results. From the results in the table, it can be seen that transferring the baselines from the simulation to the real environment has little influence on SR, but has a significant impact on the robot's motion during navigation. Domain randomization methods significantly improve the SPL and SFPL of the algorithms in the real environment. Among them, UDR, as a simple randomization method, has smaller SPL and SFPL gaps in all three scenarios. The adaptive domain randomization methods have a clear advantage in SFPL in the real environment and a smaller TrajGap. GAT effectively reduces the difference between the simulated and real velocities, but does not necessarily improve SPL and SFPL, probably because the policy exploits the fitting error of the action transformer, which leads to poor generalization. The SPL gaps and TrajGaps of the policies trained with None and GAT increase with the task level, while there is no clear trend in SFPL; the SPL and SFPL gaps of the domain randomization methods show no clear trend across task levels.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{SR\(\uparrow\)} & \multicolumn{4}{c}{SPPL\(\uparrow\)} & \multicolumn{4}{c}{SPPL\(\uparrow\)} & \multicolumn{4}{c}{Velocity\(\uparrow\)} & \multicolumn{4}{c}{TrajGap\(\downarrow\)} \\ \hline Speed(m/s) & Methods & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & \(\downarrow\) \\ \hline \multirow{4}{*}{0.5} & None & 1.0 & 1.0 & 0.0 & 0.95 & 0.85 & 0.10 & 0.91 & 0.75 & 0.16 & 0.61 & 0.65 & 0.04 & 0.97 \\ & UDR & 1.0 & 1.0 & 0.0 & 0.80 & 0.90 & 0.10 & 0.75 & 0.75 & 0.0 & 0.48 & 0.61 & 0.13 & 0.95 \\ & DROID & 1.0 & 1.0 & 0.0 & 0.91 & 0.86 & 0.05 & 0.86 & 0.72 & 0.14 & 0.56 & 0.51 & 0.05 & 0.54 \\ & SimOpt & 1.0 & 1.0 & 0.0 & 0.98 & 0.96 & 0.02 & 0.94 & 0.82 & 0.12 & 0.57 & 0.57 & 0.0 & 0.28 \\ & GAT & 1.0 & 1.0 & 0.0 & 0.95 & 0.95 & 0.0 & 0.86 & 0.63 & 0.23 & 0.53 & 0.58 & 0.05 & 0.48 \\ \hline \multirow{4}{*}{1.0} & None & 1.0 & 1.0 & 0.0 & 0.94 & 0.83 & 0.11 & 0.80 & 0.51 & 0.29 & 1.10 & 0.93 & 0.17 & 1.77 \\ & UDR & 1.0 & 1.0 & 0.0 & 0.86 & 0.79 & 0.07 & 0.75 & 0.75 & 0.0 & 1.16 & 0.99 & 0.17 & 1.90 \\ & DROID & 1.0 & 1.0 & 0.0 & 0.83 & 0.79 & 0.04 & 0.77 & 0.75 & 0.02 & 1.0 & 0.81 & 0.19 & 1.89 \\ & SimOpt & 1.0 & 1.0 & 0.0 & 0.95 & 0.92 & 0.03 & 0.88 & 0.76 & 0.12 & 1.05 & 1.00 & 0.05 & 0.59 \\ & GAT & 1.0 & 1.0 & 0.0 & 0.91 & 0.83 & 0.08 & 0.85 & 0.62 & 0.23 & 0.96 & 0.94 & 0.02 & 1.76 \\ \hline \multirow{4}{*}{1.5} & None & 1.0 & 1.0 & 0.0 & 0.82 & 0.67 & 0.14 & 0.72 & 0.49 & 0.23 & 1.50 & 1.04 & 0.46 & 6.31 \\ & UDR & 1.0 & 1.0 & 0.0 & 0.81 & 0.76 & 0.05 & 0.76 & 0.60 & 0.16 & 1.70 & 1.06 & 0.64 & 5.02 \\ & DROID & 1.0 & 1.0 & 0.0 & 0.77 & 0.76 & 0.01 & 0.74 & 0.61 & 0.13 & 1.47 & 1.17 & 0.30 & 2.86 \\ & SimOpt & 1.0 & 1.0 & 0.0 & 0.92 & 0.85 & 0.07 & 0.86 & 0.64 & 0.22 & 1.47 & 1.23 & 0.24 & 2.51 \\ & GAT & 1.0 & 1.0 & 0.0 & 0.90 & 0.76 & 0.14 & 0.83 & 0.52 & 0.31 & 1.45 & 1.39 & 0.06 & 2.65 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Comparison of the Baselines on the Different Speeds Scenarios.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{SR\(\uparrow\)} & \multicolumn{4}{c}{SPPL\(\uparrow\)} & \multicolumn{4}{c}{Velocity\(\uparrow\)} & \multicolumn{4}{c}{TrajGap\(\downarrow\)} \\ \hline Scenarios & Methods & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) \\ \hline \multirow{4}{*}{Level 1} & None & 1.0 & 1.0 & 0.0 & 0.94 & 0.83 & 0.11 & 0.80 & 0.51 & 0.29 & 1.10 & 0.93 & 0.17 & 1.77 \\ & UDR & 1.0 & 1.0 & 0.0 & 0.86 & 0.79 & 0.07 & 0.75 & 0.0 & 1.16 & 0.99 & 0.17 & 1.90 \\ & DROID & 1.0 & 1.0 & 0.0 & 0.83 & 0.79 & 0.04 & 0.77 & 0.75 & 0.02 & 1.0 & 0.81 & 0.19 & 1.89 \\ & SimOpt & 1.0 & 1.0 & 0.0 & 0.95 & 0.92 & 0.03 & 0.88 & 0.76 & 0.12 & 1.05 & 1.00 & 0.05 & 0.59 \\ & GAT & 1.0 & 1.0 & 0.0 & 0.81 & 0.82 & 0.01 & 0.72 & 0.65 & 0.07 & 1.09 & 0.98 & 0.11 \\ & IDROID & 1.0 & 1.0 & 0.0 & 0.84 & 0.82 & 0.02 & 0.75 & 0.71 & 0.04 & 1.03 & 0.94 & 0.09 \\ & SimOpt & 1.0 & 1.0 & 0.0 & 0.93 & 0.84 & 0.09 & 0.87 & 0.76 & 0.11 & 1.03 & 0.97 & 0.06 \\ & GAT & 1.0 & 1.0 & 0.0 & 0.85 & 0.73 & 0.12 & 0.78 & 0.60 & 0.18 & 0.90 & 0.86 & 0.04 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Comparison of the Baselines on the Different Levels Scenarios.
Fig. 10: The sim2real gaps of different metrics with various maximum velocities.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{SR\(\uparrow\)} & \multicolumn{4}{c}{SPPL\(\uparrow\)} & \multicolumn{4}{c}{SPPL\(\uparrow\)} & \multicolumn{4}{c}{Velocity\(\uparrow\)} \\ \hline Methods & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) \\ \hline None & 1.0 & 1.0 & 0.0 & 0.87 & 0.71 & 0.16 & 0.73 & 0.56 & 0.17 & 1.02 & 0.88 & 0.14 \\ UDR & 1.0 & 1.0 & 0.0 & 0.81 & 0.82 & 0.01 & 0.72 & 0.65 & 0.07 & 1.09 & 0.98 & 0.11 \\ DROID & 1.0 & 1.0 & 0.0 & 0.84 & 0.82 & 0.02 & 0.75 & 0.71 & 0.04 & 1.03 & 0.94 & 0.09 \\ SimOpt & 1.0 & 1.0 & 0.0 & 0.93 & 0.84 & 0.09 & 0.87 & 0.76 & 0.11 & 1.03 & 0.97 & 0.06 \\ GAT & 1.0 & 1.0 & 0.0 & 0.85 & 0.73 & 0.12 & 0.78 & 0.60 & 0.18 & 0.90 & 0.86 & 0.04 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Comparison of the Baselines on the Navigation Task.
### _Comparison of the Baselines on the Confrontation Task_
Unlike in the navigation task, robots in the confrontation task have no explicit navigation goal and need to explore and find a suitable position to defeat the opponent. Compared with the static goals, the opponent in the confrontation task moves continuously according to the robot's pose, and the robot needs to choose reasonable actions based on the opponent's state. In this section, we use the previous baselines to train and transfer confrontation policies, and discuss the impact of the sim2real methods on confrontation performance.
Table VI shows the performance of the baselines in the confrontation task. From the results in the table, it can be seen that, although the robot's movement is influenced by the opponent's behavior, the sim2real methods can still reduce the TrajGap of the confrontation policy between the simulation environment and the real environment. However, according to the remaining \(HP\) and the damage \(D\), using a sim2real method does not significantly reduce the sim2real gap. Since the robot's confrontation policy is influenced by the opponent's behavior, and the opponent behaves differently in simulation and in reality, it is difficult to close the confrontation performance gap by reducing only the dynamics difference between the simulated and physical robots. Obtaining robust confrontation policies should take full account of both the dynamics differences and the diversity of opponent strategies.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{\(HP\uparrow\)} & \multicolumn{3}{c}{\(D\uparrow\)} & \multicolumn{2}{c}{TrajGap\(\downarrow\)} \\ \hline Methods & sim & real & [gap] \(\downarrow\) & sim & real & [gap] \(\downarrow\) & \\ \hline None & 0.0 & 0.0 & 0.0 & 100 & 333.33 & 233.33 & 4.04 \\ UDR & 0.0 & 16.67 & 16.67 & 233.33 & 366.67 & 133.34 & 2.78 \\ DROID & 0.0 & 0.0 & 0.0 & 200 & 283.33 & 83.33 & 2.99 \\ SimOpt & 0.0 & 0.0 & 0.0 & 233.33 & 316.67 & 83.34 & 1.90 \\ GAT & 0.0 & 0.0 & 0.0 & 166.67 & 450.0 & 283.33 & 2.36 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Comparison of the Baselines on the Confrontation Task.
Fig. 11: Visualization of trajectory distribution and state-action distribution with various maximum speeds. In rows of **Trajectories**, the yellow squares are the goal blocks, and the green square is the starting point of the agent. The red and green trajectories are collected in the simulation environment and the real environment, respectively. The color shade indicates the velocity, the darker the color, the larger the velocity. In the rows of **S-A pairs**, the red and cyan points are the state-action distribution of the trajectories in the simulation environment and the real environment, respectively.
### _Comparison of the Baselines on Combined Tasks_
Based on the combined task proposed in this paper, we organized the RoboMaster sim2real challenge1 at the 2022 IEEE Conference on Games, aiming to improve the navigation and confrontation capabilities of physical robots through simulation. In this section, we list the 3 winning teams (D504, Asterism, SEU-AutoMan) that used learning-based algorithms in the competition. D504 uses the SAC algorithm to train navigation and confrontation policies and combines system identification with domain randomization to enhance the transfer to physical robots. SEU-AutoMan employs deep deterministic policy gradient to train navigation and confrontation policies and designs a dual-stream network to process LiDAR data and robot status data separately. Unlike the previous two methods, Asterism uses a discrete action space and deep Q-learning; to keep the difference between simulated and real states as small as possible, this method converts all useful information into images to improve generalization. In addition, we use the 5 sim2real methods above to build the baselines. The results are shown in Table VII. The baselines in the table are mixed policies obtained by combining the navigation and confrontation policies trained in the previous sections.
Footnote 1: [https://ieee-cog.org/2022/cog_sim2real/index.html](https://ieee-cog.org/2022/cog_sim2real/index.html)
From the remaining \(HP\) in the table, it can be seen that most of the agents have difficulty defeating the red robot in the simulation environment. In the real environment, due to the difference in the opponent's behavior policy between simulation and the physical robot, D504 and SEU-AutoMan show better confrontation performance and can defeat the opponent in the Level 2 and Level 3 scenarios. From the perspective of collision time, almost all algorithms incur more collision time when transferred from simulation to the real environment. UDR shows a small difference across the three scenarios, while SimOpt has a shorter collision time in the real environment. The collision time of the 3 participating teams is very small in the simulation environment, but their gap between simulation and reality is larger. From the final results, D504 performs better in the middle and hard scenarios, UDR performs better in the easy scenario, and SimOpt performs competitively in all three scenarios.
## VII Conclusions and Future Applications
In this paper, we construct a hybrid framework, comprising a simulation system and a physical platform, for training and evaluating robot navigation and confrontation policies. In the simulator, we provide a variety of sensors and rich dynamics interfaces for policy transfer. We also propose a novel safety metric, SFPL, for evaluating the safety of a mobile robot. By evaluating common sim2real methods on this framework, we provide a new benchmark for robot navigation and confrontation tasks. In addition, relying on this platform, we held the 2022 IEEE CoG RoboMaster sim2real challenge to promote the development of flexible and agile agents. We believe that many areas will benefit from the proposed platform.
* **Sim2Real Transfer**: The simulator of NeuronsGym provides a rich dynamics interface including friction parameters, motor characteristics parameters, etc., and a variety of sensor models to support sim2real algorithm development. Researchers can study the generalizability of algorithms through sim2sim by setting different parameter distributions, or use the physical platform to study sim2real problems.
* **Safety Navigation Learning**: As mentioned in this paper, safety is vital for physical systems, and how to learn and evaluate safe policies is critical for real robot learning. In mobile robotic systems, collision is the most common safety issue. We implement collision detection in both the simulated and real environments of NeuronsGym, and propose a more reasonable safety evaluation metric that can help researchers develop safe navigation algorithms more easily.
* **Visual Navigation**: Visual navigation is currently a very active research field. Although visual navigation is not covered in this paper, our platform also provides first-person view images as well as arena models with higher visual fidelity. We also organized a vision-based track in the 2022 IEEE CoG sim2real challenge, but very few teams were able to complete it successfully. Vision-based navigation and confrontation remain challenging tasks.
* **Competitive Multi-Agent Policy Learning**: Unlike most robotics platforms, our NeuronsGym supplies robot confrontation tasks. Competitive multi-agent reinforcement learning algorithms have made landmark advances in virtual games, but are difficult to feed back into physical systems. We hope our robot confrontation tasks make physical robot policy learning benefit from the development of gaming algorithms in the virtual world.
* **Multi-task Learning**: While the multiple tasks in most current robot grasping platforms refer to grasping different objects, the tasks defined in our platform differ in task form, reward function, and valid action space. This relaxed task form poses a greater challenge to multi-task learning algorithms, and we hope that this platform will facilitate the development of multi-task reinforcement learning algorithms.
Of course, we also note that fixed environment layouts cannot fully capture the challenges that real-world environmental changes pose to the agent. In future work, we will explore larger arenas and more diverse environment layouts, and consider including dynamic participants that can interact with the environment. In addition, we will scale to more agents to support research on multi-robot collaborative policy learning and emergent intelligence.
|
2304.10711 | EulerNet: Adaptive Feature Interaction Learning via Euler's Formula for
CTR Prediction | Learning effective high-order feature interactions is very crucial in the CTR
prediction task. However, it is very time-consuming to calculate high-order
feature interactions with massive features in online e-commerce platforms. Most
existing methods manually design a maximal order and further filter out the
useless interactions from them. Although they reduce the high computational
costs caused by the exponential growth of high-order feature combinations, they
still suffer from the degradation of model capability due to the suboptimal
learning of the restricted feature orders. The solution to maintain the model
capability and meanwhile keep it efficient is a technical challenge, which has
not been adequately addressed. To address this issue, we propose an adaptive
feature interaction learning model, named as EulerNet, in which the feature
interactions are learned in a complex vector space by conducting space mapping
according to Euler's formula. EulerNet converts the exponential powers of
feature interactions into simple linear combinations of the modulus and phase
of the complex features, making it possible to adaptively learn the high-order
feature interactions in an efficient way. Furthermore, EulerNet incorporates
the implicit and explicit feature interactions into a unified architecture,
which achieves the mutual enhancement and largely boosts the model
capabilities. Such a network can be fully learned from data, with no need of
pre-designed form or order for feature interactions. Extensive experiments
conducted on three public datasets have demonstrated the effectiveness and
efficiency of our approach. Our code is available at:
https://github.com/RUCAIBox/EulerNet. | Zhen Tian, Ting Bai, Wayne Xin Zhao, Ji-Rong Wen, Zhao Cao | 2023-04-21T02:48:29Z | http://arxiv.org/abs/2304.10711v3 | # EulerNet: Adaptive Feature Interaction Learning via Euler's Formula for CTR Prediction
###### Abstract.
Learning effective high-order feature interactions is very crucial in the CTR prediction task. However, it is very time-consuming to calculate high-order feature interactions with massive features in online e-commerce platforms. Most existing methods manually design a maximal order and further filter out the useless interactions from them. Although they reduce the high computational costs caused by the exponential growth of high-order feature combinations, they still suffer from the degradation of model capability due to the suboptimal learning of the restricted feature orders. The solution to maintain the model capability and meanwhile keep it efficient is a technical challenge, which has not been adequately addressed. To address this issue, we propose an adaptive feature interaction learning model, named as **EulerNet**, in which the feature interactions are learned in a complex vector space by conducting space mapping according to Euler's formula. EulerNet converts the exponential powers of feature interactions into simple linear combinations of the modulus and phase of the complex features, making it possible to adaptively learn the high-order feature interactions in an efficient way. Furthermore, EulerNet incorporates the implicit and explicit feature interactions into a unified architecture, which achieves the mutual enhancement and largely boosts the model capabilities. Such a network can be fully learned from data, with no need of pre-designed form or order for feature interactions. Extensive experiments conducted on three public datasets have demonstrated the effectiveness and efficiency of our approach. Our code is available at: [https://github.com/RUCAlBox/EulerNet](https://github.com/RUCAlBox/EulerNet).
Key words and phrases: Feature Interaction, CTR Prediction, Recommender Systems, Neural Networks
## 1. Introduction
Click-Through Rate (CTR) prediction, which aims to predict the probability of a user clicking on an item, is a very critical task in online e-commerce platforms. In the literature, various approaches have been proposed for effective CTR prediction (Zhou et al., 2017; Chen et al., 2017; Chen et al., 2017; Zhang et al., 2017). The key of CTR prediction is to accurately model the complicated context data by capturing underlying feature relationships. Typically, these methods either learn _explicit feature interaction_ by manually setting the interaction form/order via factorization based models (Zhou et al., 2017; Chen et al., 2017), or _implicit feature interaction_ by directly modeling the fusion of all the features via deep neural networks (Zhou et al., 2017; Chen et al., 2017).
Despite the progress, these methods still have limitations in learning complicated feature relationships (_e.g._, high-dimensional varied contexts). Firstly, due to an exponential growth of combinational complexity, explicit learning methods usually set a small interaction order, which cannot scale to the cases requiring high-order feature interaction modeling. Further, they only model the integer-order interactions, thus leading to an inaccurate modeling
of real-world scenarios. Secondly, due to the lack of effective design in interaction mechanisms, implicit learning methods are shown to be less effective than explicit learning methods (Kang et al., 2017).
A major challenge in modeling high-order interactions among raw features is the incurred high computational cost due to the exponential feature combinations as the number of raw features increases. In a practical scenario, raw features tend to be very sparse and have hundreds of fields with millions of dimensions. For example, identifier features like user ID or item ID become very sparse when encoded as one-hot vectors, so are the multi-field features extracted from the user behavior logs. Calculating high-order interactions on such sparse features with hundreds of fields is computationally intensive and time-consuming.
Considering the above limitations, several studies (Zhou et al., 2017; Wang et al., 2018; Wang et al., 2018) manually assign a maximal order, and further remove useless interactions from them. However, they still suffer from the degradation of model capability due to the restricted feature orders. As a promising approach, a recent study (AFN, 2018) leverages logarithmic neural network (LNN (He et al., 2019)) to adaptively learn the order of feature interactions. It can automatically learn the orders of feature interactions, but at the expense of limited feature representation space, _i.e.,_ only positive feature embeddings can be learned in logarithmic space transformation, which requires a large consumption of logarithmic neurons for retaining the performance.
To address these issues, in this paper, we propose an adaptive feature interaction learning model, named as **EulerNet**, for automatically learning arbitrary-order feature interactions. Unlike prior work, the core idea of EulerNet is to model the feature interaction in a _complex vector space_ by conducting space mapping according to Euler's formula. Specially, EulerNet converts the exponential powers of feature interactions into simple linear combinations of the modulus and phase of the complex features, making it feasible to capture complicated feature interactions in an _efficient_, _flexible_ way. Based on such an idea, we develop an Euler interaction layer that performs the above transformation, which can be stacked to form a capable interaction learning network. Such a network can be fully learned from data, with no need of pre-designed form or order for feature interactions. Furthermore, Euler interaction layer can be extended to integrate the implicit feature interactions. Different from previous explicit-implicit hybrid approaches, our model can fuse the feature representations from the two ways in the Euler interaction layer, instead of simply keeping two separate feature interaction models.
The contributions are summarized as follows:
\(\bullet\) We propose an adaptive feature interaction learning model EulerNet. It can automatically learn the arbitrary-order feature interactions from data. Meanwhile, our model can jointly capture the explicit and implicit feature interactions in a unified model architecture.
\(\bullet\) We propose to model the feature interaction in the complex vector space, by conducting space mapping according to Euler's formula. It enables EulerNet to convert the complicated exponential powers into simple linear computation.
\(\bullet\) We conduct extensive experiments on three widely used datasets. EulerNet consistently outperforms a number of competitive baselines with much fewer parameters, showing the effectiveness and efficiency of our model.
## 2. Preliminary
We first introduce the CTR prediction task, then present the formulations for explicit and implicit feature interactions in existing work, and finally introduce the Euler's formula used in our model.
**CTR Prediction.** The task of the click-through rate (CTR) prediction aims to estimate the probability that a user will click on an item. It takes as input a vector of context features (_e.g.,_ user and item features), denoted as \(\mathbf{x}=\{x_{1},x_{2},...,x_{m}\}\), where \(m\) is the number of feature fields and \(x_{j}\) is the \(j\)-th feature, the label \(y\in\{0,1\}\) represents whether the item is clicked or not and it is predicted from the input feature \(\mathbf{x}\). We further apply a look-up operation to each feature \(x_{j}\) by mapping it into a \(d\)-dimensional embedding \(\mathbf{e}_{j}\in\mathbb{R}^{d}\). In this way, the original feature vector can be represented as a list of feature embeddings \(\{\mathbf{e}_{1},\mathbf{e}_{2},..,\mathbf{e}_{m}\}\).
**Explicit Feature Interactions.** The key of CTR prediction is to learn the effective feature interactions, which is a fundamental problem for this task (Wang et al., 2018). According to the interaction forms, existing methods can be roughly divided into _explicit_ and _implicit_ feature interactions. Explicit feature interactions is usually modeled by a pre-designed interaction formula with a controllable order, such as FM (Wang et al., 2018), HOFM (Beng et al., 2019) and IM (Liu et al., 2019). We introduce a special symbol \(\Delta_{\mathbf{ex}}\) to denote the explicit feature interaction, generally defined as:
\[\Delta_{\mathbf{ex}}=\sum_{\mathbf{\alpha}\in\mathcal{A}}\mathbf{e}_{1}^{\mathbf{\alpha}_{1}} \odot\mathbf{e}_{2}^{\mathbf{\alpha}_{2}}\odot\cdots\odot\mathbf{e}_{m}^{\mathbf{\alpha}_{m}}, \tag{1}\]
where \(\mathbf{\alpha}=[\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},...,\mathbf{\alpha}_{m}]\) consists of the orders for each feature in \(\mathbf{x}\), and \(\odot\) is the element-wise product. Based on \(\Delta_{\mathbf{ex}}\), a prediction function \(f(\cdot)\) (_i.e.,_ the sigmoid function) can be employed to generate the predicted label \(\hat{y}\) in \([0,1]\). Here, \(\mathcal{A}\) is the set of all interactions planned by a CTR model. Most CTR models require the interaction orders to be non-negative integers, _i.e.,_ \(\alpha_{j}\in\mathbb{N}^{0}\). For example, FM (Wang et al., 2018) only considers second-order interactions, which corresponds to \(\mathcal{A}=\{\mathbf{\alpha}\mid\sum_{j=1}^{m}\alpha_{j}=2,\ \alpha_{j}\in\{0,1\}\}\). Different from most existing methods (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), we aim to learn arbitrary-order feature interactions in an adaptive way, _i.e.,_ \(\mathbf{\alpha}\) can take arbitrary real values that are automatically learned from data.
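To make Eq. (1) concrete, the following is a minimal PyTorch sketch (the function name, shapes, and example order vector are ours, not taken from the released code). It evaluates a single explicit interaction term; note that with real-valued embeddings the element-wise power is only well defined for integer orders or non-negative bases, which is exactly the restriction EulerNet later removes.

```python
import torch

def explicit_term(embs: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    """One term of Eq. (1): e_1^{alpha_1} (element-wise product) ... e_m^{alpha_m}.

    embs:  (m, d) feature embeddings e_1..e_m
    alpha: (m,)   interaction orders alpha_1..alpha_m
    """
    powered = embs ** alpha.unsqueeze(-1)   # element-wise power per field, (m, d)
    return powered.prod(dim=0)              # element-wise product over fields, (d,)

# FM-style second-order term between fields 0 and 1: alpha = [1, 1, 0, ..., 0]
m, d = 4, 8
embs = torch.randn(m, d)
alpha = torch.zeros(m)
alpha[0], alpha[1] = 1.0, 1.0
assert torch.allclose(explicit_term(embs, alpha), embs[0] * embs[1])
```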
**Implicit Feature Interactions.** As another form of feature interaction, implicit feature interactions are commonly modeled by feed-forward neural networks, _e.g.,_ the multi-layer perceptron (MLP) used in xDeepFM (Zhou et al., 2017), DCNV2 (Wang et al., 2018) and DeepIM (Liu et al., 2019). Different from explicit feature interactions, it does not specify the concrete interaction forms in the model. Formally, given the concatenation of all feature embeddings, _i.e.,_\(\mathbf{z}^{(0)}=[\mathbf{e}_{1};\mathbf{e}_{2};...;\mathbf{e}_{m}]\), the implicit feature interaction process \(\Delta_{im}\) can be formulated as:
\[\mathbf{z}^{(l)} =\sigma(\mathbf{W}^{(l)}\mathbf{z}^{(l-1)}+\mathbf{b}^{(l)}), \tag{2}\] \[\Delta_{im} =\mathbf{z}^{(L)}, \tag{3}\]
where \(l\in[1,L]\), \(L\) is the layer depth and \(\sigma\) is the activation function.
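For completeness, the implicit interaction of Eqs. (2)-(3) is simply an MLP over the concatenated embeddings; a minimal PyTorch sketch (class name, depth, and hidden size are illustrative assumptions, not the paper's exact settings) is:

```python
import torch
import torch.nn as nn

class ImplicitInteraction(nn.Module):
    """Eqs. (2)-(3): z^(0) = [e_1; ...; e_m], z^(l) = sigma(W^(l) z^(l-1) + b^(l))."""
    def __init__(self, num_fields: int, dim: int, hidden: int = 400, depth: int = 3):
        super().__init__()
        layers, in_dim = [], num_fields * dim
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.net = nn.Sequential(*layers)

    def forward(self, embs: torch.Tensor) -> torch.Tensor:   # embs: (batch, m, d)
        return self.net(embs.flatten(start_dim=1))            # Delta_im = z^(L)
```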
**Euler's Formula.** Euler's formula is a mathematical formula that establishes the relationships between different expressions of complex vectors, and can be formulated as:
\[\mathbf{\lambda}e^{i\mathbf{\theta}}=\mathbf{\lambda}\cos\mathbf{\theta}+i(\mathbf{\lambda}\sin\mathbf{ \theta}), \tag{4}\]
where \(\mathbf{\lambda}e^{i\mathbf{\theta}}\) and \(\mathbf{\lambda}\cos\mathbf{\theta}+i(\mathbf{\lambda}\sin\mathbf{\theta})\) are the representations of a complex vector in the polar form and the rectangular form respectively. Here, \(i\) is the imaginary unit, \(\mathbf{\lambda}\) and \(\mathbf{\theta}\) are the modulus and phase of a complex vector. For a complex vector \(\mathbf{r}+i\mathbf{p}\), we set the real part \(\mathbf{r}=\mathbf{\lambda}\cos\mathbf{\theta}\) and imaginary part \(\mathbf{p}=\mathbf{\lambda}\sin\mathbf{\theta}\). The modulus \(\mathbf{\lambda}\) and phase \(\mathbf{\theta}\) can be represented as:
\[\mathbf{\lambda} =\sqrt{\mathbf{r}^{2}+\mathbf{p}^{2}},\] \[\mathbf{\theta} =\text{atan2}(\mathbf{p},\mathbf{r}), \tag{5}\]
where \(\text{atan2}(\mathbf{y},\mathbf{x})\) is the two-argument arctangent function. The transformation via Euler's formula makes it feasible to convert the complex vectors from the rectangular form to the polar form, providing a way to encode the features in the polar space.
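Concretely, the conversions in Eqs. (4)-(5) can be checked in a few lines of PyTorch (the values below are illustrative):

```python
import torch

r = torch.tensor([-8.0, 1.0])        # real part of a complex vector
p = torch.tensor([ 3.0, 2.0])        # imaginary part

lam   = torch.sqrt(r ** 2 + p ** 2)  # modulus lambda, Eq. (5)
theta = torch.atan2(p, r)            # phase theta,    Eq. (5)

# Euler's formula, Eq. (4): lambda e^{i theta} = lambda cos(theta) + i lambda sin(theta)
assert torch.allclose(lam * torch.cos(theta), r, atol=1e-5)
assert torch.allclose(lam * torch.sin(theta), p, atol=1e-5)
```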
## 3. Methodology
To adaptively learn the arbitrary-order feature interactions, we propose a feature interaction learning model via Euler's formula, named as **EulerNet**. We first present a general introduction of our model, and then introduce the technical details in each part.
### Overview of EulerNet
The overview architecture of EulerNet is shown in Figure 1. EulerNet is designed by stacking the key structure of _Euler interaction layer_. The core idea of Euler interaction layer is to transform explicit interaction of feature embeddings (Eq. (1)) in a _complex vector space_ according to Euler's formula (Eq. (4)). As such, we can model complicated feature relationships in a flexible way, without the constraints in existing work (_e.g._, non-negativity or integer). Further, exponential computation can be simplified as linear computation, making it possible to adaptively learn the high-order feature interactions in an efficient way. Further, Euler interaction layers can be extended to incorporate implicit feature interaction learning, which can naturally integrate the two kinds of feature interaction.
In what follows, we introduce the details of explicit feature interaction (Section 3.2) and implicit feature interaction (Section 3.3).
### Explicit Feature Interaction Learning
Previous works (Kang et al., 2018; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019) mainly learn the feature interactions in the real vector space, which limits the expressiveness of features, lacking the ability to adaptively capture the arbitrary-order feature interactions. To address this issue, we first map the input features from the real vector space to the complex vector space, and then learn the explicit feature interactions in the complex vector space.
#### 3.2.1. Complex Vector Representation of Features
As discussed in Section 2, the original input vector \(\mathbf{x}\) can be mapped into a list of feature embeddings \(\{\mathbf{e}_{1},\mathbf{e}_{2},..,\mathbf{e}_{m}\}\) via an embedding layer. Based on the embedding representations, we next discuss how to map them into complex space and further conduct Euler interaction.
**Complex Space Mapping**. To improve the expressiveness of features, we map the feature embeddings from the _real vector space_ to the _complex vector space_. Given a feature embedding, \(\mathbf{e}_{j}\), in the complex vector space, we utilize two real vectors \(\mathbf{r}\) and \(\mathbf{p}\) to represent the real and imaginary parts of the complex vector respectively (_i.e._, \(\tilde{\mathbf{e}}_{j}=\mathbf{r}_{j}+i\mathbf{p}_{j}\)). To transform a feature embedding into a complex vector, the key idea is to consider it as the phase and incorporate a learnable parameter (or parameter vector) \(\mu_{j}\) as the modulus following Euler's formula in Eq. (4):
\[\mathbf{\lambda}\rightarrow\mu_{j},\ \mathbf{\theta}\rightarrow\mathbf{e}_{j}. \tag{6}\]
According to Eq. (4), we can obtain the corresponding complex representation of \(\mathbf{e}_{j}\) by introducing the modulus parameter \(\mu_{j}\):
\[\tilde{\mathbf{e}}_{j}=\underbrace{\mu_{j}\cos(\mathbf{e}_{j})}_{\text{ real}}+i\underbrace{\mu_{j}\sin(\mathbf{e}_{j})}_{\text{ imaginary}}, \tag{7}\]
where we have \(\mathbf{r}_{j}=\mu_{j}\cos(\mathbf{e}_{j})\) and \(\mathbf{p}_{j}=\mu_{j}\sin(\mathbf{e}_{j})\). To enhance the field-specific semantics, we let the feature embeddings corresponding to the same field share the same modulus parameter. After complex space mapping, each feature is represented by a complex vector \(\tilde{\mathbf{e}}_{j}\). We utilize the complex feature representations \(\{\tilde{\mathbf{e}}_{j}\}_{j=1}^{m}=\{\mathbf{r}_{j}+i\mathbf{p}_{j}\}_{j=1}^{m}\) for subsequent interaction modeling.
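A minimal sketch of this mapping (Eq. (7)): the embedding serves as the phase while a learnable, field-shared modulus scales it. The text leaves open whether \(\mu_{j}\) is a scalar or a vector per field; the sketch below assumes a vector, and the class name is ours.

```python
import torch
import torch.nn as nn

class ComplexSpaceMapping(nn.Module):
    """Eq. (7): r_j = mu_j * cos(e_j), p_j = mu_j * sin(e_j), with mu shared per field."""
    def __init__(self, num_fields: int, dim: int):
        super().__init__()
        self.mu = nn.Parameter(torch.ones(num_fields, dim))  # one modulus vector per field

    def forward(self, embs: torch.Tensor):     # embs: (batch, m, d), used as the phases e_j
        r = self.mu * torch.cos(embs)          # real part
        p = self.mu * torch.sin(embs)          # imaginary part
        return r, p
```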
#### 3.2.2. Euler Interaction Layer
The Euler interaction layer is the core component of our proposed EulerNet, which enables the adaptive learning of explicit feature interactions. An Euler interaction layer performs one round of feature interaction in the complex space, taking a complex representation as input and outputting a transformed complex representation. In this way, we can stack multiple Euler interaction layers to enhance the model capacity. Next, we describe the transformation process within an Euler interaction layer.
**Euler Transformation**. In order to adaptively learn the explicit feature interactions, we utilize Euler Transformation to transform the complex feature representations from the rectangular form to the polar form. This step can convert exponential multiplications into simplified linear computation, making it feasible to adaptive capture complicated feature interactions. Given the input complex representation \(\mathbf{r}_{j}+i\mathbf{p}_{j}\) of feature embedding \(\mathbf{e}_{j}\), we use Euler's formula in Eq. (4) to obtain the polar-form representations:
\[\mathbf{r}_{j}+i\mathbf{p}_{j}\rightarrow\mathbf{\lambda}_{j}e^{i\mathbf{\theta}_{j}}. \tag{8}\]
Figure 1. The overall architecture of EulerNet.
In this form, the explicit feature interaction can be formulated as:
\[\begin{split}\Delta_{ex}&=\hat{\mathbf{e}}_{1}^{\alpha_{1}} \odot\hat{\mathbf{e}}_{2}^{\alpha_{2}}\odot\dots\odot\hat{\mathbf{e}}_{m}^{\alpha_{m}} \\ &=\prod_{j=1}^{m}\left(\mathbf{\lambda}_{j}^{\alpha_{j}}\exp\left(i \alpha_{j}\mathbf{\theta}_{j}\right)\right)\\ &=\exp\left(\sum_{j=1}^{m}\alpha_{j}\log(\mathbf{\lambda}_{j})\right) \exp\left(i\sum_{j=1}^{m}\alpha_{j}\mathbf{\theta}_{j}\right),\end{split} \tag{9}\]
where \(\mathbf{\lambda}_{j}=\sqrt{\mathbf{r}_{j}^{2}+\mathbf{p}_{j}^{2}}\) (always non-negative) and \(\mathbf{\theta}_{j}=\text{atan2}(\mathbf{p}_{j},\mathbf{r}_{j})\) are the modulus and phase vectors of the complex features in the polar form. In this way, explicit feature interaction has been cast into a linear weighted combination of modulus and phase values in the polar space, and the original interaction order (_i.e._, \(\mathbf{\alpha}\)) becomes the combination coefficients.
Note that, to achieve a similar formulation, we could also apply the log operation directly to the original feature interaction (Eq. (1)), but this requires the feature embeddings to be _non-negative_, which does not hold in general. The transformation above instead makes it possible to model complicated feature interactions in a much simpler way.
**Generalized Multi-order Transformation.** In the above, we have discussed the case with an order vector \(\mathbf{\alpha}\). In this part, we generalize such a transformation into a group of \(n\) order vectors \(\{\mathbf{\alpha}_{k}\}_{k=1}^{n}\), where \(\alpha_{k,j}\) denotes the \(j\)-th order of the \(k\)-th vector \(\mathbf{\alpha}_{k}\). Formally, we introduce the \(\mathbf{\psi}_{k}\) and \(\mathbf{I}_{k}\) to generalize Eq. (9) as follows:
\[\begin{split}\mathbf{\psi}_{k}&=\sum_{j=1}^{m}\alpha_{ k,j}\mathbf{\theta}_{j}+\mathbf{\delta}_{k},\\ \mathbf{I}_{k}&=\exp\left(\sum_{j=1}^{m}\alpha_{k,j}\log (\mathbf{\lambda}_{j})+\mathbf{\delta}_{k}^{\prime}\right),\end{split} \tag{10}\]
where \(\mathbf{\delta}_{k}\) and \(\mathbf{\delta}_{k}^{\prime}\) are learnable bias vectors that are incorporated for enhancing the representations. With this generalized extension, we can obtain the explicit interaction with \(\mathbf{\alpha}_{k}\) in the polar form:
\[\begin{split}\Delta_{ex}=&\exp\left(\sum_{j=1}^{m} \alpha_{k,j}\log(\mathbf{\lambda}_{j})+\mathbf{\delta}_{k}^{\prime}\right)\exp\left( i(\sum_{j=1}^{m}\alpha_{k,j}\mathbf{\theta}_{j}+\mathbf{\delta}_{k})\right)\\ =&\mathbf{I}_{k}e^{i\mathbf{\psi}_{k}}.\end{split} \tag{11}\]
**Inverse Euler Transformation.** Since the above feature interactions are in the polar form, we do not directly perform the corresponding interactions with a group of multi-order coefficients \(\{\mathbf{\alpha}_{k}\}_{k=1}^{n}\). We further utilize _inverse Euler transformation_ to convert them into the original complex vectors in the rectangular form as:
\[\begin{split}\hat{\mathbf{r}}_{k}&=\mathbf{I}_{k}\cos(\mathbf{ \psi}_{k}),\\ \hat{\mathbf{p}}_{k}&=\mathbf{I}_{k}\sin(\mathbf{\psi}_{k}), \end{split} \tag{12}\]
where \(\hat{\mathbf{r}}_{k}\) and \(\hat{\mathbf{p}}_{k}\) are the real and imaginary vectors. In this way, the Euler interaction layer can model \(n\) explicit feature interactions. The generalized explicit feature interactions with a group of multi-order coefficients \(\{\mathbf{\alpha}_{k}\}_{k=1}^{n}\) learned in Euler interaction layer can be described as:
\[\begin{split}\Delta_{ex}&=\sum_{k=1}^{n}\mathbf{I}_{k} \cos(\mathbf{\psi}_{k})+i(\mathbf{I}_{k}\sin(\mathbf{\psi}_{k}))\\ &=\sum_{k=1}^{n}(\hat{\mathbf{r}}_{k}+i\hat{\mathbf{p}}_{k}).\end{split} \tag{13}\]
This formula is the core of the proposed EulerNet model for explicit feature interactions. Unlike prior work, the order of the interactions (_i.e._, \(\alpha_{k,j}\)) can be set to arbitrary real value, without additional limits such as _non-negativity_. Instead of manually setting the order coefficients, we adaptively learn them from data, and use the number of order vectors \(n\) to control the model complexity. Furthermore, we can also set varying \(n\) at different layers to increase the model flexibility.
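To summarize Eqs. (8)-(13), the explicit pathway can be sketched as follows: the layer converts each complex feature to its modulus and phase, takes \(n\) learnable linear combinations of the log-moduli and phases, and maps the results back to rectangular form. This is our own simplified reading of the equations rather than the official implementation; shapes, initialization, and the small epsilon for numerical stability are our choices.

```python
import torch
import torch.nn as nn

class ExplicitEulerInteraction(nn.Module):
    """Explicit interactions of Eqs. (8)-(13) with n learnable order vectors alpha_k."""
    def __init__(self, num_fields: int, dim: int, num_orders: int):
        super().__init__()
        self.alpha = nn.Parameter(0.1 * torch.randn(num_orders, num_fields))  # {alpha_k}
        self.delta = nn.Parameter(torch.zeros(num_orders, dim))      # phase bias delta_k
        self.delta_p = nn.Parameter(torch.zeros(num_orders, dim))    # log-modulus bias delta'_k

    def forward(self, r: torch.Tensor, p: torch.Tensor):
        # r, p: (batch, m, d) real/imaginary parts of the complex features
        lam = torch.sqrt(r ** 2 + p ** 2 + 1e-12)      # modulus, Eq. (8)
        theta = torch.atan2(p, r)                      # phase,   Eq. (8)
        # psi_k = sum_j alpha_{k,j} theta_j + delta_k                       (Eq. 10)
        psi = torch.einsum('kj,bjd->bkd', self.alpha, theta) + self.delta
        # l_k  = exp( sum_j alpha_{k,j} log(lam_j) + delta'_k )             (Eq. 10)
        l = torch.exp(torch.einsum('kj,bjd->bkd', self.alpha, torch.log(lam)) + self.delta_p)
        # inverse Euler transformation, Eq. (12): back to rectangular form
        return l * torch.cos(psi), l * torch.sin(psi)  # each (batch, n, d)
```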
### Integrating Implicit Interactions
Considering that feature relationships in real scenarios are very complicated, we further incorporate implicit feature interactions into our model. Different from previous studies (Kang et al., 2018; Wang et al., 2019; Wang et al., 2020), which model the explicit and implicit feature interactions in different architectures, we integrate them in each Euler interaction layer to enhance the representation capacity.
#### 3.3.1. Fusing Explicit and Implicit Interactions
To model more complicated feature relationships, we construct a neural network component for capturing implicit feature interactions. Given the input complex features \(\{\hat{\mathbf{e}}_{j}\}_{j=1}^{m}=\{\mathbf{r}_{j}+i\mathbf{p}_{j}\}_{j=1}^{m}\), we obtain the input of the implicit interaction by concatenating these vectors as \(\mathbf{r}=[\mathbf{r}_{1};\mathbf{r}_{2};...;\mathbf{r}_{m}]\) (_real part_) and \(\mathbf{p}=[\mathbf{p}_{1};\mathbf{p}_{2};...;\mathbf{p}_{m}]\) (_imaginary part_). Then, we feed the real and imaginary parts of the feature representations into the same linear layer with a subsequent non-linear activation function:
\[\begin{split}\mathbf{r}_{k}^{\prime}&=\text{ReLU}(\mathbf{W}_ {k}\mathbf{r}+\mathbf{b}_{k}),\\ \mathbf{p}_{k}^{\prime}&=\text{ReLU}(\mathbf{W}_{k}\mathbf{p}+ \mathbf{b}_{k}),\end{split} \tag{14}\]
where \(k\in\{1,\cdots n\}\), and \(\mathbf{W}_{k}\in\mathbb{R}^{d\times md}\) is the weight matrix and \(\mathbf{b}_{k}\in\mathbb{R}^{d}\) is the bias. Finally, in order to integrate the two kinds of feature interaction, we add the explicit and implicit representations (See Eq. (12) and Eq. (14)) by real and imaginary parts accordingly as:
\[\{\mathbf{\alpha}_{k}\}_{k=1}^{n}=\{(\hat{\mathbf{r}}_{k}+\mathbf{r}_{k}^{\prime})+i(\hat{ \mathbf{p}}_{k}+\mathbf{p}_{k}^{\prime})\}_{k=1}^{n}. \tag{15}\]
We can stack multiple Euler interaction layers by taking the output features of the previous layer as the input for the next layer and optionally applying normalization methods such as BatchNorm (Kang et al., 2018) or LayerNorm (Bahdan et al., 2018) to adjust the distribution as it passes through each layer.
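Putting the pieces together, one Euler interaction layer combines the explicit pathway with the implicit part of Eq. (14) and fuses them via Eq. (15). The sketch below reuses the `ExplicitEulerInteraction` module from the previous sketch; using one shared linear map per output slot and a LayerNorm are our own simplifying assumptions.

```python
import torch
import torch.nn as nn

class EulerInteractionLayer(nn.Module):
    """One Euler interaction layer: explicit part (Eqs. 8-13) + implicit part (Eq. 14),
    fused by adding real and imaginary parts respectively (Eq. 15)."""
    def __init__(self, num_fields: int, dim: int, num_orders: int):
        super().__init__()
        self.explicit = ExplicitEulerInteraction(num_fields, dim, num_orders)
        # the k-th linear map is applied to both the real and the imaginary concatenation
        self.implicit = nn.ModuleList(
            [nn.Linear(num_fields * dim, dim) for _ in range(num_orders)]
        )
        self.norm = nn.LayerNorm(dim)

    def forward(self, r: torch.Tensor, p: torch.Tensor):     # (batch, m, d) each
        r_hat, p_hat = self.explicit(r, p)                    # (batch, n, d) each
        r_cat, p_cat = r.flatten(1), p.flatten(1)             # (batch, m*d)
        r_imp = torch.stack([torch.relu(f(r_cat)) for f in self.implicit], dim=1)
        p_imp = torch.stack([torch.relu(f(p_cat)) for f in self.implicit], dim=1)
        return self.norm(r_hat + r_imp), self.norm(p_hat + p_imp)   # Eq. (15)
```

Since each layer outputs \(n\) complex vectors, stacking layers amounts to feeding these \(n\) outputs as the "fields" of the next layer, as described above.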
#### 3.3.2. Output for CTR Predictions
In order to predict the CTR value, we further perform linear regression on the output representations \(\hat{\mathbf{\omega}}=\{\hat{\mathbf{\omega}}_{k}\}_{k=1}^{n}=\{\hat{\mathbf{r}}_{k}+i\hat{ \mathbf{p}}_{k}\}_{k=1}^{n}\). Specially, we concatenate the real and imaginary vectors accordingly, and introduce a regression weight vector \(\mathbf{w}\in\mathbb{R}^{nd}\), so as to obtain a scalar value for both the real and imaginary parts:
\[z=\mathbf{w}^{\top}\tilde{\mathbf{r}}+i(\mathbf{w}^{\top}\tilde{\mathbf{p}})=z_{re}+iz_{im}, \tag{16}\]
where \(z_{re}\) and \(z_{im}\) are the real and imaginary part of \(z\) respectively. The prediction for CTR by integrating both explicit and implicit interactions can be given as:
\[\hat{y}=\sigma(z_{re}+z_{im}). \tag{17}\]
For training, we utilize the binary cross entropy loss with a regularization term to train our model, which is formulated as:
\[\mathcal{L}(\Theta)=-\frac{1}{N}\sum_{j=1}^{N}\left(y_{i}\log(\hat{y}_{j})+(1-y _{j})\log(1-\hat{y}_{j})\right)+\gamma||\Theta||_{2}^{2}, \tag{18}\]
where \(y_{j}\) and \(\hat{y}_{j}\) are the ground-truth label and predicted result of \(j\)-th training sample respectively, and \(\Theta\) denotes the set of the parameters and \(\gamma\) is the \(L_{2}\)-norm penalty.
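A sketch of the prediction head and training objective in Eqs. (16)-(18); the parameter initialization and the value of \(\gamma\) are placeholders, not the settings used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CTRHead(nn.Module):
    """Eqs. (16)-(17): shared linear regression on real/imaginary outputs, then sigmoid."""
    def __init__(self, num_orders: int, dim: int):
        super().__init__()
        self.w = nn.Parameter(torch.zeros(num_orders * dim))

    def forward(self, r_out: torch.Tensor, p_out: torch.Tensor) -> torch.Tensor:
        z_re = r_out.flatten(1) @ self.w       # w^T r_tilde
        z_im = p_out.flatten(1) @ self.w       # w^T p_tilde
        return torch.sigmoid(z_re + z_im)      # y_hat, Eq. (17)

def ctr_loss(y_hat, y, params, gamma: float = 1e-5):
    """Eq. (18): binary cross-entropy plus an L2 penalty over the model parameters."""
    bce = F.binary_cross_entropy(y_hat, y.float())
    l2 = sum((p ** 2).sum() for p in params)
    return bce + gamma * l2
```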
### Discussion
#### 3.4.1. Intuitive Explanation of Feature Interaction
To have an intuitive understanding of our approach, we consider a simple case when the embedding dimension \(d=1\). Further, since we apply normalization at the Euler interaction layer, the modulus is around 1, so that we can omit the corresponding \(\lambda\) from Eq. (9). The forms of explicit and implicit interaction can be simplified as:
\[g_{ex}(\mathbf{\hat{e}}_{j},\mathbf{\hat{e}}_{k})=\mathbf{\hat{e}}_{j}^{ \alpha_{j}}\circ\mathbf{\hat{e}}_{k}^{\alpha_{k}}\approx\exp\big{(}i(\alpha_{j} \mathbf{\hat{e}}_{j}+\alpha_{k}\mathbf{\hat{e}}_{k})\big{)},\] \[g_{im}(\mathbf{\hat{e}}_{j},\mathbf{\hat{e}}_{k})=\text{ReLU}(W_{j}\mathbf{r} _{j}+W_{k}\mathbf{r}_{k})+i\text{ReLU}(W_{j}\mathbf{p}_{j}+W_{k}\mathbf{p}_{k}). \tag{19}\]
As we can see, the explicit interaction \(g_{ex}(\cdot)\) mainly affects the phase of the features (_i.e._, \(\theta_{j}\) and \(\theta_{k}\)), which can be approximately viewed as rotations in the complex vector space, while the implicit interaction \(g_{im}(\cdot)\) performs a parallelogram-like transformation, which mainly affects the modulus rather than the phase (since the ReLU confines its outputs to the first quadrant). By integrating both implicit and explicit feature interactions, our approach can model the effect on both _phase_ and _modulus_, leading to an improved capacity due to mutual enhancement. Figure 2 presents a geometric interpretation of the explicit and implicit feature interactions.
To further understand the explicit interaction, we present an illustrative example with a simple interaction \(\hat{\mathbf{e}}_{1}^{0.33}\circ\hat{\mathbf{e}}_{2}^{0.25}\) of two feature vectors: \(\mathbf{\hat{e}}_{1}=[-8,1]^{\top}\) and \(\mathbf{\hat{e}}_{2}=[-16,4]^{\top}\). With some mathematical computation, we get the following representation: \(\hat{\mathbf{e}}_{1}^{0.33}\circ\hat{\mathbf{e}}_{2}^{0.25}=\mathbf{\hat{r}}+i\mathbf{\hat{p}}=[-0.99,1.41]^{\top}+i[3.85,\ 0]^{\top}\).
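These numbers can be reproduced by treating \(\hat{\mathbf{e}}_{1}\) and \(\hat{\mathbf{e}}_{2}\) as complex vectors with zero imaginary part and taking element-wise principal-branch powers, e.g. in NumPy:

```python
import numpy as np

e1 = np.array([-8.0,  1.0], dtype=complex)   # e_hat_1, zero imaginary part
e2 = np.array([-16.0, 4.0], dtype=complex)   # e_hat_2

out = e1 ** 0.33 * e2 ** 0.25                # element-wise powers, then element-wise product
print(out.real.round(2), out.imag.round(2))  # approx. [-0.99  1.41] and [3.85  0.]
```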
#### 3.4.2. Novelty and differences
In Table 1, we compare our approach with existing feature interaction methods. To the best of our knowledge, it is the first attempt that adaptively captures arbitrary-order feature interactions in the complex vector space. Although AFN+ (Chen et al., 2017) leverages the LNN (Kang et al., 2018) to learn arbitrary-order feature interactions adaptively, it constrains the feature representations to positive real vectors. This approach not only degrades the model performance, but also requires additional feature embeddings for implicit interactions. Furthermore, most studies (Chen et al., 2017; Chen et al., 2017; Chen et al., 2018; Wang et al., 2019) model the explicit and implicit interactions in different architectures and seldom integrate them in a joint approach. As a comparison, EulerNet is more _general, unified_ in integrating the modeling of implicit and explicit feature interactions, via the enhanced Euler interaction layer in Section 3.3. In general, our approach provides a more capable solution to model complicated feature interactions.
#### 3.4.3. Complexity Analysis
We also compare the time complexities of different CTR methods in Table 1. For ease of analysis, we assume that the hidden sizes of the different components are set to the same number. Specifically, \(m\) is the number of feature fields, \(d\) is the embedding dimension, \(L\) and \(T\) are the layer depths of the explicit and implicit components respectively, \(H\) is the hidden size of the MLP, \(K\) is the number of logarithmic neurons of AFN+, and \(n\) is the number of order vectors of EulerNet. Note that \(K\) is much larger than \(m\cdot d\), leading to a very high complexity of AFN+ (Chen et al., 2017). In contrast, \(n\) is very small and can be set to \(m\) in practice. The complexity of EulerNet for a training instance can be estimated as \(O(m^{2}d^{2}L)\), which is comparable to mainstream efficient methods such as FmFM (Zhou et al., 2019) and DCNV2 (Zhou et al., 2019) (see Table 3 for experimental analysis).
## 4. Experiments
We conduct extensive experiments to show the effectiveness of EulerNet, and analyze the effect of each learning component in it.
### Experimental Settings
We introduce the experimental settings, including the datasets, baseline approaches, and the details of hyper-parameters.
#### 4.1.1. Datasets
We utilize three real world datasets in our experiments: Criteo1, Avazu2, MovieLens-1M3. Table 2 summarizes the dataset statistics information.
\begin{table}
\end{table}
Table 1. Comparison of feature interaction methods in terms of interaction properties (high-order, adaptive, unified) and embedding constraints, together with the per-instance complexity (_e.g.,_ \(O(m^{2}d)\) for FwFM, \(O(m^{2}d^{2})\) for FmFM, and \(O(m^{2}d^{2}L)\) for EulerNet).
\(\bullet\) Criteo. The most popular CTR prediction benchmark dataset contains user logs over a period of 7 days.
\(\bullet\) Avazu. It contains user logs over a period of 7 days, which was used in the Avazu CTR prediction competition.
\(\bullet\) MovieLens-1M. The most popular dataset for recommendation systems research.
#### 4.1.2. Compared Models
We compare EulerNet with state-of-the-art methods in CTR prediction task, including:
\(\bullet\) FwFM (Zhou et al., 2017) improves FM by considering field information and uses field-specific weights to capture the field-wise relationship.
\(\bullet\) FmFM (Zhou et al., 2017) replaces the field scalar weight in FwFM with a kernel matrix, allowing for modeling more informative interactions.
\(\bullet\) DeepFM (Zhou et al., 2017) uses FM to model the second-order interactions, and incorporates DNNs to model the high-order interactions.
\(\bullet\) DeepIM (Liu et al., 2018) utilizes Newton's identity to implement high-order FM, and incorporate implicit interactions via an MLP.
\(\bullet\) xDeepFM (Zhou et al., 2017) encodes high-order interactions into multiple feature maps and combine an MLP to model implicit interactions.
\(\bullet\) DCNV2 (Zhou et al., 2017) takes the kernel product of concatenated feature vectors to model high-order interactions and combine an MLP to model implicit interactions.
\(\bullet\) FiBiNet (Liu et al., 2018) uses the bilinear operation to model pair-wise interactions and uses SENet (Liu et al., 2018) to capture the feature importance.
\(\bullet\) AutoInt (Liu et al., 2018) uses the self-attention mechanism to learn high-order interactions. AutoInt+ improves it by combining an MLP.
\(\bullet\) FiGNN (Liu et al., 2018) represents the features into a full-connected graph, and uses gated GNNs to model the high-order feature interactions.
\(\bullet\) AFN (Liu et al., 2018) encodes features into a logarithmic space to adaptively learn the arbitrary-order feature interactions. AFN+ improves the base model by using an MLP to model implicit interactions.
The above models we compared in our experiments have covered different types of feature interaction methods. FwFM and FmFM are shallow models that only model the second-order explicit interactions. DeepFM, DeepIM, xDeepFM and DCNV2 are ensemble methods that learn both the explicit interactions by an empirically designed component and implicit interactions by an MLP. FiBiNet, AutoInt, and FiGNN have the ability to learn the importance of feature interactions. AFN encodes features into a logarithmic space to adaptively learn the arbitrary-order feature interactions. Different from them, our proposed EulerNet represents the features in a complex vector space, in which the exponential computation can be simplified as linear computation, making it possible to adaptively learn the arbitrary-order feature interactions in an efficient way.
#### 4.1.3. Implementation Details
All methods are implemented in Pytorch (Paszke et al., 2017). The size of feature embedding is 16. The learning rate is in {1e-3, 1e-4, 1e-5}. The \(L_{2}\) penalty weight is in {1e-3, 1e-5, 1e-7}. The batch size is 1024. The training optimizer is Adam (Kingmae and Ba, 2014). The hidden layer of MLP component is \(400\times 400\times 400\) and the dropout rate is 0.1. For DeepIM, the interaction order is in {2, 3, 4}. For xDeepFM, the depth of CIN is in {1, 2, 3, 4, 5} and the hidden size is in {100, 200, 400}. For DCNV2, the depth of CrossNet is in {1, 2, 3, 4, 5}. For FiGNN, the graph interaction step is in {1, 2, 3, 4, 5}. For AutoInt+, the depth, number of head and attention size is 2, 2, 40 respectively. For AFN, the number of logarithmic neurons is in {40, 400, 800, 1000}. For EulerNet, the number of Euler interaction layer is in {1, 2, 3, 4, 5}, and the number of order vectors is set as {7, 23, 39} for MovieLens-1M, Avazu and Criteo datasets respectively. Our implementation is also available at RecBole (Zhou et al., 2017; Liu et al., 2018).
### Overall Performance
We present the experimental results of different methods for CTR prediction in Table 3, and have the following observations:
(1) Compared to the DNN-based methods, FwFM and FmFM perform the worst, owing to their limited ability to capture only second-order explicit feature interactions.
(2) Ensemble methods (_i.e.,_ DeepFM (Chen et al., 2017), DeepIM (Chen et al., 2017), DCNV2 (Zhou et al., 2017) and xDeepFM (Zhou et al., 2017)) achieve competitive performance on all three datasets, which shows the effectiveness of integrating implicit feature interactions.
(3) For the feature importance learning methods (_i.e.,_ FiBiNet (Liu et al., 2018), FiGNN (Liu et al., 2018) and AutoInt+ (Liu et al., 2018)), the performance varies largely across datasets. AutoInt+ performs well on all three datasets, while FiGNN and FiBiNet lose their advantage on the Criteo and MovieLens-1M datasets respectively. This indicates that the self-attention mechanism is more capable of modeling high-order feature interactions.
(4) AFN+ outperforms all the other baseline methods on the Avazu and MovieLens-1M datasets, demonstrating the effectiveness of adaptively learning arbitrary-order feature interactions in the CTR prediction task. However, its advantage on the Criteo dataset is small. This may be caused by the restriction of the feature embeddings to positive values, which hinders the representation capability of the model.
(5) Our proposed EulerNet consistently performs better than all of the compared methods. It shows the effectiveness of encoding the features into the complex vectors via Euler's formula and conducting transformations in the polar space.
As for model efficiency, the latency of FwFM, FmFM, DeepFM, DeepIM and DCNV2 is relatively small; these methods are more efficient due to their simple architectures and fewer parameters. For AutoInt, FiGNN, FiBiNet and xDeepFM, the latency is much larger due to their complex model architectures or complicated training strategies. For AFN+, the limited feature representation space (only positive feature embeddings can be learned in the logarithmic space transformation) means that it requires a large number of parameters to retain its performance, which makes it impractical in industrial scenarios. In contrast, the latency of EulerNet is much lower than that of AFN+ (at most about 10% of it) and comparable to efficient methods such as DeepFM and DeepIM. With the highest accuracy and low complexity, EulerNet has great potential to be applied in large-scale industrial recommender systems.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline
**Dataset** & \# Features & \# Fields & \# Instances \\ \hline \hline Criteo & 1.3M & 39 & 45M \\ Avazu & 1.5M & 23 & 40M \\ MovieLens-1M & 13k & 7 & 740K \\ \hline \hline \end{tabular}
\end{table}
Table 2. The statistics of datasets.
### Experimental Analysis
We conduct experiments to investigate the interaction orders learned in EulerNet, and then visualize the learned feature representations to show their correlation with the feature importance.
#### 4.3.1. Arbitrary-Order Learning Analysis
Learning effective high-order feature interactions is crucial in the CTR prediction task. To examine the orders learned in EulerNet, we visualize the learned feature interaction orders (_i.e._, the total order of each learnable coefficient vector \(\mathbf{\alpha}_{k}\) in Eq. (10)) in the explicit feature interaction component. Since the orders adaptively learned in our model can take arbitrary real values, we cluster them with an interval of \(0.5\) for better presentation. As shown in Figure 3, our model not only learns integer-order feature interactions, but also adaptively learns fractional-order feature interactions in a fine-grained way. Specifically, the learned interaction orders range within \([0,3.5]\) on the MovieLens-1M dataset and within \([0.5,3.5]\) on the Avazu dataset. Such fine-grained feature interaction learning improves the capability of our model and enables it to capture more effective information for CTR prediction.
#### 4.3.2. Verification on Synthetic Dataset
Since the ground-truth meaningful feature interactions cannot be identified in real-world public datasets, we further conduct an experiment on synthetic data to verify how well the orders learned by EulerNet match the ground truth. The synthetic dataset consists of 1 million synthesized click-through records with 7 fields (\(F=[f_{1},f_{2},...,f_{7}]\)) that simulate real click-through records. Each field is created independently and contains one thousand features, and each feature is assigned a probability that affects the likelihood of a click-through event occurring. For a given record \(x_{i}=[p_{1},p_{2},...,p_{7}]\), its label \(y_{i}\) is generated by sampling from a probability distribution pre-defined by one of the patterns \(R\in\{R_{1},R_{2},R_{3}\}\) (see Table 4). We compare the interaction orders learned in the explicit feature interaction components of AFN+ and EulerNet, and use the fitting deviation to evaluate the difference between the learned orders and the ground-truth pattern \(R\). From Table 4, we can see that the deviation of EulerNet is much smaller than that of AFN+, demonstrating that EulerNet can adaptively learn more meaningful feature interactions.
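For illustration, one possible way to generate such synthetic records is sketched below using the pattern \(R_{2}\) from Table 4; the paper does not specify the sampling details (feature-probability range, how records are assembled), so these choices and all names are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_fields, feats_per_field, num_records = 7, 1000, 100_000   # scaled-down illustration

# each feature in each field carries a probability that influences the click likelihood
feature_prob = rng.uniform(0.05, 0.95, size=(num_fields, feats_per_field))  # assumed range

def pattern_r2(p):
    # R2 from Table 4: 1/3 * (p1^1.7 p2^1.7 + p3^0.5 p4^0.5 p6^0.5 + p7)
    return (p[0]**1.7 * p[1]**1.7 + p[2]**0.5 * p[3]**0.5 * p[5]**0.5 + p[6]) / 3.0

records, labels = [], []
for _ in range(num_records):
    feat_ids = rng.integers(0, feats_per_field, size=num_fields)   # one feature per field
    p = feature_prob[np.arange(num_fields), feat_ids]              # x_i = [p_1, ..., p_7]
    labels.append(int(rng.random() < pattern_r2(p)))               # sample the click label
    records.append(feat_ids)
```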
Specifically, we present the order vectors of EulerNet after training on the synthetic dataset defined by the pattern \(R_{3}\) in Figure 4. Different rows represent the explicit feature interactions learned by different order vectors (see Eq. (10)). For example, the most important feature interaction learned by the order vector \(\mathbf{\alpha}_{1}\) is \(p_{1}^{1.12}p_{3}^{1.90}p_{5}^{1.48}\). We can see that the combinational feature interactions learned by the group of order vectors \(\{\mathbf{\alpha}_{1},\mathbf{\alpha}_{2},\mathbf{\alpha}_{7}\}\) (_i.e._, \(p_{1}^{1.12}p_{3}^{1.90}p_{5}^{1.48}+p_{2}^{0.36}p_{4}^{0.37}p_{5}^{0.38}+p_{7}^{0.67}\)) are quite similar (average order deviation 0.47) to the ground-truth interaction pattern in the data (_i.e._, \(R_{3}=\frac{1}{3}(p_{1}^{1.32}p_{3}^{2.9}p_{5}^{1.7}+p_{2}^{0.52}p_{4}^{0.5}p_{6}^{0.5}+p_{7})\)), showing the ability of EulerNet to learn effective feature interactions.
#### 4.3.3. Visualization of Feature Embeddings
EulerNet not only adaptively learns arbitrary-order feature interactions, but also has the ability to capture the importance of features. We visualize the learned feature embeddings in EulerNet to show its ability to learn feature importance. The heat map in Figure 5(a) illustrates the mutual information scores between feature fields and
\begin{table}
\begin{tabular}{c|c c|c c} \hline \multirow{2}{*}{**Pattern**} & \multirow{2}{*}{**Formula**} & \multicolumn{2}{c|}{**Deviation**} \\ & & AFN+ & EulerNet \\ \hline \hline \(R_{1}\) & \(p_{1}^{0.9}p_{1}^{1.7}p_{2}^{2.3}\) & 0.6296 & 0.1141 \\ \(R_{2}\) & \(\frac{1}{3}(p_{1}^{1.7}p_{2}^{1.7}+p_{3}^{0.5}p_{4}^{0.5}p_{6}^{0.5}+p_{7})\) & 2.4021 & 0.7481 \\ \(R_{3}\) & \(\frac{1}{3}(p_{1}^{1.3}p_{3}^{2.9}p_{5}^{1.7}+p_{2}^{0.53}p_{4}^{0.5}p_{6}^{0.5}+p_{7 })\) & 1.4732 & 0.4779 \\ \hline \end{tabular}
\end{table}
Table 4. The pattern for creating the synthetic dataset and the deviation comparisons between different models.
Figure 3. The statistics of the interaction orders learned in EulerNet.
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c|}{**Criteo**} & \multicolumn{4}{c|}{**Avazu**} & \multicolumn{4}{c}{**MovieLens-1M**} \\ & AUC & LogLoss & Params & Latency & AUC & LogLoss & Params & Latency & AUC & LogLoss & Params & Latency \\ \hline \hline FwFM & 0.8104 & 0.4414 & 0.74 K & 6.71 ms & 0.7741 & 0.3835 & 0.25 K & 3.96 ms & 0.8815 & 0.3351 & 0.02 K & 1.12 ms \\ FmFM & 0.8112 & 0.4408 & 0.39 M & 8.06 ms & 0.7744 & 0.3831 & 0.13 M & 4.32 ms & 0.8864 & 0.3295 & 0.01 M & 1.26 ms \\ DeepFM & 0.8121 & 0.4401 & 0.57 M & 10.31 ms & 0.7830 & 0.3790 & 0.47 M & 6.94 ms & 0.8935 & 0.3230 & 0.36 M & 3.78 ms \\ DeepIM & 0.8124 & 0.4397 & 0.57 M & 10.86 ms & 0.7838 & 0.3779 & 0.47 M & 7.19 ms & 0.8927 & 0.3230 & 0.36 M & 4.20 ms \\ xDeepFM & 0.8122 & 0.4407 & 2.44 M & 188.68 ms & 0.7821 & 0.3799 & 2.29 M & 78.01 ms & 0.8944 & 0.3235 & 0.52 M & 44.68 ms \\ DCNV2 & 0.8127 & 0.4394 & 1.74 M & 16.39 ms & 0.7838 & 0.3782 & 0.87 M & 11.63 ms & 0.8946 & 0.3229 & 0.39 M & 4.03 ms \\ FiBiNet & 0.8126 & 0.4415 & 9.82 M & 136.79 ms & 0.7837 & 0.3783 & 3.57 M & 32.08 ms & 0.8860 & 0.3291 & 0.87 M & 8.13 ms \\ FiGNN & 0.8109 & 0.4412 & 0.08 M & 121.51 ms & 0.7830 & 0.3799 & 0.05 M & 46.31 ms & 0.8939 & 0.3232 & 0.01 M & 10.75 ms \\ AutoInt+ & 0.8126 & 0.4396 & 3.80 M & 41.67 ms & 0.7838 & 0.3785 & 1.43 M & 18.52 ms & 0.8937 & 0.3288 & 1.17 M & 9.61 ms \\ AFN+ & 0.8123 & 0.4396 & 19.46 M & 132.45 ms & 0.7843 & 0.3785 & 1.43 M & 93.72 ms & 0.8950 & 0.3212 & 5.65 M & 84.18 ms \\ \hline \hline EulerNet & **0.8137** & **0.4389** & 0.79 M & 13.51 ms & **0.7863** & **0.3769** & 0.27 M & 9.09 ms & **0.9008** & **0.3114** & 0.02 M & 2.69 ms \\ \hline \hline \end{tabular}
\end{table}
Table 3. Performance comparisons. A higher AUC or lower Logloss at 0.001-level is regarded significant, as stated in previous studies (Fang et al., 2019; Liu et al., 2019; Liu et al., 2019).
labels on the MovieLens-1M dataset, which represents the strength of each field's effect on the prediction results. We can observe that the fields _item_id_, _user_id_ and _zip_code_ have the strongest effect on the click results. The distributions of feature embeddings are plotted with Gaussian kernel density estimation (KDE) in two-dimensional space in Figure 5(b). The more dispersed the distribution of a field's embeddings, the less influence it has on the prediction results, owing to the low information content of such nearly random variables. It can be seen that for the important fields (_i.e._, _item_id_, _user_id_ and _zip_code_) in Figure 5(a), the distribution of feature embeddings in Figure 5(b) is more concentrated and has a smaller variance, whereas the less important fields (_i.e._, _age_, _occupation_ and _release_year_) are chaotically distributed with relatively large variance. This indicates that the feature embeddings, which also represent the phase of the complex features as defined in Eq. (6), reflect the feature importance to a certain extent. In EulerNet, the phase of the complex features is effectively controlled by the explicit feature interactions (see Section 3.4.1), which enables it to capture meaningful feature relationships and improves the model capability.
### Ablation Study
We conduct ablation studies to explore the impact of each component or hyper-parameter on the model performance.
#### 4.4.1. Effect of Implicit and Explicit Feature Interactions
EulerNet contains both explicit and implicit interaction learning components. In order to investigate the impact of each interaction type, we conduct experiments on two variants of EulerNet, termed \(\text{EulerNet}_{E}\) and \(\text{EulerNet}_{I}\), in which the implicit and explicit learning parts are removed respectively. As shown in Table 5, the model performance decreases for both \(\text{EulerNet}_{I}\) and \(\text{EulerNet}_{E}\), showing their complementary effects, which is consistent with the observations in Section 3.4.1. Besides, \(\text{EulerNet}_{I}\) shows a larger performance decrease than \(\text{EulerNet}_{E}\) on the Avazu and MovieLens-1M datasets, whereas the decrease is smaller on the Criteo dataset, showing that both the implicit and explicit interactions are important for CTR prediction.
#### 4.4.2. Impact of the Interaction Layer Number
EulerNet is designed by stacking the key structure of the Euler interaction layer. We study the impact of the Euler interaction layer number, which reflects the intricacy of feature interactions, on the model performance. As shown in Figure 6, we can observe that the performance of EulerNet increases as the number of layers increases. EulerNet achieves the best model performance with 5 interaction layers. When the number of layers exceeds 5, the model performance decreases due to the overfitting issue caused by incorporating more parameters.
#### 4.4.3. Impact of the Number of Order Vectors
As introduced in Section 3.2.2, we use multiple order vectors to adaptively learn the arbitrary-order feature interactions. The number of order vectors, denoted as \(n\) (see Eq. (13)), controls the number of explicit feature interactions in each Euler interaction layer. As illustrated in Figure 7, the performance of EulerNet on the Avazu dataset increases as the number of order vectors grows from 20 to 60, whereas on the Criteo dataset EulerNet achieves the best performance with 40 order vectors. However, the model performance decreases when adding even more order vectors. This indicates that including too many feature combinations in the
Figure 4. Visualization of the order vectors in EulerNet under the distribution pattern \(R_{3}\), which is defined in Table 4.
Figure 5. Visualization of the relationships between feature importance and the distribution of feature embeddings in EulerNet.
\begin{table}
\begin{tabular}{c|c|c c c} \hline
**Dataset** & **Metric** & \(\text{EulerNet}\) & \(\text{EulerNet}_{E}\) & \(\text{EulerNet}_{I}\) \\ \hline \hline \multirow{2}{*}{Criteo} & AUC & 0.8137 & 0.8117 & 0.8124 \\ & Decrease & - & \(-0.25\%\) & \(-0.16\%\) \\ \hline \multirow{2}{*}{Avazu} & AUC & 0.7863 & 0.7847 & 0.7840 \\ & Decrease & - & \(-0.20\%\) & \(-0.29\%\) \\ \hline \multirow{2}{*}{MovieLens-1M} & AUC & 0.9008 & 0.8988 & 0.8966 \\ & Decrease & - & \(-0.22\%\) & \(-0.47\%\) \\ \hline \end{tabular}
\end{table}
Table 5. Performance comparison between different interactions.
multi-order transformation may introduce useless feature interactions that hurt the model performance.
## 5. Related Work
**Explicit Feature Interaction Learning.** This line of research explicitly enumerates feature combinations and uses vector operations such as inner product to capture their relationships. Early CTR models (Han et al., 2017; Chen et al., 2017; Chen et al., 2018; Wang et al., 2019) mainly relied on manually designing feature combinations with simple architectures. For example, FM (Wang et al., 2019) assigns an embedding vector to each feature that mainly captures second-order interactions. Inspired by FM, many variants of factorization machines have been proposed (Han et al., 2017; Chen et al., 2018; Wang et al., 2019; Wang et al., 2019). Among them, FFM (Wang et al., 2019) assigns multiple embeddings to explicitly model field-wise feature interactions. Besides, FwFM (Wang et al., 2019) and FmFM (Wang et al., 2019) are proposed to model the field information to improve FM in a parameter-efficient way. These factorization based methods mainly model second-order interactions, which severely limits their performance. To capture more effective feature interactions, xDeepFM (Wang et al., 2019) proposes the CIN to model the high-order feature interactions by incorporating lots of learnable parameters. Besides, DCNV2 (Wang et al., 2019) proposes the CrossNet to capture the high-order feature interactions in an efficient way. Although these methods leverage high-order feature interactions to achieve great performance, their interaction components are empirically predefined, which may lead to the suboptimal learning of restricted feature interactions. As a promising approach, AFN (Chen et al., 2018) uses logarithmic neural networks (LNN) (Chen et al., 2018) to adaptively model the arbitrary-order interactions, but at the expense of restricting feature embeddings to positive real vectors, which may degrade the expressiveness of feature representations and require much more parameters to retain the performance. Different from them, our proposed EulerNet models the feature interactions in a complex vector space by conducting the space mapping via Euler's formula. The feature interactions in our model are adaptively learned from data without additional restrictions, which could largely improve its capacity and better balance the effectiveness and efficiency.
**Implicit Feature Interaction Learning.** In recent years, many deep learning based models (Han et al., 2017; Chen et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) have been proposed to model the high-order feature interactions via a deep neural network (DNN) component. Among them, the Wide & Deep (Chen et al., 2018) network combines the logit value of a linear regression model with the output of a DNN. Besides, PNN (Wang et al., 2019) introduces an MLP to improve the output of its explicit interaction component, and NFM (Chen et al., 2018) stacks deep neural networks after FMs to model the high-order feature interactions. Different from the explicit feature interactions, the implicit feature interactions modeled by deep neural networks lack good interpretability. Additionally, some recent study (Wang et al., 2019) has found that it is more challenging for an MLP to effectively learn the high-order feature interactions compared to using an inner product in FM. Most deep learning based methods (Han et al., 2017; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019) leverage the implicit feature interactions as the supplemental signal of the explicit feature interaction component. Different from them, in EulerNet, the explicit and implicit feature interactions are learned in a unified architecture: both of them perform the linear transformations on the features in different forms (_i.e._, the polar form for the explicit feature interactions and the rectangular form for the implicit feature interactions). Euler's formula establishes the relationship between different representation forms and also builds a bridge between the explicit and implicit feature interactions. It is observed that there exists a complementary effect between the explicit and implicit interactions in EulerNet, which enables them to promote each other and further improve the model capabilities.
## 6. Conclusion
In this paper, we proposed EulerNet, an adaptive feature interaction learning neural network. Different from prior work, EulerNet modeled the arbitrary-order feature interactions in a complex vector space by conducting space mapping according to Euler's formula. In EulerNet, the exponential powers of feature interactions were converted into simple linear combinations of the modulus and phase of the complex features, enabling it to adaptively learn arbitrary-order feature interactions in an efficient way. Furthermore, EulerNet integrated the implicit and explicit feature interactions into a unified architecture, which can achieve mutual enhancement and largely boost the model's capability. As the major contribution, we proposed conducting feature interaction learning in the complex vector space, which provides a way to enhance the representation capability of models and to promote feature interaction learning in this area.
As future work, we consider incorporating the user behavior features into our method, and further explore the use of attention mechanism in the complex vector space to capture more informative correlations for various recommendation tasks.
###### Acknowledgements.
This work was partially supported by National Natural Science Foundation of China under Grant No. 62222215 and 62102038, Beijing Natural Science Foundation under Grant No. 4222027, and Beijing Outstanding Young Scientist Program under Grant No. BJJWZYJH012019100020098.
Figure 6. Impact of the interaction layer number.
Figure 7. Impact of the number of order vectors. |
2303.15816 | Physical model of end-diastolic and end-systolic pressure-volume
relationships of a heart | Left ventricular (LV) stiffness and contractility, characterized by the
end-diastolic and end-systolic pressure-volume relationships (EDPVR & ESPVR),
are two important indicators of the performance of the human heart. Although
much research has been conducted on EDPVR and ESPVR, no model with physically
interpretable parameters combining both relationships has been presented,
thereby impairing the understanding of cardiac physiology and pathology. Here,
we present a model that evaluates both EDPVR and ESPVR with physical
interpretations of the parameters in a unified framework. Our physics-based
model fits the available experimental data and in silico results very well and
outperforms existing models. With prescribed parameters, the new model is used
to predict the pressure-volume relationships of the left ventricle. Our model
provides a deeper understanding of cardiac mechanics and thus will have
applications in cardiac research and clinical medicine. | Yunxiao Zhang, Moritz Kalhöfer-Köchling, Eberhard Bodenschatz, Yong Wang | 2023-03-28T08:39:52Z | http://arxiv.org/abs/2303.15816v1 | # Physical model of end-diastolic and end-systolic pressure-volume relationships of a heart
###### Abstract
Left ventricular (LV) stiffness and contractility, characterized by the end-diastolic and end-systolic pressure-volume relationships (EDPVR & ESPVR), are two important indicators of the performance of the human heart. Although much research has been conducted on EDPVR and ESPVR, no model with physically interpretable parameters combining both relationships has been presented, thereby impairing the understanding of cardiac physiology and pathology. Here, we present a model that evaluates both EDPVR and ESPVR with physical interpretations of the parameters in a unified framework. Our physics-based model fits the available experimental data and in silico results very well and outperforms existing models. With prescribed parameters, the new model is used to predict the pressure-volume relationships of the left ventricle. Our model provides a deeper understanding of cardiac mechanics and thus will have applications in cardiac research and clinical medicine.
Cardiac mechanics End-diastolic pressure-volume relationship End-systolic pressure-volume relationship Left ventricle Physics-based model
## 1 Introduction
A well-functioning heart is critical to the quality of human life [1]. The pump function of the heart can be captured by the pressure-volume (PV) loop, which is a simple and useful framework for analyzing cardiac mechanics from a physical perspective [2]. Deoxygenated blood is pumped from the right ventricle to the lungs, and in turn, oxygenated blood is pumped from the left ventricle (LV) to the rest of the body. Because the LV is physically subjected to more stress and strain than the right ventricle, the LV is more susceptible to cardiac disease. As a result, cardiac research focuses heavily on the LV.
As shown in Fig. 1, exemplary for a PV loop, the lower right point (point 1) indicates the end-diastolic (ED) state of the LV. Varying ED filling pressures yields a change in ED volume. For a heart, these data points fall roughly on a single curve which is termed the end-diastolic pressure-volume relationship (EDPVR) [3]. The EDPVR is widely used to estimate the mechanical property of myocardium [4, 5, 6, 7]. The upper left point (point 3) on the PV loop indicates the end-systolic (ES) state of the LV, and the related curve is the end-systolic pressure-volume relationship (ESPVR). The ESPVR and its slope are generally used to describe the contractility of the heart. In this work we present a physics-based model for both the EDPVR and ESPVR.
The EDPVR comprises a number of important markers used by both researchers and clinicians in health assessments. Many studies have shown that the EDPVR has a strong association with heart diseases. Despite its long history, the EDPVR continues to gain increased attention. Goto et al. [8] studied the effects of right ventricular ischemia on LV EDPVR in canine hearts. A leftward and upward shift in the LV EDPVR was observed with no change in
LV myocardial performance. Brinke et al. [4] predicted the LV EDPVR in patients with end-stage heart failure (LV ejection fraction \(<40\%\)) using single-beat estimation and concluded that such method facilitated less invasive EDPVR estimation. Schwarzl et al. [9] showed that, due to LV remodeling, the EDPVR was shifted rightward and leftward in heart failures with reduced ejection fraction and heart failure with preserved ejection fraction, respectively, compared with the reference case. In addition, the risk of heart failure for non-heart failure individuals was found to be associated with the changes in LV capacitance and stiffness, which can be extracted from the EDPVR. Witzenburg and Holmes [2] stated that EDPVR contains information not only about the mechanical properties of the myocardium but also about LV geometry. Since cardiac diseases alter the shape or stiffness of the heart and thus the EDPVR, the EDPVR is important and helpful to clinicians.
Despite numerous experimental and clinical studies on the EDPVR, there is relatively little theoretical knowledge, especially on the formulation of the corresponding curves. One commonly used method is to fit the EDPVR to an exponential form [10, 11, 12, 13],
\[P_{ED}=A(e^{B(V_{ED}-V_{0})}-1), \tag{1}\]
where \(P_{ED}\) is the end-diastolic pressure (EDP); \(V_{ED}\) is the end-diastolic volume (EDV); \(A\) and \(B\) are fitting parameters; \(V_{0}\) is the reference volume when the ventricular pressure of the LV is zero. The exponential term in Eq. 1 is to reflect the exponential stress-strain relationship of the myocardial mechanical property. Its nonlinearity reflects the fact that diastolic stiffness steadily increases with loading [4].
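As a quick illustration of how Eq. 1 is used in practice, the sketch below fits the exponential EDPVR to synthetic pressure-volume points with `scipy.optimize.curve_fit`. The data values and the stress-free volume are invented for this example and are not taken from any dataset discussed in this paper.

```python
import numpy as np
from scipy.optimize import curve_fit

V0 = 50.0                                            # assumed stress-free volume (ml)
V_ed = np.array([60.0, 80.0, 100.0, 120.0, 140.0])   # hypothetical end-diastolic volumes (ml)
P_ed = 1.5 * (np.exp(0.03 * (V_ed - V0)) - 1.0)      # synthetic EDPs (mmHg) generated from Eq. (1)

def exp_edpvr(V, A, B):
    """Exponential EDPVR of Eq. (1)."""
    return A * (np.exp(B * (V - V0)) - 1.0)

(A_fit, B_fit), _ = curve_fit(exp_edpvr, V_ed, P_ed, p0=(1.0, 0.02))
print(A_fit, B_fit)   # should recover A = 1.5 mmHg and B = 0.03 /ml for this noiseless synthetic data
```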
Klotz et al. [14] suggested that the EDPVR can be non-dimensionalized so that all values for different species, being dog, rat, or human, fall closely on a single curve, called the Klotz curve,
\[P_{ED}=A_{n}V_{n}^{B_{n}},\text{ with }V_{n}=\frac{V-V_{0}}{V_{30}-V_{0}}, \tag{2}\]
where \(A_{n}\) and \(B_{n}\) are fitting parameters; \(V_{n}\) is the non-dimensionalized volume; \(V_{0}\) and \(V_{30}\) are the ventricular volumes when the ventricular pressures are \(0\,\mathrm{mmHg}\) and \(30\,\mathrm{mmHg}\), respectively. The Klotz curve serves as a reference in some cardiovascular studies. In Nordsletten et al.'s study on human left ventricular diastolic and systolic function [15], the Klotz curve served as a reference to validate the numerical data. Hadjicharalombous et al. [16] took the Klotz curve as a matching target when evaluating the initial parameter set for 3D tagged MRI. Although widely used, the Klotz curve is an ad-hoc empirical function describing the EDPVR, and does not have physical justification. Furthermore, it shows poor agreement with the experimental data and simulation data at small volumes [7, 14].
Figure 1: **Illustration of the PV loop, ESPVR, and EDPVR of the LV.** The EDPVR and ESPVR are highlighted in red and the PV loop is in blue. The arrows on the PV loop correspond to the direction of the LV beating cycle. Curve 3-4-1 corresponds to the diastolic phase. At point 1, the ventricular volume reaches its maximum value as the blood fills into the LV, which is called the EDV. Similarly, curve 1-2-3 corresponds to the systolic phase. At point 3, the ventricular volume reaches its minimum value during LV contraction, which is referred to as ESV. The difference between EDV and ESV is the SV, indicating the amount of blood pumped by the LV per cardiac cycle. At different filling pressure and contractility, points 1 and 3 move on a single curve called the EDPVR and ESPVR, respectively. (EDPVR: end-diastolic pressure-volume relationship; ESPVR: end-systolic pressure-volume relationship; EDV: end-diastolic volume; ESV: end-systolic volume; SV: stroke volume.)
Besides the exponential model and the Klotz curve, other forms of fitting of the EDPVR can be found in the literature [13]. These fittings of different orders are more mathematical in nature and do not have sufficient physical implications. Thus, a deeper understanding of the EDPVR and its interaction with myocardial properties and cardiac disease warrants a physical model derived directly from the fundamentals of cardiac mechanics.
Although the EDPVR and ESPVR share common mechanisms, they have mostly been studied separately. In a highly idealized picture, the ESPVR is assumed to be linear and can be fitted with \(P_{ES}=E_{ES}(V_{ES}-V_{0})\)[11, 12, 13, 17]. Therein \(P_{ES}\) and \(V_{ES}\) are the pressure and volume at the ES state, respectively; \(E_{ES}\) is the slope of the curve, thus the ES elastance. In reality, however, with different contractility, the ESPVR is nonlinear, especially over a large volume range [13, 18, 19]. Some other fitting functions, such as the bilinear form [19] and the parabolic form [18], can also be found in the literature. Nakano et al. [20] investigated the nonlinearity of the ESPVR and proposed a contractile index independent of ventricular size. In their work, the LV was mimicked by a thick-walled ellipsoid and the contractile index was used to calculate the wall stress based on the concept of mechanical work. \(P_{ES}\) and \(V_{ES}\) can then be connected with the relationship between wall stress and thickness. Experiments with 25 healthy dogs showed that the proposed contractile index was independent of ventricular size and geometry. Sato et al. [21] measured the ESPVR of rat LV in situ with a catheter and observed contractility-dependent nonlinearity in the ESPVR. Habigt et al. [19] focused on the nonlinearity of the ESPVR and investigated the effect of different loading alterations on the shape of the ESPVR in pig hearts. The bilinear behavior of the ESPVR in their experimental data strengthens the argument that the linear model is only a special case of a nonlinear ESPVR, which strongly supports the physics-based ESPVR model with similar nonlinearity that we present below. A recent review of invasive analysis of the PV relationships in the LV, including both the EDPVR and ESPVR, can be found in Ref. [22].
Here we present a physics-based model that characterises both the EDPVR and the ESPVR. The model uses parameters derived from the properties of the heart under consideration. The physical properties, such as myocardial stiffness, thickness, and contractility, replace the extensive use of otherwise conjectured fitting parameters found in previous works, as discussed above. Section 2 presents the physics-based model. Section 3 offers a discussion of the model, including its validation and its predictions. Section 4 considers the implications and limitations of the model. Finally, a conclusion is given in Section 5.
## 2 The Physics-based Model
### Model Definition and Theory
The schematic of our physics-based model is shown in Fig. 2. Matching the simplicity of the single curve for either EDPVR or ESPVR, the cardiac shape is approximated by a thick-walled sphere. The use of such simplified geometries dates back to the early days of cardiac modeling and can be found still in modern research [23, 24]. The reference geometry \(\Omega_{0}\) of the LV is shown on the left-hand side, whereas the right-hand side depicts the geometry in a deformed state. The inner and outer radius of the sphere for the reference geometry are \(R_{endo}\) and \(R_{epi}\), respectively. The wall thickness is thus \(R_{epi}-R_{endo}\). While in the deformed geometry, the inner and outer radius become \(r_{endo}\) and \(r_{epi}\), correspondingly.
A spherical coordinate system is adopted so that its origin is located at the center of the sphere. The three orthogonal basis vectors of the coordinate system are \((\mathbf{e}_{\mathbf{r}},\mathbf{e}_{\theta},\mathbf{e}_{\varphi})\). A deformation maps a point \(\mathbf{X}\) in the reference geometry \(\Omega_{0}\) to point \(\mathbf{x}\) in the deformed geometry \(\Omega\). For a given point \(\mathbf{X}\) in the reference geometry, the radial coordinate is \(R\), while the corresponding radial coordinate of the point \(\mathbf{x}\) in the deformed geometry is \(r\). The second and third coordinates of the point, \(\theta\) and \(\varphi\), remain unchanged under deformation due to the assumption of centrosymmetry, which will be clarified later. The deformation gradient tensor is defined as
\[\mathbf{F}=\frac{\partial\mathbf{x}}{\partial\mathbf{X}}. \tag{3}\]
Under the spherical coordinate system, it is straightforward to get
\[\mathbf{F}=\text{diag}(\lambda_{\rho},\lambda_{\theta},\lambda_{\varphi}), \tag{4}\]
where \(\lambda_{\rho}\) is the radial strain; \(\lambda_{\theta}\) and \(\lambda_{\varphi}\) are two tangential strains.
We assume that the myocardium is incompressible, resulting in the volumetric strain \(J=det(\mathbf{F})=1\), yielding the relation
\[\lambda_{\rho}\lambda_{\theta}\lambda_{\varphi}=1. \tag{5}\]
The sphere only has expansion and contraction deformations, which means that the points in the domain only have radial displacement. We have the symmetry constraint
\[\lambda_{\theta}=\lambda_{\varphi}. \tag{6}\]
The strain \(\lambda_{\theta}\) can be calculated as the ratio of the perimeter \(l\) of the cross-section through the spherical center (represented by the red dotted circle in Fig. 2) in the deformed geometry to the corresponding perimeter \(L\) in the reference geometry, \(\lambda_{\theta}=l/L\),
yielding
\[\lambda_{\theta}=\frac{r}{R}. \tag{7}\]
Substituting Eqs. 6 and 7 into Eq. 5, we can get the radial strain
\[\lambda_{\rho}=\frac{R^{2}}{r^{2}}. \tag{8}\]
Numerically, the right Cauchy-Green strain tensor \(\mathbf{C}\) is a better choice for solving the balance equation than the deformation gradient tensor \(\mathbf{F}\), since the former is symmetric and positive definite for all points \(\mathbf{X}\in\Omega_{0}\), which reduces computational costs. Said right Cauchy-Green strain tensor is defined as
\[\mathbf{C}=\mathbf{F}^{T}\mathbf{F}. \tag{9}\]
Substituting Eqs. 4 - 6 into Eq. 9, the right Cauchy-Green deformation tensor reads
\[\mathbf{C}=\text{diag}\left(\lambda_{\rho}^{2},\frac{1}{\lambda_{\rho}},\frac {1}{\lambda_{\rho}}\right). \tag{10}\]
The first invariant of the right Cauchy-Green deformation tensor \(I_{1}\) can be expressed as
\[I_{1}=\lambda_{\rho}^{2}+\frac{2}{\lambda_{\rho}}. \tag{11}\]
Due to the incompressibility of the myocardium, the volume between the inner surface and the red dotted spherical surface (see Fig. 2 for reference) stays constant. It follows straightforwardly that
\[R^{3}-R_{\text{endo}}^{3}=r^{3}-r_{\text{endo}}^{3}. \tag{12}\]
The geometrical parameters are further non-dimensionalized by the inner radius \(R_{endo}\) and the ventricular volume \(V_{0}\) of the reference geometry as follows
\[\hat{R}=\frac{R}{R_{\text{endo}}},\hat{r}=\frac{r}{R_{\text{endo}}},\delta= \frac{R-R_{\text{endo}}}{R_{\text{endo}}},\Delta=\frac{R_{\text{epi}}-R_{ \text{endo}}}{R_{\text{endo}}},\hat{V}=\frac{V}{V_{0}}, \tag{13}\]
where \(\Delta\) is the non-dimensionalized thickness; \(V\) is the ventricular volume at deformed geometry. With Eqs. 12 and 13, the non-dimensionalized radius can be expressed as
\[\hat{r}=\left(\hat{R}^{3}+\hat{V}-1\right)^{1/3}. \tag{14}\]
Figure 2: **Schematic diagram of the geometry and its deformation in the physics-based model.** A thick-walled sphere is used to mimic the LV, whose cross section through the sphere center is shown. The reference geometry \(\Omega_{0}\) with inner radius \(R_{endo}\) and outer radius \(R_{epi}\) is given on the left side. The deformed geometry \(\Omega\) with inner radius \(r_{endo}\) and outer radius \(r_{epi}\) is shown on the right. For any point \(\mathbf{X}\) in the reference geometry \(\Omega_{0}\), the corresponding position on the deformed geometry \(\Omega\) is \(\mathbf{x}\), with radial coordinate \(R\) and \(r\), respectively. Centrosymmetric deformation is assumed for the model. endo: endocardium; epi: epicardium.
The total elastic energy \(W\) stored in the myocardium can be calculated by integrating the energy density function \(\Psi\) over the domain \(\Omega_{0}\)
\[W=\int_{\Omega_{0}}\Psi\mathrm{d}\Omega. \tag{15}\]
The sphere experiences an inner pressure \(P\), representing the blood pressure inside the LV. While the outer pressure is set to zero, indicating a free boundary condition. Based on classical mechanics, it is known that any mechanical work \(W\) performed on the sphere due to a given internal pressure \(P\) follows the relation
\[P=\frac{\mathrm{d}W}{\mathrm{d}V}. \tag{16}\]
Substituting Eqs.9 - 15 into Eq.16, we get an important relationship
\[P=-2\int_{0}^{\Delta}\frac{\lambda_{\rho}^{2}}{\hat{r}}\frac{\mathrm{d}\Psi}{ \mathrm{d}\lambda_{\rho}}\mathrm{d}\delta. \tag{17}\]
Therein, the ventricular pressure is expressed as the integral of the energy density function \(\Psi\) over the domain defined by the non-dimensionalized thickness. The EDPVR and ESPVR can be further deducted based on this relationship. Besides such mechanical work approach, another approach based on stress analysis can also be found in Ref. [24].
### End-diastolic Pressure-volume Relationship
The myocardium is considered to be a homogeneous, incompressible, anisotropic, and fiber-reinforced soft material that generates active forces. Inspired by experimental data, several constitutive laws (energy density function \(\Psi\)) for the myocardium have been developed in the last decades, including the orthotropic Holzapfel-Ogden model [25] and our recently developed squared generalized structure-tensor (SGST) models [26], in which the fiber dispersion of the myocardium is taken into account. Considering the microstructure of the myocardium, these constitutive laws contain different contributions of isotropic, fibrous, and laminar structures as well as their coupling, and are used for simulations at tissue or organ level [24, 27, 28].
As a first approximation, by neglecting the anisotropy of the cardiac tissue, the isotropic energy function is considered in this work
\[\Psi=\frac{a}{2b}\left(e^{b(I_{1}-3)}-1\right), \tag{18}\]
where \(a\) and \(b\) are mechanical parameters representing the material property, i.e. the stiffness, which can be obtained from experiments.
Incorporating the constitutive law Eq. 18 into Eq. 17, one gets the passive contribution of the myocardium on the ventricular pressure, implying the EDPVR
\[\boxed{P_{ED}=2a\int_{0}^{\Delta}\frac{1-\lambda_{\rho}^{3}}{\hat{r}}e^{b(I_{1 }-3)}\mathrm{d}\delta} \tag{19}\]
Therein, the input parameters are \(a\), \(b\) and \(\Delta\), while the output is a function indicating the relation between the ventricular volume and the ventricular pressure at the ED state.
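As a minimal numerical sketch of Eq. 19 (our own illustration, independent of the Python code provided as Supplementary Material), the integral over the non-dimensionalized wall can be evaluated with `scipy.integrate.quad`; the default parameter values below are the fitted values \(a=1.15\) kPa, \(b=3.82\), and \(\Delta=0.27\) from Table 1.

```python
import numpy as np
from scipy.integrate import quad

def edpvr_pressure(V_hat, a=1.15, b=3.82, Delta=0.27):
    """End-diastolic pressure (kPa) from Eq. (19) at non-dimensionalized volume V_hat = V/V0."""
    def integrand(delta):
        R_hat = 1.0 + delta                              # non-dimensionalized reference radius
        r_hat = (R_hat**3 + V_hat - 1.0) ** (1.0 / 3.0)  # deformed radius, Eq. (14)
        lam_rho = (R_hat / r_hat) ** 2                   # radial strain, Eq. (8)
        I1 = lam_rho**2 + 2.0 / lam_rho                  # first invariant, Eq. (11)
        return (1.0 - lam_rho**3) / r_hat * np.exp(b * (I1 - 3.0))
    value, _ = quad(integrand, 0.0, Delta)
    return 2.0 * a * value

# Example: end-diastolic pressure at twice the stress-free volume (multiply by ~7.5 for mmHg)
print(edpvr_pressure(2.0))
```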
### End-systolic Pressure-volume Relationship
There are two contributions to the stresses and strains in cardiac muscle tissue: the passive and the active components. As for the passive contribution, the tissue generates resistive stress when it is deformed. This tension contributes to the ventricular pressure of the LV. On the other hand, the tissue actively contracts and the active force generated inside the tissue also contributes to the ventricular pressure. These two contributions can be treated as either additive or multiplicative in the constitutive laws. We assume that the two are additive, resulting in the total pressure \(P=P_{p}+P_{a}\) and the energy function \(\Psi=\Psi_{p}+\Psi_{a}\). The indices \(p\) and \(a\) represent the passive and active parts, respectively.
To model the ESPVR, an active contribution, which indicates the active force generated by the myocardium during the ES state, is added onto the passive contribution Eq. 18. Such an active contribution reads
\[\Psi_{a}=T_{a}\left(\frac{\lambda^{2}}{2}-\lambda_{0}\lambda\right), \tag{20}\]
where \(T_{a}\) is the maximum active stress; \(\lambda_{0}=l_{0}/l_{r}\), with the minimum sarcomere length \(l_{0}=1.58\,\mu\mathrm{m}\) and the rest length \(l_{r}=1.85\,\mu\mathrm{m}\)[29]. The strain of the sarcomere \(\lambda\) is defined as
\[\lambda=\sqrt{\mathbf{C}:\mathbf{H_{a}}}. \tag{21}\]
The active force structure tensor \(\mathbf{H_{a}}\) is defined such that contractile forces act in the tangential plane of the myocardium
\[\mathbf{H_{a}}=\mathbf{I}-\mathbf{e_{r}}\otimes\mathbf{e_{r}}. \tag{22}\]
Substituting Eqs. 21 and 22 into Eq. 20, the energy function for active force reads
\[\Psi_{\mathrm{a}}=T_{a}\left(\frac{1}{\lambda_{\rho}}-\lambda_{0}\sqrt{\frac{2 }{\lambda_{\rho}}}\right). \tag{23}\]
Incorporating Eq. 23 into Eq. 17, we obtain the active contribution of pressure
\[P_{a}=2T_{a}\int_{0}^{\Delta}\frac{1-\lambda_{0}\sqrt{\frac{\lambda_{\rho}}{2} }}{\hat{r}}\mathrm{d}\delta. \tag{24}\]
The pressure at the ES state contains both the passive and active parts. Adding Eq. 24 onto Eq. 19, we get the ESPVR
\[\boxed{P_{ES}=2\int_{0}^{\Delta}\frac{a(1-\lambda_{\rho}^{3})e^{b(I_{1}-3)}+T_{ a}(1-\lambda_{0}\sqrt{\lambda_{\rho}/2})}{\hat{r}}\mathrm{d}\delta}. \tag{25}\]
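The ESPVR of Eq. 25 can be evaluated in the same way as the EDPVR sketch given after Eq. 19, simply by adding the active term to the integrand. Again, this is only an illustrative sketch (not the Supplementary code); the default values of \(T_{a}\) and \(\lambda_{0}\) are the ones quoted later for Fig. 6.

```python
import numpy as np
from scipy.integrate import quad

def espvr_pressure(V_hat, a=1.15, b=3.82, Ta=76.9, lam0=0.85, Delta=0.27):
    """End-systolic pressure (kPa) from Eq. (25) at non-dimensionalized volume V_hat = V/V0."""
    def integrand(delta):
        R_hat = 1.0 + delta
        r_hat = (R_hat**3 + V_hat - 1.0) ** (1.0 / 3.0)  # Eq. (14)
        lam_rho = (R_hat / r_hat) ** 2                   # Eq. (8)
        I1 = lam_rho**2 + 2.0 / lam_rho                  # Eq. (11)
        passive = a * (1.0 - lam_rho**3) * np.exp(b * (I1 - 3.0))
        active = Ta * (1.0 - lam0 * np.sqrt(lam_rho / 2.0))
        return (passive + active) / r_hat
    value, _ = quad(integrand, 0.0, Delta)
    return 2.0 * value

# Example: end-systolic pressure at 60% of the stress-free volume
print(espvr_pressure(0.6))
```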
## 3 Validation and Discussion
### Physics-based Model: EDPVR
To validate the physics-based EDPVR model, we fit different models to the dataset from Refs. [14, 30] and compare them in Fig. 3. The dataset contains ex vivo EDPVR data for 80 human hearts. The fitting was implemented by minimizing the mean squared error (MSE) for each model. Both the full dataset, with pressures up to \(30\,\mathrm{mmHg}\), and a subset covering a physiologically reasonable pressure range (up to \(20\,\mathrm{mmHg}\)) were considered.
As we can see in Figs. 3(a) and (b), the green solid lines representing the physics-based model show a good fit to the experimental data. We compared our model with two other widely used EDPVR models, i.e. the exponential model [10, 11, 12] and the Klotz curve [14]. The Klotz curve (Eq. 2) belongs to the family of polynomial power functions, while the former (Eq. 1) is classified in the form of exponential functions. Our physics-based model entails the combination of an exponential energy function (Eq. 18) with a volume integral, hence resulting in exponential behaviour. It should be noted that during the curve fitting the original exponential model was adapted to the same non-dimensionalized form as the Klotz curve. This was done by replacing the term \(V-V_{0}\) with \(V_{n}\), which yields \(P_{ED}=A(e^{BV_{n}}-1)\).
The optimized parameters for the three models are given in Table 1. Contrary to previous models, the parameters in our model have a physical meaning. For example, \(a\) and \(b\) together reflect the stiffness of the material. The fitted values of \(a\) and \(b\) of our model are close to most values from experiments and simulations in the literature [28, 31, 32], although parameter estimates themselves often vary considerably across different datasets and experimental protocols. These two parameters are exactly the same as those in the isotropic constitutive law (Eq. 18). \(\Delta\) is the non-dimensionalized thickness of the LV wall, which is an important measure of the ventricular geometry.
The resulting curves of the three models in the full dataset are shown in Fig. 3(a). The MSEs of the original exponential model, the Klotz curve, and the physics-based model are 2.82, 3.19, and 3.08, respectively. Since our physics-based model uses an exponential constitutive law, it is fundamentally similar to the original exponential model. This is why the curves of the physics-based model and the original exponential model are almost identical, and both perform better than the Klotz curve. Fig. 3(c) presents the residuals of the exponential model and the Klotz curve, using the physics-based model as a reference. It can be seen that the exponential model is much closer to the physics-based model, especially in the region with small volumes. It is also worth mentioning that the Klotz curve is not consistent with its definition at the maximum volume or pressure. When the non-dimensionalized volume \(V_{n}\) is equal to 1, the resulting pressure in the Klotz curve is the same as the value of \(A_{n}\), which, by design, may be different from the expected \(30\,\mathrm{mmHg}\).
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Exponential model, Eq. 1} & \multicolumn{3}{c|}{Klotz curve, Eq. 2} & \multicolumn{3}{c}{Physics-based model, Eq. 19} \\ & \(A\) (kPa) & \(B\) & MSE & \(A_{n}\) (kPa) & \(B_{n}\) & MSE & \(\Delta\) & \(a\) (kPa) & \(b\) & MSE \\ \hline Full dataset & 0.16 & 3.18 & 2.82 & 3.70 & 2.76 & 3.19 & 0.27 & 1.15 & 3.82 & 3.08 \\ Sub dataset & 0.18 & 3.07 & 2.48 & 3.10 & 2.27 & 2.69 & 0.27 & 2.10 & 8.71 & 2.46 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The least-square fits for the three EDPVR models with respect to the datasets presented in Fig. 3. MSE: Mean Squared Error.
The physics-based model shows its strong utility for small volumes. The non-dimensionalized volume of this region ranges from 0 to 0.8, corresponding to the pressure 0-\(20\,\mathrm{mmHg}\), which covers the EDP of the human heart. To better evaluate these three models within this physiologically reasonable range, we generated a sub-dataset with pressures no more than \(20\,\mathrm{mmHg}\). The results are shown in Figs. 3(b) and (d). Here, the physics-based model shows the best fit with a MSE of 2.46. The MSE for the exponential model is 2.48. The Klotz curve has the worst fit with a MSE of 2.69, indicating its weakness at small ventricular volumes.
The Klotz curve is often used as a reference when estimating material parameters of myocardium, like stiffness, in numerical simulations [33, 34, 35]. These simulations mostly use exponential constitutive laws to describe the mechanical properties of the myocardium. Based on the above-mentioned comparison, the new physics-based model shows the capacity to replace the Klotz curve in similar simulations in the future.
Because of its bottom-up, physical nature, the model can be used to predict the EDPVR of a ventricle with given information, such as mechanical properties and thickness of the myocardium. In Fig. 4(a), Dokos2002 [36] represents the parameters for pig hearts, while Demiray1972 [37] and Marx2022 [38] are for human hearts. We further performed finite element simulations with the same parameter sets. The curves predicted by the physics-based EDPVR model agree with the simulation results excellently, as shown in Fig. 4(b). By changing the thickness \(\Delta\) in our model, we studied how myocardium thickness affects the EDPVR of the LV, as shown in Fig. 5(a). An increased myocardium thickness leads to an upward lift of the EDPVR curve. To keep the same volume, higher pressure is needed when increasing the thickness of the LV, as shown in Fig. 5(b). This reflects the strong utility of a physics-based model over that of a simple fit, i.e., the physics-based model is predictive over a wide range of parameters while a fit is not.
Figure 3: **Comparison between the exponential model (Eq. 1), the Klotz curve (Eq. 2) and the physics-based model (Eq. 19) for the EDPVR. (a,c) The fitted curves to the full experimental dataset and the corresponding residuals. MSEs: exponential model, 2.82; Klotz curve, 3.19; physics-based model, 3.08. (b,d) The fitted curves to the sub-dataset and the corresponding residuals, with pressure no more than \(20\,\mathrm{mmHg}\). MSEs: exponential model, 2.48; Klotz curve, 2.69; physics-based model, 2.46. Model parameters for subfigures (a,c) and (b,d) are listed in Table 1.**
### Physics-based Model: ESPVR
The ESPVR describes the relationship between ventricular pressure and volume at the ES state of the LV as loading conditions change. It is composed of two parts, as shown in Eq. 25. The first part is identical to the EDPVR and the second part comes from the active force generated by the myocardium. The two contributions are shown in Fig. 6(a). Therein, the solid blue line is the overall ESPVR, while the dashed green one is from the active contraction. A large proportion of the pressure in the LV during the contraction is due to the active force generated by the myocardium. The shape of the ESPVR, especially when the non-dimensionalized ESV is less than 1.0, is mainly determined by the passive resistance due to the deformation of the myocardium. In addition, the ESPVR with small volume is roughly linear, while the overall curve is almost bilinear. This implies that both the linear and bilinear forms of the ESPVR used in the literature have some validity. Please also notice that for the passive part of the ESPVR, the pressure is shown as a negative value. Negative pressure means that the tissue is resisting the deformation caused by its own active contraction. The greater the deformation at the ES state, the more negative pressure (resistance stress) is generated, so that the required ventricular pressure is lower. It should be noted that the resistance stress increases in a nonlinear fashion with the decrease of the ESV, due to the nonlinear stress/strain behavior of the tissue. This results in the nonlinear shape of the ESPVR.
Figure 4: **The physics-based model predicts the effect of varying mechanical properties on the EDPVR with in silico validation.****(a)** The EDPVR curves predicted by the physics-based model with different parameter sets. **(b)** In silico validation of the physics-based model. Simulation results agree with the physics-based model very well for each set of parameters. Parameters used in the physics-based model: Klotz2007: \(a=1.15\) kPa, \(b=3.82\); Dokos2002: \(a=2.52\) kPa, \(b=6.79\); Demiray1972: \(a=1.00\) kPa, \(b=6.5\); Marx2022: \(a=1.98\) kPa, \(b=6.19\). For all curves: \(\Delta=0.27\).
Figure 5: **The physics-based model predicts the effect of varying myocardial thickness on the EDPVR.****(a)** The EDPVR curve moves upward with the increase of thickness \(\Delta\), while \(a\) and \(b\) remain constant. **(b)** Nonlinear relationship between the ventricular pressure and the thickness, with constant non-dimensionalized volume 2.0. Parameters used in the physics-based model: \(a=1.15\) kPa and \(b=3.82\).
Our model also shows that the positive slope of the ESPVR is mainly due to the decrease of the resistance stress as ESV increases.
To validate the proposed ESPVR model, we performed an additional finite element simulation. The geometry of the solid region in the simulation was a sphere with dimensionless inner radius 1.0 and wall thickness 0.27. The inner surface was subjected to a constant pressure mimicking the ventricular pressure from blood, while the outer surface was free. In order to enforce incompressibility of the myocardium, we employed a penalty function \(\Psi_{v}=\kappa(J^{2}-1)/2-\log(J)\). The bulk modulus \(\kappa\) was \(1\,\mathrm{GPa}\); \(J=\text{det}(\mathbf{F})\) was the volumetric strain. The active force generated by the myocardium was determined by the energy function Eq. 20. For the passive response of the myocardium, the energy function was chosen according to Eq. 18. Parameters used in both the physics-based ESPVR model and the numerical simulation are \(a=1.15\,\mathrm{kPa}\), \(b=3.82\), \(\lambda_{0}=0.85\), and \(T_{a}=76.90\,\mathrm{kPa}\). In Fig. 6(b), the orange dots present simulation results while the blue solid line is from the physics-based ESPVR model. In a large region of the non-dimensionalized volume, the ESPVR predicted by the physics-based model agrees with the simulation results very well.
To further check the validity of our physics-based ESPVR model, we compared it with experimental ESPVR data in Fig. 7. The experimental data was extracted from Ref. [19], in which the afterload pressure was varied. The stress-free volume of the LV is \(54\,\mathrm{ml}\). The parameters used in our physics-based ESPVR model are: \(\Delta=0.27\), \(a=2.52\,\mathrm{kPa}\), \(b=6.79\), \(\lambda_{0}=0.85\), and \(T_{a}=85\,\mathrm{kPa}\). Our model shows good agreement with the experimental data.
## 4 Implications and Limitations
Adjusting the parameters of the physics-based model allows the study of left ventricular diseases, such as left ventricular hypertrophy, decreased contractility, and diastolic heart failure. For example, for patients with hypertrophic cardiomyopathy, the heart wall thickens to maintain pump function. This effect is easily visualized with our physics-based model (see Fig. 5). In addition, the passive parameters \(a\) and \(b\) can be manipulated to better understand such disease.
With the physics-based model, one can generate both the EDPVR and ESPVR for the same LV in the same plot, as shown in Fig. 8. With given information such as pressure or volume in the ED and ES states, the PV loop will be determined. Therefore, indicators of pump function, such as the stroke volume (SV) and ejection fraction (SV/EDV) can also be calculated. Furthermore, considering new therapies for heart failure, such as engineered muscle tissue [39, 40], Fig. 8 also shows that increasing wall thickness by implanting contractile tissue patch will increase the pump function of the LV. This is because when the wall thickness is increased, the EDPVR hardly changes, while the ESPVR is lifted to the upper left.
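As a usage illustration, and reusing the `edpvr_pressure` and `espvr_pressure` sketches given with Eqs. 19 and 25, the PV-loop corner points and the derived pump-function indicators can be obtained by simple root finding. The loading pressures below are assumed values chosen for the example, not measurements from this paper.

```python
from scipy.optimize import brentq

# Assumed loading conditions for illustration only:
P_ed_target = 1.3   # end-diastolic filling pressure, kPa (~10 mmHg)
P_es_target = 16.0  # end-systolic pressure, kPa (~120 mmHg)

# Solve the EDPVR and ESPVR sketches for the non-dimensionalized EDV and ESV
V_ed = brentq(lambda V: edpvr_pressure(V) - P_ed_target, 1.001, 5.0)
V_es = brentq(lambda V: espvr_pressure(V) - P_es_target, 0.5, 5.0)

SV = V_ed - V_es   # stroke volume in units of the stress-free volume V0
EF = SV / V_ed     # ejection fraction
```

Varying \(\Delta\), \(a\), \(b\), or \(T_{a}\) in these calls offers a quick way to explore how geometry and tissue properties shift the computed SV and EF.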
Our physics-based model provides even more latitude by combining pressure, volume, shape, active force, and mechanical properties of the LV into a unified framework. We notice that 3D printed artificial hearts [41, 42] or ventricles [43] are attracting increased attention recently, bringing new opportunities for the treatment of heart diseases. The physics-based model proposed in this work can be used to guide 3D printing. For example, with certain mechanical properties and the targeted pump function, our model can predict the required thickness of the heart chamber. For a
Figure 6: **The intrinsic structure and in silico validation of the physics-based ESPVR model.** (a) The pressure in the ESPVR has two contributions, one from the active contraction and the other from the passive resistance of the tissue. The overall level of pressure is mainly determined by the active stress. The shape, especially in the lower ventricular volume region from 0.5 to 0.7, is mostly influenced by passive resistance. (b) The theoretical ESPVR curve fits the simulation results very well. The parameters for both the theoretical curve and the simulation are \(\Delta=0.27\), \(a=1.15\,\mathrm{kPa}\), \(b=3.82\), \(\lambda_{0}=0.85\), and \(T_{a}=76.9\,\mathrm{kPa}\).
given ventricular pressure \(p\) for which the artificial heart is designed to experience, there is a threshold that the active force \(T_{a}\) must exceed, which can also be obtained from our model.
Furthermore, considering the dynamic cardiac cycle, the ratio of active force \(T_{a}\) to ventricular pressure \(p\) gradually increases from diastole to systole. When this ratio exceeds a certain threshold, the LV starts to contract, which means that the volume of the LV is smaller than the stress-free volume \(V_{0}\). This stress-free or pressure-free geometry is widely used in heart simulations. Our model shows that this threshold is only related to the LV thickness and with this value one can obtain the stress-free volume and the associated reference time.
Last but not least, by reducing \(b\) to zero, the model reduces to a rubber spherical shell of neo-Hookean material. The elastic instabilities [44] of spherical inflation can also be reproduced with our model, revealing possible applications of our model beyond the heart.
We are aware that our reductionistic approach cannot fully describe cardiac mechanics. Yet, this simplicity makes it a powerful tool to support comprehensive simulations and diagnostics. The ventricular geometry is not perfectly spherical, and the myocardium is a layered, fiber-reinforced soft material with rotated and dispersed fibers [26, 45, 46, 47, 48].
Figure 8: **Effect of wall thickness on the EDPVR and ESPVR predicted by the physics-based model. \(\Delta_{0}=0.27\)** is the LV wall thickness of a healthy human heart and is therefore considered here as a reference case. The change in wall thickness has a significant effect on the ESPVR, but a relatively small effect on the EDPVR. Parameters used in the physics-based model: \(a=1.15\,\mathrm{kPa}\), \(b=3.82\), \(\lambda_{0}=0.85\), and \(T_{a}=76.9\,\mathrm{kPa}\).
Figure 7: **Comparison of the physics-based model and experimental data for the ESPVR. The physics-based ESPVR model is shown in blue. The experimental data, shown as orange crosses, is from Ref. [19]. The volume of the original data is non-dimensionalized, assuming stress-free volume \(V_{0}=54\,\mathrm{ml}\). Parameters used in the physics-based model: \(\Delta=0.27\), \(a=2.52\,\mathrm{kPa}\), \(b=6.79\), \(\lambda_{0}=0.85\), and \(T_{a}=85\,\mathrm{kPa}\).**
If all these factors are to be considered, the complexity of the model increases significantly. For such a complex situation, it is recommended to use numerical simulations rather than theoretical models. If only the overall pump function and the pressure-volume relationship of the left ventricle are to be considered, the current simplifications are believed sufficient.
An easy-to-use Python code, for both the physics-based EDPVR and ESPVR, is provided as Supplementary Material.
## 5 Conclusions
To conclude, we proposed a bottom-up physics-based model incorporating the EDPVR and ESPVR of the LV. The two contributions in this model show the sources of pressure in the end-diastolic and end-systolic states. The model fits existing experimental data well and shows good agreement with simulation results. The model has been shown to be suitable for evaluating LV stiffness and contractility. Conversely, the model can predict the EDPVR and ESPVR of the LV based on the parametric and geometric information of the myocardium. It can also be used to study the relationship between the mechanical properties of the LV and its pump function. The proposed model might provide insight into the study of cardiac mechanisms and be used in clinical medicine.
## Author contributions
YXZ and MKK contributed equally. MKK proposed the model for the EDPVR. YXZ proposed the model for the ESPVR. YXZ, MKK and YW performed data analysis, and drafted the manuscript. YXZ, MKK, EB and YW reviewed the manuscript.
## Acknowledgements
This work was supported by the Max Planck Society and the German Center for Cardiovascular Research. We thank Wolfram Zimmermann and Tim Meyer for their continuous support and inspiring scientific discussions.
## Supplementary material
A code of the physics-based model is provided online.
|
2306.16519 | Astreaks: Astrometry of NEOs with trailed background stars | The detection and accurate astrometry of fast-moving near-Earth objects
(NEOs) has been a challenge for the follow-up community. Their fast apparent
motion results in streaks in sidereal images, thus affecting the telescope's
limiting magnitude and astrometric accuracy. A widely adopted technique to
mitigate trailing losses is non-sidereal tracking, which transfers the
streaking to background reference stars. However, no existing publicly
available astrometry software is configured to detect such elongated stars. We
present Astreaks, a streaking source detection algorithm, to obtain accurate
astrometry of NEOs in non-sidereal data. We validate the astrometric accuracy
of Astreaks on 371 non-sidereally tracked images for 115 NEOs with two
instrument set-ups of the GROWTH-India Telescope. The observed NEOs had V-band
magnitude in the range [15, 22] with proper motion up to
140$^{\prime\prime}$/min, thus resulting in stellar streaks as high as
6.5$^\prime$ (582 pixels) in our data. Our method obtained astrometric
solutions for all images with 100% success rate. The standard deviation in
Observed-minus-Computed (O-C) residuals is 0.52$^{\prime\prime}$, with O-C
residuals <2$^{\prime\prime}$(<1$^{\prime\prime}$) for 98.4% (84.4%) of our
measurements. These are appreciable, given the pixel scale of
$\sim$0.3$^{\prime\prime}$ and $\sim$0.7$^{\prime\prime}$ of our two instrument
set-ups. This demonstrates that our modular and fully-automated algorithm helps
improve the telescope system's limiting magnitude without compromising
astrometric accuracy by enabling non-sidereal tracking on the target. This will
help the NEO follow-up community cope with the accelerated discovery rates and
improved sensitivity of the next-generation NEO surveys. Astreaks has been made
available to the community under an open-source license. | Kritti Sharma, Harsh Kumar, Harsh Choudhary, Varun Bhalerao, Vishwajeet Swain, Bryce Bolin, G. C. Anupama, Sudhanshu Barway, Simran Joharle, Vedant Shenoy | 2023-06-28T19:22:00Z | http://arxiv.org/abs/2306.16519v1 | # Astreaks: Astrometry of NEOs with trailed background stars
###### Abstract
The detection and accurate astrometry of fast-moving near-Earth objects (NEOs) has been a challenge for the follow-up community. Their fast apparent motion results in streaks in sidereal images, thus affecting the telescope's limiting magnitude and astrometric accuracy. A widely adopted technique to mitigate trailing losses is non-sidereal tracking, which transfers the streaking to background reference stars. However, no existing publicly available astrometry software is configured to detect such elongated stars. We present Astreaks, a streaking source detection algorithm, to obtain accurate astrometry of NEOs in non-sidereal data. We validate the astrometric accuracy of Astreaks on 371 non-sidereally tracked images for 115 NEOs with two instrument set-ups of the GROWTH-India Telescope. The observed NEOs had V-band magnitude in the range [15, 22] with proper motion up to 140\({}^{\prime\prime}\)/min, thus resulting in stellar streaks as high as 6.5\({}^{\prime}\) (582 pixels) in our data. Our method obtained astrometric solutions for all images with 100% success rate. The standard deviation in Observed-minus-Computed (O-C) residuals is 0.52\({}^{\prime\prime}\), with O-C residuals <2\({}^{\prime\prime}\) (<1\({}^{\prime\prime}\)) for 98.4% (84.4%) of our measurements. These are appreciable, given the pixel scale of \(\sim\)0.3\({}^{\prime\prime}\) and \(\sim\)0.7\({}^{\prime\prime}\) of our two instrument set-ups. This demonstrates that our modular and fully-automated algorithm helps improve the telescope system's limiting magnitude without compromising astrometric accuracy by enabling non-sidereal tracking on the target. This will help the NEO follow-up community cope with the accelerated discovery rates and improved sensitivity of the next-generation NEO surveys. Astreaks has been made available to the community under an open-source license.
keywords: techniques: image processing - software: data analysis - astrometry - minor planets, asteroids: general - planets and satellites: detection
## 1 Introduction
Small Solar System bodies are remnants of the formation stage of the Solar System. These bodies encompass small natural objects like Near-Earth Objects (NEOs), main-belt asteroids, trans-Neptunian objects, and various other smaller groups of asteroids and comets. Their properties, such as size, shape, rotation and surface composition, are the result of collisions and dynamical evolution that has led to their formation. Asteroid science encompasses studies ranging from formation mechanisms, population, collisional evolution, orbital dynamics, compositional properties and physical mechanisms such as Yarkovsky and YORP effects (Michel et al., 2015). Out of all small solar system bodies, NEOs are of particular interest to the planetary science community, not only from a scientific perspective but also because of the hazardous consequences of their impacts on civilization (Perna et al., 2013). Planetary defence missions aiming to devise efficient mitigation strategies critically depend on timely detection and accurate knowledge of the orbits and physical properties of these potentially hazardous asteroids (Reddy, 2022; Nakano et al., 2022). The Double Asteroid Redirection Test (DART) mission is a planetary defence-driven test of technologies for preventing an asteroid's impact on Earth, aimed at demonstrating the kinetic impactor technique for changing the motion of the moonlet of asteroid (65803) Didymos (Rivkin et al., 2021; Naidu et al., 2020; Thomas et al., 2023; Terik Daly et al., 2023). Several survey telescopes scan the night sky daily and report NEO candidates to the Minor Planet Center (MPC). With the current ground-based facilities, NEOs are typically discovered at a distance of less than 1 AU (Jedicke et al., 2016). At this distance, their apparent rate of motion can be high, thus challenging their follow-up and recovery.
Most of the telescope facilities operate in the sidereal mode during their regular operations. The high apparent motion of NEOs with respect to the far-situated background astrophysical objects results in a streak in the astronomical images. The quest of discovering and characterizing the NEOs is steered by robotic survey telescopes
with wide fields of view (Seaman et al., 2021). These surveys detect NEOs by their apparent motion between successive exposures and submit the NEO candidates to the MPC. Usually, this discovery data consists of at least two detections, known as a "tracklet" (Kubica et al., 2007). The short observation arcs from the discovery data result in high uncertainties in the preliminary orbit estimate, which could lead to hundreds of arcseconds of uncertainties in the sky positions within a few hours after the discovery for the fastest, nearby, objects. Therefore, well-timed subsequent follow-up observations by meter-class telescopes like GROWTH-India Telescope (GIT; Kumar et al., 2022), with relatively wide fields of view are needed to affirm the candidacy of an NEO.
As discussed, due to sensitivity limitations of the current survey facilities, most of the candidates are discovered at a distance \(\lesssim 1\) AU on their discovery apparition (Jedicke et al., 2016). This results in a high apparent motion \(\gtrsim 10\arcsec\)/min that degrades the signal-to-noise ratio (SNR) of such candidates as the photons spread over a larger number of pixels in the form of a streak (Shao et al., 2014). Bright candidates create bright streaks that are detectable in such images, but fainter objects get blended into the background, causing a reduction in the detection limit of these objects compared to other sidereal targets. Furthermore, most of the NEO discovery engines like Zwicky Transient Facility (ZTF; Bellm et al., 2018), Catalina Sky Survey (CSS; Christensen et al., 2018), Asteroid Terrestrial-impact Last Alert System (ATLAS; Tonry et al., 2018) and Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al., 2016) are located in North America and Hawaii island, which means that there are large geographical gaps between the discovery system and follow-up systems, typically in North and South America. Together, these factors make the recovery of NEOs in sidereally-tracked data quite challenging.
The 70-cm fully-robotic GIT was set up as a part of the international collaboration "Global Relay of Observatories Watching Transients Happen" (GROWTH; Kasliwal et al., 2019). GIT (MPC Observatory Code: N51), located at the Indian Astronomical Observatory, Hanle-India, has a 16.8-megapixel sensor which provides a large field of view of 0.5 deg\({}^{2}\), thus making it an excellent tool of the trade for NEO follow-up campaigns. Furthermore, its geographical location at Hanle is on the opposite side of Earth to the major NEO discovery engines, allowing us to observe the candidates before the positional uncertainties blow up to unmanageable scales (Sharma et al., 2021).
The next generation NEO survey programs like NEO Surveillance Mission (NEOSM; Grav et al., 2020) and Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST; Vera C. Rubin Observatory LSST Solar System Science Collaboration et al., 2021) are expected to increase the number of NEOs discoveries with absolute magnitude, H-22 mag by 26% (corresponding discovery rate \(\sim 2\times\)) against the existing surveys over a period of 10 years of its planned operations (Jones et al., 2018), along with the need of deep limiting magnitude follow-up \(r\sim 24.5\) mag (Chesley and Veres, 2017). As typical depth probed by small telescopes in a reasonable sidereal exposure is around \(V\sim 22\) mag, LSST and NEOSM together will increase the depth requirements of follow-up facilities by couple of magnitudes (Seaman et al., 2021). This indicates an emerging need to improve the limiting magnitude of the follow-up telescope campaigns of such fast-moving NEOs. The NEO follow-up community can mitigate the damaging effects of trailing losses by tracking at the rate of NEO's apparent motion instead of the stars with existing facilities, thus preserving the point spread function of the NEO (Veres et al., 2012). This non-sidereal tracking improves the system's limiting magnitude for fast-moving objects by transferring the trailing to background reference stars. However, the astrometric measurement with the trailing stars is a challenging task with existing software that assume largely symmetric point spread functions. These software are not designed to obtain astrometry with elongated reference stars. Moreover, most of these software require human intervention to perform various tasks, which will be a bottleneck in the follow-up campaigns with the advent of the next-generation survey telescopes. Therefore, in order to complement the self-follow-up strategies of these surveys, we look forward to robust astrometry techniques to support the improvement in NEO orbital catalogues.
Here, we present a novel source detection algorithm, Astreaks -- Astrometry with Streaking Stars, to obtain an astrometry solution for astronomical images with elongated reference stars1. Developed for GIT, Astreaks achieves the goal of accurate astrometry using the image segmentation technique, implemented using publicly available astronomy python packages. The pipeline has been validated on two different instrument set-ups and achieves sub-arcsecond astrometric accuracy. The operations of our fully-automated pipeline can be easily transferred to any other telescope system due to its modular nature with minor modifications in the configuration file. This article is organized as follows. In SS 2, we review the loss in astrometric accuracy and limiting magnitudes due to trailing losses, existing methods to recover these trailing losses, and the benefits of non-sidereal tracking. We elaborate on the analysis framework of our elongated source detection algorithm and validate the astrometric accuracy of Astreaks in SS 3. Finally, we conclude with a summary and future outlook in SS 4.
Footnote 1: This work was also presented at the European Planetary Science Congress 2021 (Sharma et al., 2021).
## 2 Sidereal & Non-sidereal Observations
Sidereal observations are those in which we track the motion of the stars across the sky. To perform observations in sidereal mode, the telescope motion is corrected for the motion of the stars around the north celestial pole. This helps us keep a star's image focused on the same group of pixels even as the stars move around the pole. However, solar system objects usually have a different sky plane motion compared to the stars. Therefore, unlike stars, these objects do not result in the typical 2D Gaussian profile in an image. The minor planets are an example of such objects which leave trails in sidereally-tracked exposures due to their significant apparent rate of motion. The light from these objects spreads over many pixels along their motion, causing streaks in the images (Figure 1). These streaks result in a decrease in the signal-to-noise ratio of the source, which leads to two effects - a decrease in the detection sensitivity and a decrease in astrometric accuracy for such objects.
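For a rough feel of the trailing involved, the helper below estimates the trail length and aspect ratio from the apparent rate, exposure time, and seeing. It is a back-of-the-envelope sketch with assumed conventions (trail length taken as rate times exposure, trail width set by the seeing) rather than a calculation from this paper, and the example numbers are hypothetical.

```python
def trail_geometry(rate_arcsec_per_min, exposure_s, seeing_arcsec, pixel_scale_arcsec):
    """Rough trail geometry of a moving object in a sidereally tracked exposure."""
    length = rate_arcsec_per_min * exposure_s / 60.0   # trail length in arcsec
    return {
        "length_arcsec": length,
        "length_pix": length / pixel_scale_arcsec,
        "aspect_ratio": (length + seeing_arcsec) / seeing_arcsec,
    }

# Example: a 140"/min NEO in a 30 s exposure at 0.7"/pixel with 2.5" seeing
print(trail_geometry(140.0, 30.0, 2.5, 0.7))
```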
As discussed above, the trailing of minor planets in the sidereal image spreads the flux from the object over a larger area. This causes a reduction in the target's apparent magnitude per unit area and hence signal-to-noise ratio (Krugly, 2004). To demonstrate these effects, we observed the NEO, 2000 NM in sidereal as well as non-sidereal mode, where we tracked the motion of the NEO rather than the stars in the background. We conducted these observations on 2022-08-04 at 16:31:57.0 UTC, where we took 5 minute-long sidereal and non-sidereal exposures. Figure 1 shows the trailing loss in sidereal exposure for the NEO when compared against its non-sidereal exposure. The target was moving at an apparent rate of 4.01\(\arcsec\)/min, and the typical seeing during our observations was 2.65\(\arcsec\). Therefore, the target streaked in the sidereal exposure with an aspect ratio of
\(\ell/w\sim 3.8\). The signal-to-noise ratio of the target's streak in sidereal exposure is 13.0. A similar measurement on the non-sidereal exposure for the target yields a signal-to-noise ratio of 16.4. Therefore, we observe a \(\sim 20\%\) loss in the signal-to-noise ratio of the target in the sidereal exposure compared to non-sidereal exposure. This loss of signal-to-noise ratio of the trailed object reduces the limiting magnitude of the system for such fast-moving asteroids, thus reducing the probability of detection for these NEOs in sidereally-tracked astronomical images (Rabinowitz, 1991).
Along with the decrease in the limiting magnitude of the system, these streaks in the sidereal observations also affect the astrometric measurements. Astrometric measurements in images with significant elongations require non-trivial techniques like "trail fitting" (Veres et al., 2012). Furthermore, it has been tested that the astrometric accuracy of the trail-fitting algorithm is compromised at low SNR (Veres et al., 2012). Thus, the reduced SNR of the target in sidereally-tracked astronomical images results in increased uncertainty in astrometric measurements of the streaked NEOs. This loss in astrometric accuracy due to trailing losses is even more important for fast-moving NEOs in the discovery algorithm since their recovery is dependent on accurate astrometry.
Over the years, several image processing techniques have been proposed to recover the target and improve both astrometry and photometry of the target in sidereally-tracked data. The working principle of a majority of these is based on the "shift-and-add" technique to enhance sensitivity (Cochran et al., 1995; Tyson et al., 1992; Parker and Kavelaars, 2010). In this technique, we acquire several sidereally-tracked short exposures of the field, which are then shifted based on the known velocity vector of the minor planet and co-added to improve its signal-to-noise ratio. We illustrate a basic implementation of this technique in Figure 2 on sidereally-tracked data for Comet C/2021 A4 (NEOWISE; Mainzer et al., 2021). These observations were conducted at the discovery apparition of this comet and were consequently submitted to MPC. The comet had a proper motion of \(1.33\arcsec\)/min at a position angle of \(342.9^{\circ}\) during our observations. The average signal-to-noise ratio in five individual sidereally-tracked images is \(\sim\)25, which is amplified to \(\sim\)88 in the stacked image. This improved signal-to-noise ratio enables us to recover faint targets, overcome trailing loss, and confidently detect cometary activity. However, this technique is sub-optimal for fast-moving objects due to its requirement of astrometry solutions for individual images. Such objects require short exposure times to retain their Gaussian point spread function, thus resulting in fewer reference stars in the field to obtain astrometry solutions. Furthermore, short exposure times cause a significant loss in observing efficiency due to multiple readout cycles. Moreover, this technique requires the exposure to be long enough to detect the faint objects in individual images. This results in measurable elongation in the point spread function of the NEO, thus compromising the astrometric accuracy.
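To make the shift-and-add idea concrete, the sketch below co-adds a stack of short exposures after shifting each frame according to the target's predicted motion. It is only a minimal illustration of the principle, not the reduction used for the Comet C/2021 A4 data: the frame values, rate, position angle, and pixel scale are hypothetical, and integer-pixel shifts are used to keep the example dependency-free.

```python
import numpy as np

def shift_and_add(frames, times_min, rate_arcsec_per_min, pa_deg, pixel_scale):
    """Co-add sidereally tracked frames along the target's predicted sky-plane motion.

    frames:    list of 2-D arrays (same shape), one per short exposure
    times_min: mid-times of the exposures in minutes, relative to the first frame
    """
    stack = np.zeros_like(frames[0], dtype=float)
    pa = np.deg2rad(pa_deg)
    for frame, t in zip(frames, times_min):
        # Offset of the target since the first frame, converted to pixels.
        shift_pix = rate_arcsec_per_min * t / pixel_scale
        dy = -shift_pix * np.cos(pa)   # rows ~ declination (sign convention is a choice)
        dx = -shift_pix * np.sin(pa)   # columns ~ right ascension
        # Integer-pixel shifts for simplicity; a real pipeline would resample sub-pixel shifts.
        stack += np.roll(np.roll(frame, int(round(dy)), axis=0), int(round(dx)), axis=1)
    return stack

# Hypothetical usage with five simulated noise frames:
rng = np.random.default_rng(0)
frames = [rng.normal(100.0, 5.0, (128, 128)) for _ in range(5)]
stacked = shift_and_add(frames, times_min=[0, 1, 2, 3, 4],
                        rate_arcsec_per_min=1.33, pa_deg=342.9, pixel_scale=0.67)
```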
Veres et al. (2012) used an analytical form of the trailing function for trail fitting to yield accurate astrometry and photometry. However, this technique has limited usage for astrometry of faint, fast-moving NEOs observed with small telescopes. Moreover, the loss in photometric and astrometric accuracy increases with higher trail aspect ratios and lower signal-to-noise ratios. Gural et al. (2005) developed a matched-filter-based trail detection technique, which attempts to integrate multiple frames by shifting and stacking based on a hypothesis of the target's velocity and then matched-filtering using the hypothesized template. Shucker and Stuart (2008) proposed a similar velocity matched-filter-based approach to integrate multiple frames, which increases the aggregate signal-to-noise ratio using a multi-hypothesis velocity vector.
Figure 1: Comparison of sidereal (upper panel) and non-sidereal (lower panel) images for 2000 NM, a NEO with an apparent motion of \(-4\arcsec\)/min during our observations. The target streaks in the sidereal exposure and this streaking gets transferred to background reference stars in the non-sidereal exposure. The resulting loss in the signal-to-noise ratio of the target in sidereal exposure is \(\sim 20\%\), primarily due to trailing losses.
Figure 2: Illustration of shift and add operation when implemented on sidereally-tracked data of Comet C/2021 A4 (NEOWISE). The comet had a sky-plane motion of \(1.33\arcsec\)/min and an apparent magnitude >20 at the time of our observations. We observe that on co-adding the frames with appropriate shifts for the motion of the comet, we get \(\gtrsim\)3-fold amplification in the signal-to-noise ratio of the comet in our data as compared to single images.
These methods are computationally expensive due to the huge size of the velocity hypothesis set. Also, the storage of multiple intermediate image products requires a significant amount of computer memory. These techniques have also exhibited high false alarm rates near the limiting magnitudes of images, suggesting that they are not suitable for smaller telescopes, which have low limiting magnitudes (Shucker and Stuart, 2008).
Shao et al. (2014) demonstrated an enhanced implementation of the shift-and-add technique called "synthetic tracking" to enhance the signal-to-noise ratio of the target and avoid astrometric errors due to trailing losses. This method yields the best results on a large telescope with a CMOS camera for rapid frame acquisition and low readout noise. However, it performs poorly with CCD cameras, where the frame rate is not high enough. Further, this method has limited yields on a small telescope with a shallow limiting magnitude. In addition, it requires GPUs for data processing since integrating images with a grid of tracking velocities is computationally expensive. As with other techniques, a drop in the number of reference stars in short exposures compromises the astrometric solution of images.
Zhang et al. (2021) uses an "image fusion" technique to obtain images superimposed on background stars and the NEO, followed by fusion of these two superimposed images to get a single image with all sources where the point spread function (PSF) is the same as that of the telescope system (no streaks). Similar to Shao et al. (2014), this method also requires a camera that enables a rapid frame rate. Also, it involves exposure time tweaking based on the apparent limiting magnitude for a particular exposure to identify the minimum exposure time such that the NEO is observable in each frame. The method performs poorly for fast-moving, faint NEO images obtained with small telescopes.
As discussed above, sidereal data reduction techniques may need high computational power for processing multiple short exposures, as required by techniques such as synthetic tracking, and have limitations depending on the type of method used for processing. To avoid the trailing losses of faint, fast-moving NEOs, a widely adopted technique is to use a non-sidereal tracking mode, where we track at the apparent rate of motion of the target (Kaminski et al., 2017; Ramasawmy et al., 2022; Krantz et al., 2018; Weiner et al., 2018). This preserves the PSF of the target NEO. However, since the sky-plane motion of NEOs can be tens to hundreds of arcseconds per minute, long exposures with non-sidereal tracking yield streaking reference stars (for an example, see the bottom panel of Figure 1). Conventional astrometry methods, based on detecting symmetric, Gaussian-like sources, are inefficient for obtaining astrometry solutions of images with such elongated reference stars. Hence, calculating accurate asteroid positions and magnitudes from non-sidereally tracked images is challenging. In this work, we have developed a new automated pipeline, Astreaks, to process such images and obtain accurate astrometry of fast-moving NEOs in non-sidereal images.
## 3 Astreaks workflow
In order to accurately measure the coordinates of the asteroid in non-sidereal images, we need to robustly calculate the astrometric solution for the image based on stellar streaks. We calibrate the images by applying bias correction and flat-fielding, followed by cosmic ray removal, as implemented in the default GIT data reduction software (Kumar et al., 2022). Astreaks achieves this goal by detecting sources using a "streak spread function", coupled with an accurate background estimation technique.
Figure 3: Flowchart of the GROWTH-India Astrometry Pipeline, Astreaks, for astrometry and photometry of NEOs in non-sidereally tracked data.
It then creates a synthetic image which is eventually used to obtain the astrometric solution. The following sections highlight the detailed working of the pipeline.
### Sky Background Estimation
Accurate background estimation is crucial for detecting faint sources and determining the correct fluxes from each source. The presence of streaking background reference stars affects the background statistics, due to which the traditional methods fail to produce an accurate background map of the image. To estimate the background, we overlay a grid on the entire image and calculate the mode of the counts in each of the cells in this grid, giving a nominal background level for each grid point. We note that the mode is a better estimator than the median, as there can be a large number of extended sources which add a heavy tail to the histogram of counts in each grid cell. The grid spacing has to be chosen by keeping in mind the presence of extended sources and occasional crowded fields. If cells are too small, extended sources may dominate some cells and the background estimate will be compromised. If the cells are too large, we will be insensitive to background variations at smaller scales. We recommend that the mesh size be at least greater than the streak length in the image, to avoid the streak itself being absorbed into the background when using the mode as an estimator. Future versions of Astreaks will incorporate an automated mesh size based on the tracking rate and exposure time that determine the length of the stellar streaks. Next, we create a smooth background estimate from these background measurements. In our data, we found that using a linear background variation over the entire image gives satisfactory results. Hence, we fit a plane to this background data by least-squares minimization and subtract this from the original (un-gridded) image. Typically the NEOs observed with GIT have a proper motion of the order of \(10^{\prime\prime}\)/min, and the typical exposure time is 3 minutes. An average pixel scale of \(0.5^{\prime\prime}\)/pix amounts to streaks of length \(\sim 50\) pixels. Testing 371 images of 115 NEOs, we observed that meshes of size \(60-100\) pixels work well.
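As a minimal sketch of this grid-plus-plane approach, the snippet below approximates the per-cell mode by the peak of a histogram, fits a plane to the cell estimates by least squares, and returns the smooth background map. The mesh size and bin count are hypothetical values for illustration, not the tuned Astreaks settings.

```python
import numpy as np

def estimate_background(image, mesh=80, nbins=200):
    """Grid the image, take the modal count of each cell, and fit a plane b(x, y)."""
    ny, nx = image.shape
    xs, ys, modes = [], [], []
    for y0 in range(0, ny, mesh):
        for x0 in range(0, nx, mesh):
            cell = image[y0:y0 + mesh, x0:x0 + mesh].ravel()
            hist, edges = np.histogram(cell, bins=nbins)
            peak = hist.argmax()                      # histogram peak as a proxy for the mode
            modes.append(0.5 * (edges[peak] + edges[peak + 1]))
            xs.append(x0 + mesh / 2)
            ys.append(y0 + mesh / 2)
    # Least-squares fit of a plane a*x + b*y + c to the per-cell background levels.
    A = np.column_stack([xs, ys, np.ones(len(xs))])
    (a, b, c), *_ = np.linalg.lstsq(A, np.array(modes), rcond=None)
    yy, xx = np.mgrid[0:ny, 0:nx]
    return a * xx + b * yy + c                        # smooth background map

# Background-subtracted image used in the subsequent detection steps:
# image_sub = image - estimate_background(image)
```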
Figure 4 shows our sky background estimation technique on the observations of the minor planet 2020 XB. A background gradient is present in the image because the moon was only \(38^{\circ}\) from the target. This background gradient is appropriately captured in our corresponding sky background estimate. We subtract this sky background from our image before further analysis.
pixel-wise threshold level. Next, the pipeline identifies the group of pixels corresponding to the same source using the streak length. The segmented sources are then deblended to separate overlapping sources. The flux from each source is measured inside an elliptical aperture. The deblending routine further computes the deblended flux and corresponding flux errors for overlapping sources. The photutils Python package provides a good measure of flux for deblended sources, hence we chose to measure the flux of these sources using elliptical apertures. We create a "detected source catalogue" comprising the centroids, total flux and flux errors of each source.
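The sketch below illustrates what this detection step could look like with the photutils segmentation tools mentioned above. It is a simplified stand-in for the Astreaks implementation: the threshold, minimum source size, and deblending settings are hypothetical, segment fluxes are used in place of the elliptical-aperture fluxes described above, and the calls assume a recent photutils segmentation API.

```python
import numpy as np
from photutils.segmentation import SourceCatalog, deblend_sources, detect_sources

def detect_streaks(image_sub, background_rms, streak_len_pix, nsigma=1.5):
    """Segment and deblend elongated sources in a background-subtracted image."""
    threshold = nsigma * background_rms                 # pixel-wise detection threshold
    # Require a source to cover a reasonable fraction of the expected streak footprint.
    npixels = max(5, int(0.5 * streak_len_pix))
    segm = detect_sources(image_sub, threshold, npixels=npixels)
    segm_deblend = deblend_sources(image_sub, segm, npixels=npixels,
                                   nlevels=32, contrast=0.001)
    cat = SourceCatalog(image_sub, segm_deblend)
    # Centroids and fluxes make up the "detected source catalogue".
    return np.array([[s.xcentroid, s.ycentroid, s.segment_flux] for s in cat])
```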
The sources detected by the pipeline in a representative non-sidereally tracked image of 2020 UA1 are highlighted in the top panel of Figure 6. The target NEO is marked with a yellow rectangular box. The blue dots represent the positions of background stars at the mid-time of the exposure, which are used as reference stars at a later stage. A few of the streaks lie close to the edge; hence, their centroid locations and flux estimates are incorrect, as shown by orange marks in the top panel of Figure 6. To remove any biases due to these sources, we remove the streaks whose centroid falls within half the streak length from the edges, denoted by red dashed lines.
### Obtaining an Astrometric Solution
Next, we use our detected source catalog to generate a synthetic image. A specific advantage of this method is that the synthetic image is broadly similar to the images that would be acquired by the telescope with sidereal tracking. Hence, the usual astrometry pipelines used for that telescope can be used directly for processing this synthetic image. We use a 2-D Gaussian PSF, with the same sigma as the "SSF" in § 3.2, as our model for point sources in the image. Starting with a blank image, we use this PSF model to inject a source with the appropriate flux at the location of each detected source. We do not consider the sources near edges whose properties are known to be inaccurate. The median sky background and its root mean square value estimated from the original image are added to the synthetic image. The resulting synthetic image is shown in the lower panel of Figure 6. This synthetic image is then solved for the World Coordinate System (WCS) using the offline engine of astrometry.net (Lang et al., 2010). We use the WCS of this synthetic image as the astrometric solution for the original image at the mid-time of the exposure.
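As an illustration of this step, the sketch below injects a circular 2-D Gaussian of unit total flux, scaled by the measured source flux, at each catalogue position on top of a flat sky with Gaussian noise. It is a schematic version of the procedure rather than the Astreaks code; the stamp size and PSF sigma are assumed values. The resulting array can then be handed to the usual sidereal astrometry pipeline, which is exactly the property the synthetic image is designed to exploit.

```python
import numpy as np

def make_synthetic_image(shape, catalog, psf_sigma, sky_median, sky_rms, stamp=25):
    """Build a sidereal-like synthetic image from (x, y, flux) catalogue entries."""
    rng = np.random.default_rng(42)
    synth = rng.normal(sky_median, sky_rms, shape)          # flat sky plus noise
    half = stamp // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * psf_sigma**2))
    psf /= psf.sum()                                         # unit-flux PSF stamp
    for x, y, flux in catalog:
        xi, yi = int(round(x)), int(round(y))
        if half <= xi < shape[1] - half and half <= yi < shape[0] - half:
            synth[yi - half:yi + half + 1, xi - half:xi + half + 1] += flux * psf
    return synth
```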
### Astrometry, Photometry, and Orbit Determination
The astrometric positions of the target NEO at the mid-time of the exposure are obtained using the WCS of the synthetic image. Target asteroids are often faint even in non-sidereally tracked exposures. Automated searches for them often give false positives. Since the user is aware of the approximate location of the NEO, at this stage, the NEO is first visually identified. Then, its exact position is determined by fitting the 2-D Gaussian PSF from § 3.4. We compute the instrumental magnitude of the target using aperture photometry (Bradley et al., 2020), using the source fluxes measured in the source detection step (§ 3.3). The magnitudes of streaked stars are cross-matched with the Pan-STARRS catalogue (Flewelling, 2018) using a VizieR query to calculate the zero points. This zero point is directly used to calculate the magnitude of the point-source NEO in the image. Our photometry is accurate to \(\sim 0.2\) mag, and further improvement in the photometric accuracy is in progress. We append our astrometric and photometric measurements to the existing observations of the target at MPC and attempt an orbit fit using find_orb (Gray, 2011) to compute the Observed-minus-Computed (O-C) residuals.
Figure 5: Comparison of a reference star streak in the 2020 UA1 non-sidereally tracked image (left panel) and the corresponding SSF generated by our pipeline (right panel). The computed start and end points of the streak are marked. The SSF model, generated by convolving the PSF model from the non-sidereal image with an ideal streak model (a line joining these endpoints), appropriately models the reference star streak.
Figure 6: Detection of background reference stars using Astreaks in a non-sidereally tracked image of 2020 UA1. The detected sources at the mid-time of the exposure are marked as dots on top of the corresponding source's streak in the top panel. We replace the streaked sources with their flux-scaled Gaussian equivalent sources in a synthetic image, as displayed in the bottom panel, and obtain the WCS solution of the resultant synthetic image. During this process we neglect the sources lying close to the image edge (sources marked in orange in the top panel) and perform astrometry of the target at the mid-time of the exposure.
These O-C residuals are a direct measurement of the astrometric uncertainty.
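The zero-point calibration described above reduces to a robust offset between catalogue and instrumental magnitudes of the cross-matched reference stars, as sketched below; the 3-sigma clipping is an illustrative choice rather than the exact Astreaks procedure.

```python
import numpy as np

def calibrate_neo_magnitude(instr_mags_stars, catalog_mags_stars, instr_mag_neo):
    """Estimate the photometric zero point from reference stars and apply it to the NEO."""
    diffs = np.asarray(catalog_mags_stars) - np.asarray(instr_mags_stars)
    # Reject outliers (blends, saturated or mismatched stars) with a simple 3-sigma clip.
    med, std = np.median(diffs), np.std(diffs)
    clipped = diffs[np.abs(diffs - med) < 3.0 * std]
    zero_point = np.median(clipped)
    zero_point_err = np.std(clipped) / np.sqrt(len(clipped))
    return instr_mag_neo + zero_point, zero_point, zero_point_err
```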
### Hyperparameters and Parameters
Astreaks requires various inputs for its modular functioning, which are summarized in Table 1. The astrometry configuration of the pipeline comprises several _hyperparameters_ that depend on the instrument properties and are the same for all targets being analysed. Hyperparameters like mesh size, pixel scale, threshold level, the maximum size of the PSF model, etc., are required at different stages of source extraction and measurement. An appropriate mesh size depends on the typical length of the streaks in the data and plays a primary role in estimating a reliable sky background. The SSF model generation requires pixel scale information to compute the streak parameters. Based on the instrument's sensitivity, the threshold and contrast levels are required for source detection. The synthetic image generation requires the maximum PSF model size, which is fixed based on the typical full width at half maximum (FWHM) of the PSF for the telescope system. The photometry of the target requires camera properties like the gain of the CCD, which is typically fixed for a single instrument. All these hyperparameters must be set up for the best results before using the pipeline on a new instrument.
Apart from the hyperparameters discussed above, Astreaks requires a set of _parameters_ that must be updated for each target. These include the tracking rate, velocity position angle and exposure time, which differ based on the target of interest. A combination of these parameters determines the streak parameters and size. The pipeline extracts these parameters automatically using the information stored in the image headers and performs the astrometric operations on the image. Further, the pipeline can generate an automated MPC report once we specify the pixel coordinates of the target.
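To illustrate how these two groups of inputs could be organized, a hypothetical configuration might look like the snippet below. The field names and values are invented for illustration and do not reproduce the actual Astreaks configuration file.

```python
# Instrument-level hyperparameters: set once per telescope/camera combination.
HYPERPARAMETERS = {
    "mesh_size_pix": 80,           # background grid cell size
    "pixel_scale_arcsec": 0.67,    # plate scale
    "threshold_level": 1.5,        # detection threshold above the background
    "psf_model_size_pix": 25,      # maximum PSF stamp used to build the SSF
    "gain_e_per_adu": 1.0,         # detector gain
}

# Per-target parameters: read from the image header for each exposure.
target_parameters = {
    "tracking_rate_arcsec_per_min": 10.0,
    "exposure_time_s": 180.0,
    "velocity_position_angle_deg": 120.0,
    "target_xy_pix": (1024, 1024),
}
```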
### Astreaks Validation
Astreaks has been developed on data acquired with GIT. For this purpose, we used observations of 115 NEOs covering a wide range of proper motion. To test the robustness of Astreaks, we acquired data on two CCD camera set-ups: Apogee KAF3200EB (Apogee hereafter) and Andor iKon-XL 230 4k back-illuminated CCD (Andor hereafter). As both cameras have different properties, they provide a nice framework to test the reliability and modularity of Astreaks. The Apogee camera has a narrow \(\sim 11^{\prime}\times 7.5^{\prime}\) field of view (FOV) with a pixel scale of \(0.307^{\prime\prime}\)/pix. On the other hand, Andor has a \(\sim 45.87^{\prime}\times 45.74^{\prime}\) FOV with a \(0.67^{\prime\prime}\) pixel scale.
The data used for Astreaks development and validation were acquired from September 2020 to December 2022. 64% of these data are wide-field images acquired with the Andor camera; the rest were obtained with the narrow-FOV Apogee camera. A total of 18 NEOs were observed with the Apogee camera, amassing 133 non-sidereal images during the initial phase of Astreaks development. These targets had a proper motion in the range of 4 - 120\({}^{\prime\prime}\)/min at the time of observation, resulting in 14 - 140\({}^{\prime\prime}\) long streaks of reference stars in exposures up to 5 minutes. Using Astreaks, we obtain astrometric measurements of these NEOs following the procedure outlined in § 3. The O-C residuals in right ascension and declination were computed using the find_orb (Gray, 2011) software by appending all observations of these targets at the MPC database. These O-C residuals have a standard deviation of \(0.41^{\prime\prime}\), which satisfies the \(<2^{\prime\prime}\) quality-of-measurement criterion, as demanded by MPC 2.
Footnote 2: MPC’s Guide to Minor Body Astrometry: [https://cgi.minorplanetcenter.net/iau/info/Astrometry.html](https://cgi.minorplanetcenter.net/iau/info/Astrometry.html)
Once the working and scientific reliability were established with Apogee data, we used 238 non-sidereally tracked images of 96 NEOs acquired with the Andor camera to test the performance of the pipeline. By virtue of the modular nature of our pipeline, we could easily shift from the Apogee to the Andor instrument by a simple tweak of hyperparameters (see § 3.6).
Figure 7: Validation of the astrometric accuracy of Astreaks. Upper panel: Validation using O-C residuals in right ascension (RA) and declination (DEC) for 371 positions of 115 NEOs observed with GIT using non-sidereal tracking. The standard deviation of all these measurements is \(0.52^{\prime\prime}\). Lower panel: Comparison of O-C residuals from Astreaks reduction of non-sidereal data and slow-moving asteroids in sidereal images reduced using standard procedures. The performance of Astreaks on non-sidereal data is similar to that of normal astrometric measurements from sidereal images.
The NEOs had a proper motion in the range of 1 - 45''/min, resulting in 8 - 396'' long stellar streaks in exposures of 1 - 9 minutes. The computed O-C residuals for astrometric measurements using Astreaks have a standard deviation of 0.58''. The O-C residuals in RA and declination of all observed positions of 115 NEOs (371 images) are shown in the upper panel of Figure 7. More than 84.4% of our measurements have O-C residuals \(<\) 1'', with 98.4% of measurements exhibiting \(<\) 2'' astrometric accuracy. We found that the astrometric O-C residuals do not correlate with the streak length (Pearson r value: 0.0074).
We acknowledge that the Astrometrica software is not designed to detect sources and obtain astrometry solutions with significant elongation in reference stars (Raab, 2012). However, since no streak-analysis packages are publicly available, we compare our source detection capability and astrometric accuracy with Astrometrica. We illustrate this comparison on a representative example of non-sidereally acquired data for 2020 UA1. We obtained a total of 9 exposures for 2020 UA1, among which only three images were astrometrically solved with a manual cross-match using Astrometrica. On the other hand, Astreaks successfully solved all nine images with the algorithm discussed in § 3. Furthermore, the number of sources detected by the Astrometrica software is smaller than that of Astreaks. Despite a manual cross-match, many bright sources are missed by Astrometrica, which is potentially responsible for the failure of the astrometric solution on the other six images. The number of detected sources plays a direct role in the quality of the astrometry solution and is higher for Astreaks than for Astrometrica. The standard deviation of astrometric measurements on nine images using Astreaks (0.2'') is better than the astrometric accuracy on three images using Astrometrica (0.3''), thus validating that our algorithm is more accurate and robust for obtaining astrometric solutions of non-sidereal images.
As a final test, we also compare the astrometric capabilities of Astreaks with standard astrometry procedures on sidereal data. The total astrometric uncertainty for NEOs may include a component from the orbital uncertainty. Hence, for a fair comparison, we use a sample of 50 sidereally acquired images of six slow-moving asteroids with apparent motion less than 0.58 arcsec/min during the exposures. These targets appeared as point sources in our astronomical images with exposure times of less than 3 min, given the typical FWHM of the PSF in sidereal data of \(\sim\) 3''. Astrometry on these images was performed using our usual, standard astrometry procedures for sidereal data, as described in Kumar et al. (2022). A comparison of O-C residuals in sidereal data reduced following standard astrometry procedures and non-sidereal data reduced using Astreaks is shown in the lower panel of Figure 7. We observe that the performance of Astreaks on non-sidereal data is comparable to that of normal astrometry methods on sidereal data.
## 4 Conclusion and Future Prospects for Astreaks
This work presents the first open-source astronomy package to do accurate astrometry of non-sidereally tracked NEO images with significant elongation in reference stars -- Astreaks. We use a novel method to perform astrometry on the non-sidereally tracked images. The pipeline uses a background estimation technique that appropriately captures the background variations while also considering the presence of elongated sources in the field. Further, we methodically compute the SSF of elongated sources by leveraging the knowledge of the telescope system PSF and the observation parameters of the target. The pipeline has been designed to meet the goal of accurate astrometry with the novel technique of image segmentation and source de-blending on the background-subtracted image. The catalog of sources detected in this image is used to generate a synthetic image that imitates a sidereal image, had it been taken at the mid-time of the exposure. The astrometry solution of this synthetic image gives the astrometric measurements of the minor planet.
We validate the performance and results of the astrometric accuracy of Astreaks on non-sidereally acquired data with GIT using two different instrument set-ups. We achieved an astrometric accuracy of 0.52'' for 115 NEOs from 371 images, which had stellar streaks up to 6.5'. This is well below the 2'' threshold considered acceptable by the Minor Planet Center. The astrometric results have been tested against the widely used software, Astrometrica, where we observe that Astreaks outperforms it in detecting elongated reference stars as well as in the accuracy of the astrometric solution for non-sidereally tracked images. A comparison of astrometry accuracy revealed that the performance of Astreaks on non-sidereal data is as good as point-source astrometry in sidereal data. The modular nature and validation of the pipeline on two different set-ups emphasize that the pipeline can be integrated with other set-ups with simple tweaks to the hyperparameters.
Astreaks is being used regularly on GIT for the analysis of asteroid data. Some results have been presented in Sharma et al. (2021). In the future, we aim to further improve the astrometric accuracy of Astreaks by accounting for the acceleration of NEOs when estimating the streak length for SSF model generation. Typically, the apparent rate of motion of NEOs is between 10\({}^{-1}\) - 10\({}^{2}\) arcsec/s with accelerations varying between 10\({}^{-7}\) - 10\({}^{-2}\) arcsec/s\({}^{2}\) (Veres et al., 2012). For a NEO moving with an apparent angular velocity of 10\({}^{2}\) arcsec/s and an acceleration of 10\({}^{-2}\) arcsec/s\({}^{2}\), the change in streak length during a 100 s exposure will be 1''. This is a change significant enough to impact our astrometric accuracy for such long-streaked objects, thus pointing towards the scope of further improvement in the astrometric measurement by Astreaks. In addition to astrometry, there is significant scope for improvement in the photometric accuracy of Astreaks (work in progress).
Table 1: The description of the set of parameters and hyperparameters used by Astreaks.

| Parameter Type | Name | Description |
| --- | --- | --- |
| Hyperparameters | mesh size | The size of each block in the grid used for background estimation. Units: pixels |
| | pixel scale | The plate scale to convert the size of mesh in data to image pixels. Units: arcsec/pixel |
| | thresholding level | The assumed background level to threshold the image and identify streaks. |
| | PSF model size | The maximum size of the PSF model to be used for generating the SSF. Units: pixels |
| | gain | The detector gain to scale instrumental counts to the number of electrons per pixel. Units: e-/ADU |
| Parameters | NEO tracking rate | The sky-plane velocity of the NEO to estimate the streak length. Units: arcsec/min |
| | exposure time | The total non-sidereal exposure time to estimate the streak length. Units: sec |
| | velocity position angle | The velocity position angle of the NEO to determine the orientation of the SSF. Units: degrees |
| | image coordinates of the target | Pixel coordinates of the target in the image to fit a PSF and extract coordinates. |
Secondly, we wish to eliminate even the small manual step of identifying the approximate NEO location in the image, which is needed for centroiding and PSF fitting for astrometry and photometry. Lastly, the image segmentation process involves a convolution operation, which is an O(N\({}^{2}\)) operation for an N\(\times\)N image, due to which the current implementation of Astreaks is time-consuming (\(\sim\) minutes on a generic desktop). Therefore, for faster data processing and submission of observations, we will attempt to make the aforementioned step more time efficient in subsequent versions of the pipeline.
## Acknowledgments
The GROWTH India Telescope (GIT) is a 70-cm telescope with a 0.7-degree field of view, set up by the Indian Institute of Astrophysics and the Indian Institute of Technology Bombay with support from the Indo-US Science and Technology Forum (IUSSTF) and the Science and Engineering Research Board (SERB) of the Department of Science and Technology (DST), Government of India. It is located at the Indian Astronomical Observatory (Hanle), operated by the Indian Institute of Astrophysics (IIA). We acknowledge funding by the IITB alumni batch of 1994, which partially supports operations of the telescope. Telescope technical details are available at [https://sites.google.com/view/growthindia/](https://sites.google.com/view/growthindia/).
Kriti Sharma thanks Michael S. P. Kelley (U. Maryland) for his valuable guidance in comet photometry. Kriti Sharma thanks Kunal Deshmukh (IITB) for his guidance during sidereal observations. Harsh Kumar thanks the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant #1829740, Brinson Foundation, and Moore Foundation; his participation in the program has benefited this work. B.T.B. is supported by an appointment to the NASA Postdoctoral Program at the NASA Goddard Space Flight Center, administered by Oak Ridge Associated Universities under contract with NASA. This research has made use of data and/or services provided by the International Astronomical Union's Minor Planet Center ([https://www.minorplanetcenter.net/iau/mpc.html](https://www.minorplanetcenter.net/iau/mpc.html)). This work has made use of the find_orb (Gray 2011) software supplied by Project Pluto ([https://www.projectpluto.com/find_orb.html](https://www.projectpluto.com/find_orb.html)). This research has made use of Astrometrica (Raab 2012) software ([http://www.astrometrica.at/](http://www.astrometrica.at/)). This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. (DOI:10.26093/cds/vizier). The original description of the VizieR service was published in 2000, A&AS 143, 23. This research has made use of Astropy (Astropy Collaboration et al. 2013, 2018), NumPy (Harris et al. 2020), SciPy (Virtanen et al. 2020), Matplotlib (Hunter 2007), Astro-SCRAPPY (McCully & Tewes 2019), SExtractor (Bertin 2011a), PSFEx (Bertin 2011b) and astrometry_net (Lang et al. 2010) software. This research has made use of NASA's Astrophysics Data System.
## Data Availability
Astreaks has been made available to the community under an open source license.
|
2304.00502 | CNNs with Multi-Level Attention for Domain Generalization | In the past decade, deep convolutional neural networks have achieved
significant success in image classification and ranking and have therefore
found numerous applications in multimedia content retrieval. Still, these
models suffer from performance degradation when neural networks are tested on
out-of-distribution scenarios or on data originating from previously unseen
data Domains. In the present work, we focus on this problem of Domain
Generalization and propose an alternative neural network architecture for
robust, out-of-distribution image classification. We attempt to produce a model
that focuses on the causal features of the depicted class for robust image
classification in the Domain Generalization setting. To achieve this, we
propose attending to multiple-levels of information throughout a Convolutional
Neural Network and leveraging the most important attributes of an image by
employing trainable attention mechanisms. To validate our method, we evaluate
our model on four widely accepted Domain Generalization benchmarks, on which
our model is able to surpass previously reported baselines in three out of four
datasets and achieve the second best score in the fourth one. | Aristotelis Ballas, Christos Diou | 2023-04-02T10:34:40Z | http://arxiv.org/abs/2304.00502v1 | # CNNs with Multi-Level Attention for Domain Generalization
###### Abstract.
In the past decade, deep convolutional neural networks have achieved significant success in image classification and ranking and have therefore found numerous applications in multimedia content retrieval. Still, these models suffer from performance degradation when neural networks are tested on out-of-distribution scenarios or on data originating from previously unseen data _Domains_. In the present work, we focus on this problem of Domain Generalization and propose an alternative neural network architecture for robust, out-of-distribution image classification. We attempt to produce a model that focuses on the causal features of the depicted class for robust image classification in the Domain Generalization setting. To achieve this, we propose attending to multiple-levels of information throughout a Convolutional Neural Network and leveraging the most important attributes of an image by employing trainable attention mechanisms. To validate our method, we evaluate our model on four widely accepted Domain Generalization benchmarks, on which our model is able to surpass previously reported baselines in three out of four datasets and achieve the second best score in the fourth one.
domain generalization, representation learning, visual attention, deep learning
self-attention mechanism for providing additional contextual information to each word in an embedded sentence. Since their outstanding success in several NLP tasks, Transformers and self-attention mechanisms have slowly but steadily gained ground in the Computer Vision community (Han et al., 2017), achieving significant advances in the field (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). In this work, we argue that by attending to features extracted from multiple layers of a convolutional neural network via _multi-head self-attention_ mechanisms, a model can be trained to learn representations which reflect class-specific, domain-invariant attributes of an image. As a result, the trained model will be less affected by out-of-distribution data samples as it will base its predictions on the causal characteristics of the depicted class. Our contributions can be summarized in the following points:
* We introduce a novel neural network architecture that utilizes self-attention (Wang et al., 2018) to attend to representations extracted throughout a Convolutional Neural Network (CNN), for robust out-of-distribution image classification in the DG setting
* We evaluate our proposed method on the widely adopted DG benchmarks of PACS (Li et al., 2019), VLCS (Wang et al., 2018), Terra Incognita (Tacognita, 2018) and Office-Home (Zhou et al., 2019) and demonstrate its effectiveness
* We provide qualitative visual results of our model's inference process and its ability to focus on the invariant and causal features of a class via saliency maps
In the next section we briefly present the most important contributions in DG, along with relative previous work in visual attention, from which we drew inspiration for our proposed algorithm.
## 2. Related Work
### Domain Generalization
There have been numerous efforts to address challenges related to domain shift in the past ((Dosovitskiy et al., 2016), (Xu et al., 2017), (Xu et al., 2017)), however DG methods are different in that the model does not have _any_ samples from the target domain(s) during training.
DG problems can be broadly categorized into two main settings, namely _multi-source_ and _single-source_ DG (Xu et al., 2017). In multi-source DG, all algorithms assume that their training data originate from \(K\) (where \(K>1\)) distinct but known data domains. These algorithms take advantage of domain labels in order to discover invariant representations among the separate marginal distributions. Most previously proposed methods fall under this category. The authors of (Wang et al., 2018) propose deep CORAL, a method which aligns the second-order statistics between source and target domains in order to minimize the domain shift among their distributions. In (Wang et al., 2018), Style-Agnostic networks, or SagNets, use an adversarial learning paradigm to disentangle the style encodings of each domain and reduce style-biased predictions. With a different approach, the authors of (Xu et al., 2017) investigate the usage of data augmentation and style-mixing techniques for producing robust models. Another popular approach in multi-source DG is Meta-learning, which focuses on learning the optimal parameters for a source model from previous experiment metadata. (Han et al., 2017; Wang et al., 2018) and Adaptive Risk Minimization (ARM) (Wang et al., 2018), all propose meta-learning algorithms for adapting to unseen domains. Finally, (Xu et al., 2017) uses episodic training in the meta-learning setting to extract invariant representations across source domains. On the other hand, single-source DG methods hold no information about the presence of separate domains in their training data, but assume that it originates from a single distribution. Therefore, all single-source DG algorithms, such as our own, operate in a domain-agnostic manner and do not take advantage of domain labels. In (Xu et al., 2017), the authors combine self-supervised learning with a jigsaw solving objective in order to reduce the model's proneness to learning semantic features. Additionally, in (Xu et al., 2017) the authors attempt to remove feature dependencies in their model via sample weighting. Finally, RSC (Rao et al., 2018) is a self-challenging training heuristic to discard representations associated with very high gradients, which forces the network to activate features correlated with the class and not the domain.
### Visual Attention
Attention mechanisms have long been introduced in CV (Xu et al., 2017), inspired by the human visual system's ability to efficiently analyze complex scenes. More recently, attention mechanisms have been proposed for the interpretation of the output of Convolutional Neural Networks (CNNs), where they act as dynamic re-weighting processes which _attend_ to the most important features of the input image. In (Xu et al., 2017), the authors propose CAM, a post-hoc model interpretation algorithm for estimating attention maps in classification CNNs. Methods incorporating attention mechanisms into CNNs for image classification have also been proposed in the past (Xu et al., 2017; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018). In (Xu et al., 2017), the authors introduce an end-to-end trainable mechanism for CNNs, by computing compatibility scores between intermediate features of the network and a global feature map. In (Wang et al., 2018), the Convolutional Block Attention Module, or CBAM, leverages both spatial and channel attention modules for adaptive feature refinement. Recently, several methods have been proposed which replace CNNs with _self-attention_ and _multi-head attention_ mechanisms (Wang et al., 2018) applied directly on the image pixels (Xu et al., 2017; Wang et al., 2018; Wang et al., 2018), leading to transformer-based methods for CV (Xu et al., 2017).
## 3. Methodology
Information passed through popular Convolutional Neural Network architectures, such as ResNets (Huang et al., 2017), tends to get _entangled_ with non-causal attributes of an image due to correlations in the data distribution (Wang et al., 2018). Our method is built around the hypothesis that this problem can be mitigated if we allow the network to select intermediate feature maps throughout a CNN for representation learning. We therefore extract feature maps at multiple network layers and pass them through a _multi-head attention_ mechanism (Figure 2). In our implementation we consider scaled dot-product self-attention with 3 heads. Given an intermediate feature map \(\mathbf{M}\in\mathbf{R}^{b\times c\times h\times w}\), where \(b\) is the batch size, \(c\) is the number of channels and \(h\) and \(w\) are the height and width of the feature map, we aim to attend to each of the channels. As a first step, we flatten the feature maps \(\mathbf{M}\) into a dimension of \((b,c,h\times w)\). We follow by linearly projecting the flattened feature maps into a \((b,c,d_{embed})\) dimension Tensor, where \(d_{embed}\) is the size of each channel's embedded feature map. Each channel can be thought of as a token in the classic Transformer architecture. Given the embedded feature maps \(\mathbf{X}\in\mathbf{R}^{b\times c\times d_{embed}}\) and trainable weight matrices \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbf{R}^{d_{embed}\times d_{k}}\) (\(d_{k}\) the inner self-attention layer dimension), we create the query, key and value vectors: \(\mathbf{Q}=\mathbf{X}\mathbf{W}^{Q},\ \mathbf{K}=\mathbf{X}\mathbf{W}^{K},\ \mathbf{V}=\mathbf{X}\mathbf{W}^{V}\in\mathbf{R}^{b\times c\times d_{k}}\), which
are fed to the multi-head attention block. The self-attention layer is defined as:
\[Attention(\mathbf{Q,K,V})=softmax(\frac{\mathbf{QK}^{T}}{\sqrt{d_{k}}})\mathbf{V} \tag{1}\]
while the multi-head attention is:
\[MultiHead(\mathbf{Q,K,V})=Concat(head_{1},...,head_{h})\mathbf{W}^{O} \tag{2}\]
where:
\[head_{i}=Attention(\mathbf{Q}\mathbf{W}^{Q}_{i},\mathbf{K}\mathbf{W}^{K}_{i},\mathbf{V}\mathbf{W}^{V}_{i}) \tag{3}\]
After the extracted feature maps have been attended to and re-weighted, we pass them through a Multi-Layer Perceptron (MLP) in order to allow our model to learn a mapping between the processed features. The MLP consists of two Linear layers, activated by the GELU function (Golovolovolov et al., 2012). Finally, the projected features are flattened, concatenated and passed through a fully connected classification layer for the final decision. Our proposed framework is visualized in Figure 1.
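A compact PyTorch sketch of this block is given below: each channel of an intermediate feature map is flattened, linearly embedded, passed through multi-head self-attention with 3 heads, refined by a two-layer GELU MLP, and flattened for the classifier. The embedding and MLP widths are illustrative assumptions, not the exact configuration used in the experiments.

```python
import torch
import torch.nn as nn

class ChannelAttentionBlock(nn.Module):
    """Attend over the channels of a (b, c, h, w) feature map, as described above."""
    def __init__(self, spatial_size, d_embed=192, n_heads=3, d_mlp=384):
        super().__init__()
        self.embed = nn.Linear(spatial_size, d_embed)               # h*w -> d_embed per channel
        self.attn = nn.MultiheadAttention(d_embed, n_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d_embed, d_mlp), nn.GELU(),
                                 nn.Linear(d_mlp, d_embed))

    def forward(self, feature_map):                                 # (b, c, h, w)
        tokens = self.embed(feature_map.flatten(2))                 # (b, c, d_embed): one token per channel
        attended, _ = self.attn(tokens, tokens, tokens)             # multi-head self-attention
        refined = self.mlp(attended)
        return refined.flatten(1)                                   # (b, c * d_embed) for the classifier

# Example on a ResNet-50 layer2 output (512 channels, 28x28 for 224x224 inputs):
block = ChannelAttentionBlock(spatial_size=28 * 28)
out = block(torch.randn(4, 512, 28, 28))                            # shape: (4, 512 * 192)
```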
## 4. Experimental Setup
In our experiments, we build our method on a vanilla ResNet-50 (He et al., 2016) model, pre-trained on ImageNet. For our method, we choose to extract intermediate feature maps from the 3rd, 7th and 13th bottleneck blocks of the backbone ResNet-50 model, as shown in Figure 1. We train our model with the SGD optimizer for 30 epochs and a batch size of 32 images. The learning rate is set at 0.001 and decays by a factor of 0.1 at epoch 24. The proposed framework was implemented with the PyTorch library (Paszke et al., 2017) and trained on an NVIDIA RTX A5000 GPU.
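One way to tap these intermediate feature maps from a standard torchvision ResNet-50 is with forward hooks, as sketched below. Mapping the 3rd, 7th and 13th bottleneck blocks to the last blocks of layer1, layer2 and layer3 follows the standard (3, 4, 6, 3) ResNet-50 layout; this is our reading of the description above and assumes a recent torchvision weights API, not code from the original implementation.

```python
import torch
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_maps = {}

def save_output(name):
    def hook(module, inputs, output):
        feature_maps[name] = output          # (b, c, h, w) activation at this tap point
    return hook

# 3rd, 7th and 13th bottlenecks are the last blocks of layer1, layer2 and layer3.
model.layer1[-1].register_forward_hook(save_output("block_3"))
model.layer2[-1].register_forward_hook(save_output("block_7"))
model.layer3[-1].register_forward_hook(save_output("block_13"))

with torch.no_grad():
    model(torch.randn(1, 3, 224, 224))
# block_3: (1, 256, 56, 56), block_7: (1, 512, 28, 28), block_13: (1, 1024, 14, 14)
```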
We evaluate our method against 8 previous state-of-the-art algorithms, which use a ResNet-50 as their base model. Specifically, the baseline models we select are: ERM(Wang et al., 2017), RSC (He et al., 2017), MIXUP (Wang et al., 2017), CORAL(Wang et al., 2017), MMD (Wang et al., 2017), SagNet (Wang et al., 2017), SelfReg (Wang et al., 2017) and ARM (Wang et al., 2017). The above algorithms are a mix of both multi-source and single-source methods allowing us to demonstrate the effectiveness of our proposed method. The hyperparameters of each algorithm are set to reflect the ones in the original papers. All baselines are implemented and executed using the DomainBed (He et al., 2016) codebase for a fair comparison. The presented experimental results are averaged over 3 runs.
### Datasets
To evaluate the robustness of our method we experiment on four well-known and publicly available DG benchmark datasets, namely PACS (Zhu et al., 2018), VLCS (Wang et al., 2017), Terra Incognita (Bengio et al., 2018) and Office-Home (Zhu et al., 2018). Specifically:
* **PACS** contains images originating from the Photo, Art Painting, Cartoon and Sketch domains. It also contains a total of 9,991 images and 7 class labels.
* **VLCS** incorporates 10,729 real-world images from the PASCAL VOC, LabelMe, Caltech 101 and SUN09 datasets (or domains) and depicts 5 classes in total.
* **Terra Incognita** contains photographs of wild animals taken by camera traps at 4 different locations (L100, L38, L43 and L46). This dataset contains 10 classes and 24,788 images in total.
* **Office-Home** comprises four domains of Art, Clipart, Product and Real-World images. The dataset contains 15,588 examples and 65 classes in total.
For each respective dataset we follow the standard _leave-one-domain-out cross-validation_ DG protocol, as described in (He et al., 2016; Zhu et al., 2018). In this setting, a target domain is selected and held out from the model's training data split. The generalizability of the trained model is then measured by its accuracy on the unseen data originating from the target domain. For example, in the first experiment with the PACS dataset, the domains of Photo, Cartoon and Sketch are selected as _Source_ domains while the Art Painting domain is held out as the _Target_. Therefore, the model is trained on data from the source domains and evaluated on previously unseen art images.
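Schematically, the protocol reduces to a loop over held-out domains, as in the sketch below; train_and_evaluate is a placeholder for whichever algorithm is being benchmarked, not an actual API of the compared methods.

```python
def leave_one_domain_out(datasets, train_and_evaluate):
    """datasets: dict mapping domain name -> data; returns accuracy per held-out target."""
    scores = {}
    for target in datasets:                                   # e.g. PACS: photo, art, cartoon, sketch
        sources = {name: data for name, data in datasets.items() if name != target}
        scores[target] = train_and_evaluate(sources, datasets[target])
    return scores
```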
### Results
The results of our experiments are presented in Table 1. The effectiveness of our method is demonstrated in the experimental outcome, as our model is able to surpass previously proposed state-of-the-art algorithms in the PACS, Terra Incognita and Office-Home datasets, while achieving the second best performance in VLCS. In PACS, our model surpasses the previous best model by 1.06%, while in Terra Incognita and Office-Home our implementation exceeds the baselines by 0.98% and 1.33% respectively. What's more, even though our algorithm is not able to achieve the top score in VLCS, it remains highly competitive and ranks as second best among its predecessors.
To further support our claims, we also provide visual examples of our model's inference process via saliency maps. Specifically, we implement the _Image-Specific Class Saliency_ method as proposed in (Zhu et al., 2018). In this method, a visual map of the pixels contributing the most to the model's prediction is produced by computing and visualizing the gradient of the loss function with respect to the input image.
Figure 2. Visualization of the Multi-Head Attention mechanism. In our implementation, intermediate feature maps are extracted from a backbone ResNet-50 model and passed through a multi-head attention layer with 3 heads (\(h=3\)). We propose attending to each channel of the extracted feature maps. For the compatibility metric in the self-attention module, we select to use the Scaled-Dot product.
As depicted in Figure 3, the darker a pixel, the more significant it is to the model. We choose to visualize 4 images of the "elephant" class from the four different domains in PACS. When compared to the baseline ERM model, our method seems to base its decisions on features of the depicted object (e.g. the tusk of the elephant in the Art image) and pay less attention to irrelevant attributes, such as the noisy backgrounds (e.g. tree leaves in the Photo domain). This visual evidence proves promising towards researching alternative architectures containing both convolutional and attention layers for the DG setting.
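A minimal PyTorch version of this saliency computation is sketched below: the class score is backpropagated to the input pixels and the per-pixel maximum absolute gradient over the colour channels is visualized. This follows the generic recipe of the cited method and is not the authors' exact code.

```python
import torch

def class_saliency(model, image, target_class):
    """Gradient of the target-class score with respect to the input image."""
    model.eval()
    image = image.clone().requires_grad_(True)        # (1, 3, H, W), already normalized
    score = model(image)[0, target_class]             # scalar score for the chosen class
    score.backward()
    # Maximum absolute gradient over the colour channels: one saliency value per pixel.
    saliency = image.grad.detach().abs().max(dim=1).values.squeeze(0)
    return saliency                                    # (H, W); larger values = more influential pixels
```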
## 5. Conclusions
In this paper, we introduced a novel approach for image classification in the Domain Generalization setting. The basic idea behind our implementation was to allow the model to select the most class-discriminative and domain-invariant representations via multi-head self-attention mechanisms which attend to intermediate feature maps extracted from multiple layers of a convolutional neural network. The generalization ability of our model is supported by extensive experimental results on four publicly available and well-known DG benchmarks, in which our model either surpasses previously proposed algorithms or remains highly competitive. In addition, we provide visual qualitative examples of our model's inference process through saliency maps. The visual results demonstrate the fact that our model tends to disregard spurious correlations in its input images, such as background noise, and is able to base its predictions on class-specific attributes. However, our method still has room for improvement. The employment of multiple multi-head attention mechanisms and concatenation of embedded feature maps adds a significant computation and memory overhead, which is reflected by the relatively small image batch size in our experiments. For future work, we aim to further research the intersection between visual attention and fully convolutional networks in order to propose mechanisms which will be able to explicitly pay attention to the causal features of a class.
## Acknowledgments
The work leading to these results has received funding from the European Union's Horizon 2020 research and innovation programme under Grant Agreement No. 965231, project REBECCA (REsearch on BrEast Cancer induced chronic conditions supported by Causal Analysis of multi-source data).
|
2305.01535 | tmfast fits topic models fast | tmfast is an R package for fitting topic models using a fast algorithm based
on partial PCA and the varimax rotation. After providing mathematical
background to the method, we present two examples, using a simulated corpus and
aggregated works of a selection of authors from the long nineteenth century,
and compare the quality of the fitted models to a standard topic modeling
package. | Daniel J. Hicks | 2023-05-02T15:43:59Z | http://arxiv.org/abs/2305.01535v1 | # TMFAST FITS Topic Models Fast
###### Abstract
tmfast is an R package for fitting topic models using a fast algorithm based on partial PCA and the varimax rotation. After providing mathematical background to the method, we present two examples, using a simulated corpus and aggregated works of a selection of authors from the long nineteenth century, and compare the quality of the fitted models to a standard topic modeling package.
## Table of contents
* 1 Introduction
* 2 Mathematical background
* 3 Example 1: A simulated corpus
* 3.1 Simulation parameters
* 3.2 Draw true topic distributions
* 3.3 Draw true word distributions
* 3.4 Document lengths
* 3.5 Draw corpus
* 3.6 Fit the topic model
* 3.7 Fitting a conventional topic model (stm)
* 3.8 Assessing accuracy: Word-topic distributions
* 3.9 Renormalization: Topic-document distributions
* 4 Example 2: Literature from the long nineteenth century
* 4.1 Corpus assembly
* 4.2 Vocabulary selection
* 4.3 Fit topic models
* 4.4 Topic exploration
* 5 Reproducibility
## 1 Introduction
Topic modeling is a natural language processing (NLP) technique popular among digital humanists, computational social scientists, and data scientists working with textual data (eg, product reviews) (Roberts, Stewart, and Tingley, 2019). Compared to methods such as vector space embeddings or general-use clustering algorithms such as \(k\)-means, a key advantage of topic modeling is that it simultaneously clusters both text units (terms or phrases) and documents, enabling analysts to provide human-meaningful, domain-specific labels to the clusters (topics).
However, a major disadvantage of topic modeling is that the models are relatively computationally intensive and slow to fit. This strongly discourages analysts from fitting and comparing multiple models, which is arguably the best way to determine to what extent results are sensitive to researcher degrees of freedom (Gelman and Loken, 2013; Steegen et al., 2016). Instead, typically analysts fit a few models to a given corpus and focus interpretation on a single "best" model, often chosen by informal assessments of "interpretability" of the fitted topics, introducing additional researcher degrees of freedom.
This paper reports tmfast, an R package designed to facilitate a multiple-model approach by using a significantly faster fitting algorithm. After giving a brief mathematical background in Section 2, we walk through two examples of tmfast in action: generating and fitting models to a simulated text corpus (Section 3), and then fitting models to a collection of books by different authors retrieved from Project Gutenberg (Section 4). Note that both of these examples are supervised cases -- the true topics are known _a priori_ -- and we use a method from Malaterre and Lareau (2022) to assess goodness of fit. In addition, we also fit models using the stm package (Roberts, Stewart, and Tingley, 2019) -- generally regarded as the state of the art in topic modelling in R -- and compare the models fitted by the two packages. tmfast is available at <[https://github.com/dhicks/tmfast](https://github.com/dhicks/tmfast)>.
## 2 Mathematical background
Topic modeling is typically framed using a generative model. A corpus \(C\) is defined by a fixed vocabulary or collection of terms \(T\); a collection of \(k\) topics \(B\), where each topic \(\beta\in B\) is a multinomial distribution over \(T\); and parameters \(\lambda>0\) and \(\alpha=(\alpha_{1},\ldots,\alpha_{k})\) with each \(\alpha_{i}>0\). Then a document \(d\) is generated as follows (a minimal base-R sketch appears after the list):
1. Draw the total length \(N_{d}\) of \(d\) from a Poisson distribution, \(N_{d}\sim\text{Poisson}(\lambda)\) (other distributions over the whole numbers might be used here, eg, negative binomial)
2. Draw a (\(k\)-element) topic distribution \(\theta_{d}\) from the Dirichlet distribution defined by \(\alpha\), \(\theta_{d}\sim\text{Dir}(\alpha)\)
3. For each token \(t_{i}\) (\(i=1,\ldots,N_{d}\)), 1. Draw a topic \(b_{i}\sim\text{Multinomial}(\theta_{d})\) 2. Draw a term from the topic, \(t_{i}\sim b_{i}\) (Blei, Ng, and Jordan, 2003, 996).
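To make this concrete, here is a minimal base-R sketch of steps 1-3 for a single document. The helper names (rdirichlet_1(), draw_document()) are illustrative only and are not part of tmfast; the package's own simulation helpers, used in Section 3, have a different interface.

```
## Minimal sketch of the generative model for one document (illustrative only)
rdirichlet_1 = function(alpha) {
    ## one draw from Dirichlet(alpha), via normalized gamma variates
    x = rgamma(length(alpha), shape = alpha)
    x / sum(x)
}

draw_document = function(beta_mx, alpha, lambda) {
    ## beta_mx: k x |T| matrix, row i = word distribution of topic i
    ## alpha: k-vector of Dirichlet parameters; lambda: Poisson mean length
    N_d = rpois(1, lambda)                          # step 1: document length
    theta_d = rdirichlet_1(alpha)                   # step 2: topic distribution
    topics = sample(nrow(beta_mx), N_d,             # step 3a: a topic per token
                    replace = TRUE, prob = theta_d)
    terms = vapply(topics,                          # step 3b: a term per token
                   function(b) sample(ncol(beta_mx), 1, prob = beta_mx[b, ]),
                   integer(1))
    table(terms)                                    # bag-of-words counts
}

## e.g.: two topics over four terms, uniform word distributions
# draw_document(matrix(1/4, nrow = 2, ncol = 4), alpha = c(2, 1), lambda = 50)
```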
This generative model is used to define a joint probability distribution that is fit to the data (observed document lengths and token counts) using numerical methods such as variational Bayes.
Rohe and Zeng (2020) take a different approach to topic modeling, viewing it through the lens of principal component analysis (PCA) and the varimax rotation.
Consider a rectangular dataset \(X\) with \(n\) observations of \(p\) variables (\(n\times p\)). In a statistics or data science context, PCA is used for _dimension reduction_, representing these data with \(k<p\) dimensions while preserving as much of the original variance as possible. Contemporary approaches to PCA use the singular value decomposition
\[X=U\Sigma V^{t}=UL^{t}\]
where \(U\) is a \(n\times n\) orthogonal matrix (the column vectors are orthogonal and length 1), \(\Sigma\) is a \(n\times p\) diagonal matrix (all non-diagonal entries are 0), and \(V\) is a \(p\times p\) orthogonal matrix. \(L=V\Sigma^{t}\) is a \(p\times n\) matrix called the _loadings_. When \(p<n\) (that is, more observations than variables) then columns \(p+1,p+2,\ldots,n\) of the loadings will be zero, and columns \(1,2,\ldots,p\) can be interpreted as a new set of \(p\) variables constructed from the observed \(p\) variables. The rows of \(U\) are called the _scores_; they represent the values of the observations in the new variables.
If \(X\) is centered (mean of each column/variable is 0) then the SVD is related to the covariance of the original variables in such a way that the new variables are ordered from greatest to least variance, and the original and new variables have the same total variance. So if we restrict our attention to the first \(k\) new variables we will have a smaller representation of the original dataset that captures as much of the original variance as possible. Formally, let \(U_{k}\) be the \(n\times k\) matrix with columns \(1,\ldots,k\) of \(U\) and \(L_{k}=V_{k}\Sigma_{k}^{t}\) the corresponding \(p\times k\) partial loadings matrix. Then \(X\approx U_{k}\Sigma_{k}V_{k}^{t}\).
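As a toy illustration of the truncated SVD (base R only, not part of the package), the following snippet checks how much variance the first few components retain and how the rank-\(k\) reconstruction behaves. The object names (X, s, k_demo) are just for this example.

```
## Illustrative rank-k approximation via SVD (toy data)
set.seed(2023)
X = scale(matrix(rnorm(100 * 8), nrow = 100), center = TRUE, scale = FALSE)
s = svd(X)

k_demo = 3
X_k = s$u[, 1:k_demo] %*% diag(s$d[1:k_demo]) %*% t(s$v[, 1:k_demo])

## Share of total variance captured by the first k_demo components
sum(s$d[1:k_demo]^2) / sum(s$d^2)
## Frobenius norm of the reconstruction error; shrinks as k_demo grows
norm(X - X_k, type = 'F')
```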
The loadings matrix is generally not easy to interpret, because the new variables are arbitrary linear combinations of the original variables. Such interpretations are essential in factor analysis, which attempts to identify interpretable latent variables from the data, such as psychological constructs corresponding to (weighted) sets of items in a survey instrument. Psychometricians proposed to address this problem by finding a \(k\times k\) orthogonal matrix \(T\)
\[U_{k}L_{k}^{t}=U_{k}TT^{t}L_{k}^{t}=(U_{k}T)(L_{k}T)^{t}\]
that (roughly) makes the "rotated" scores and loadings, \(U_{k}T\) and \(L_{k}T\), as _sparse_ as possible, that is, have as few non-zero entries as possible. This makes the new variables much more interpretable, as generalizations or abstractions of a small collection of observed variables. Because orthogonal matrices generalize rotations and the method for finding this \(T\) involves maximizing a total variance, this method is called the _varimax rotation_.
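Continuing the toy example above, a varimax rotation can be applied to the first \(k\) loadings with base R's stats::varimax(); this is only a sketch of the idea, not the package's internal code.

```
## Varimax rotation of the toy loadings from the SVD example above
L_k = s$v[, 1:k_demo] %*% diag(s$d[1:k_demo])   # unrotated loadings
vm = stats::varimax(L_k)

loadings_rot = L_k %*% vm$rotmat                 # rotated loadings, L_k T
scores_rot = s$u[, 1:k_demo] %*% vm$rotmat       # rotated scores, U_k T

## The factorization is unchanged by the rotation (up to numerical error)
max(abs(s$u[, 1:k_demo] %*% t(L_k) - scores_rot %*% t(loadings_rot)))
```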
Finally, to semi-formally motivate a connection between PCA and topic modeling, consider \(r_{td}\), the occurrence rate of term \(t\) in document \(d\). This rate estimates the conditional probability of \(t\) given \(d\):
\[r_{td}\approx\Pr(t|d)=\sum_{i}\Pr(t|b_{i})\Pr(b_{i}|d)=\sum_{i}b_{i}\theta_{d},\]
with a slight abuse of notation, where \(i\in 1,\ldots,k\) indexes topics. In other words, topic modeling can be seen as factoring the (more-or-less observed) term-document distribution into two sets of latent distributions, term-topic and topic-document, much like PCA factors a data matrix into scores and loadings in latent variables. See Rohe and Zeng (2020) lemma 5.2 for a formal development of this connection.
The upshot is that the latent variables constructed using PCA + varimax can be interpreted as topics. Sparsity means that a given document will have near-zero value for all but a few topics, and a given topic will have near-zero value for all but a few documents.
The most obvious potential advantage of this approach is speed. Text data is typically extremely sparse -- documents typically contain only a small fraction of the words in the full vocabulary -- and efficient algorithms have been developed for partial SVD of sparse matrices (James Baglama and Reichel, 2005).
The tmfast package implements this PCA + varimax approach to topic modeling in R, with specific support for the widely-used tidyverse idiom. The irlba package (Jim Baglama, Reichel, and Lewis, 2022) is used for efficient SVD (by default; users can specify an alternative SVD method if they prefer). tmfast is available at <https://github.com/dhicks/tmfast>.
## 3 Example 1: A simulated corpus
tmfast includes a collection of functions to generate a simulated corpus according to the standard generative model. In this section, we use these functions to generate a corpus, fit topic models using tmfast and stm (Roberts, Stewart, and Tingley, 2019) -- widely used for topic modeling in R -- and compare their respective ability to identify the true topics used to generate the corpus.
We first load the tidyverse suite, the lpSolve package to match fitted and true topics, the tictoc package to calculate wall compute times, and tmfast and stm. The tidytext package is also loaded for its stm tidiers (eg, functions to represent a fitted stm model as a dataframe).
library(tidyverse)  # infrastructure
theme_set(theme_minimal())  # make plots not look bad
library(lpSolve)    # used to match fitted and true topics
library(tictoc)     # timing
library(tmfast)     # fit topic models fast!
library(stm)        # standard topic model package
library(tidytext)   # tidiers for stm models
### Simulation parameters
We create simulated text data following the data-generating process assumed by LDA. Specifically, each document will be generated from one of several "journals." Each journal corresponds to a topic, and vice versa, in that documents from journal \(j\) will tend to have a much greater probability for topic \(j\) than the other topics.
We first specify the number of topics/journals k, and the number of documents to draw from each journal Mj, for a total of M = Mj * k documents in the corpus. We also specify the length of the vocabulary (total unique words) as a multiple of the total number of documents M. Document lengths are generated using a negative binomial distribution, using the size-mean parameterization. Per ?NegBinomial, the standard deviation of document lengths in this parameterization is \(\sqrt{\mu+\frac{\mu^{2}}{\text{size}}}\).
k = 10        # Num. topics / journals
Mj = 100      # Num. documents per journal
M = Mj*k      # Total corpus size
vocab = M     # Vocabulary length

## Negative binomial distribution of doc lengths
size = 10     # Size and mean
mu = 300
sqrt(mu + mu^2/size)  # Resulting SD of document sizes
[1] 96.43651
Topic-document and word-topic distributions are both sampled from Dirichlet distributions. For topic-docs, we use an asymmetric Dirichlet distribution where one component will have (in expectation) most of the probability mass (eg, 80%) and the remaining probability mass will be (in expectation) distributed evenly over the remaining components (eg, \(0.2/(k-1)\)). For word-topics we use a symmetric Dirichlet distribution (parameterized only by the scaling factor). tmfast includes utility functions for constructing and drawing both kinds of Dirichlet distributions.
## Dirichlet distributions for topic-docs and word-topics
topic_peak = .8
topic_scale = 10

peak_alpha(k, 1, peak = topic_peak, scale = topic_scale)

 [1] 8.0000000 0.2222222 0.2222222 0.2222222 0.2222222 0.2222222 0.2222222
 [8] 0.2222222 0.2222222 0.2222222

peak_alpha(k, 2, peak = topic_peak, scale = topic_scale)

 [1] 0.2222222 8.0000000 0.2222222 0.2222222 0.2222222 0.2222222 0.2222222
 [8] 0.2222222 0.2222222 0.2222222

word_beta = 0.1
Because the simulations involve drawing samples using a RNG, we set a seed.
set.seed(2022-06-19)
### Draw true topic distributions
We generate the true topic-document distributions \(p(\theta=t|\mathrm{doc}_{m})\), often simply notated \(\theta\) or \(\gamma\). In this vignette we use \(\theta\) for the true distribution and \(\gamma\) for the fitted distribution in the topic model. Each document's \(\theta\) is sampled from a Dirichlet distribution (rdirichlet()), with the parameter \(\alpha\) corresponding to the document's journal \(j\). The variable theta is a \(M\times k\) matrix; theta_df is a tidy representation with columns doc, topic, and theta. The visualization confirms that documents are generally most strongly associated with the corresponding topics, though with some noise: in the median document, 82% of its topic probability mass is associated with the single dominant topic.
## Journal-specific alpha, with a peak value (80%) and uniform otherwise;
## For each topic, draw Mj documents
theta = map(1:k,
            ~ rdirichlet(Mj,
                         peak_alpha(k, .x,
                                    peak = topic_peak,
                                    scale = topic_scale))) |>
    reduce(rbind)

theta_df = theta |>
    as_tibble(rownames = 'doc', .name_repair = tmfast:::make_colnames) |>
    mutate(doc = as.integer(doc)) |>
    pivot_longer(starts_with('V'), names_to = 'topic', values_to = 'theta')
theta_df
A tibble: 10,000 x 3 doc topic theta <int> <chr> <dbl> 1 1 V01 0.872 2 1 V02 0.0644 3 1 V03 0.0278 4 1 V04 0.00169 5 1 V05 0.00249 6 1 V06 0.00137 7 1 V07 0.00693 8 1 V08 0.000124 9 1 V09 0.00000643 10 1 V10 0.0235
#... with 9,990 more rows
ggplot(theta_df, aes(doc, topic, fill = theta)) + geom_tile()
[Tile plot of theta: documents along the x-axis, topics V01-V10 along the y-axis; fill shows each document's topic probability.]
theta_df |>
    group_by(doc) |>
    summarize(max = max(theta)) |>
    pull(max) |>
    summary()
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.3567 0.7148 0.8192 0.7958 0.8963 0.9901
### Draw true word distributions
Next we generate the true word distributions \(p(\phi=w|\theta=t)\), often designated as either \(\phi\) or \(\beta\). We use \(\phi\) for the true distribution and \(\beta\) for the fitted distribution. We sample these distributions from a symmetric Dirichlet distribution over the length of the vocabulary, with scale parameter word_beta defined above. Tile and Zipfian (probability vs. rank on a log-log scale) plots confirm these distributions are working correctly.
## phi_j: Word distribution for topic j
phi = rdirichlet(k, word_beta, k = vocab)

phi_df = phi |>
    t() |>
    as_tibble(rownames = 'token', .name_repair = tmfast:::make_colnames) |>
    pivot_longer(starts_with('V'), names_to = 'topic', values_to = 'phi')
phi_df

# A tibble: 10,000 x 3
   token topic      phi
   <chr> <chr>    <dbl>
 1 1     V01   2.28e- 4
 2 1     V02   2.51e-18
 3 1     V03   4.06e- 4
 4 1     V04   5.86e- 4
 5 1     V05   5.73e- 8
 6 1     V06   1.04e- 5
 7 1     V07   2.67e- 4
 8 1     V08   9.02e- 6
 9 1     V09   1.00e-25
10 1     V10   2.03e- 4
# ... with 9,990 more rows
## Word distributions
ggplot(phi_df, aes(topic, token, fill = phi)) +
    geom_tile() +
    scale_y_discrete(breaks = NULL)

## Zipfian plot
phi_df |>
    group_by(topic) |>
    mutate(rank = rank(desc(phi))) |>
    arrange(topic, rank) |>
    filter(rank < vocab/2) |>
    ggplot(aes(rank, phi, color = topic)) +
    geom_line() +
    scale_x_log10() +
    scale_y_log10()
### Document lengths
Again, document lengths are drawn from a negative binomial distribution.
## N_i: Length of document i
N = rnbinom(M, size = size, mu = mu)
summary(N)
Min. 1st Qu. Median Mean 3rd Qu. Max. 93.0 240.8 300.5 308.6 364.5 774.0
sd(N)
[1] 95.1555
hist(N)
### Draw corpus
Finally we draw the corpus, the observed word counts for each document. This is the most time-consuming step in this script, much slower than actually fitting the topic model. Experimenting with this simulation, we found that log1p() scaling of the word counts produced better results than other scaling techniques (eg, dividing by the total length of each document, scaling words by their standard deviation) for accounting for radical differences in document length.4
Footnote 4: Since \(\log(0+1)=0\), this transformation also preserves sparsity and does not introduce infinite values.
tic()
corpus = draw_corpus(N, theta, phi)
toc()
25.927 sec elapsed
dtm = mutate(corpus, n = log1p(n))
### Fit the topic model
Fitting the topic model is extremely fast. Note that we can request multiple values of \(k\) (numbers of topics) in a single call. Other topic modelling packages typically fit only a single value of \(k\) at a time.
Under the hood, we cast the document-term matrix to a sparse matrix class if necessary. Then we extract the maximum number of desired principal components using irlba::prcomp_irlba(), centering but not scaling the logged word counts. (Experiments with this simulation indicated that scaling makes it more difficult to construct probability distributions later.) Next we use the base R function stats::varimax() to construct a preliminary varimax rotation of the principal components. Because the direction of factors is arbitrary as far as varimax is concerned, but meaningful when we convert things to probability distributions, we check the skew of each factor's loadings in the preliminary fit, and reverse the factors with negative skew (long left tails with relatively large negative values).
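The following is only a rough sketch of the pipeline just described, written against irlba and base R to show the shape of the computation; it is not the package's internal code, and details (the scaling of the loadings, what is returned) are simplified.

```
## Rough sketch: sparse DTM -> partial PCA -> varimax -> sign correction by skew
library(Matrix)
library(irlba)

sketch_fit = function(dtm_sparse, n_topics) {
    pca = prcomp_irlba(dtm_sparse, n = n_topics, center = TRUE, scale. = FALSE)
    raw_loadings = pca$rotation %*% diag(pca$sdev)   # simplified scaling
    vm = stats::varimax(raw_loadings)
    loadings = unclass(vm$loadings)
    ## flip factors whose loadings are negatively skewed
    skew = apply(loadings, 2,
                 function(x) mean((x - mean(x))^3) / sd(x)^3)
    loadings %*% diag(ifelse(skew < 0, -1, 1))
}
```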
tic()
fitted = tmfast(dtm, c(3, 5, k, 2*k))
toc()
0.576 sec elapsed
The object returned by tmfast() has a simple structure (pun intended) and the tmfast S3 class. totalvar and sdev come from the PCA step, giving the total variance across all feature variables and the standard deviation of each extracted principal component. (Note that these PCs do not generally correspond to the varimax-rotated factors/topics.) n contains the sizes (number of factors/topics) fitted for the models, and varimax contains the varimax fit for each value of n. The varimax objects each contain three matrices, the rotated loadings (word-topics), the rotation matrix rotmat, and the rotated scores (document-topics). Note that these are not stored as probability distributions.
str(fitted, max.level = 2L)
List of 9 $ totalvar: num 138 $ sdev : num [1:20] 3.26 3.06 3.01 2.97 2.93... $ rows : chr [1:1000] "1" "2" "3" "4"... $ cols : chr [1:999] "5" "7" "8" "11"... $ center : Named num [1:999] 0.4016 0.0943 0.4254 0.2396 0.4093... ..- attr(*, "names")= chr [1:999] "5" "7" "8" "11"... $ scale : logi FALSE $ rotation: num [1:999, 1:20] -0.00112 0.02725 0.01223 0.00457 -0.00883... ..- attr(*, "dimnames")=List of 2 $ n : num [1:4] 3 5 10 20 $ varimax :List of 4 ..$ 3 :List of 3 ..$ 5 :List of 3 ..$ 10:List of 3 ..$ 20:List of 3 - attr(*, "class")= chr [1:3] "tmfast" "varimaxes" "list"
str(fitted$varimax[as.character(k)])
List of 1 $ 10:List of 3 ..$ loadings: num [1:999, 1:10] -0.0479 0.1025 0.0436 -0.0414 -0.0505... ....- attr(*, "dimnames")=List of 2 ......$ : chr [1:999] "5" "7" "8" "11"... .....$ : NULL ..$ rotmat : num [1:10, 1:10] 0.784 -0.2228 -0.23 -0.0823 0.3342... ...$ scores : num [1:100, 1:10] -0.384 -0.137 -0.253 -0.298 -0.388... ....- attr(*, "dimnames")=List of 2 ......$ : chr [1:1000] "1" "2" "3" "4"... ......$ : NULL
Because the model contains a sdev component, screeplot() works out of the box. Note that the first \(k\) PCs have much higher variance than the others, and often the \(k\)th PC is somewhat lower than the first \(k-1\). This reflects the highly simplified structure of the simulated data. Real datasets often have a much more gradual decline in the screeplot, likely reflecting the complex hierarchy of topics in actual documents.
screeplot(fitted, npcs = k + 5)
It's also straightforward to calculate the share of total variance covered by successive principal components. Experimenting with this simulation, it's common for \(k\) principal components to cover less than half of the total variance. Again, note that the rotated varimax factors don't correspond to the principal components, but the total covered variance remains the same.
cumsum(fitted$sdev^2) / fitted$totalvar
[1] 0.07689789 0.14466433 0.21018176 0.27420309 0.33636478 0.39492291
[7] 0.45196354 0.50563725 0.55705654 0.57380104 0.57606900 0.57828256
[13] 0.58048004 0.58264465 0.58479395 0.58691426 0.58902127 0.59107016
[19] 0.59311409 0.59514063
data.frame(PC = 1:length(fitted$sdev), cum_var = cumsum(fitted$sdev^2) / fitted$totalvar) |> ggplot(aes(PC, cum_var)) + geom_line() + geom_point()
### Fitting a conventional topic model (stm)
For comparison, we'll also fit a conventional topic model using the stm package. To address the challenge of picking a number of topics, stm::stm() conducts a topic estimation process when passed K = 0. With the simulation parameters and the random seed used here, this process takes almost 12 seconds and produces a model with 33 topics. We therefore do not run the code below.
tic()
corpus |>
    cast_sparse(doc, word, n) |>
    stm(K = 0, verbose = FALSE)
toc()
Setting K = k gives us a fitted topic model in a few seconds, about an order of magnitude slower than tmfast(). Profiling experiments indicated that tmfast() is about 20x faster than stm().
tic()
fitted_stm = corpus |>
    cast_sparse(doc, word, n) |>
    stm(K = k, verbose = FALSE)
toc()
### Assessing accuracy: Word-topic distributions
Using simulated data with true word-topic and topic-document distributions enables us to check the accuracy of both tmfast and stm models. Here we'll develop a method proposed by Malaterre and Lareau (2022), comparing distributions using Hellinger distance. For discrete probability distributions \(p,q\) over the same space \(X\), the Hellinger distance is given by
\[d(p,q)=\frac{1}{\sqrt{2}}\sqrt{\sum_{x\in X}(\sqrt{p(x)}-\sqrt{q(x)})^{2}}= \frac{1}{\sqrt{2}}\|\sqrt{p}-\sqrt{q}\|_{2}.\]
The last equation means that the Hellinger distance is the Euclidean (\(L^{2}\)-norm) distance between the _square roots_ of the distributions. Authors working with topic models sometimes compare distributions using the \(L^{2}\)-norm of the distributions themselves, without the square root. But this approach is flawed, since probability distributions can have different lengths in the \(L^{2}\) norm. (For example, the distribution \((1,0)\) has \(L^{2}\) length 1, while \((\frac{1}{2},\frac{1}{2})\) has \(L^{2}\) length approximately 0.71.) Cosine similarity, which is also widely used by text analysts, is directly related to the \(L^{2}\)-norm and has the same problem.
Hellinger distance satisfies the equation
\[1-d^{2}(p,q)=\sum_{x\in X}\sqrt{p(x)q(x)}.\]
When working with topic models, we're interested in pairwise sets of Hellinger distances, either between all pairs of distributions from a single set (for example, the topic distributions for each document, as used in "discursive space" analysis; Hicks 2021) or two sets (such as comparing fitted vs. true word-topic distributions as below; or word-topic distributions for two models fitted on the same corpus but different vocabularies, Malaterre and Lareau 2022). Working with two sets of distributions \(P=\{p_{i}|i\in I\}\) and \(Q=\{q_{j}|j\in J\}\), the right-hand side of the last equation is equivalent to a matrix multiplication.5 The tmfast::hellinger() function provides S3 methods for calculating Hellinger pairwise distances given a single dataframe, single matrix, or two dataframes or matrices.
Footnote 5: For \(P\), each row corresponds to the elementwise square root of one distribution \(\sqrt{p}_{i}\) and each column to one component \(x\in X\), i.e., a cell contains the value \(\sqrt{p_{i}(x)}\). \(Q\) is the transpose, with each row corresponding to one component \(x\in X\) and each column corresponding to the square root of a distribution \(\sqrt{q}_{j}\). The product of these matrices is a \(i\times j\) matrix with each cell the desired sum for \(p\) and \(q\).
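As a small illustration of the matrix formulation in the footnote, the pairwise distances can be computed with a single matrix product. This is a sketch of the idea, not tmfast::hellinger() itself.

```
## Pairwise Hellinger distances between two sets of distributions (sketch)
pairwise_hellinger = function(P, Q) {
    ## P: i x m matrix, rows are distributions; Q: j x m matrix, rows are distributions
    bc = sqrt(P) %*% t(sqrt(Q))      # i x j matrix of sum_x sqrt(p(x) q(x))
    sqrt(pmax(1 - bc, 0))            # Hellinger distances, clipped at 0
}

## e.g.: distance between (1, 0) and (1/2, 1/2) is sqrt(1 - sqrt(1/2)), about 0.54
pairwise_hellinger(rbind(c(1, 0)), rbind(c(.5, .5)))
```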
First, however, we need to extract the word-topic distributions. tmfast provides a tidy() method, following the pattern of the topic model tidiers in the tidytext package. Unlike other topic models, tmfast objects can contain multiple models for different values of \(k\). So, in the second argument to tidy(), we need to specify which number of topics we want. The third argument specifies the desired set of distributions, either word-topics ('beta') or topic-documents ('gamma').
```
## beta: fitted varimax loadings, transformed to probability distributions
beta = tidy(fitted, k, 'beta')
beta
```

# A tibble: 2,734 x 3
   token topic     beta
   <chr> <chr>    <dbl>
 1 5     V02   0.0198
 2 5     V08   0.00454
 3 7     V01   0.00344
 4 7     V02   0.00276
 5 7     V06   0.000318
 6 8     V01   0.00146
 7 8     V02   0.00522
 8 8     V09   0.0195
 9 11    V02   0.0114
10 11    V04   0.00610
# ... with 2,724 more rows
Word-topic distributions correspond to the varimax factor loadings. These loadings can take any real value. To convert them to probability distributions, within each factor (topic), we trim negative values to 0 and divide each loading by the sum of all loadings. The Zipfian plot below compares the fitted and true word-topic distributions. Consistently across experiments with this simulation, fitted distributions started off a little flatter, then dropped sharply after about 100 words. In other words, the varimax topic model highlights a relatively long list of characteristic words for each topic -- the actual distributions have fewer characteristic words -- and then ignores the other words.
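The trimming-and-normalization step just described amounts to only a couple of lines. Here is an illustrative sketch; tidy() performs the equivalent step (plus bookkeeping) inside the package.

```
## Sketch: convert a loadings matrix (terms x topics) to word-topic probabilities
loadings_to_beta = function(loadings) {
    trimmed = pmax(loadings, 0)                   # negative loadings -> 0
    sweep(trimmed, 2, colSums(trimmed), '/')      # each topic column sums to 1
}
## e.g., based on the structure shown above:
# loadings_to_beta(fitted$varimax[['10']]$loadings)
```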
```
## Compare Zipfian distributions
bind_rows(mutate(beta, type = 'fitted'),
          phi_df |>
              rename(beta = phi) |>
              mutate(type = 'true')) |>
    group_by(type, topic) |>
    mutate(rank = rank(desc(beta))) |>
    arrange(type, topic, rank) |>
    filter(rank < vocab/2) |>
    ggplot(aes(rank, beta, color = type, group = interaction(topic, type))) +
    geom_line() +
    scale_y_log10() +
    scale_x_log10()
```

The Zipfian distribution doesn't tell us which fitted topics might correspond to which true topics. For that, following Malaterre and Lareau (2022), we'll use pairwise Hellinger distances. There's one complication, however. The parameters chosen for this simulation typically end up not drawing some of the words from the vocabulary, and they don't end up in the same order as the true word-topic matrix phi. Fortunately words are represented as the integers 1:vocab, so it's relatively painless to put them back in order and fill in the gaps (setting the probability for the missing words to be 0 across all topics). In the code block below, we first fix these issues with the words, widen the long dataframe, convert it to a matrix, and then calculate pairwise Hellinger distances with the true word-topic matrix phi.
```
## Hellinger distance of word-topic distributions
beta_mx = beta |>
    ## Fix order of words
    mutate(token = as.integer(token)) |>
    arrange(token) |>
    ## And dropped words
    complete(token = 1:vocab, topic, fill = list(beta = 0)) |>
    build_matrix(token, topic, beta, sparse = FALSE)
```
hellinger(phi, t(beta_mx)) |>
    print(digits = 3)

(10 x 10 matrix of pairwise Hellinger distances between the true topics (rows) and the fitted topics V01-V10 (columns); the printed values are not reproduced here.)
In this distance matrix, the rows are the true topics and the columns are the fitted topics. Low values correspond to greater similarity. It's clear that the topics don't match up perfectly -- the minimum in each row is about 0.17 -- but there is a clear minimum. We treat this as a linear assignment problem, which is solved rapidly using the lpSolve package. The solution -- which matches true to fitted topics -- can then be used as a rotation with both the loadings and scores (topic-document distributions). After rotating, the true-fitted pairs are on the diagonal of the Hellinger distance matrix, making it easy to extract and summarize the quality of the fit.
## Use lpSolve to match fitted topics to true topics
dist = hellinger(phi, t(beta_mx))
rotation = lp.assign(dist)$solution
rotation
(rotation is a 10 x 10 assignment matrix: each row, corresponding to a true topic, contains a single 1 in the column of its matched fitted topic and 0s elsewhere.)
hellinger(phi, rotation %*% t(beta_mx)) |> diag() |> summary()
Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1569 0.1639 0.1674 0.1693 0.1755 0.1829
And we do the same thing with the stm topic model. **stm is somewhat more accurate than tmfast, with a median Hellinger distance of about 0.07 compared to 0.18. But stm is significantly slower.**
beta_stm_mx = tidy(fitted_stm, matrix = 'beta') |>
    ## Fix order of words
    mutate(term = as.integer(term)) |>
    arrange(term) |>
    ## And dropped words
    complete(term = 1:vocab, topic, fill = list(beta = 0)) |>
    build_matrix(term, topic, beta, sparse = FALSE)
hellinger(phi, t(beta_stm_mx)) |> print(digits = 3)
1 2 3 4 5 6 7 8 9 10 [1,] 0.0855 0.843 0.8425 0.8821 0.8744 0.8716 0.8505 0.8654 0.879 0.868 [2,] 0.8433 0.854 0.0823 0.8700 0.8585 0.8696 0.8616 0.8703 0.860 0.854 [3,] 0.8481 0.085 0.8497 0.8587 0.8349 0.8773 0.8822 0.8583 0.873 0.845 [4,] 0.8873 0.865 0.8760 0.872 0.8614 0.9034 0.8730 0.8366 0.867 0.873 [5,] 0.8651 0.842 0.8420 0.8669 0.8518 0.8549 0.8460 0.8605 0.838 0.079 [6,] 0.8928 0.855 0.8715 0.8369 0.8477 0.8910 0.8628 0.8088 0.854 0.863 [7,] 0.8760 0.875 0.8694 0.8950 0.8614 0.9843 0.8747 0.8954 0.867 0.857 [8,] 0.8618 0.885 0.8710 0.8777 0.8576 0.8821 0.0927 0.8705 0.845 0.853 [9,] 0.8827 0.840 0.8683 0.8580 0.0904 0.8601 0.8518 0.8531 0.853 0.858 [10,] 0.8822 0.878 0.8630 0.8688 0.8516 0.8734 0.8460 0.8559 0.091 0.846
rotation_stm = hellinger(phi, t(beta_stm_mx)) |>
    lp.assign() |>
    magrittr::extract2('solution')
hellinger(phi, rotation_stm %*% t(beta_stm_mx)) |> diag() |> summary()
Min. 1st Qu. Median Mean 3rd Qu. Max. 0.07902 0.08445 0.08634 0.08661 0.08999 0.09268
The tidied word-topic distributions can be used in standard ways for further analysis, such as a Silge plot of the highest probability words for each topic. But because the "words" in this simulation are just integers, and not semantically meaningful, we don't construct such a plot here.
### Renormalization: Topic-document distributions
Finally, we compare fitted and true topic-document distributions. We extract topic-document distributions using the same tidy() function, specifying the matrix gamma and including the rotation above to align the fitted and true topics. Tile and parallel coordinates plots can be used to visualize all of the topic-document distributions. These show that the tmfast models successfully recover the overall association of each document's journal with a distinctive topic.
gamma_df = tidy(fitted, k, 'gamma', rotation = rotation) |>
    mutate(document = as.integer(document),
           journal = (document - 1) %/% Mj + 1)
Warning in tidy.tmfast(fitted, k,"gamma", rotation=rotation): Rotating scores
ggplot(gamma_df,aes(document,topic,fill=gamma))+ geom_raster()+ scale_x_continuous(breaks=NULL)
ggplot(gamma_df, aes(topic, gamma, group=document, color=as.factor(journal)))+ geom_line(alpha=.25)+ facet_wrap(vars(journal))+ scale_color_discrete(guide='none')+ scale_x_discrete(guide='none')
However, the fitted topic-document distributions are flatter than the true ones. Consider the true and fitted distributions for document 1. Compared to the true distribution, the fitted distribution has a somewhat lower probability for topic V01 and a somewhat higher probability for the other topics.
ggplot(mapping = aes(topic, group = 1L)) + geom_line(mapping = aes(y = theta, color = 'true'), data = filter(theta_df, doc == '1')) + geom_line(mapping = aes(y = gamma, color = 'fitted'), data = filter(gamma_df, document == '1'))
Figure 10: The fitted topic-document distributions are flatter than the true ones.
This flatter distribution corresponds to greater entropy. In this simulation, the entropies of the fitted distributions are about 1 bit greater than those of the true distributions. This discrepancy tends to become worse with greater values of \(k\).
theta_df |> group_by(doc) |> summarize(H = entropy(theta)) |> pull(H) |> summary()
Min. 1st Qu. Median Mean 3rd Qu. Max. 0.1006 0.6614 0.9715 1.0100 1.3311 2.5821
gamma_df |> group_by(document) |> summarize(H = entropy(gamma)) |> pull(H) |> summary()
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.8486 1.7040 1.9319 1.9129 2.1592 2.7673

To mitigate this problem, we add an optional renormalization step when converting document scores to topic-document distributions. Given a discrete probability distribution \(P\) with components \(p_{i}\) and entropy \(H\), and a parameter \(\beta\), we can define a new distribution \(P^{\prime}\) with components
\[p^{\prime}_{i}=\frac{p^{\beta}_{i}}{\sum_{i}p^{\beta}_{i}}=\frac{p^{\beta}_{i}} {Z}\]
which has entropy
\[H^{\prime}=-\frac{1}{Z}\sum_{i}\beta\,p^{\beta}_{i}\log p_{i}+\log Z.\]
That is, we can choose a parameter \(\beta\) that renormalizes \(P\) to achieve a target entropy \(H^{\prime}\). In LDA, the target entropy is the expected entropy for topic-document distributions drawn from the asymmetric Dirichlet prior. tmfast provides convenience functions for calculating this expected entropy; compare this to the mean entropy of the distributions in theta above. **In actual applications, where the Dirichlet prior is an idealization, choosing \(\alpha\) to set the target entropy is an important researcher degree of freedom.** It is equivalent to choosing prior parameters in other topic modeling packages.
expected_entropy(peak_alpha(k, 1, topic_peak, topic_scale))
[1] 0.997604

Since solving the equation for \(H^{\prime}\) for \(\beta\) requires numerical optimization, it's inefficient to do this every time we call tidy(), especially with large corpora. Instead, tmfast::target_power() is used to run this optimization once, and then return the mean value across all documents. We then use this single value of \(\beta\) in all future calls to tidy().
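For intuition, here is a rough sketch of what this renormalization and the one-dimensional solve for \(\beta\) might look like, using uniroot() and base-2 entropy. target_power() is the package's own, more careful implementation, and its interface differs.

```
## Sketch: renormalize a distribution and solve for the power that hits a
## target entropy (illustrative; see tmfast::target_power())
renormalize = function(p, beta) {
    p_beta = p^beta
    p_beta / sum(p_beta)
}
entropy_bits = function(p) -sum(p[p > 0] * log2(p[p > 0]))

solve_power = function(p, H_target, interval = c(0.01, 20)) {
    uniroot(function(b) entropy_bits(renormalize(p, b)) - H_target,
            interval = interval)$root
}

## e.g.: sharpen a toy distribution until its entropy is about 1 bit
# solve_power(c(.6, .2, .1, .05, .05), 1)
```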
gamma_power = tidy(fitted, k, 'gamma') |>
    target_power(document, gamma,
                 expected_entropy(peak_alpha(k, 1, topic_peak, topic_scale)))
gamma_power
[1] 1.539377
The renormalized topic-document distributions have closer entropy to \(\theta\). The keep_original argument lets us compare the original and renormalized distributions.
gamma_df = tidy(fitted, k, 'gamma', rotation = rotation, exponent = gamma_power, keep_original = TRUE) |> mutate(document = as.integer(document), journal = (document - 1) %/% Mj + 1)
Warning in tidy.tmfast(fitted, k, "gamma", rotation = rotation, exponent = gamma_power, : Rotating scores
gamma_df |> group_by(document) |> summarize(across(c(gamma, gamma_rn), entropy)) |> summarize(across(c(gamma, gamma_rn), mean))
# A tibble: 1 x 2 gamma gamma_rn <dbl> <dbl>
1 1.91 1.03
We can now assess accuracy of the topic-document distributions. Above we used the hellinger() method for two matrices. The method for two dataframes requires specifying the id, topic, and probability columns. The tile plot shows that the true and fitted topics are aligned (because we used the rotation when extracting gamma_df above), and so again we can get an overall summary from the diagonal. Without renormalization, in the current simulation the mean Hellinger distance is 0.24 -- not too bad, but perhaps larger than one would like. With larger values of \(k\), this accuracy increases significantly. Renormalization keeps the mean distance around 0.13, comparable to the word-topic distributions.
## w/o renormalization, mean distance is .24
hellinger(theta_df, doc, prob1 = theta,
          topicsdf2 = gamma_df, id2 = document, prob2 = gamma,
          df = FALSE) |>
    diag() |>
    summary()
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.08499 0.20131 0.23733 0.23770 0.27244 0.37585
## w/ renormalization, mean distance drops to .13
doc_compare = hellinger(theta_df, doc, prob1 = theta,
                        topicsdf2 = gamma_df, id2 = document, prob2 = gamma_rn,
                        df = TRUE)
doc_compare |>
    filter(doc == document) |>
    pull(dist) |>
    summary()
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.04808 0.10195 0.12378 0.12518 0.14564 0.24868
ggplot(doc_compare, aes(as.integer(doc), as.integer(document), fill = 1 - dist)) +
    geom_raster() +
    scale_x_discrete(breaks = NULL, name = 'true') +
    scale_y_discrete(breaks = NULL, name = 'fitted')
STM has a slightly closer fit, with a mean Hellinger distance of 0.08.
fitted_stm_gamma = tidy(fitted_stm, matrix = 'gamma') |> build_matrix(document, topic, gamma, sparse = FALSE)
hellinger(theta, fitted_stm_gamma %*% t(rotation_stm)) |>
    diag() |>
    summary()
Min. 1st Qu. Median Mean 3rd Qu. Max.
0.03216 0.07148 0.08638 0.08823 0.10260 0.19884
## 4 Example 2: Literature from the long nineteenth century
Our second example will analyze a literary corpus from the long nineteenth century, attempting to recover the author of each document. In order to reduce the number of requests sent to Project Gutenberg, we construct a convenience
function to identify and retrieve fulltext works given an author's Gutenberg ID, and wrap this in memoise::memoise() to create a local cache.
```
library(tidyverse)   # infrastructure
theme_set(theme_minimal())
library(ggbeeswarm)
library(memoise)
library(tictoc)
library(glue)
library(gutenbergr)  # text retrieval and manipulation
library(tidytext)
library(tmfast)      # topic modeling
library(stm)         # topic modeling

get_author = function(author_id) {
    gutenberg_works(gutenberg_author_id == author_id, has_text) |>
        gutenberg_download(meta_fields = c('author', 'title'),
                           mirror = 'http://aleph.gutenberg.org')
}
get_author = memoise(get_author,
                     cache = cache_filesystem('realbooks'))
```
### Corpus assembly
We first retrieve all works in Project Gutenberg by our target authors: Jane Austen, Anne, Charlotte, and Emily Bronte, Louisa May Alcott, George Eliot, Mary Shelley, Charles Dickens, HG Wells, and HP Lovecraft. For these authors, the memoise local cache ends up at about 286 MB.
```
## Jane Austen is author 68
# gutenberg_authors |>
#     filter(str_detect(author, 'Austen'))
austen_df = get_author(68)

## Anne Bronte is 404
# filter(gutenberg_authors, str_detect(author, 'Bronte'))
a_bronte_df = get_author(404)

## Charlotte Bronte is 408
# filter(gutenberg_authors, str_detect(author, 'Bronte'))
c_bronte_df = get_author(408)

## Emily Bronte is 405
# filter(gutenberg_authors, str_detect(author, 'Bronte'))
e_bronte_df = get_author(405)

## Louisa May Alcott is 102
# filter(gutenberg_authors, str_detect(author, 'Alcott'))
alcott_df = get_author(102)

## George Eliot is 90
# filter(gutenberg_authors, str_detect(author, 'Eliot'))
eliot_df = get_author(90)

## Mary Wollstonecraft Shelley is 61
# filter(gutenberg_authors, str_detect(author, 'Shelley'))
shelley_df = get_author(61)

## Charles Dickens is 37
# filter(gutenberg_authors, str_detect(author, 'Dickens'))
dickens_df = get_author(37)

## HG Wells is 30
# filter(gutenberg_authors, str_detect(author, 'Wells'))
wells_df = get_author(30)

## HP Lovecraft is 34724
# filter(gutenberg_authors, str_detect(author, 'Lovecraft'))
lovecraft_df = get_author(34724)
```
We combine these results, and use tidytext::unnest_tokens() to convert the result into a long-format document-term matrix. Note that token extraction can take a long moment. We also construct a dataframe to link titles to authors in the topic model output.
dataf = bind_rows(austen_df, a_bronte_df, c_bronte_df, e_bronte_df, alcott_df, eliot_df, shelley_df, dickens_df, wells_df, lovecraft_df) |> unnest_tokens(term, text, token = 'words') |> count(gutenberg_id, author, title, term)
dataf
# A tibble: 1,812,904 x 5 gutenberg_id author title term n <int> <chr> <chr> <int>
1 35 Wells, H. G. (Herbert George) The Time Machine _can_ 1
2 35 Wells, H. G. (Herbert George) The Time Machine _cancan_ 1
3 35 Wells, H. G. (Herbert George) The Time Machine _down_ 1
4 35 Wells, H. G. (Herbert George) The Time Machine _four_ 1
5 35 Wells, H. G. (Herbert George) The Time Machine _him_ 1
6 35 Wells, H. G. (Herbert George) The Time Machine _how_ 1
7 35 Wells, H. G. (Herbert George) The Time Machine \(i\) 1
8 35 Wells, H. G. (Herbert George) The Time Machine _instantan- 1
9 35 Wells, H. G. (Herbert George) The Time Machine _minus_ 1
10 35 Wells, H. G. (Herbert George) The Time Machine _nil_ 1
#... with 1,812,894 more rows
meta_df = distinct(dataf, author, title)
The number of works by each author varies widely, as does the total token count.
distinct(dataf, author, title) |> count(author)
A tibble: 10 x 2 author <chr> n 1 Alcott, Louisa May 45 2 Austen, Jane 10 3 Bronte, Anne 2 4 Bronte, Charlotte 6 5 Bronte, Emily 1 6 Dickens, Charles 77 7 Eliot, George 18 8 Lovecraft, H. P. (Howard Phillips) 7 9 Shelley, Mary Wollstonecraft 17 10 Wells, H. G. (Herbert George) 70
with(dataf, n_distinct(author, title))
[1] 253
dataf |> group_by(author, title) |> summarize(n = sum(n)) |> summarize(min = min(n), median = median(n), max = max(n), total = sum(n)) |> arrange(desc(total))
'summarise()' has grouped output by 'author'. You can override using the.groups' argument.
A tibble: 10 x 5 author <chr> min median max total 1 Dickens, Charles 1364 31226 360502 6785632 2 Wells, H. G. (Herbert George) 3958 64936.47057 5224147 3 Alcott, Louisa May 2660 55483 194549 2977676 4 Eliot, George 1871 108236.320413 2247001 5 Austen, Jane 23192 101879 784790 1652092 6 Shelley, Mary Wollstonecraft 12514 53643 183856 1434844 7 Bronte, Charlotte 1416 138921 219783 699938 8 Bronte, Anne 68716 119946.171177 239893 9 Lovecraft, H. P. (Howard Phillips) 3654 12073 99008 160200 10 Bronte, Emily 117082 117082 117082 117082
dataf |> group_by(author, title) |> summarize(n = sum(n)) |>
ggplot(aes(author, n, color = author)) + geom_boxplot() + geom_beeswarm() + scale_color_discrete(guide = 'none') + coord_flip()
'summarise()' has grouped output by 'author'. You can override using the '.groups' argument.
### Vocabulary selection
In line with a common rule of thumb in topic modeling, we aim for a vocabulary of about 10 times as many terms as documents in the corpus.
vocab_size = n_distinct(dataf$author, dataf$title) * 10 vocab_size
[1] 2530
tmfast provides two information-theoretic methods for vocabulary selection. Both are based on the idea of a two-player guessing game. I pick one of the documents from the corpus, then one of the terms from the document. I tell you the term, and you have to guess which document I picked. More informative terms have greater information gain (calculated as the Kullback-Leibler divergence) relative to a "baseline" distribution based purely on the process used to pick the document. The difference between the two methods is in the document-picking process. The ndH method assumes the document was picked uniformly at random from the corpus, so that no document is more likely to be picked than any other. The ndR method assumes document probability is proportional to the document length, so that shorter documents are less likely to be picked. This method implies that terms that are distinctive of shorter documents have high information gain, since they indicate "surprising" short documents.
On either method, the most informative terms are often typographical or OCR errors, since these only occur in a single document. To balance this, we multiply the information gain (\(\Delta H\) for the uniform process, \(\Delta R\) for the length-weighted process) by the log frequency of the term across the entire corpus (\(\log n\)). So ndH is shorthand for \(\log(n)\Delta H\) while ndR is shorthand for \(\log(n)\Delta R\).
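Written out, the two scores take roughly the following form (our reading of the description above; the package's exact normalization and log base may differ, though the printed values suggest base-2 logs). With \(D\) documents, \(n_w\) occurrences of term \(w\) in the corpus, and \(p(d \mid w)\) the probability that document \(d\) was picked given that term \(w\) was reported,

\[
\Delta H_w = D_{\mathrm{KL}}\!\left(p(d \mid w)\,\middle\|\,\tfrac{1}{D}\right) = \log D - H\!\left(p(d \mid w)\right),
\qquad
\Delta R_w = D_{\mathrm{KL}}\!\left(p(d \mid w)\,\middle\|\,p_{\mathrm{len}}(d)\right),
\]
\[
\mathrm{ndH}_w = \log(n_w)\,\Delta H_w,
\qquad
\mathrm{ndR}_w = \log(n_w)\,\Delta R_w,
\]

where \(p_{\mathrm{len}}(d)\) is proportional to the length of document \(d\). A term concentrated in a single document has conditional entropy near zero, so its information gain is close to \(\log D\), and the \(\log(n_w)\) factor then damps the score for very rare terms.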
tic()
H_df = ndH(dataf, title, term, n)
R_df = ndR(dataf, title, term, n) |>
    mutate(in_vocab = rank(desc(ndR)) <= vocab_size)
toc()
17.518 sec elapsed
H_df
# A tibble: 116,449 x 5
   term          H    dH     n   ndH
   <chr>     <dbl> <dbl> <int> <dbl>
1 kipps 0.125 7.86 1454 82.6
2 domby 0.386 7.60 1618 81.0
3 boffin 0.108 7.87 1127 79.8
4 pecksniff 0.366 7.62 1320 79.0
5 gwendolen 0.239 7.74 1048 77.7
6 lydgate 0.0235 7.96 867 77.7
7 deronda 0.404 7.58 1155 77.1
8 nicholas 0.977 7.01 1931 76.5
9 tito 0.0835 7.90 811 76.3
10 squeers 0.253 7.73 895 75.8
#... with 116,439 more rows
R_df
# A tibble: 116,449 x 5
   term            n    dR   ndR in_vocab
   <chr>       <int> <dbl> <dbl> <lgl>
 1 kipps        1454  7.48  78.6 TRUE
 2 hoopdriver    469  8.50  75.4 TRUE
 3 scrooge      1007  7.54  75.3 TRUE
 4 lewisham      575  8.03  73.6 TRUE
 5 benham        689  7.56  71.3 TRUE
 6 melville      243  8.97  71.1 TRUE
 7 bealby        458  8.04  71.0 TRUE
 8 veronica      659  7.52  70.4 TRUE
 9 bert          556  7.71  70.3 TRUE
10 christie     1797  6.43  69.5 TRUE
#... with 116,439 more rows
The resulting term rankings of the two methods tend to be similar, but ndR is preferable in the current case because of the additional weight it gives to distinctive terms from shorter documents.
inner_join(H_df, R_df, by = 'term') |> ggplot(aes(ndH, ndR, color = in_vocab)) + geom_point(aes(alpha = rank(desc(ndH)) <= vocab_size))
Warning: Using alpha for a discrete variable is not advised.
inner_join(H_df, R_df, by = 'term') |> mutate(ndH_rank = rank(desc(ndH)), ndR_rank = rank(desc(ndR))) |> ggplot(aes(ndH_rank, ndR_rank, color = in_vocab)) + geom_point(aes(alpha = ndH_rank <= vocab_size)) + scale_x_log10() + scale_y_log10()
Warning: Using alpha for a discrete variable is not advised.
vocab = R_df |> filter(in_vocab) |> pull(term)
head(vocab, 50)
[1] "kipps" "hoopdriver" "scrooge" "lewisham" "benham" [6] "melville" "bealby" "veronica" "bert" "christie" [11] "sylvia" "smitchey" "boldheart" "britling" "castruccio" [16] "bounderby" "lillian" "maggie" "marjorie" "craggs" [21] "n't" "kemp" "bab" "redwood" "harman" [26] "cavor" "chateris" "brumley" "ammi" "heatchiff" [31] "tackleton" "gladys" "helwyze" "tetterby" "montgomery" [36] "lodore" "trafford" "treherne" "jill" "ludovico" [41] "tito" "lomi" "canaris" "trotty" "williers" [46] "falkner" "doubleclick" "amanda" "gradgrind" "linton" dataf |> filter(term %in% vocab) |> group_by(author, title) |> summarize(n = sum(n)) |> ggplot(aes(author, n, color = author)) + geom_boxplot() + geom_beeswarm() + scale_color_discrete(guide = 'none') + coord_flip()
'summarise()' has grouped output by 'author'. You can override using the '.groups' argument.
### Fit topic models
dtm = dataf |> filter(term %in% vocab) |> mutate(n = log1p(n))
n_authors = n_distinct(dataf$author)
tic()
fitted_tmf = tmfast(dtm, n = c(5, n_authors, n_authors + 5), row = title, column = term, value = n)
toc()
0.801 sec elapsed
screeplot(fitted_tmf, npcs = n_authors + 5)
### Topic exploration
Without renormalization, most of the works are spread across a few topics, and the topics don't clearly correspond to authors.
tidy(fitted_tmf, n_authors, 'gamma') |> left_join(meta_df, by = c('document' = 'title')) |> ggplot(aes(document, gamma, fill = topic)) + geom_col() + facet_wrap(vars(author), scales = 'free_x') + scale_x_discrete(guide = 'none') + scale_fill_viridis_d()
To renormalize, we need to choose a theoretical Dirichlet distribution.
alpha = peak_alpha(n_authors, 1, peak = .8, scale = 10)
target_entropy = expected_entropy(alpha)
target_entropy
[1] 0.997604
exponent = tidy(fitted_tmf, n_authors, 'gamma') |>
    target_power(document, gamma, target_entropy)
exponent
[1] 4.064884
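Concretely, and assuming (as the name `target_power` suggests) that the renormalization is a simple power transform of each document's topic distribution, the exponent \(x \approx 4.06\) is chosen so that the average entropy of the renormalized distributions matches the entropy expected under the target Dirichlet:

\[
\tilde{\gamma}_{dk} = \frac{\gamma_{dk}^{\,x}}{\sum_{k'} \gamma_{dk'}^{\,x}},
\qquad
\frac{1}{D}\sum_{d} H\!\left(\tilde{\gamma}_{d\cdot}\right) \approx \mathbb{E}_{\theta \sim \mathrm{Dir}(\alpha)}\!\left[H(\theta)\right] \approx 0.998 .
\]

Raising the weights to a power greater than 1 sharpens each document's distribution, which is why the renormalized plots below concentrate most documents on one or two topics.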
tidy(fitted_tmf, n_authors, 'gamma', exponent = exponent) |> left_join(meta_df, by = c('document' = 'title')) |> ggplot(aes(document, gamma, fill = topic)) + geom_col() + facet_wrap(vars(author), scales = 'free_x') + scale_x_discrete(guide = 'none') + scale_fill_viridis_d()
tidy(fitted_tmf, n_authors, 'gamma', exponent = exponent) |> left_join(meta_df, by = c('document' = 'title')) |> ggplot(aes(document, topic, fill = gamma)) + geom_raster() + facet_grid(cols = vars(str_wrap(author, width = 20)), scales = 'free_x', switch = 'x') + scale_x_discrete(guide = 'none')
After renormalization, there are distinctive topics for Alcott (4) and Wells (1 and 9). Austen, Anne Bronte, Emily Bronte, and some of Shelley's works appear together in topic 3. Charlotte Bronte and some of Eliot's and Shelley's
works split topic 7. Eliot and Lovecraft share topic 10. And Dickens' works are spread across multiple topics, with 2, 6, and 8 appearing to be distinctive to him.
To aid interpretation, we create a crosswalk dataframe connecting topics to authors.
topic_author = tribble(
    ~topic, ~authors,
    'V01', 'Wells',
    'V02', 'Dickens',
    'V03', 'Austen, A & E Bronte',
    'V04', 'Alcott',
    'V05', 'Dickens',
    'V06', 'Dickens',
    'V07', 'C Bronte, Eliot, Shelley',
    'V08', 'Dickens',
    'V09', 'Wells',
    'V10', 'Eliot, Lovecraft'
)
To explore these topics further, we turn to the word-topic distribution. These distributions could be renormalized, as with the topic-doc distributions. But the exponent for the word-topic distributions is usually quite close to 1, meaning renormalization doesn't change these distributions very much.
target_entropy_term = expected_entropy(.1, k = vocab_size)
target_entropy_term
[1] 8.597192
exponent_term = tidy(fitted_tmf, n_authors, 'beta') |>
    target_power(topic, beta, target_entropy_term)
exponent_term
[1] 1.066448
We therefore skip renormalization and move directly to a Silge plot, showing the top 15 terms for each topic. tidytext::reorder_within() and tidytext::scale_x_reordered() are useful for constructing this plot.
beta_df = tidy(fitted_tmf, n_authors, 'beta')
top_terms = beta_df |>
    group_by(topic) |>
    arrange(topic, desc(beta)) |>
    top_n(15, beta) |>
    left_join(topic_author, by = 'topic')
top_terms
# A tibble: 150 x 4
# Groups:   topic [10]
   token     topic    beta authors
   <chr>     <chr>   <dbl> <chr>
 1 empire    V01   0.0162  Wells
 2 britain   V01   0.0124  Wells
 3 peoples   V01   0.0122  Wells
 4 russia    V01   0.0117  Wells
 5 king      V01   0.0111  Wells
 6 asia      V01   0.0104  Wells
 7 socialism V01   0.00995 Wells
 8 section   V01   0.00971 Wells
 9 egypt     V01   0.00926 Wells
10 ii        V01   0.00892 Wells
# ... with 140 more rows
top_terms |> mutate(token = reorder_within(token, by = beta, within = topic)) |> ggplot(aes(token, beta)) + geom_point() + geom_segment(aes(xend = token), yend = 0) + facet_wrap(vars(topic, authors), scales = 'free_y') + coord_flip() + scale_x_reordered()
Most topics (2, 3, 4, 5, 6, 8, 9) focus on character names, with three of the four Dickens topics corresponding to _The Pickwick Papers_ (topic 2), _Oliver Twist_ (5), and _David Copperfield_ (8). Wells' topics appear to distinguish non-fiction essays (topic 1) from fiction (9). Topic 7 groups together Charlotte Brontë, Eliot, and Shelley based on the use of French. Topic 10 has a mix of character names with months of the year; it appears to be a "miscellaneous" topic, often created by topic models to accommodate documents that don't fit elsewhere.
## 5 Reproducibility
sessioninfo::session_info()
- Session info
setting  value
version  R version 4.1.2 (2021-11-01)
os       macOS Big Sur 10.16
system   x86_64, darwin17.0
ui       X11
language (EN)
collate  en_US.UTF-8
ctype    en_US.UTF-8
tz       America/Los_Angeles
date     2023-05-02
pandoc   2.16.2 @ /usr/local/bin/ (via rmarkdown)
- version date (UTC) lib source assertthat 0.2.1 2019-03-21 [1] CRAN (R 4.1.0) backports 1.4.1 2021-12-13 [1] CRAN (R 4.1.0) beeswarm 0.4.0 2021-06-01 [1] CRAN (R 4.1.0) broom 1.0.2 2022-12-15 [1] CRAN (R 4.1.2) cachem 1.0.7 2023-02-24 [1] CRAN (R 4.1.2) cellranger 1.1.0 2016-07-27 [1] CRAN (R 4.1.0) cli 3.6.0 2023-01-09 [1] CRAN (R 4.1.2) colorspace 2.0-3 2022-02-21 [1] CRAN (R 4.1.2) crayon 1.5.1 2022-03-26 [1] CRAN (R 4.1.2) data.table 1.14.2 2021-09-27 [1] CRAN (R 4.1.0) DBI 1.1.2 2021-12-0 [1] CRAN (R 4.1.0) dbplyr 2.2.1 2022-06-27 [1] CRAN (R 4.1.2) digest 0.6.31 2022-12-11 [1] CRAN (R 4.1.2) dplyr
* 1.0.10 2022-09-01 [1] CRAN (R 4.1.2) ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.1.0) evaluate 0.20 2023-01-17 [1] CRAN (R 4.1.2) fansi 1.0.3 2022-03-24 [1] CRAN (R 4.1.2) farver 2.1.1 2022-07-06 [1] CRAN (R 4.1.2) fastmap 1.1.1 2023-02-24 [1] CRAN (R 4.1.2) forcats
* 0.5.2 2022-08-19 [1] CRAN (R 4.1.2) fs 1.6.1 2023-02-06 [1] CRAN (R 4.1.2) generics 0.1.3 2022-07-05 [1] CRAN (R 4.1.2) ggbeeswarm
* 0.6.0 2017-08-07 [1] CRAN (R 4.1.0) ggplot2
* 3.4.0 2022-11-04 [1] CRAN (R 4.1.2) glue
* 1.6.2 2022-02-24 [1] CRAN (R 4.1.2) gtable 0.3.0 2019-03-25 [1] CRAN (R 4.1.0) gutenberger
* 0.2.3 2022-12-14 [1] CRAN (R 4.1.2) haven 2.5.1 2022-08-22 [1] CRAN (R 4.1.2) hms 1.1.2 2022-08-19 [1] CRAN (R 4.1.2) htmtools 0.5.4 2022-12-07 [1] CRAN (R 4.1.2) htr 1.4.4 2022-08-17 [1] CRAN (R 4.1.2) irlba 2.3.5 2021-12-06 [1] CRAN (R 4.1.0) janeaustern 0.1.5 2017-06-10 [1] CRAN (R 4.1.0) jsonlite 1.8.4 2022-12-06 [1] CRAN (R 4.1.2) knitr 1.42 2023-01-25 [1] CRAN (R 4.1.2) labeling 0.4.2 2020-10-20 [1] CRAN (R 4.1.0) lattice 0.20-45 2021-09-22 [1] CRAN (R 4.1.2) lifecycle 1.0.3 2022-10-07 [1] CRAN (R 4.1.2) lpSolve
* 5.6.15 2020-01-24 [1] CRAN (R 4.1.0) lubridate 1.9.0 2022-11-06 [1] CRAN (R 4.1.2) magnittr 2.0.3 2022-03-30 [1] CRAN (R 4.1.2) Matrix 1.3-4 2021-06-01 [1] CRAN (R 4.1.2) memoise
* 2.0.1 2021-11-26 [1] CRAN (R 4.1.0) mnormt 2.0.2 2020-09-01 [1] CRAN (R 4.1.0) modelr 0.1.10 2022-11-11 [1] CRAN (R 4.1.2)
munshell 0.5.0 2018-06-12 [1] CRAN (R 4.1.0) nlme 3.1-153 2021-09-07 [1] CRAN (R 4.1.2) pillar 1.8.1 2022-08-19 [1] CRAN (R 4.1.2) pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.1.0) plyr 1.8.7 2022-03-24 [1] CRAN (R 4.1.2) psych 2.1.9 2021-09-22 [1] CRAN (R 4.1.0) purrr * 1.0.0 2022-12-20 [1] CRAN (R 4.1.2) R6 2.5.1 2021-08-19 [1] CRAN (R 4.1.0) Rcpp 1.0.9 2022-07-08 [1] CRAN (R 4.1.2) readr * 2.1.3 2022-10-01 [1] CRAN (R 4.1.2) readxl 1.4.1 2022-08-17 [1] CRAN (R 4.1.2) reprex 2.0.2 2022-08-17 [1] CRAN (R 4.1.2) reshape2 1.4.4 2020-04-09 [1] CRAN (R 4.1.0) rlang 1.1.0 2023-03-14 [1] CRAN (R 4.1.2) rmarkdown 2.14 2022-04-25 [1] CRAN (R 4.1.2) rstudioapi 0.13 2020-11-12 [1] CRAN (R 4.1.0) rvest 1.0.3 2022-08-19 [1] CRAN (R 4.1.2) scales 1.2.0 2022-04-13 [1] CRAN (R 4.1.2) sessioninfo 1.2.2 2021-12-06 [1] CRAN (R 4.1.0) SnowballC 0.7.0 2020-04-01 [1] CRAN (R 4.1.0) stm * 1.3.6 2020-09-18 [1] CRAN (R 4.1.0) stringi 1.7.12 2023-01-11 [1] CRAN (R 4.1.2) stringr * 1.5.0 2022-12-02 [1] CRAN (R 4.1.2) tibble * 3.1.8 2022-07-22 [1] CRAN (R 4.1.2) tictoc * 1.0.1 2021-04-19 [1] CRAN (R 4.1.0) tidyr * 1.2.1 2022-09-08 [1] CRAN (R 4.1.2) tidyselect 1.1.2 2022-02-21 [1] CRAN (R 4.1.2) tidytext * 0.3.2 2021-09-30 [1] CRAN (R 4.1.0) tidyverse * 1.3.1 2021-04-15 [1] CRAN (R 4.1.0) timechange 0.1.1 2022-11-04 [1] CRAN (R 4.1.2) tmatst * 0.0.0.2023-04-15 2023-04-15 [1] local tmvsim 1.0-2 2016-12-15 [1] CRAN (R 4.1.0) tokenizers 0.2.1 2018-03-29 [1] CRAN (R 4.1.0) tzdb 0.3.0 2022-03-28 [1] CRAN (R 4.1.2) utf8 1.2.2 2021-07-24 [1] CRAN (R 4.1.0) vctrs 0.6.0 2023-03-16 [1] CRAN (R 4.1.2) vipor 0.4.5 2017-03-22 [1] CRAN (R 4.1.0) vridisLite 0.4.0 2021-04-13 [1] CRAN (R 4.1.0) vithr 2.5.0 2022-03-03 [1] CRAN (R 4.1.2) xfun 0.37 2023-01-31 [1] CRAN (R 4.1.2) xml2 1.3.3 2021-11-30 [1] CRAN (R 4.1.0) yaml 2.3.7 2023-01-23 [1] CRAN (R 4.1.2)
[1] /Library/Frameworks/R.framework/Versions/4.1/Resources/library
--------------------------------
|
2306.10168 | Beyond Geometry: Comparing the Temporal Structure of Computation in
Neural Circuits with Dynamical Similarity Analysis | How can we tell whether two neural networks utilize the same internal
processes for a particular computation? This question is pertinent for multiple
subfields of neuroscience and machine learning, including neuroAI, mechanistic
interpretability, and brain-machine interfaces. Standard approaches for
comparing neural networks focus on the spatial geometry of latent states. Yet
in recurrent networks, computations are implemented at the level of dynamics,
and two networks performing the same computation with equivalent dynamics need
not exhibit the same geometry. To bridge this gap, we introduce a novel
similarity metric that compares two systems at the level of their dynamics,
called Dynamical Similarity Analysis (DSA). Our method incorporates two
components: Using recent advances in data-driven dynamical systems theory, we
learn a high-dimensional linear system that accurately captures core features
of the original nonlinear dynamics. Next, we compare different systems passed
through this embedding using a novel extension of Procrustes Analysis that
accounts for how vector fields change under orthogonal transformation. In four
case studies, we demonstrate that our method disentangles conjugate and
non-conjugate recurrent neural networks (RNNs), while geometric methods fall
short. We additionally show that our method can distinguish learning rules in
an unsupervised manner. Our method opens the door to comparative analyses of
the essential temporal structure of computation in neural circuits. | Mitchell Ostrow, Adam Eisen, Leo Kozachkov, Ila Fiete | 2023-06-16T20:11:38Z | http://arxiv.org/abs/2306.10168v3 | Beyond Geometry: Comparing the Temporal Structure of Computation in Neural Circuits with Dynamical Similarity Analysis
###### Abstract
How can we tell whether two neural networks are utilizing the same internal processes for a particular computation? This question is pertinent for multiple subfields of both neuroscience and machine learning, including neuroAI, mechanistic interpretability, and brain-machine interfaces. Standard approaches for comparing neural networks focus on the spatial geometry of latent states. Yet in recurrent networks, computations are implemented at the level of neural dynamics, which do not have a simple one-to-one mapping with geometry. To bridge this gap, we introduce a novel similarity metric that compares two systems at the level of their dynamics. Our method incorporates two components: Using recent advances in data-driven dynamical systems theory, we learn a high-dimensional linear system that accurately captures core features of the original nonlinear dynamics. Next, we compare these linear approximations via a novel extension of Procrustes Analysis that accounts for how vector fields change under orthogonal transformation. Via four case studies, we demonstrate that our method effectively identifies and distinguishes dynamic structure in recurrent neural networks (RNNs), whereas geometric methods fall short. We additionally show that our method can distinguish learning rules in an unsupervised manner. Our method therefore opens the door to novel data-driven analyses of the temporal structure of neural computation, and to more rigorous testing of RNNs as models of the brain.
## 1 Introduction
Comparing neural responses between different systems or contexts plays many crucial roles in neuroscience. These include comparing a model to experimental data (as a measure of model quality, Schrimpf et al. (2020)), determining invariant states of a neural circuit (Chaudhuri et al. (2019)), identifying whether two individual brains are performing a computation in the same way, aligning recordings across days for a brain-machine interface (Degenhart et al. (2020)), and comparing two models (e.g. to probe the similarity of two solutions to a problem, Pagan et al. (2022)). Current similarity methods include \(R^{2}\) via Linear Regression, Representational Similarity Analysis, Singular Vector Canonical Correlation Analysis, Centered Kernel Alignment, and Procrustes Analysis (Schrimpf et al. (2020); Kriegeskorte et al. (2008); Raghu et al. (2017); Williams et al. (2021); Duong et al. (2022)). Crucially, these all compare the _geometry_ of states in the latent space of a neural network. The importance of identifying shared neural computations between systems calls for comparison methods that capture similarity at the relevant levels of abstraction.
In the brain and in RNNs, a popular model of biological neural networks, computations are instantiated by the emergent dynamical properties of the network, such as fixed points, invariant and fixed point
manifolds, limit cycles, and transitions between them (Hopfield (1984), Hopfield and Tank (1986), Zhang (1996), Mongillo et al. (2008), Burak and Fiete (2009), Churchland et al. (2008), Sussillo and Abbott (2009), Churchland et al. (2012), Khona and Fiete (2022), Vyas et al. (2020). When applied to dynamical systems, geometric similarity metrics fall short in two manners. First, response geometries can differ due to sampling differences or other non-fundamental causes even though the dynamics are topologically equivalent. In such cases, measures of state space geometry do not fully capture the core similarities between two systems (Maheswaranathan et al. (2019)), a false negative situation. Conversely, systems may exhibit distinct dynamics over similar geometries in the state space (Galgali et al. (2023)). In this case, geometric measures may fail to distinguish two distinct systems, a false positive situation. Therefore, we suggest that dynamical systems theory should be a lens through which to compare neural systems.
Here, we develop a data-driven method called Dynamical Similarity Analysis (DSA), which combines work from two previously-unrelated fields of machine learning for dynamical systems and statistical shape analysis. In brief, DSA returns a similarity metric describing how two systems compare at the level of their dynamics. DSA utilizes a high-dimensional embedding to identify a linear vector field representation of a nonlinear system, which captures spatiotemporally coherent features of its dynamics. These features, referred to as Koopman modes (Koopman (1931), Mezic (2005)) and identified via the Dynamic Mode Decomposition (DMD), describe a system's core dynamic structure (Snyder and Song (2021), Brunton et al. (2021)). Subsequently, a statistical shape analysis metric is used to compare these vector fields, thereby assessing the similarity between the systems' dynamics. This shape analysis can be viewed as comparing the dynamical structure of the systems: both the dynamic modes (the vector field's eigenvectors) in each system and their temporal properties (its eigenvalues), such as whether the modes grow, shrink, or oscillate, are compared. Unlike most spatial shape analyses, which require averaging over multiple noisy trials in the same condition to arrive at one 'representation' of each condition-time point (but see Duong et al. (2022)), DSA factors noise into its estimate of neural dynamics. The importance of these noise perturbations in capturing dynamics has been highlighted in Galgali et al. (2023), as their evolution over time can uniquely identify dynamic structure. Thus, DSA takes advantage of all available information.
Our results demonstrate that DSA effectively identifies the dynamic structure of neural computations, whereas standard geometric methods fall short. DSA therefore has great relevance for neuroscience, where knowledge of the underlying dynamics can only be inferred through measurements. Consequently, DSA provides another method to validate how well a model fits to neurophysiology data, and opens the door to novel data-driven analyses of the temporal structure of neural computation. We expect our method to be interesting to computational neuroscientists due to its theoretical richness, valuable to deep learning researchers as another tool through which to interpret neural networks, and useful for experimental neuroscientists due to its simplicity of implementation and ease of interpretation.
**Contributions.** We develop a general method, DSA, for comparing two dynamical systems at the level of their dynamics. To do so, we introduce a modified form of Procrustes Analysis that accounts for how vector fields change under orthogonal transformations. We apply our method on four test cases, each demonstrating novel capabilities in neural data analysis: **(a)** Our method can identify dynamical similarities underlying systems with different geometries, which shape metrics cannot. **(b)** Conversely, DSA can distinguish systems with different dynamics despite similar geometries, which shape metrics cannot. **(c)** We show that our method empirically identifies topological similarities and differences between dynamical systems, as demonstrated by invariance under geometric deformation and variance under topological transformation. **(d)** We demonstrate how to use DSA to disentangle different learning rules in neural networks without supervision, by comparing how representations change across training.
## 2 Methods
### Theoretical Motivation: Dynamic Mode Decomposition (DMD) and Embeddings
A major goal in dynamical systems theory is to reduce complex nonlinear dynamics to linear systems that are easily interpretable. One classical method involves searching for fixed points and identifying their properties via local linearization (Lyapunov's indirect method, Sussillo and Barak (2013)), but
this is not possible for experimental systems as full access and control are required. Other methods such as Switching Linear Dynamical Systems (Linderman et al. (2016)) are limited in capturing global structure.
In light of these concerns, a class of data-driven methods based on the DMD (Schmid (2010)) has recently emerged. These methods approximate the Koopman operator, which linearizes a nonlinear system by embedding it into an infinite-dimensional Hilbert space (Brunton et al. (2016, 2017); Snyder and Song (2021); Dubois et al. (2020)). The DMD identifies a linear transition operator between local dynamics: \(X(t+\Delta t)=AX(t)\), where \(X\) comprises a data matrix of observables (functions of the state of the dynamical system). When applied to a sufficiently rich \(X\), the DMD can capture global dynamical structure. One challenge of the method is identifying a nonlinear measurement subspace, which must be closed under the operation of \(A\) (a Koopman invariant subspace, Brunton et al. (2016)). Here, we utilize the Hankel Alternative View of Koopman (HAVOK) (Brunton et al. (2017)), which fits the DMD to a delay-embedding of the data (a Hankel Matrix), such that one column of \(X=[\,x(t)\;\;x(t-\tau)\;\;\ldots\;\;x(t-p\tau)\,]^{T}\). The delay-embedding approach eliminates the difficulty of learning a nonlinear transformation of the data, and provides improved estimation capabilities for partially-observed systems (Kamb et al. (2020)). The latter capability arises thanks to Takens' delay embedding theorem, which states that a delay embedding is diffeomorphic to the original system-that is, a partially-observed system can be fully reconstructed using a sufficient number of delays (Takens (1981)).
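For concreteness, here is a minimal NumPy sketch of the delay embedding step (our own illustration, not the authors' code; the function and variable names are ours):

```python
import numpy as np

def delay_embed(x, n_delays, lag=1):
    """Delay-embed a trajectory x of shape (T, n) into Hankel coordinates.

    Row t of the output is the concatenation [x(t), x(t - lag), ..., x(t - (n_delays - 1) * lag)],
    so the output has shape (T - (n_delays - 1) * lag, n * n_delays).
    """
    T, n = x.shape
    rows = T - (n_delays - 1) * lag
    H = np.empty((rows, n * n_delays), dtype=x.dtype)
    for d in range(n_delays):
        start = (n_delays - 1 - d) * lag
        H[:, d * n:(d + 1) * n] = x[start:start + rows]
    return H

# toy example: a 2-d trajectory embedded with 75 delays
t = np.linspace(0, 20, 500)
x = np.stack([np.cos(t), np.sin(t)], axis=1)
H = delay_embed(x, n_delays=75)   # shape (426, 150)
```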
### Dynamical Similarity Analysis (DSA)
Now, we introduce the Dynamical Similarity Analysis, a general method that leverages advances in both Statistical Shape Analysis and Dynamic Mode Decomposition to compare two neural systems at the level of their dynamics. Simply put, we fit a DMD or its extensions to observations from two neural systems, and compare their dynamic mode matrices via a modified shape analysis metric.
Suppose we have two dynamical systems defined by the following equations:
\[\mathbf{\dot{x}}=f(\mathbf{x},t)\quad\mathbf{x}\in\mathbb{R}^{n}\qquad\quad \mathbf{\dot{y}}=g(\mathbf{y},t)\quad\mathbf{y}\in\mathbb{R}^{m} \tag{1}\]
We sample data from each system \(X\in\mathbb{R}^{c\times k_{1}\times t_{1}\times n}\) and \(Y\in\mathbb{R}^{c\times k_{2}\times t_{2}\times m}\). Here, \(c\) indicates the number of conditions observed, \(k\) indicates the number of trials per condition, and \(t\) indicates the number of time steps observed per trial. Note that the observations must be sampled uniformly in time (Takens (1981)). Next, we produce delay-embedded Hankel tensors with a lag of \(p\): \(H_{x}\in\mathbb{R}^{c\times k_{1}\times(t-p-1)\times np}\) and likewise for \(H_{y}\). Using the Hankel Alternative View of Koopman (HAVOK) approach (Brunton et al. (2017)), we flatten all but the last dimension of \(H\) and fit reduced-rank regression models with rank \(r\) to the eigen-time delay coordinates of the data, where the target is the coordinates of the next time step (\(V^{\prime}_{x,r}\)):
\[V^{\prime}_{x,r}=A_{x}V_{x,r}\text{ where }H_{x}=U_{x}\Sigma_{x}V^{T}_{x} \text{ and }V_{x,r}=V_{x}[:,1:r] \tag{2}\]
Only the rank of the HAVOK models explicitly needs to be the same so that the two DMD matrices can be compared, but one could also be zero-padded such that the dimensions match (Williams et al. (2021)). We can assess the predictive capability of our model with MSE or \(R^{2}\). Hyperparameters of the DMD (lag \(p\) and rank \(r\)) for each experiment were chosen to ensure both sufficient capability to fit the dynamics in question as well as tractable computation times. Fitting the HAVOK model to each system returns our DMD matrices: \(\mathbf{A}_{x},\mathbf{A}_{y}\in\mathbb{R}^{r\times r}\), which we compare using the modified Procrustes Analysis algorithm detailed next.
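A simplified sketch of the fitting step follows (our own illustration; the paper uses reduced-rank regression on the eigen-time-delay coordinates, whereas this sketch uses an ordinary least-squares fit in the top-`rank` singular-vector coordinates, which conveys the same idea):

```python
import numpy as np

def fit_dmd_in_delay_coords(H, rank):
    """Fit v(t+1) ≈ A v(t), where v(t) are the top-`rank` singular-vector coordinates of H.

    H is a delay-embedded data matrix with one time step per row.
    Returns the (rank, rank) DMD matrix A and the coordinate time series.
    """
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    coords = H @ Vt[:rank].T                        # project each time step onto the top modes
    X, Xnext = coords[:-1], coords[1:]              # pairs of consecutive states
    B, *_ = np.linalg.lstsq(X, Xnext, rcond=None)   # solves X @ B ≈ Xnext
    A = B.T                                         # so that v(t+1) ≈ A @ v(t)
    return A, coords
```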
Importantly, both the reduced-rank regression and delay-embedding are required to isolate dynamical structure. We demonstrate this in Fig. 1e, where we compare an RNN and GRU solving the Sine Wave task (Sussillo and Barak (2013)) using the standard Procrustes Analysis (via the netrep package, Williams et al. (2021)), delay-embedded Procrustes Analysis (an ablation), DSA with no delay embedding (another ablation), and the full DSA score. Only DSA identifies the systems as highly similar, indicating that both the delay embedding and the DMD are necessary to abstract away the geometric particulars of each system.
**Procrustes Analysis over Vector Fields.** Procrustes Analysis is a valuable similarity metric, but to use it in DSA, we must modify the notion of orthogonal transformations to be compatible with vector
fields. Recall that the Procrustes metric solves the following minimization problem (Williams et al. (2021)):
\[\text{d}(\mathbf{X},\mathbf{Y})=\min_{\mathbf{C}\in O(n)}||\mathbf{X}-\mathbf{C }\mathbf{Y}||_{F} \tag{3}\]
Where \(O(n)\) is the orthogonal group, \(F\) indicates the Frobenius norm, and \(X,Y\) are two data matrices. However, vector fields, represented by the DMD matrices \(\mathbf{A}_{x},\mathbf{A}_{y}\) above, transform differently than X and Y (see Supplementary Information for an illustration). The transformation \(\mathbf{C}\mathbf{A}_{y}\) simply rotates vectors in place, which destroys dynamic structure (for example, it could turn a stable fixed point into an unstable one). However, \(\mathbf{C}\mathbf{A}_{y}\mathbf{C}^{-1}\) rotates their positions as well, thereby preserving the vector field structure and satisfying the same goal as \(\mathbf{C}\mathbf{Y}\) applied to a data matrix. Thus we introduce a novel similarity metric to the machine learning literature: Procrustes Analysis over Vector Fields:
\[\text{d}(\mathbf{A}_{x},\mathbf{A}_{y})=\min_{\mathbf{C}\in O(n)}||\mathbf{A}_ {x}-\mathbf{C}\mathbf{A}_{y}\mathbf{C}^{-1}||_{F} \tag{4}\]
This nonconvex optimization problem is similar to Jimenez et al. (2013), although we solve it with gradient-based optimization. Across our experiments, we found that the metric asymptotes in less than 200 iterations using the Adam optimizer (Kingma and Ba (2014)) with a learning rate of 0.01. We derive this metric from first principles and prove that it is indeed a proper metric in the Supplementary Information, which as noted in Williams et al. (2021) is crucial for various downstream machine learning pipelines such as clustering or classification. In our experiments, we compare to Procrustes as it is the most similar shape metric to DSA. Note that other similarity metrics could also be modified to fit the DSA algorithm.
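A minimal PyTorch sketch of the minimization in Eq. 4 is shown below (our own illustration; we parametrize \(C\) with the matrix exponential of a skew-symmetric matrix, which covers the rotation component \(SO(n)\) rather than all of \(O(n)\), and we return the raw Frobenius objective rather than the angular form used in the figures):

```python
import torch

def procrustes_over_vector_fields(Ax, Ay, n_iter=200, lr=0.01):
    """Approximately minimize ||Ax - C Ay C^{-1}||_F over orthogonal C (using C^{-1} = C^T)."""
    Ax = torch.as_tensor(Ax, dtype=torch.float64)
    Ay = torch.as_tensor(Ay, dtype=torch.float64)
    W = torch.zeros_like(Ax, requires_grad=True)   # unconstrained parameters
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(n_iter):
        opt.zero_grad()
        C = torch.matrix_exp(W - W.T)              # exp of a skew-symmetric matrix is orthogonal
        loss = torch.linalg.norm(Ax - C @ Ay @ C.T, ord="fro")
        loss.backward()
        opt.step()
    return loss.item()
```

This parametrization keeps \(C\) exactly orthogonal at every step, so the only free choice is the learning rate and iteration count noted in the text.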
### Model Architectures and Tasks
To demonstrate the viability of our method, we identified four illustrative cases that disentangle geometry and dynamics. First, a set of neural networks that shape analyses describe as different but dynamically are very similar; second, a set of neural networks that shape analyses describe as similar but dynamically are very different; third, a ring attractor network whose geometry we can smoothly deform while preserving the attractor topology; finally, a line attractor network that we transform into a ring attractor by transforming periodic boundary conditions.
Figure 1: **Schematic and example of DSA.****a.** Schematic of Delay Embedding. **b.** Schematic of the Koopman Operator computed on time-delay coordinates. **c,d.** PCA trajectories of a GRU (left) and Vanilla RNN trained to produce sine waves of a range of frequencies (Sussillo and Barak (2013)). Colored dots indicate center points of the limit cycles, colored by frequency of the output wave (black lines). **e.** Comparison of measured dissimilarity between the networks in **c,d** with (from left to right) Procrustes on the condition-averaged trajectories, Procrustes on the delay-embedded condition-averaged trajectories, DSA on the original data, and DSA on the delay embedding. Lower is more similar. Each metric uses angular distance, which ranges from 0 to \(\pi\).
**Flip Flop.** For the first case, we train a collection of 50-unit RNNs on the well-known 3-bit Flip Flop task from Sussillo and Barak (2013) for the sake of space and its well-studied dynamical properties. Following Maheswaranathan et al. (2019), we varied architecture type (LSTM, GRU, UGRNN, Vanilla RNN), activation function (ReLU, Tanh), learning rate (.01, 0.005), and multiple different seeds across each set of hyperparameters, ultimately collecting a dataset of 240 networks. We only saved a network if it solved the task with MSE less than 0.05. After training, we tested all models on the same 64 input sequences, and extracted the recurrent states, upon which we apply Procrustes and DSA in parallel to compare networks.
**Perceptual Decision-Making.** To demonstrate the converse-same condition-averaged geometry with different topology-we examine a set of three noisy recurrent networks from Galgali et al. (2023). These systems implement a bistable pair of attractors, a line attractor, and a single stable fixed point at the origin-their dynamical equations are included in the Supplementary Information. Here, the inputs to the latter two networks were adversarially optimized so that the condition-averaged trajectories are indistinguishable from the first network (Fig. 3a). These networks have intrinsic noise in their hidden states, which enables DSA to distinguish systems beyond condition-averages. We simulated 200 noisy trajectories from 100 networks per class, each with randomly sampled parameters.
**Ring Attractor Deformations.** Next, we measure how DSA responds when geometric deformations are applied to a dynamical system, while preserving the topology. Here, we chose to study a ring attractor network, a system known to track head direction in the brain, and whose topology is a ring embedded in \(n\) neurons (Zhang (1996); Chaudhuri et al. (2019); Skaggs et al. (1995)). The neural activity on the ring is instantiated as a bump of local activity, which is stable due to local inhibition and global excitation. In each trial, we randomly sampled \(n\) uniformly between 100 and 250, and drove the network with constant input and dynamical noise, leading the bump to rotate around the ring at a constant velocity (we used code from Wang and Kang (2022)). The network's dynamics are detailed in the Supplementary Information. Importantly, after each simulation, the activations can be transformed by a continuous deformation that preserves the ring topology while changing the geometry:
\[r=\frac{1}{1+\exp{(-\beta s)}} \tag{5}\]
The variable \(s\) describes the synaptic activations of the ring network (the dynamical variable), and \(r\) describes the neural activations, which we compare with DSA and Procrustes. Here, \(\beta\) controls the width of the bump-at small \(\beta\), the ring is essentially planar, and at large \(\beta\), the ring is bent along each axis (see Fig. 1 in Kriegeskorte and Wei (2021) for a depiction). We scaled the magnitude of \(\beta\) from 0.1 to 4. After applying these transformations to simulated trajectories from our network, we applied DSA and Procrustes to respectively compare all \(\beta\)s to the lowest \(\beta\) in our dataset.
**Transforming a Line into a Ring.** Finally, we assessed the converse to the previous analysis: does DSA respond to changes in topology? Using the same ring attractor, we varied the strength of its periodic boundary conditions to convert the ring into a curved line segment attractor by ablating some neurons from the network on one end of the ring. Our ring is defined by a cosine kernel that defines how each neuron connects to its neighbors. This kernel has a length parameter \(l\) which defines how far apart two neurons can connect. To completely eliminate the boundary conditions, \(l\) neurons must therefore be ablated. Using an ablation length \(c\), we defined the following parameter:
\[\alpha=1-\frac{c}{l} \tag{6}\]
When \(\alpha=1\), the network is perfectly a ring, and breaks into a line segment when \(\alpha<1\). As \(\alpha\to 0\) the ring becomes progressively less curved. To ensure that network sizes were consistent across all values of \(\alpha\), we initialized the network size to be \(n+c\) before ablation, where \(n\) is the desired final number of neurons. We simulated networks of between 200 and 300 neurons. Instead of a constant drive of input as before, we drove the network with a fixed magnitude input \(u\) whose sign flipped stochastically. We chose this input to prevent the bump of activity from getting trapped on one side in the line attractor setting.
### Learning Rule Disentangling with DSA
We applied DSA to two datasets containing neural networks trained with different learning rules: a dataset of observables from Nayebi et al. (2020) recorded from the hidden layers of large convolutional neural networks trained to solve three different tasks, and a set of small 3-layer neural networks (with a hidden size of 100 units) trained to solve a multivariate regression task from Bordelon and Pehlevan (2022). The learning rules utilized in the first set include Adam (Kingma and Ba (2014)), Stochastic Gradient Descent with Momentum (Sutskever et al. (2013)), Information Alignment (Kunin et al. (2020)), and Feedback Alignment (Lillicrap et al. (2016)). The learning rules in the second set include Gradient Descent, two forms of Feedback Alignment that vary based on the initialization of the random feedback weights (Boopathy and Fiete (2022)), and Hebbian learning (Hebb (1949)). We apply DSA to trajectories of the neural representations (or observables) across training, such that our delay embeddings are computed over epochs. For each test set, we generate 200 sets of models total: In the first set, we generate 50 DMD matrices for each learning rule by randomly selecting trajectories from networks with the same learning rule to form a single dataset, and subsequently fitting a HAVOK model. In the second task, we generate 50 DMD matrices for each learning rule by fitting a single HAVOK model to 1000 epochs of training for a single model with a different random seed. We compare each model pairwise, which results in a representational dissimilarity matrix.
## 3 Results
### DSA captures dynamical similarity between RNNs despite differences in geometry.
Our set of RNNs trained on the 3-bit Flip-Flop Task (Sussillo and Barak (2013)) have different representational geometry but equivalent attractor topology (Maheswaranathan et al. (2019)). Fig.2 a. displays the trajectories of individual trials in a sample network in the first 3 Principal Components, which capture on average 95.9% of the total variance. The computational solution to this task is characterized by eight stable fixed points at the vertices of a cube that correspond to the eight possible unique memory states (\(2^{3}\)). Importantly, this cubic structure is preserved across all networks, even though particular geometric aspects of the trajectories may differ. Standard shape analyses differentiate architectures and activation function by geometry, which we replicated in Fig. 2 b (architecture) and in the Supplementary Information (activation).
We fit individual HAVOK models with 75 delays and rank 100 to sampled trajectories from each network. For computational efficiency, we reduced the dimensionality of the network to 10 using Principal Components Analysis (PCA) before applying HAVOK. By linearity of PCA, the dynamics of the original system within the PC subspace are preserved. For each pair of networks, we compared these states directly using Procrustes Analysis and on the DMD matrices using Procrustes Analysis on Vector Fields. This yields two Dissimilarity Matrices, which we reduced to two dimensions using Multidimensional Scaling (Kruskal (1964)). We plot the results in this similarity space in Fig. 2, where each point is a trained network. As expected, the networks cluster by architecture (Fig. 2b) and activation function (Supplementary Figures) when Procrustes is applied, indicating that aspects of their geometry differ. However, the networks intermingle under the DSA (Fig. 2c, inset zoomed in), and have pairwise dissimilarities close to zero, indicating that their underlying dynamics are equivalent, which is correct (Maheswaranathan et al. (2019)). We quantified these differences by applying a linear SVM to the dissimilarity matrices and plotted the test accuracy in Fig. 2d. This further underscores the differences between Procrustes and DSA in Fig. 2b and c, as DSA's accuracy is close to chance, whereas Procrustes achieves high accuracy on both labels.
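The downstream comparison pipeline can be sketched as follows (our own illustration with scikit-learn; `dsa_distance` stands in for the vector-field Procrustes metric above, and the two-dimensional embedding is only for visualization):

```python
import numpy as np
from sklearn.manifold import MDS

def embed_dissimilarities(models, dsa_distance):
    """Build a pairwise dissimilarity matrix from a list of DMD matrices and embed it in 2-D."""
    k = len(models)
    D = np.zeros((k, k))
    for i in range(k):
        for j in range(i + 1, k):
            D[i, j] = D[j, i] = dsa_distance(models[i], models[j])
    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
    return D, coords
```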
### DSA identifies dynamical differences despite geometric similarity
Next, we sought to display the converse, namely that our similarity method can identify differences in dynamical structure even when spatial analysis cannot. We tested three models from Galgali et al. (2023), which implement different strategies for a classical perceptual decision-making task used in systems neuroscience: unstable evidence integration, stable integration, and leaky integration. We visualized the condition-averaged and single-trial trajectories in Fig. 3a for each dynamical system to confirm that the condition-averaged trajectories are indistinguishable.
We fit a HAVOK model with 100 lags and a rank of 50 for each network. As in Fig. 2, we computed both the spatial and dynamic dissimilarity matrices between each network and plotted them with
MDS. Comparing the two types of similarities side by side (spatial similarity, Fig. 3b, and DSA, Fig. 3c), it is evident that only our method is able to easily distinguish dynamic types from the data alone. Thus, DSA can efficiently extract features that describe the underlying dynamical system from data alone, which better reflect the nature of the computation that is being performed in RNNs.
### DSA is invariant under geometric deformations that preserve attractor topology
Here, we demonstrate that DSA compares two dynamical systems at the level of their topology. To do so, we smoothly deformed a ring attractor network in two manners and compared the original network to the progressively deformed system. Importantly, the simulated synaptic data is still the same in each case, so only the network's geometry is deformed. In Fig. 4a we plot the first two PCs as the ring is progressively transformed with \(\beta\) (Fig. 4a). As we increase \(\beta\), the width of the bump (see Supplementary Information for a visualization) becomes progressively tighter, which makes the ring less planar. This can also be measured by calculating the cumulative explained variance ratio of the first two PC's, which starts at 99% at small \(\beta\) but drops to 30% at large \(\beta\) (Supplementary Information). We fit HAVOK models with a lag of 10 and a rank of 1000 to trajectories at each level of deformation. When we compared the network using DSA and Procrustes, we found that only DSA is invariant across these transformations (Fig. 4b,d). The value for DSA is not strictly zero, which may be due to approximation or numerical error in HAVOK and the similarity metric. Yet the dissimilarity measured by DSA is quite stable, with a standard deviation of 0.0175 across all trials **and** all \(\beta\) values. On the other hand, Procrustes had a standard deviation of 0.13 across trials
Figure 3: **Different dynamics, same shape, only distinguished by DSA.****a.** Sample single-trial (faded) and condition-averaged trajectories of the three systems in two dimensions (x and y axes). Trials start at the center point and move away in a direction depending on the condition (colored). **b.** Shape analysis MDS on the RDMs of the three network types. **c.** DSA-MDS of the three networks.
Figure 2: **Same dynamics, different shape, only identified as similar with DSA. Dots indicate a single trained network. Analysis of RNNs trained on the 3-bit flipflop task.****a.** PCA trajectories of a single trained RNN. **b.** MDS Projection of the Dissimilarity Matrix computed across condition-averaged hidden states between each RNN (architecture indicated by color). In this view of the data, RNNs cluster by architecture. **c.** MDS Embedding of the dissimilarity matrix generated from DSA of the same set of RNNs as in b. Here, RNNs do not cluster by architecture. **d.** Classification test accuracy of condition-averaged Procrustes and DSA similarity spaces on both architecture and activation labels. Dotted lines indicate chance.
per** deformation value. This suggests that DSA identifies similarity at the level of the topology of a dynamical system.
### DSA responds to changes in topology
We tested DSA on simulations of the ring network again as we smoothly varied the boundary conditions, thereby transforming the ring attractor into a line as \(\alpha\) becomes progressively smaller. When we compared each value of \(\alpha\) of the network to \(\alpha=0\) with DSA and Procrustes, we identified that DSA reports values close to 0 until \(\alpha\) becomes close to 1.0, when it jumps almost to \(\pi/2\). We plotted these results in Fig. 4, where panel **e** depicts the low-dimensional visualization of the line and the ring, and **f** displays our metric's response. Note that because the method and data are stochastic, we expected DSA to jump slightly before 1.0, which we empirically verified to be around \(\alpha=0.9\) in our simulations. Here we used only 10 delays (for a total embedding size of 2500) and rank 100, but found that our results generalized to much larger DMD models as well. This further solidifies the notion that DSA captures topological similarity of two dynamical systems.
### DSA can disentangle learning rules in an unsupervised fashion
It is still unclear what learning rules the brain uses. There has been significant progress recently, especially in the design of novel biologically-plausible learning rules (Lillicrap et al. (2016); Hinton (2022)), experiments and modeling work that suggests signs of gradient-based learning in the brain (Sadtler et al. (2014); Humphreys et al. (2022); Payeur et al. (2023)), supervised methods to classify learning rules from aggregate statistics in neural data (Nayebi et al. (2020)), and theory describing how representations evolve across training in different learning rules (Bordelon and Pehlevan (2022)). Because artificial networks are initialized with random seeds, their trajectories in the activity space across learning will almost always have different geometry, despite the fact that they may converge to the same minimum. However, because the trajectories arise from the same dynamical system, DSA is invariant to random initializations. We visualize a variety of learning rule dynamics in Fig. 5a to demonstrate how qualitatively different these can be.
Figure 4: **DSA is invariant under geometric deformation of a ring attractor, and sensitive to the transformation of a ring attractor into a line.** Top row displays scaling of \(\beta\) in the sigmoid, making the ring progressively less planar. The bottom row displays topological transformation of a ring into a line via \(\alpha\). The network is a curve line attractor at \(\alpha<1\), and is a ring attractor at \(\alpha=1\). **a, c.** Trajectories along the ring plotted in the top two PCs of the network, colored by magnitude of the deformation parameter. In **c**, trajectories are scaled for visualization purposes. **b, d.** Distance (D) to \(\beta=0.1\), and \(\alpha=0.0\) in both Procrustes and DSA. Importantly, the topology only changes in the last row, at \(\alpha=1.0\).
We applied DSA on two test cases from the literature to demonstrate that we can empirically disentangle artificial networks trained with different learning rules. In the first case, we use the dataset from Nayebi et al. (2020), which consists of aggregate statistics about the weights and activity of layers in various convolutional neural networks trained on various tasks and learning rules. We fit a HAVOK model with a rank of 45 and lag of 4 to trajectories of these statistics _across_ training. After comparing trajectories of these observables across training with DSA, we visualize the MDS of the similarity space in Fig. 5b. The training dynamics clearly cluster by learning rule under DSA. Quantifying the degree of clustering, we could classify the learning rule with 88.5 % validation accuracy using only a _linear_ classifier and 5-fold cross-validation, whereas the linear SVM classifier in Nayebi et al. (2020) only reached \(\sim 55\)% accuracy (Fig. 2a. in their paper). In the second case, a set of 3-layer feedforward linear networks from Bordelon and Pehlevan (2022), we fit HAVOK models to individual networks with a delay of 100 and a rank of 500. As above, we found that the MDS visualization of these networks clustered by learning rule in the pairwise similarity space (Fig. 5c). The two types of Feedback Alignment cluster together, suggesting that their dynamics are similar, as might be expected. Once again utilizing a linear classifier, we achieved 89% validation accuracy after assigning both types of FA as the same class. These results suggest the DSA could be used in a hypothesis-driven fashion to assess learning rules in data recorded from biological circuits.
## 4 Discussion
We demonstrated that our novel metric can be used to study dynamics in neural networks across numerous scenarios, including the comparison of dynamics across different RNNs, dynamics of a single RNN across training, or dynamics of weight and activity changes in feedforward networks across training. We performed an ablation study in Fig. 1e, which demonstrates that both delay embedding and the DMD were required for DSA to succeed. While we applied our metric only to RNNs, we stress that it can be applied to any dynamical system, including diffusion models or generative transformers. Procrustes Analysis over Vector Fields could also be applied to Koopman Embeddings identified via other algorithms, such as an autoencoder (Lusch et al. (2018)).
As we demonstrate in Fig. 3, our method could be used to quantitatively compare various hypotheses about a particular biological neural circuit's dynamics (such as fixed point structure), by calculating similarities of various RNNs or bespoke models to data (Pagan et al. (2022), Schaeffer et al. (2020)). This task is computationally intensive, relying on assessing the dynamics of noise perturbations in the neural state space (Chaudhuri et al. (2019), Galgali et al. (2023)) or computing the locations of fixed points in a task-optimized RNN model that has similar spatial trajectories as the neural data (Mante et al. (2013), Chaisangmongkon et al. (2017)). Our method will be useful in the analysis of biological neural systems, where only recorded neural activity is accessible. To our knowledge, DSA is the first that can be used to quantitatively assess different dynamical hypotheses. We suggest that this method should be used in conjunction with shape analyses when assessing fits of computational models to data. In future work, we plan to apply DSA to experimental data.
Figure 5: **DSA unsupervisedly disentangles learning rules.****a.** Schematic example of different algorithms minimizing \(3x^{2}+2y^{2}\). **b.** DSA MDS plot of hidden activation observables from networks trained on ImageNet **c.** DSA MDS plot of the dynamics of learning in 3-layer linear networks.
As suggested by Fig. 2 and Fig. 4, DSA may be capturing similarity at the level of topology of the dynamics. This can perhaps be recognized from the methodology of DSA: the similarity transform is a homeomorphism between two linear systems. If the fitted HAVOK model is sufficiently accurate, then by Takens (1981) and Koopman (1931), it is possible that the model is topologically equivalent to the original system. Furthermore, Hart et al. (2020) proved that a sufficiently high dimensional embedding of a nonlinear dynamical system can perfectly predict the next observation with a simple linear readout. Thus, DSA could be thought of as quantifying how far from homeomorphic two dynamical systems are. In future work, we hope to prove this rigorously.
DSA could potentially be used for rapid alignment of Brain-Computer Interfaces (BCI) across days in a subject, a common problem due to non-stationary sampling of neurons from intracortical electrode shift, neuron death, and other noise in longitudinal BCI trials (Biran et al. (2005), Perge et al. (2013)). In fact, two recent approaches (Willett et al. (2023), Degenhart et al. (2020)) utilized Procrustes Analysis in their decoding and alignment pipeline. As demonstrated by Figs. 2, 3, and 4, an alignment method that takes dynamics into account will be useful. Karpowicz et al. (2022) utilized LFADS to align dynamics across days, but DSA may provide equal performance with less compute and data due to the method's simplicity.
In section 3.5 we demonstrated how the DSA metric space could be used in a hypothesis-driven fashion to compare neurophysiological data recorded across training to artificial neural networks trained with different learning rules. To do so, one would train a variety of neural networks with different learning rules to solve the same task as the animal model. By applying DSA pairwise to each network and the neurophysiology data, and identifying which learning-rule cluster the experimental data aligns with best, we can reject other learning rule hypotheses. DSA has multiple advantages over previous methods: these include unsupervised feature generation, which can improve classification capabilities, and abstraction from geometry, which makes the method more generically applicable across systems of different types and initial conditions.
**Limitations and Compute.** As discussed in the DMD literature (e.g. Brunton et al. (2017)), it can be challenging to identify the optimal hyperparameters, but it is not computationally intensive to perform a grid sweep. Furthermore, we found that the rank and delay required to achieve our results were relatively small. Our Procrustes Analysis over Vector Fields method scales at best as \(O(r^{2})\) with the size of the DMD matrix. These issues are mitigated with parallel processing, and the code we will release is GPU-accelerated. We ran all experiments with a single V100 GPU on a departmental cluster, and each experiment took between an hour and a day to complete, depending on the size of the datasets.
**Acknowledgments.** The authors are grateful to Sarthak Chandra, Raymond Wang, Aran Nayebi, Ho Kyung Sung, and other members of the Fiete lab for helpful discussions and advice.
|
2304.06173 | Continuous Human Activity Recognition using a MIMO Radar for
Transitional Motion Analysis | The prompt and accurate recognition of Continuous Human Activity (CHAR) is
critical in identifying and responding to health events, particularly fall risk
assessment. In this paper, we examine a multi-antenna radar system that can
process radar data returns for multiple individuals in an indoor setting,
enabling CHAR for multiple subjects. This requires combining spatial and
temporal signal processing techniques through micro-Doppler (MD) analysis and
high-resolution receive beamforming. We employ delay and sum beamforming to
capture MD signatures at three different directions of observation. As MD
images may contain multiple activities, we segment the three MD signatures
using an STA/LTA algorithm. MD segmentation ensures that each MD segment
represents a single human motion activity. Finally, the segmented MD image is
resized and processed through a convolutional neural network (CNN) to classify
motion against each MD segment. | John Kobak, Bennett J. Richman, LaJuan Washington Jr., Syed A. Hamza | 2023-04-12T22:10:56Z | http://arxiv.org/abs/2304.06173v1 | # Continuous Human Activity Recognition using a MIMO Radar for Transitional Motion Analysis
###### Abstract
The prompt and accurate recognition of Continuous Human Activity (CHAR) is critical in identifying and responding to health events, particularly fall risk assessment. In this paper, we examine a multi-antenna radar system that can process radar data returns for multiple individuals in an indoor setting, enabling CHAR for multiple subjects. This requires combining spatial and temporal signal processing techniques through micro-Doppler (MD) analysis and high-resolution receive beamforming. We employ delay and sum beamforming to capture MD signatures at three different directions of observation. As MD images may contain multiple activities, we segment the three MD signatures using an STA/LTA algorithm. MD segmentation ensures that each MD segment represents a single human motion activity. Finally, the segmented MD image is resized and processed through a convolutional neural network (CNN) to classify motion against each MD segment.
E-mails: {jjkobak, bjrichman, lewashington, shamza}@widener.edu
## 1 Introduction
The use of contactless technology for detecting human motions has become increasingly popular due to its non-invasive nature as it eliminates the need for users to wear specific tracking devices [1, 2, 3]. Radar systems are particularly effective in this regard, as they provide reliable non-contact monitoring that is privacy-preserving and not affected by lighting conditions. Active radio frequency (RF) sensing, in particular, allows for 4D imaging capabilities by measuring the scatterer's velocity in addition to range and 2D angular localization. This is a major advantage over visual-based systems, which require additional pre-processing and filtering operations to accurately detect small movements [4, 5]. Moreover, a high-resolution imaging radar ensures privacy by generating silhouette-type portraits that reveal minimal identifiable information, unlike camera-based systems. Radar-based CHAR has diverse applications such as detecting significant events for automated surveillance, analyzing daily activities and behaviors, recognizing abnormalities in gait, monitoring health in care facilities and rehabilitation services, and promoting independent living for the elderly [5, 6, 7, 8, 9, 10, 11].
The primary objective of this research is to detect the real-time daily activities of one or multiple individuals residing in a shared dwelling. Utilizing radar technology to achieve effective CHAR presents a significant challenge, as it relies heavily on a series of pre-processing steps that must isolate both the individuals and their corresponding activities simultaneously. The proposed approach in this paper utilizes beamforming to filter the received data and achieve spatial filtering, effectively isolating the motion of a single individual in the field of view. As a result of human motion, the radar return undergoes a frequency shift due to the Doppler effect. However, since human movement involves the motion of various body parts, there are often additional movements and rotations in different parts of the body besides the torso's movement. For example, when a person walks, their arms swing naturally. These micro-scale movements result in additional Doppler shifts, known as micro-Doppler (MD) effects, which aid in identifying the motion type. After isolating an individual in a certain angular region in the field of view, a short-time Fourier transform is applied to the output of the beamformer to obtain a MD image. However, due to the limited angular resolution of the beamformer, the actions performed by individuals outside of the main beam may not be entirely separable, either due to their proximity to the other individual or leakage associated with the beamformer sidelobes. Hence, MD images might not be entirely representative of the activity of the individual in the main beam but may also contain remnants of the activities performed by other individuals at different locations. At this stage, it is possible that the MD image could include a series of activities, and the goal is to separate the series of activities executed by the individual present within the
beamformer's main lobe. We apply STA/LTA (short-time-average/long-time-average trigger algorithm) in order to segment the MD image into sub-images, such that each sub-image consists exclusively of a single motion. The STA/LTA algorithm performs event detection by utilizing two windows that operate on the envelopes of the MD image. The envelopes are extracted after smoothing out the modulated MD image, which facilitates the envelope detection algorithm employing the percentile technique. The STA/LTA algorithm's task of detecting the activity of a specific individual within the main beam is challenging due to the presence of residual activities from other individuals that were not intended to be captured in the MD image obtained after spatial filtering. In the final step, the sub-images are padded to a uniform size and fed into a convolutional neural network (CNN) to classify and recognize the specific human activity being performed in the image sequence. It is noted that padding is necessary for resizing the sub-images to achieve consistent input dimensions, as the dimensions of the sub-images may vary depending on the activity. Two experimental studies were conducted to collect data: one involved a single subject, while the other involved two subjects and used beamforming. A total of 800 trials were performed, with various activities carried out by a maximum of two individuals.
The paper's organization is as follows: Section 2 details the signal model, discussing the methodology to process ADC data returns and convert them into visible MD spectrograms. Section 3 details the radar parameters and discusses the data collection process. Section 4 describes the procedure of cleaning the MD images and separating the individual events within an MD image. Section 5 outlines the structure of the proposed CNN for one person, and then extends it to include a combination of right, left, and nominal spectrogram images. Finally, Section 6 presents the results.
## 2 Radar Return Signal Analysis
The complex-valued raw data matrix \(\mathbf{s}(n,m)\in\mathbb{C}^{N\times M}\) of the frequency-modulated continuous wave (FMCW) radar is obtained through spatially processing the radar returns by an \(M\) element uniformly spaced antenna array. The data is collected over \(N\) temporal sampling instances. The receiver array vector \(\mathbf{s}(m)\in\mathbb{C}^{M}\) at time instant \(n\) corresponds to the \(n\)-th row of \(\mathbf{s}(n,m)\) and is given by,
\[\mathbf{s}(m)=\sum_{l=1}^{L}\alpha_{l}\mathbf{a}(\theta_{l})+\mathbf{v}(m), \tag{1}\]
where, \(\mathbf{a}(\theta_{l})\in\mathbb{C}^{M}\) is the steering vector corresponding to the azimuth direction \(\theta_{l}\) of the scatterer, and is defined as follows,
\[\mathbf{a}(\theta_{l})=[1\;\;e^{j(2\pi/\lambda)d\cos(\theta_{l})}\;\ldots\;e^{j(2\pi/\lambda)d(M-1)\cos(\theta_{l})}]^{T}. \tag{2}\]
Here, \(d\) is the inter-element spacing and \(\alpha_{l}\in\mathbb{C}\) is the complex amplitude of the radar return. The additive Gaussian noise \(\mathbf{v}(m)\in\mathbb{C}^{M}\) has variance \(\sigma_{v}^{2}\). The elements of the received data vector \(\mathbf{s}(m)\) are combined linearly by the \(M\)-sensor beamformer that strives to spatially filter the reflections from all other directions except the signal in the direction of beamformer look angle \(\theta_{k}\). The spatially filtered signal vector \(\mathbf{x}(\theta_{k})\in\mathbb{C}^{N}\) after beamforming is given by,
\[\mathbf{x}(\theta_{k})=\mathbf{s}(n,m)\mathbf{w}^{H}(\theta_{k}), \tag{3}\]
where \(\mathbf{w}(\theta_{k})=\mathbf{a}^{H}(\theta_{k})\) are the complex beamformer weights pointing towards \(\theta_{k}\).
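As an illustration of this spatial-filtering step, the following minimal NumPy sketch builds the steering vector of (2) and applies delay-and-sum weights as in (3) to a raw data matrix. The toy two-source scene, array size, and variable names are illustrative assumptions rather than the actual acquisition code.

```python
import numpy as np

def steering_vector(theta_deg, M=4, d_over_lambda=0.5):
    """Steering vector a(theta) for a uniform linear array, cf. Eq. (2)."""
    theta = np.deg2rad(theta_deg)
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * m * np.cos(theta))

def delay_and_sum(raw, theta_deg):
    """Spatially filter the N x M raw matrix toward a look angle (delay-and-sum), cf. Eq. (3)."""
    a = steering_vector(theta_deg, M=raw.shape[1])
    return raw @ a.conj()                       # x(theta_k) in C^N

# toy usage: two plane-wave sources at +30 and -30 degrees plus noise
rng = np.random.default_rng(0)
N, M = 1024, 4
raw = sum(rng.standard_normal(N)[:, None] * steering_vector(a)[None, :] for a in (+30.0, -30.0))
raw = raw + 0.1 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
x_plus30 = delay_and_sum(raw, +30.0)            # emphasises the +30 degree source
```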
The spatially filtered signal vector \(\mathbf{x}(\theta_{k})\) is reshaped into a two-dimensional matrix, \(\mathbf{x}_{\theta_{k}}(p,q)\). This is achieved by segmenting the \(N\) dimensional vector \(\mathbf{x}(\theta_{k})\), such that the \(P\) samples collected within a pulse repetition interval (PRI) are stacked into a \(P\) dimensional column. There are \(Q\) such columns within \(\mathbf{x}_{\theta_{k}}(p,q)\), where \(Q=N/P\) is the number of PRIs processed within the observation time \(N\). The range map \(\mathbf{r}_{\theta_{k}}(l,q)\) is obtained by applying the column-wise Discrete Fourier Transform (DFT) operation, which is given by,
\[\mathbf{r}_{\theta_{k}}(l,q)=\sum_{p=0}^{P-1}\mathbf{x}_{\theta_{k}}(p,q)e^{-j(2\pi lp/P)} \tag{4}\]
We observe the data in the time-frequency (TF) domain after localizing the motion in azimuth and range bins of interest. The spectrogram is used as the TF signal representation, showing the variation of the signal power
as a function of time \(n\) and frequency \(k\). The spectrogram of a periodic version of a discrete signal \(\mathbf{v}_{\theta_{k}}(n)\), is given by [12, 13, 14, 15]
\[\mathbf{d}_{\theta_{k}}(n,k)=|\sum_{m=0}^{H-1}\mathbf{h}(m)\mathbf{v}_{\theta_{ k}}(n-m)e^{-j(2\pi km/H)}|^{2}, \tag{5}\]
where \(\mathbf{v}_{\theta_{k}}=\sum_{l=r_{l}}^{r_{u}}\mathbf{r}_{\theta_{k}}(l,q)\) is obtained by collapsing the range dimension from the lowest range bin of interest \(r_{l}\) to the highest range bin \(r_{u}\). A tapering window \(\mathbf{h}\) of length \(H\) is applied to reduce the sidelobes. The spectrograms reveal the different velocities, accelerations, and higher order moments which cannot be easily modeled or assumed to follow specific non-stationary structures [16, 15]. We observe the motion of two persons performing different activities in close proximity to each other at different azimuth angles. We aim to correctly pair the activity to the corresponding azimuth angle. This is achieved by jointly processing the spectrograms \(\mathbf{d}_{\theta_{1}}(n,k)\) and \(\mathbf{d}_{\theta_{2}}(n,k)\), which are respectively localized at azimuth angles \(\theta_{1}\) and \(\theta_{2}\). It is clear that the actions of multiple persons are hard to distinguish in azimuth using only a single antenna.
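The reshape, column-wise range DFT, range-bin collapse, and spectrogram of (4)–(5) can be sketched as follows; the window choice, hop size, and range-bin limits are placeholders, not the parameters used in the experiments.

```python
import numpy as np

def range_doppler_spectrogram(x, P, r_lo, r_hi, win_len=128, hop=None):
    """x: beamformed vector of length >= P*Q; returns |STFT|^2 of the slow-time signal."""
    Q = x.size // P
    x_pq = x[:P * Q].reshape(Q, P).T             # P fast-time samples per PRI, Q PRIs -> (P, Q)
    r = np.fft.fft(x_pq, axis=0)                 # column-wise DFT: range map r(l, q), cf. Eq. (4)
    v = r[r_lo:r_hi + 1, :].sum(axis=0)          # collapse the range bins of interest
    hop = hop or win_len // 2
    h = np.hanning(win_len)                      # tapering window to reduce sidelobes
    frames = []
    for start in range(0, v.size - win_len + 1, hop):
        seg = v[start:start + win_len] * h
        frames.append(np.abs(np.fft.fftshift(np.fft.fft(seg))) ** 2)   # one column of Eq. (5)
    return np.array(frames).T                    # (win_len, n_frames) micro-Doppler image
```

The returned array plays the role of \(\mathbf{d}_{\theta_{k}}(n,k)\) for the look angle at which \(x\) was beamformed.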
## 3 Data Collection
To ensure the representation of the average human body type, each transitional activity was performed by six subjects with different heights, genders, and body compositions. The number of trials conducted for each activity is outlined in Table 1. Please note that these classes arise after performing image segmentation on extended MD images. For single-person activity, the extended MD images were captured for 12 s with different combinations of three human motions selected from five activities, namely walking forward, walking backward, sitting, standing, and bending. For the two-person data, trials were conducted for walking forward and walking backward at \(\pm 30^{\circ}\), resulting in four additional classes. The radar system used had a bandwidth of 4 GHz and operated at 77 GHz. The two participants performed activities while facing the radar system at radial angles of \(+30^{\circ}\) and \(-30^{\circ}\), respectively. The radar system was positioned at an average distance of 2 meters from the participants, and the RF sensor's output transmission power was set to 40 mW.
* The PRI is set to 1 ms, and each data example is observed over a time period of 12 s, resulting in \(Q=12000\) slow-time samples.
* The ADC sampling rate is 512 ksps, rendering 512 fast-time samples per PRI, so that the length of the data vector is \(N=6156000\).
* The received data \(\mathbf{s}(n,m)\in\mathbb{C}^{N\times M}\) is collected through an \(M=4\) element receive array with an inter-element spacing of \(\lambda/2\) (\(\lambda\) is the wavelength corresponding to the operating frequency); therefore, the dimensionality of the received raw data matrix is \(6156000\times 4\).
* Beamforming is performed on the raw data matrix, resulting in a spatially filtered \(\mathbf{x}(\theta_{k})\) vector of dimensions \(3276800\times 1\). Two such vectors are generated in the directions of each motion \(\theta_{1}\) and \(\theta_{2}\).
* Each vector \(\mathbf{x}(\theta_{k})\) is reshaped into a \(512\times 12000\) matrix. After applying columnwise DFT, and identifying the range bins of interest, the corresponding rows are summed together, resulting in \(\mathbf{v}_{\theta_{k}}=\sum_{l=r_{l}}^{r_{u}}\mathbf{r}_{\theta_{k}}(l,q)\), which is of dimensions \(12000\times 1\).
* A combined spectrogram and two spectrograms after beamforming, \(\mathbf{d}_{\theta_{1}}\) and \(\mathbf{d}_{\theta_{2}}\), each of dimensions \(128\times 128\), are obtained, where the window length is 128.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline Examples & Forward & Backward & Bending & Standing & Sitting down & Forward \(+30^{\circ}\) & Forward \(-30^{\circ}\) & Backward \(+30^{\circ}\) & Backward \(-30^{\circ}\) \\ \hline Total & 771 & 498 & 120 & 71 & 131 & 13 & 41 & 31 & 40 \\ \hline \end{tabular}
\end{table}
Table 1: Number of trials for each segmented activity
## 4 Extended Micro-Doppler Segmentation
Figure 1 depicts the complete processing chain, beginning with radar data collection and culminating in the classification of a specific human activity type. Once the extended MD images are acquired after beamforming [17], the envelope and event detection algorithms are applied to locate the start and end times of possible events within the duration of the extended MD image. These timestamps are then used to identify and crop the individual events within the extended MD image. Next, the cropped images are resized to a resolution of \(600\times 600\) by zero-padding, enabling all of them to be consistent with the input dimensions of the CNN. Event detection is accomplished using the STA/LTA algorithm, which is elaborated on below.
### STA/LTA algorithm for Event Detection
The short-time-average/long-time-average (STA/LTA) trigger algorithm is commonly utilized in weak-motion applications where the detection of events is most desired. With extended MD returns providing TF characteristics of reflected Doppler signals, the STA/LTA algorithm can identify and extract the desired features by tracking the start and end times of events. The STA/LTA algorithm is applied to the envelope of the extended MD, which is obtained using an envelope detection algorithm.
Envelope detection is a technique used to extract the envelope of the modulated extended MD signal, which essentially captures the highest and lowest amplitude variations at all time instances. When an extended MD image is utilized as the primary signal, the Van Dorp and Groen percentile technique can be employed to reduce
Figure 2. Pre-Processing Stage Implementation. Left: Envelope Detection; Center: Event Detection; Right: Zero-Padded Image
Figure 1. Flowchart for the proposed approach
the radar return into a single array. This percentile technique calculates the cumulative amplitude distribution for each temporal slice [18]. The common approach to implementing this technique with an extended MD return is to find the upper \(u_{i}\), central \(c_{i}\), and lower \(l_{i}\) envelopes through percentile multiplication with the sum of intensities over time, \(I(n)\), as follows,
\[u_{i}(n)=0.97*I(n), \tag{6}\]
\[c_{i}(n)=0.50*I(n), \tag{7}\]
and
\[l_{i}(n)=0.03*I(n). \tag{8}\]
Because background noise cannot be completely filtered out of the extended MD during the smoothing process, shown in Figure 2 (left), we propose utilizing two central envelopes, starting from opposite ends of the extended MD image, and averaging across them in time to obtain the 'real' central envelope.
After obtaining the central envelope, the STA/LTA ratio is calculated continuously at each time sample \(n\) for every \(j\)th pixel along the central envelope \(c_{i}\) as \(R=\frac{STA(n)}{LTA(n)}\)[19], where
\[STA(n)=\frac{1}{N_{1}}\sum_{j=n+1}^{n+N_{1}}c_{i}(j), \tag{9}\]
and
\[LTA(n)=\frac{1}{N_{2}}\sum_{j=n-N_{2}}^{n}c_{i}(j). \tag{10}\]
\(N_{1}\) and \(N_{2}\) are the short and long window lengths, respectively. The start time of an event is declared when \(STA(n)>\sigma_{1}\) and \(R>\sigma_{2}\), and the event ends when \(STA(n)<\sigma_{3}\) and \(R<\sigma_{2}\), where \(\sigma_{1},\sigma_{2},\sigma_{3}\) are pre-defined detection thresholds. Note that we have considered non-overlapping STA and LTA windows.
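A compact sketch of the envelope extraction of (6)–(8) and the STA/LTA trigger of (9)–(10) is given below; the window lengths and thresholds \(\sigma_{1},\sigma_{2},\sigma_{3}\) are placeholder values, and PEM/PET padding of the detected intervals is omitted.

```python
import numpy as np

def central_envelope(md, pct=0.50):
    """Central envelope c_i(n) of a micro-Doppler image (freq x time), cf. Eq. (7)."""
    intensity = md.sum(axis=0)                   # summed intensity I(n) per time slice
    return pct * intensity

def sta_lta_events(c, N1=8, N2=40, sig1=0.5, sig2=2.0, sig3=0.3):
    """Return (start, end) index pairs using non-overlapping STA/LTA windows, cf. Eqs. (9)-(10)."""
    events, start, in_event = [], None, False
    for n in range(N2, c.size - N1):
        sta = c[n + 1:n + 1 + N1].mean()         # short window ahead of n
        lta = c[n - N2:n + 1].mean()             # long window behind n
        ratio = sta / max(lta, 1e-12)
        if not in_event and sta > sig1 and ratio > sig2:
            start, in_event = n, True
        elif in_event and sta < sig3 and ratio < sig2:
            events.append((start, n))
            in_event = False
    if in_event:
        events.append((start, c.size - 1))
    return events
```

In practice each detected (start, end) pair would be extended by the PEM and PET durations discussed next.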
Ideally, the triggered event should include all phases, but for the majority of these weak events, the algorithm does not trigger at the beginning because the threshold for ignition of the start time is reached a couple of pixels late. These detection thresholds are set based on the maximum frequency pixel of non-motion observation. To counteract this side effect, we propose an appropriate pre-event time (PEM) selection that ensures the correct start time of the event [20]. This same concept of incorrect ignition due to threshold setting also occurs during detriggering of an event. Because of this, we also set a post-event time (PET) parameter. Optimal PEM and PET durations depend mostly on the application. For the underlying scenario, the extended MD image spans twelve seconds, and we set PEM \(=\) PET \(=\frac{\text{size of event}}{20}\).
It's worth noting that energy leakage due to beamforming on radar returns at varying angles can potentially lead to issues with event detection. Figure 1 demonstrates this by presenting a comparison of zero-padded events. Specifically, the extended MD image obtained, without beamforming, struggled to capture the first motion because of the similar intensities across upper and lower spectra, whereas the extended MD image obtained at \(\pm 30^{\circ}\)angles performed as expected due to sufficient filtering of the other simultaneous motion.
## 5 Micro-Doppler Classification
The CNN is a widely used neural network for image classification due to its ability to automatically select image features through simple convolution and nonlinear activation operations. The CNN's architecture, shown in Fig. 1, utilizes three input modalities, namely the extended MD images collected from three angular locations: the array broadside and \(\pm 30^{\circ}\) with respect to it. Each extended MD image is of size \(128\times 128\times 3\) and is passed through a 3-layer CNN consisting of 16 filters in each layer, employing \(3\times 3\) convolution. It is noted that the CNN receives the MD image after cropping and zero-padding. As mentioned previously, individual motions are cropped from extended MD images using the STA/LTA start/end timestamps. However, the length of the cropped events may vary depending on the duration of the activity. For instance,
walking may have a longer duration than bending motion. These dimensional mismatches can be problematic for the CNN since it requires all images to be \(128\times 128\times 3\). To address this issue, we zero-pad the cropped images to \(600\times 600\times 3\) and then resize them to \(128\times 128\times 3\). For two-person data, cropping is performed similarly using STA/LTA start/end times to isolate desired motions.
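A minimal sketch of the cropping and zero-padding step is shown below, assuming a plain nearest-neighbour subsampling in place of whatever resizing routine was actually used; the function name and parameters are illustrative only.

```python
import numpy as np

def pad_and_resize(segment, pad_to=600, out=128):
    """Zero-pad a cropped MD segment (H x W x 3) to pad_to x pad_to, then subsample to out x out."""
    h, w, c = segment.shape
    canvas = np.zeros((pad_to, pad_to, c), dtype=segment.dtype)
    canvas[:min(h, pad_to), :min(w, pad_to), :] = segment[:pad_to, :pad_to, :]
    idx = np.linspace(0, pad_to - 1, out).astype(int)       # nearest-neighbour resize grid
    return canvas[np.ix_(idx, idx)]                         # (out, out, 3) image for the CNN
```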
The outputs of the three networks corresponding to the data inputs are concatenated and fed to a dense layer followed by an output layer of size 9, which represents the number of possible activities. The output layer is a one-hot encoded vector where the location of a single 1 in the output vector indicates a specific activity. The internal layers of the network use ReLU activation function, while the softmax activation function is used for the output layer.
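For concreteness, a possible PyTorch sketch of such a three-branch network is given below. The pooling layers, the 128-unit dense layer, and the use of an implicit softmax through the cross-entropy loss are assumptions beyond the description above, not the exact architecture used here.

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    """Three 3x3 conv layers with 16 filters each, applied to one 128x128x3 MD image."""
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3
        for _ in range(3):
            layers += [nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
            in_ch = 16
        self.net = nn.Sequential(*layers, nn.Flatten())

    def forward(self, x):
        return self.net(x)

class ThreeViewCHAR(nn.Module):
    """Concatenate features from broadside, +30 deg and -30 deg MD images; 9-way output."""
    def __init__(self, n_classes=9):
        super().__init__()
        self.branches = nn.ModuleList([Branch() for _ in range(3)])
        feat = 16 * 16 * 16                      # 128 -> 64 -> 32 -> 16 after three 2x2 poolings
        self.head = nn.Sequential(nn.Linear(3 * feat, 128), nn.ReLU(), nn.Linear(128, n_classes))

    def forward(self, x0, x_plus30, x_minus30):
        f = torch.cat([b(x) for b, x in zip(self.branches, (x0, x_plus30, x_minus30))], dim=1)
        return self.head(f)                      # train with nn.CrossEntropyLoss (softmax is implicit)

model = ThreeViewCHAR()
logits = model(*[torch.randn(2, 3, 128, 128) for _ in range(3)])   # batch of 2 -> shape (2, 9)
```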
## 6 Experimental Results
In this section, we show that the radar system is capable of separating the MD spectrograms of one-person and two-people such that the CNN is able to adequately classify each individual activity in a transitional movement. We implemented various algorithms to detect multiple transitional activities happening within one extended
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|} \hline Predicted v. Actual & Class-1 Bend Down (0\({}^{\circ}\)) & Class-2 Sit Down (0\({}^{\circ}\)) & Class-3 Stand Up (0\({}^{\circ}\)) & Class-4 Walk Back (0\({}^{\circ}\)) & Class-5 Walk Back (\(-30^{\circ}\)) & Class-6 Walk Back (\(+30^{\circ}\)) & Class-7 Walk Forward (0\({}^{\circ}\)) & Class-8 Walk Forward (\(+30^{\circ}\)) & Class-9 Walk Forward (\(-30^{\circ}\)) \\ \hline Class-1 Bend Down (0\({}^{\circ}\)) & 83.3\% & 4.2\% & 12.5\% & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% \\ \hline Class-2 Sit Down (0\({}^{\circ}\)) & 4.2\% & 95.8\% & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% \\ \hline Class-3 Stand Up (0\({}^{\circ}\)) & 0\% & 0\% & 93.3\% & 0\% & 0\% & 0\% & 6.7\% & 0\% & 0\% \\ \hline Class-4 Walk Back (0\({}^{\circ}\)) & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% & 0\% & 0\% & 0\% \\ \hline Class-5 Walk Back (\(-30^{\circ}\)) & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% & 0\% & 0\% \\ \hline Class-6 Walk Back (\(+30^{\circ}\)) & 0\% & 16.7\% & 0\% & 0\% & 0\% & 83.3\% & 0\% & 0\% & 0\% \\ \hline Class-7 Walk Forward (0\({}^{\circ}\)) & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% & 0\% \\ \hline Class-8 Walk Forward (\(+30^{\circ}\)) & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 100\% & 0\% \\ \hline Class-9 Walk Forward (\(-30^{\circ}\)) & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 0\% & 100\% \\ \hline \end{tabular}
\end{table}
Table 2: Confusion matrix for one- and two-person data after image segmentation and zero-padding. The 0\({}^{\circ}\), +30\({}^{\circ}\), and -30\({}^{\circ}\) indicate the angle at which the activity was captured with respect to the radar broadside
MD image. After pre-processing and spatial filtering of the extended MD spectrograms, we were able to isolate different activities happening within the frame at a desired angular region.
To train the CNN, we used 80% of the samples for each class in the training set, where each sample consists of three \(128\times 128\times 3\) images (one nominal and two beamformed images). The network weights were optimized using the Adam Optimizer with a learning rate of 0.0001 and 25 epochs. Table 2 presents the confusion matrix, which shows the percentage of each class that was predicted versus the actual class based on our labeling. It provides a visual representation of the activities that are being incorrectly classified as other activities. The overall accuracy of the network is observed to be around 95.1%.
## 7 Conclusion
In this paper, we introduce a novel approach that utilizes the TF representation of radar returns obtained from multiple azimuth angles. This approach enables us to identify combinations of continuous daily activities that are performed simultaneously at different angles. Our method is effective in detecting individual activities from extended MD images, isolating them, and mapping them to their respective angular locations. Notably, our approach yields promising results, particularly when spectrograms are not entirely separable based on angles. In conclusion, our approach provides an efficient means of distinguishing between different motions performed concurrently and has broad applications in various indoor settings.
## 8 Acknowledgment
The authors would like to thank Jessica Levin, John Barr, Nicholas DePrince, and Alexander Milazzo for their assistance with data collection.
|
2306.08727 | Gauss Newton method for solving variational problems of PDEs with neural
network discretizations | The numerical solution of differential equations using machine learning-based
approaches has gained significant popularity. Neural network-based
discretization has emerged as a powerful tool for solving differential
equations by parameterizing a set of functions. Various approaches, such as the
deep Ritz method and physics-informed neural networks, have been developed for
numerical solutions. Training algorithms, including gradient descent and greedy
algorithms, have been proposed to solve the resulting optimization problems. In
this paper, we focus on the variational formulation of the problem and propose
a Gauss-Newton method for computing the numerical solution. We provide a
comprehensive analysis of the superlinear convergence properties of this
method, along with a discussion on semi-regular zeros of the vanishing
gradient. Numerical examples are presented to demonstrate the efficiency of the
proposed Gauss-Newton method. | Wenrui Hao, Qingguo Hong, Xianlin Jin | 2023-06-14T20:11:01Z | http://arxiv.org/abs/2306.08727v3 | # Gauss Newton method for solving variational problems of PDEs with neural network discretizaitons
###### Abstract
The numerical solution of differential equations using machine learning-based approaches has gained significant popularity. Neural network-based discretization has emerged as a powerful tool for solving differential equations by parameterizing a set of functions. Various approaches, such as the deep Ritz method and physics-informed neural networks, have been developed for numerical solutions. Training algorithms, including gradient descent and greedy algorithms, have been proposed to solve the resulting optimization problems.
In this paper, we focus on the variational formulation of the problem and propose a Gauss-Newton method for computing the numerical solution. We provide a comprehensive analysis of the superlinear convergence properties of this method, along with a discussion on semi-regular zeros of the vanishing gradient. Numerical examples are presented to demonstrate the efficiency of the proposed Gauss-Newton method.
**Keywords:** Partial differential equations, neural network discretization, variational form, Gauss-Newton method, convergence analysis.
## 1 Introduction
The use of machine learning-based approaches in the computational mathematics community has witnessed significant growth in recent years, particularly in the numerical solution of differential equations. Neural network-based discretization has emerged as a revolutionary tool for solving differential equations [9, 13, 29] and for discovering the underlying physics from experimental data [23]. This approach has been successfully applied to a wide range of practical problems with remarkable success [4, 11, 15, 21]. One of the key advantages of this approach is that neural networks can alleviate, and in some cases overcome, the curse of dimensionality associated with high-dimensional problems [16, 13, 19]. This can be attributed to the dimension-independent approximation properties of neural networks [18], which have been compared to traditional methods such as finite element methods (FEMs) and other approximation techniques in the field of approximation theory [20, 24, 27, 32].
Neural network discretization of PDEs involves parameterizing a set of functions that are used to solve the PDEs. There are three main approaches to solving the numerical solutions: the first approach, introduced in the deep Ritz method [31], minimizes the variational energy; the second approach, widely used in physics-informed neural networks (PINNs) [23], minimizes the L2 residual of PDEs and boundary conditions; and the third approach involves solving the discretized system of nonlinear equations [7]. Researchers have made efforts to bound the errors associated with these approaches. For instance, studies have shown that gradient descent applied to a wide enough network can reach a global minimum [1, 2, 35]. The convergence of stochastic gradient descent (SGD) and Adam optimizer [17] has also been analyzed in Fourier space, revealing that the error converges more rapidly in the lowest frequency modes. This observation is known as the frequency principle or spectral bias of neural network training [22]. Furthermore, a novel greedy training algorithm has been devised for shallow neural networks in order to numerically determine their theoretical convergence rate [25].
In practice, Adam or SGD are commonly used to solve the resulting optimization problems in both variational and L2-minimization approaches. Additionally, randomized Newton's method has been developed to achieve faster local convergence by solving the system of nonlinear equations [7]. Gauss-Newton method has also been employed to speed up the computation of the L2-minimization approach [5]. However, these training algorithms cannot be directly applied to the variational form of the problem. In this paper, we will develop a Gauss-Newton method for the variational form and provide a convergence analysis for this proposed method.
The remaining sections of the paper are organized as follows: In Section 2, we introduce the problem setup and discuss the class of elliptic PDEs we will be solving. Section 3 provides an overview of Gauss-Newton methods for solving PDEs in both variational and L2-minimization forms. The property of local minimizes, namely, semi-regular zeros of the vanishing gradient, is discussed in Section 4. In Section 5, we present a comprehensive analysis of the convergence properties of the Gauss-Newton method for the variational form, as well as its variant, the random Gauss-Newton method. Several numerical examples are presented in Section 6 to illustrate the efficiency of the proposed Gauss-Newton method. Finally, in Section 7, we conclude the paper, by summarizing the key findings and discussing potential avenues for future research.
## 2 Problem setup
We consider the following second-order elliptic equation
\[\mathcal{L}v=f\text{ in }\Omega, \tag{2.1}\]
where \(\Omega\in\mathbb{R}^{d}\) is an open subset, \(d\geq 1\). Here \(\mathcal{L}\) is the second-order elliptic operator defined by
\[\mathcal{L}=-\sum_{i=1}^{d}\sum_{j=1}^{d}a_{i,j}\frac{\partial}{ \partial x_{i}}\frac{\partial}{\partial x_{j}}+\sum_{k=1}^{d}b_{k}\frac{ \partial}{\partial x_{k}}+c. \tag{2.2}\]
To ensure the existence and uniqueness of the weak solution for problem (2.1), certain assumptions need to be made on the operator \(\mathcal{L}\). Specifically, the following conditions should be satisfied [10]:
\[a_{i,j}\in L^{\infty}(\Omega),\quad 1\leq i,j\leq d, \tag{2.3}\]
which means that the coefficients of the operator are bounded in \(L^{\infty}(\Omega)\).
Moreover, there should exist positive constants \(\lambda\) and \(\Lambda\) such that
\[\lambda|\boldsymbol{\xi}|^{2}\leq\sum_{i=1}^{d}\sum_{j=1}^{d}a_{i,j}\xi_{i}\xi_{ j}\leq\Lambda|\boldsymbol{\xi}|^{2},\quad\forall\boldsymbol{\xi}\in\mathbb{R}^{d}, \quad\forall x\in\Omega. \tag{2.4}\]
This means that the operator is uniformly elliptic, i.e., it satisfies a strong ellipticity condition, with ellipticity constants \(\lambda\) and \(\Lambda\). These assumptions ensure that the weak solution to the problem (2.1) exists and is unique. To simplify notation, we consider the following second-order partial differential equation:
\[-a\Delta v(x)+cv(x) =f(x),\text{ in }\Omega \tag{2.5}\] \[\frac{\partial v}{\partial n} =0,\text{ on }\partial\Omega \tag{2.6}\]
where \(a\geq 0\) and \(c\geq 0\) are constants such that the conditions (2.3)-(2.4) are satisfied. It should be noted that all the results presented in this paper can be extended to the more general form of the operator \(L\) defined in (2.2), with Dirichlet boundary conditions.
We define the admissible set \(V\) on \(\Omega\) as \(V=H^{1}(\Omega)\) and transform the PDE (2.5) into an energy minimization problem, given by:
\[\min_{w\in V}\mathcal{J}(w)=\int_{\Omega}\left(\frac{a}{2}|\nabla w(x)|^{2}+ \frac{c}{2}w(x)^{2}-f(x)w(x)\right)dx. \tag{2.7}\]
Here, \(\mathcal{J}(w)\) is the energy functional, and the minimizer \(v=\arg\min_{w\in V}\mathcal{J}(w)\) of (2.7) satisfies the PDE (2.5). It is important to note that when \(a\neq 0\), minimizing (2.7) is equivalent to solving the PDE (2.5); when \(a=0\) but \(c\neq 0\), (2.7) degenerates into a function approximation problem.
The minimization problem can be solved using various learning algorithms, such as the Deep Ritz method [33], which utilizes deep neural networks (DNNs). In practice, the energy integral in (2.7) can be computed using numerical quadrature methods, such as the Gauss quadrature and the Monte Carlo method [12].
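As a small illustration of the Monte Carlo evaluation of (2.7), the following NumPy sketch estimates \(\mathcal{J}(w)\) on the assumed domain \(\Omega=(0,1)^{d}\) for a user-supplied pair \((w,\nabla w)\); it is not part of the method itself, and the sample count is a placeholder.

```python
import numpy as np

def ritz_energy_mc(u, grad_u, f, a=1.0, c=1.0, n_samples=10_000, d=1, rng=None):
    """Monte Carlo estimate of J(w) in (2.7) on Omega = (0, 1)^d (volume 1)."""
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(0.0, 1.0, size=(n_samples, d))
    integrand = 0.5 * a * np.sum(grad_u(x) ** 2, axis=1) + 0.5 * c * u(x) ** 2 - f(x) * u(x)
    return integrand.mean()

# toy check in 1-D: w(x) = cos(pi x) solves -w'' + w = (1 + pi^2) cos(pi x) with w'(0) = w'(1) = 0
u      = lambda x: np.cos(np.pi * x[:, 0])
grad_u = lambda x: -np.pi * np.sin(np.pi * x)                # shape (n_samples, 1)
f      = lambda x: (1.0 + np.pi ** 2) * np.cos(np.pi * x[:, 0])
print(ritz_energy_mc(u, grad_u, f))                          # approx -(1 + pi^2)/4 at the minimizer
```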
## 3 Gauss-Newton method for the variational problem
In this section, without loss of generality, we will introduce the Gauss-Newton method for solving the energy minimization problem (2.7) with \(a=1\) and \(c=1\). Let \(u(x,\theta)\) be a learning model to approximate the exact solution \(v(x)\); for example, \(u(x,\theta)\) can be the finite element function given in subsection 4.1.1 or the neural network function given in subsection 4.1.2. We will focus on a smaller admissible set, i.e., the DNN function space, where the loss function takes the form:
\[L(\theta)=\int_{\Omega}\frac{1}{2}|\nabla u(x,\theta)|^{2}+\frac{1}{2}u(x, \theta)^{2}-f(x)u(x,\theta)dx. \tag{3.1}\]
The gradient of \(L(\theta)\) is
\[\nabla_{\theta}L(\theta)=\int_{\Omega}\nabla_{\theta}\nabla u(x,\theta)\cdot\nabla u(x,\theta)+u(x,\theta)\nabla_{\theta}u(x,\theta)-f(x)\nabla_{\theta}u(x,\theta)\,dx. \tag{3.2}\]
The Hessian of \(L(\theta)\) is as follows:
\[\mathbf{H}(\theta) =\underbrace{\int_{\Omega}\nabla_{\theta}\nabla u(x,\theta)\cdot \nabla_{\theta}\nabla u(x,\theta)^{T}+\nabla_{\theta}u(x,\theta)\cdot\nabla_{ \theta}u(x,\theta)^{T}dx}_{\mathbf{J}(\theta)} \tag{3.3}\] \[+\underbrace{\int_{\Omega}\nabla_{\theta}^{2}\nabla u(x,\theta) \cdot\nabla u(x,\theta)+u(x,\theta)\nabla_{\theta}^{2}u(x,\theta)-f(x)\nabla_{ \theta}^{2}u(x,\theta)dx}_{\mathbf{Q}(\theta)}, \tag{3.4}\]
where \(\mathbf{J}(\theta)\) and \(\mathbf{Q}(\theta)\) denote the first- and second-order derivatives of \(\theta\), respectively. Then we estimate the second derivative matrix by the following lemma [3, 26, 28, 32].
**Lemma 1**.: _For any \(\epsilon>0\), there exist \(\delta>0\), \(J\in\mathbb{N}^{+}\), and \(m\in\mathbb{N}^{+}\) such that for \(\theta\in\mathbb{R}^{m}\) with \(DNN_{J}\subset H^{1}(\Omega)\) and \(\|\theta-\theta^{*}\|<\delta\), where \(\theta^{*}=\arg\min L(\theta)\), if \(D_{\theta}^{2}u(x,\theta)\in H^{1}(\Omega)\), then it holds that \(\|\mathbf{Q}(\theta)\|<\epsilon\)._
Proof.: First, we can rewrite \(\mathbf{Q}(\theta)\) using integration by parts as
\[\mathbf{Q}(\theta) =\int_{\Omega}\nabla_{\theta}^{2}\nabla u(x,\theta)\cdot\nabla u (x,\theta)+u(x,\theta)\nabla_{\theta}^{2}u(x,\theta)-f(x)\nabla_{\theta}^{2}u (x,\theta)dx \tag{3.5}\] \[=\int_{\Omega}\nabla_{\theta}^{2}u(x,\theta)(-\Delta u(x,\theta) +u(x,\theta)-f(x))dx+\int_{\partial\Omega}\nabla_{\theta}^{2}u(x,\theta)\frac {\partial u(x,\theta)}{\partial n}ds. \tag{3.6}\]
Given that the true solution \(v(x)\) satisfies:
\[-\Delta v(x)+v(x) =f(x),\text{ in }\Omega \tag{3.7}\] \[\frac{\partial v}{\partial n} =0,\text{ on }\partial\Omega \tag{3.8}\]
We can then rewrite \(\mathbf{Q}(\theta)\) as
\[\mathbf{Q}(\theta)= \int_{\Omega}\nabla_{\theta}^{2}u(x,\theta)(-\Delta u(x,\theta)+ u(x,\theta)-f(x))dx+\int_{\partial\Omega}\nabla_{\theta}^{2}u(x,\theta)\frac{ \partial u(x,\theta)}{\partial n}ds \tag{3.9}\] \[= \int_{\Omega}\nabla_{\theta}^{2}u(x,\theta)(-\Delta u(x,\theta)+ u(x,\theta)+\Delta v(x)-v(x))dx\] (3.10) \[+\int_{\partial\Omega}\nabla_{\theta}^{2}u(x,\theta)\left(\frac{ \partial u(x,\theta)}{\partial n}-\frac{\partial v(x)}{\partial n}\right)ds. \tag{3.11}\]
This implies
\[\|\mathbf{Q}(\theta)\|\leq \|\nabla_{\theta}^{2}u(x,\theta)\|_{H^{1}(\Omega)}\big{(}\| \Delta u(x,\theta)-\Delta v(x)\|_{H^{-1}(\Omega)}+\|u(x,\theta)-v(x)\|_{H^{-1}( \Omega)}\big{)} \tag{3.12}\] \[+\|\nabla_{\theta}^{2}u(x,\theta)\|_{H^{\frac{1}{2}}(\partial \Omega)}\bigg{\|}\frac{\partial u(x,\theta)}{\partial n}-\frac{\partial v(x)}{ \partial n}\bigg{\|}_{H^{-\frac{1}{2}}(\partial\Omega)}. \tag{3.13}\]
Hence, by trace theorem, we have
\[\|\mathbf{Q}(\theta)\|\leq C\|\nabla_{\theta}^{2}u(x,\theta)\|_{H^{1}(\Omega)}\|u(x, \theta)-v(x)\|_{H^{1}(\Omega)} \tag{3.14}\]
Noting that \(\theta^{*}=\arg\min L(\theta)\), and using the error estimates shown in [32], we have
\[\|\mathbf{Q}(\theta)\|\leq C\|\nabla_{\theta}^{2}u(x,\theta)\|_{H^{1}(\Omega)}\|u(x, \theta)-v(x)\|_{H^{1}(\Omega)} \tag{3.15}\] \[\leq C\|\nabla_{\theta}^{2}u(x,\theta)\|_{H^{1}(\Omega)}\inf_{u(x, \theta)\in DNN_{J}}\|u(x,\theta)-v(x)\|_{H^{1}(\Omega)} \tag{3.16}\]
Furthermore, by the approximation results of neural networks shown in [3, 26, 28], the desired result is obtained.
Hence, it is reasonable to consider the first-order approximation of the Hessian, i.e., \(\mathbf{J}(\theta)\approx\mathbf{H}(\theta)\). The Gauss-Newton method for the variational problem is then given by
\[\theta_{k+1}=\theta_{k}-\mathbf{J}(\theta_{k})^{\dagger}\nabla_{ \theta}L(\theta_{k}),\quad k=0,1,2,\cdots. \tag{3.17}\]
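A minimal sketch of iteration (3.17) for \(a=c=1\) on a one-dimensional domain is given below, assuming the caller supplies the parametric model \(u(x,\theta)\), its spatial derivative \(u_{x}\), and their \(\theta\)-gradients at a fixed set of quadrature points; the quadrature rule and stopping criterion are placeholders.

```python
import numpy as np

def gauss_newton_variational(theta0, grad_u, grad_ux, u, ux, f, x, w, n_iter=50, tol=1e-10):
    """
    Sketch of iteration (3.17) with a = c = 1 in 1-D.
    grad_u(x, theta)  -> (n_pts, m) array of d u / d theta
    grad_ux(x, theta) -> (n_pts, m) array of d u_x / d theta
    u, ux             -> (n_pts,) evaluations; f(x) -> (n_pts,); x, w: quadrature points/weights.
    """
    theta = theta0.copy()
    for _ in range(n_iter):
        Gu, Gux = grad_u(x, theta), grad_ux(x, theta)
        # J(theta) = int grad_th(u_x) grad_th(u_x)^T + grad_th(u) grad_th(u)^T dx, cf. (3.3)
        J = (Gux * w[:, None]).T @ Gux + (Gu * w[:, None]).T @ Gu
        # grad L(theta) = int u_x grad_th(u_x) + (u - f) grad_th(u) dx, cf. (3.2)
        g = (Gux * w[:, None]).T @ ux(x, theta) + (Gu * w[:, None]).T @ (u(x, theta) - f(x))
        step = np.linalg.pinv(J) @ g
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta
```

The pseudo-inverse plays the role of \(\mathbf{J}(\theta_{k})^{\dagger}\) in (3.17); a truncated-SVD version is sketched in Section 5.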
### Gauss-Newton method for solving the L2 minimization problem
In this subsection, we will review the Gauss-Newton method for solving the L2 minimization problem, namely,
\[\min_{\theta}\frac{1}{2}\|\mathbf{F}(\theta)\|_{2}^{2}, \tag{3.18}\]
where
\[\mathbf{F}(\theta)=\left(\begin{array}{c}-a\Delta u(x_{1},\theta)+ cu(x_{1},\theta)-f(x_{1})\\ \vdots\\ -a\Delta u(x_{N},\theta)+cu(x_{N},\theta)-f(x_{N})\\ \nabla u(x_{1}^{b},\theta)\cdot\mathbf{n}\\ \vdots\\ \nabla u(x_{n}^{b},\theta)\cdot\mathbf{n}\end{array}\right), \tag{3.19}\]
with collocation points \(\{x_{1},x_{2},\cdots,x_{N}\}\subset\Omega\) and \(\{x_{1}^{b},x_{2}^{b},\cdots,x_{n}^{b}\}\subset\partial\Omega\). Here, \(\mathbf{n}\) is the outer normal unit vector of \(\partial\Omega\), and \(\theta=\{\theta_{1},\theta_{2},\cdots,\theta_{m}\}\) represents the parameters in the learning model \(u(x,\theta)\). Therefore, an optimal choice of \(\theta\) is computed by the Gauss-Newton method as
\[\theta_{k+1}=\theta_{k}-(\mathbf{J}\mathbf{F}(\theta_{k}))^{ \dagger}\mathbf{F}(\theta_{k}),\quad k=0,1,2,\cdots \tag{3.20}\]
where \({}^{\dagger}\) denotes the Moore-Penrose inverse and \(\mathbf{J}\mathbf{F}(\theta)\) is the Jacobi matrix defined as
\[\mathbf{J}\mathbf{F}(\theta)=\begin{bmatrix}-\nabla_{\theta} \Delta u(x,\theta)^{T}+\nabla_{\theta}u(x,\theta)^{T}\\ \left(\nabla_{\theta}\nabla u(x^{b},\theta)\cdot\mathbf{n}\right)^{T}\end{bmatrix} \in\mathbb{R}^{(N+n)\times m}\text{ and }\nabla_{\theta}\nabla= \begin{bmatrix}\partial_{x_{1}}\partial_{\theta_{1}}&\cdots&\partial_{x_{d}} \partial_{\theta_{1}}\\ \vdots&\ddots&\vdots\\ \partial_{x_{1}}\partial_{\theta_{m}}&\cdots&\partial_{x_{d}}\partial_{ \theta_{m}}\end{bmatrix}. \tag{3.21}\]
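For comparison, a sketch of iteration (3.20) only needs the stacked residual (3.19) and its Jacobian (3.21), both supplied by the caller; a least-squares solve could replace the explicit pseudo-inverse.

```python
import numpy as np

def gauss_newton_l2(theta0, residual, jacobian, n_iter=50, tol=1e-10):
    """Sketch of iteration (3.20): theta <- theta - pinv(JF(theta)) F(theta)."""
    theta = theta0.copy()
    for _ in range(n_iter):
        F = residual(theta)            # stacked PDE and boundary residuals, cf. (3.19)
        JF = jacobian(theta)           # (N + n) x m Jacobian, cf. (3.21)
        step = np.linalg.pinv(JF) @ F
        theta = theta - step
        if np.linalg.norm(step) < tol:
            break
    return theta
```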
### The consistency between Gauss-Newton methods for L2 minimization and variational problems
By applying the divergence theorem to \(\mathbf{J}(\theta)\), we obtain:
\[\mathbf{J}(\theta)=\int_{\Omega}\nabla_{\theta}u(x,\theta)\cdot\left( -\nabla_{\theta}\Delta u(x,\theta)+\nabla_{\theta}u(x,\theta)\right)^{T}dx+ \int_{\partial\Omega}\nabla_{\theta}u(x,\theta)\cdot\frac{\partial\nabla_{ \theta}u(x,\theta)}{\partial\mathbf{n}}^{T}dS. \tag{3.22}\]
If we compute all integrals using numerical methods, e.g., the Gaussian quadrature rule, we can derive the condition for the consistency of the Gauss-Newton method in (3.17) with the Gauss-Newton method for the L2 minimization problem in (3.20). Denote the grid points in the domain \(\Omega\) as \(\boldsymbol{x}=(x_{1},x_{2},\cdots,x_{N})^{T}\) and the corresponding weights as \(\boldsymbol{w}=(w_{1},w_{2},\ldots,w_{N})^{T}\). Then, we can write the first part of (3.22) as:
\[\int_{\Omega}\nabla_{\theta}u(x,\theta)\cdot\left(-\nabla_{\theta}\Delta u(x, \theta)+\nabla_{\theta}u(x,\theta)\right)^{T}dx=\sum_{i=1}^{N}w_{i}\nabla_{ \theta}u(x_{i},\theta)\cdot\left(-\nabla_{\theta}\Delta u(x_{i},\theta)+ \nabla_{\theta}u(x_{i},\theta)\right)^{T}. \tag{3.23}\]
Similarly, with grid points on the boundary \(\boldsymbol{x}^{\boldsymbol{b}}=(x_{1}^{b},x_{2}^{b},\cdots,x_{n}^{b})^{T}\) and weights \(\boldsymbol{w}^{\boldsymbol{b}}=(w_{1}^{b},w_{2}^{b},\cdots,w_{n}^{b})^{T}\), we can write the second part of (3.22) as:
\[\int_{\partial\Omega}\nabla_{\theta}u(x,\theta)\cdot\frac{\partial\nabla_{ \theta}u(x,\theta)}{\partial\boldsymbol{n}}^{T}dS=\sum_{j=1}^{n}w_{j}^{b} \nabla_{\theta}u(x_{j},\theta)\cdot\frac{\partial\nabla_{\theta}u(x_{j}, \theta)}{\partial\boldsymbol{n}}^{T}. \tag{3.24}\]
Thus, we have
\[\boldsymbol{J}(\theta)=\boldsymbol{G}\cdot\mathbf{J}\mathbf{F}( \boldsymbol{\theta})\]
where \[\boldsymbol{G}=\left[w_{1}\nabla_{\theta}u(x_{1},\theta)\ \ \cdots\ \ w_{N}\nabla_{\theta}u(x_{N},\theta)\ \ \ w_{1}^{b}\nabla_{\theta}u(x_{1}^{b},\theta)\ \ \cdots\ \ w_{n}^{b}\nabla_{\theta}u(x_{n}^{b},\theta)\right]\] (3.25)
Similarly, we can rewrite the gradient defined in (3.2) as
\[\nabla_{\theta}L(\theta)=\int_{\Omega}\nabla_{\theta}u(x,\theta) \left(-\Delta u(x,\theta)+u(x,\theta)-f(x)\right)dx+\int_{\partial\Omega} \nabla_{\theta}u(x,\theta)\frac{\partial\nabla_{\theta}u(x,\theta)}{\partial \boldsymbol{n}}dS=\boldsymbol{G}\cdot\boldsymbol{F}(\boldsymbol{x},\theta) \tag{3.26}\]
Therefore, the increment in (3.17), in the sense of the numerical integration and pseudo-inverse, becomes
\[(\boldsymbol{J}(\theta))^{\dagger}\cdot\nabla_{\theta}L(\theta)=( \boldsymbol{G}\cdot\mathbf{J}\mathbf{F})^{\dagger}\cdot\boldsymbol{G}\cdot \boldsymbol{F}(\boldsymbol{x},\theta)=(\mathbf{J}\mathbf{F})^{\dagger}\cdot( \boldsymbol{G}^{\dagger}\boldsymbol{G})\cdot\boldsymbol{F}(\boldsymbol{x}, \theta). \tag{3.27}\]
If \(G\) has linearly independent columns, then we have \(G^{\dagger}G=I\in\mathbb{R}^{(N+n)\times(N+n)}\). This is possible if the number of grid points \(N+n\) is less than or equal to the number of parameters \(\theta\). In this case, the Gauss-Newton method that we proposed for the variational problem (3.17) is identical to the Gauss-Newton method for the L2 minimization (3.20).
## 4 Semiregular zeros of \(\nabla L(\theta)=0\)
We will consider the semiregular zeros of \(\nabla L(\theta^{*})=0\) by the following two definitions.
**Definition 1**.: _(Dimension of a Zero) Let \(\theta^{*}\) be a zero of a smooth mapping \(\nabla L:\Omega\subset\mathbb{R}^{m}\rightarrow\mathbb{R}^{N+n}\). If there is an open neighborhood \(\Omega_{z}\subset\Omega\) of \(\theta_{*}\) in \(\mathbb{R}^{m}\) such that \(\Omega_{z}\ \cap(\nabla L)^{-1}(\boldsymbol{0})=\phi(\Lambda)\) where \(\mathbf{z}\mapsto\phi(\mathbf{z})\) is a differentiable injective mapping defined in a connected open set \(\Lambda\) in \(\mathbb{R}^{k}\) for a certain \(k>0\) with \(\phi\left(\mathbf{z}_{*}\right)=\theta_{*}\) and \(rank\left(\phi_{\mathbf{z}}\left(\mathbf{z}_{*}\right)\right)=k\), then the dimension of \(\theta_{*}\) as a zero of \(\nabla L\) is defined as_
\[dim_{\nabla L}\left(\theta_{*}\right):=dim\left(Range\left(\phi_{\mathbf{z}} \left(\mathbf{z}_{*}\right)\right)\right)\equiv rank\left(\phi_{\mathbf{z}} \left(\mathbf{z}_{*}\right)\right)=k\]
**Definition 2**.: _(Semiregular Zero) A zero \(\theta^{*}\in\mathbb{R}^{m}\) of a smooth mapping \(\theta\mapsto\nabla L(\theta)\) is semiregular if \(dim_{\nabla L}\left(\theta^{*}\right)\) is well-defined and identical to nullity \(\left(\mathbf{H}\left(\theta^{*}\right)\right)\). Namely_
\[dim_{\nabla L}\left(\theta^{*}\right)+rank\left(\mathbf{H}\left(\theta^{*} \right)\right)=m.\]
### Justification of semiregular zeros
#### 4.1.1 Application to finite element method
Let us consider the finite element space and define \(V_{N}\) as the set of functions that can be represented as a linear combination of the basis functions \(\phi_{i}(x)\), where \(a_{i}\in\mathbb{R}\) and \(i=1,\cdots,m\). Here, \(\phi_{i}(x)\) is a frame defined according to the partition \(\mathcal{T}_{h}\). Thus we can express \(V_{N}\) as
\[V_{N}=\left\{\sum_{i=1}^{m}a_{i}\phi_{i}(x)\right\}. \tag{4.1}\]
In this case, we obtain a linear system \(\nabla L(\theta)=A\theta-g\), where \(A\) is a square matrix and \(\theta=(a_{1},a_{2},\cdots,a_{m})\) represents the coefficients of the basis functions in \(V_{N}\).
* Regular zero: if \(A\) is full rank, \(\theta^{*}\) is unique and therefore an isolated and regular zero.
* Semiregular zero: if \(A\) is not full rank, then we denote \(rank(A)=r\). We assume that \(Ker(A)=span\{\theta_{1},\theta_{2},\cdots,\theta_{m-r}\}\) with \(\|\theta_{i}\|=1\) and set \[\Delta=\{\theta:\|\theta-\theta^{*}\|<\delta\}.\] For any \(\theta\in\Delta\cap(\nabla L)^{-1}(0)\), we have \[\theta=\theta^{*}+\delta_{1}\theta_{1}+\cdots+\delta_{m-r}\theta_{m-r}\] with \(\|\delta_{i}\|<\frac{\delta}{m-r}\) and \(\mathbf{z}=(\delta_{1},\delta_{2},\cdots,\delta_{m-r})\). Then we can construct \(\phi\) as \(\phi(\mathbf{z})=\theta=\theta^{*}+\delta_{1}\theta_{1}+\cdots+\delta_{m-r}\theta_{m-r}\) and \(\phi_{\mathbf{z}}(\mathbf{z})=(\theta_{1},\theta_{2},\cdots,\theta_{m-r})\in\mathbb{R}^{m\times(m-r)}\). Moreover, \[dim_{\nabla L}\left(\theta^{*}\right)+rank\left(\mathbf{H}\left(\theta^{*}\right)\right)=rank(\phi_{\mathbf{z}}(\mathbf{z}))+rank(A)=m,\] which means that \(\theta^{*}\) is a semiregular zero.
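The following sketch assembles this linear system for the one-dimensional model problem with \(P_{1}\) elements on a uniform mesh of \((0,1)\) and a midpoint quadrature for the load; it only illustrates the structure \(\nabla L(\theta)=A\theta-g\), and the mesh size and right-hand side are placeholders.

```python
import numpy as np

def fem_p1_neumann(n_el=32, a=1.0, c=1.0, f=lambda x: (1 + np.pi ** 2) * np.cos(np.pi * x)):
    """Assemble grad L(theta) = A theta - g for P1 elements on (0, 1) with Neumann BCs."""
    h = 1.0 / n_el
    n = n_el + 1
    A = np.zeros((n, n))
    g = np.zeros(n)
    # element stiffness (a-term) plus element mass (c-term)
    Ke = a / h * np.array([[1.0, -1.0], [-1.0, 1.0]]) + c * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    for e in range(n_el):
        xm = (e + 0.5) * h                       # midpoint quadrature for the load
        A[e:e + 2, e:e + 2] += Ke
        g[e:e + 2] += f(xm) * h / 2.0
    theta_star = np.linalg.solve(A, g)           # full-rank A when c > 0: a regular (isolated) zero
    return A, g, theta_star
```

When \(c=0\), the same assembly produces a singular \(A\) whose kernel consists of the constant functions, matching the semiregular case discussed above.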
#### 4.1.2 Application to neural network discretization
Consider a simple neural network in 1D with domain \(\Omega=(-1,1)\):
\[V_{N}^{k}=\left\{\sum_{i=1}^{m}a_{i}\mathrm{ReLU}^{k}\left(w_{i}x+b_{i}\right),a_{i}\in\mathbb{R},w_{i}\in\{-1,1\},b_{i}\in[-1-\delta,1+\delta]\right\}. \tag{4.2}\]
Because of the scaling property of the ReLU function, we can rewrite \(V_{N}^{k}\) as
\[V_{N}^{k}=\left\{\sum_{i=1}^{m}a_{i}\mathrm{ReLU}^{k}\left(x+b_{i}\right),a_{i }\in\mathbb{R},b_{i}\in[-1-\delta,1+\delta]\right\}. \tag{4.3}\]
We consider the following semi-regular zero cases:
* Let us consider a simple case where \(u=a_{1}\mathrm{ReLU}^{k}(x+b_{1})\) when \(x_{1}+b_{1}^{*}<0\) and \(F(x_{1}^{b},\theta^{*})=0\). Then we can write \[\mathbf{G}=\begin{bmatrix}w_{1}\frac{\partial u(x_{1},\theta)}{\partial a_{1}}&w_ {1}^{b}\frac{\partial u(x_{1}^{b},\theta)}{\partial a_{1}}\\ w_{1}\frac{\partial u(x_{1},\theta)}{\partial b_{1}}&w_{1}^{b}\frac{\partial u (x_{1}^{b},\theta)}{\partial b_{1}}\end{bmatrix}=\begin{bmatrix}0&w_{1}^{b} \frac{\partial u(x_{1}^{b},\theta)}{\partial a_{1}}\\ 0&w_{1}^{b}\frac{\partial u(x_{1}^{b},\theta)}{\partial b_{1}}\end{bmatrix}, \mathbf{J}\mathbf{F}=\begin{bmatrix}0&0\\ \frac{\partial B(x_{1}^{b},\theta)}{\partial a_{1}}&\frac{\partial B(x_{1}^{b},\theta)}{\partial b_{1}}\end{bmatrix},\]
\[\nabla_{\theta}L(\theta)=\begin{bmatrix}0&w_{1}^{b}\frac{\partial u(x_{1}^{b}, \theta)}{\partial a_{1}}\\ 0&w_{1}^{b}\frac{\partial u(x_{1}^{b},\theta)}{\partial b_{1}}\end{bmatrix} \begin{bmatrix}F(x_{1},\theta)\\ F(x_{1}^{b},\theta)\end{bmatrix}=w_{1}^{b}\begin{bmatrix}\frac{\partial u(x_{1}^ {b},\theta)}{\partial a_{1}}F(x_{1}^{b},\theta)\\ \frac{\partial u(x_{1}^{b},\theta)}{\partial b_{1}}F(x_{1}^{b},\theta)\end{bmatrix}.\]
Let us define
\[\theta=(a_{1},b_{1})^{T},\quad\theta^{*}=(a_{1}^{*},b_{1}^{*})^{T},\quad\theta _{1}=(0,1)^{T}\]
and
\[\phi(\mathbf{z})=\theta^{*}+\delta_{1}\theta_{1}=\theta^{*}+\delta_{1}(0,1)^{T}\quad\text{with}\quad\mathbf{z}=\delta_{1}.\]
Then, there exists a \(\delta\) such that \(\Delta=\{\theta:\|\theta-\theta^{*}\|<\delta\}\), which means \(b_{1}\) is near \(b_{1}^{*}\). We have \(\Lambda=(-\delta,\delta)\) such that \(\phi(\Lambda)=\Delta\cap(\nabla L)^{-1}(0)\) since if \(\theta^{*}=(a_{1}^{*},b_{1}^{*})^{T}\) satisfies \(\nabla L(\theta^{*})=0\) then for some \(\delta\), for any \(\theta=(a_{1}^{*},b_{1})\) with \(\|b_{1}-b_{1}^{*}\|<\delta\) satisfies \(\nabla L(\theta)=0\), which implies \(\Delta\cap(\nabla L)^{-1}(0)=\Delta\).
We can see that \(\phi_{z}(\mathbf{z})=(0,1)^{T}\) and \(\operatorname{rank}(\phi_{z}(\mathbf{z}))=1\). Furthermore, we observe that
\[H(\theta)=w_{1}^{b}\begin{bmatrix}\frac{\partial u(x_{1}^{b},\theta)}{\partial a _{1}}F_{a_{1}}(x_{1}^{b},\theta)&\frac{\partial^{2}u(x_{1}^{b},\theta)}{ \partial a_{1}\partial b_{1}}F(x_{1}^{b},\theta)+\frac{\partial u(x_{1}^{b}, \theta)}{\partial a_{1}}F_{b_{1}}(x_{1}^{b},\theta)\\ \frac{\partial^{2}u(x_{1}^{b},\theta)}{\partial b_{1}\partial a_{1}}F(x_{1}^{ b},\theta)+\frac{\partial u(x_{1}^{b},\theta)}{\partial b_{1}}F_{a_{1}}(x_{1}^{b}, \theta)&\frac{\partial^{2}u(x_{1}^{b},\theta)}{\partial b_{1}^{2}}F(x_{1}^{b}, \theta)+\frac{\partial u(x_{1}^{b},\theta)}{\partial b_{1}}F_{b_{1}}(x_{1}^{b}, \theta)\end{bmatrix},\]
which implies that \(\operatorname{rank}(H(\theta^{*}))=1\).
Therefore, we have \(\operatorname{rank}(\phi_{z}(\mathbf{z}))+\operatorname{rank}(\mathbf{H}( \theta^{*}))=2\), which is the dimension of the domain of \(\nabla_{\theta}L(\theta)\). This implies that in the simple case \(u=a_{1}\mathrm{ReLU}(x+b_{1})\), \(\theta^{*}\) is a semiregular zero of \(\nabla_{\theta}L(\theta)=0\).
* We next consider the general case of \(m>1\) and assume \(\operatorname{rank}(\mathbf{G}(\theta^{*}))=r<2m\). By denoting
\[\theta=(a_{1},a_{2},\cdots,a_{m},b_{1},b_{2},\cdots,b_{m})^{T},\quad\theta^{* }=(a_{1}^{*},a_{2}^{*},\cdots,a_{m}^{*},b_{1}^{*},b_{2}^{*},\cdots,b_{m}^{*}),\]
we assume there exist \(2m-r\) sample points \(x_{i}\) such that \(x_{i}+b_{j}^{*}<0\) for all \(j=1,\cdots,m\) and define
\[\theta_{1}=(0,0,\cdots,0,1,0,\cdots,0,0,0,\cdots,0)^{T},\] \[\theta_{2}=(0,0,\cdots,0,0,1,\cdots,0,0,0,\cdots,0)^{T},\] \[\cdots\] \[\theta_{2m-r}=(0,0,\cdots,0,0,0,\cdots,0,1,0,\cdots,0)^{T}.\]
We observe that if \(\theta^{*}=(a_{1}^{*},\cdots,a_{m}^{*},b_{1}^{*},\cdots,b_{m}^{*})\) satisfies \(\nabla L(\theta^{*})=0\), then \(b_{i}\) is a function of \(b_{i}^{*}\) and the remaining parameters \(a_{1}^{*},\ldots,a_{m}^{*}\), and hence, if \(\theta=(a_{1},a_{2},\cdots,a_{m},b_{1},b_{2},\cdots,b_{m})^{T}\), then \(\nabla L(\theta)=0\) as well. This implies that there exist \(\delta_{1},\delta_{2},\cdots,\delta_{2m-r}\) such that
\[\phi(\mathbf{z})=\theta^{*}+\delta_{1}\theta_{1}+\delta_{2}\theta_{2}+\cdots+ \delta_{2m-r}\theta_{2m-r}\text{ with }\mathbf{z}=(\delta_{1},\delta_{2},\cdots,\delta_{2m-r})\]
and \(\Delta=\{\theta:\|\theta-\theta^{*}\|<\delta\}\) such that \(\Delta\cap(\nabla L)^{-1}(0)=\Delta\). We define
\[\Lambda=(-\delta_{1},\delta_{1})\times(-\delta_{2},\delta_{2})\times\cdots \times(-\delta_{2m-r},\delta_{2m-r}),\]
and have
\[\phi_{z}(\mathbf{z})=(\theta_{1},\theta_{2},\cdots,\theta_{2m-r}).\]
Clearly, we have \(\mathrm{rank}(\phi_{z}(\mathbf{z}))=2m-r\). If \(F(x_{i};\theta^{*})=0\) when \(x_{i}+b_{j}^{*}>0\) for \(i=1,\cdots,N+n\) and \(j=1,\cdots,m\), then we have \(\mathrm{rank}(\mathbf{H}(\theta^{*}))=r\) which implies that \(\mathrm{rank}(\phi_{z}(\mathbf{z}))+\mathrm{rank}(\mathbf{J}(\theta^{*}))=2m -r+r=2m\). Hence, in this case, \(\theta^{*}\) is a semiregular zero of \(\nabla_{\theta}L(\theta)=0\).
### Properties of semiregular zeros
**Lemma 2**.: _Let \(\theta\mapsto\nabla L(\theta)\) be a smooth mapping with a semiregular zero \(\theta^{*}\). Then there is an open neighborhood \(\Omega_{*}\) of \(\theta^{*}\) and \(Range(Q(\theta^{*}))\subseteq Range(J(\theta^{*}))\), such that, for any \(\hat{\theta}\in\Omega_{*}\), the equality \(\mathbf{J}(\hat{\theta})^{\dagger}\nabla L(\hat{\theta})=\mathbf{0}\) holds if and only if \(\hat{\theta}\) is a semiregular zero of \(\nabla L\) in the same branch of \(\theta^{*}\)._
Proof.: We follow the proof of Lemma 4 in [34] and note that \(\left\|\mathbf{Q}\right\|\leq\epsilon\) for any small \(\epsilon\). First, we claim that there exists a neighborhood \(\Omega_{1}\) of \(\theta^{*}\) such that for every \(\hat{\theta}\in\Omega_{1}\) with \(\mathbf{J}(\hat{\theta})^{\dagger}\nabla L(\hat{\theta})=\mathbf{0}\), we have \(\nabla L(\hat{\theta})=\mathbf{0}\). Assume that this assertion is false. Then there exists a sequence \(\left\{\theta_{j}\right\}_{j=1}^{\infty}\) converging to \(\theta^{*}\) such that \(\mathbf{J}\left(\theta_{j}\right)^{\dagger}\nabla L\left(\theta_{j}\right)=\mathbf{0}\) but \(\nabla L\left(\theta_{j}\right)\neq\mathbf{0}\) for all \(j=1,2,\ldots\). Let \(\mathbf{z}\mapsto\phi(\mathbf{z})\) be the parameterization of the solution branch containing \(\theta^{*}\) as defined in Definition 1, with \(\phi\left(\mathbf{z}_{*}\right)=\theta^{*}\). From Lemma 9, for any sufficiently large \(j\), there exists a \(\tilde{\theta}_{j}\in\Omega\cap(\nabla L)^{-1}(\mathbf{0})=\phi(\Delta)\) such that
\[\left\|\theta_{j}-\tilde{\theta}_{j}\right\|_{2}=\min_{\mathbf{z}\in\Delta} \left\|\theta_{j}-\phi(\mathbf{z})\right\|_{2}=\left\|\theta_{j}-\phi\left( \mathbf{z}_{j}\right)\right\|_{2} \tag{4.4}\]
at a certain \(\mathbf{z}_{j}\) with \(\phi\left(\mathbf{z}_{j}\right)=\tilde{\theta}_{j}\), implying
\[\phi_{\mathbf{z}}\left(\mathbf{z}_{j}\right)\phi_{\mathbf{z}}\left(\mathbf{z}_ {j}\right)^{\dagger}\frac{\theta_{j}-\phi\left(\mathbf{z}_{j}\right)}{\left\| \theta_{j}-\phi\left(\mathbf{z}_{j}\right)\right\|_{2}}=\frac{\phi_{\mathbf{z }}\left(\mathbf{z}_{j}\right)}{\left\|\theta_{j}-\phi\left(\mathbf{z}_{j} \right)\right\|_{2}}\left(\phi_{\mathbf{z}}\left(\mathbf{z}_{j}\right)^{ \dagger}\left(\theta_{j}-\phi\left(\mathbf{z}_{j}\right)\right)\right)= \mathbf{0}. \tag{4.5}\]
We claim that \(\tilde{\theta}_{j}\) converges to \(\theta^{*}\) as \(j\) approaches infinity. To see why, assume otherwise. Namely, suppose there exists an \(\varepsilon>0\) such that for any \(N>0\), there is a \(j>N\) with \(\left\|\tilde{\theta}_{j}-\theta^{*}\right\|_{2}\geq 2\varepsilon\). However, we know that \(\left\|\theta_{j}-\theta^{*}\right\|_{2}<\varepsilon\) for all \(j\) larger than some fixed \(N\). This implies that
\[\left\|\tilde{\theta}_{j}-\theta_{j}\right\|\geq\left\|\tilde{\theta}_{j}- \theta^{*}\right\|-\left\|\theta_{*}-\theta_{j}\right\|>\varepsilon>\left\| \theta_{j}-\theta_{*}\right\|_{2}\]
which contradicts (4.4). Therefore, we conclude that \(\tilde{\theta}_{j}\) converges to \(\theta^{*}\) as \(j\) approaches infinity.
Since \(\nabla L\left(\theta_{j}\right)\neq\mathbf{0}\), we have \(\theta_{j}\neq\tilde{\theta}_{j}\). Therefore, we can consider the unit vector \(\mathbf{v}_{j}=\left(\theta_{j}-\tilde{\theta}_{j}\right)/\left\|\theta_{j}-\tilde{\theta}_{j}\right\|_{2}\) for each \(j\). By compactness, there exists a subsequence \(\mathbf{v}_{j_{k}}\) that converges to some unit vector \(\mathbf{v}\). That is, \(\lim\limits_{k\rightarrow\infty}\mathbf{v}_{j_{k}}=\mathbf{v}\) for some unit vector \(\mathbf{v}\). Thus
\[\begin{split} 0&=\lim\limits_{j\rightarrow\infty}\frac{ \mathbf{J}(\theta_{j})^{\dagger}\left(\nabla L\left(\tilde{\theta}_{j}\right)- \nabla L\left(\theta_{j}\right)\right)}{\left\|\theta_{j}-\tilde{\theta}_{j} \right\|_{2}}\\ &=\lim\limits_{j\rightarrow\infty}\frac{\mathbf{J}(\theta_{j})^{ \dagger}\mathbf{H}\left(\theta_{j}\right)\left(\tilde{\theta}_{j}-\theta_{j} \right)}{\left\|\tilde{\theta}_{j}-\theta_{j}\right\|_{2}}=\mathbf{J}(\theta^{* })^{\dagger}\mathbf{H}\left(\theta^{*}\right)\mathbf{v}\end{split}. \tag{4.6}\]
By the assumption \(Range(Q(\theta^{*}))\subseteq Range(J(\theta^{*}))\) and noting that \(\mathbf{H}=\mathbf{J}+\mathbf{Q}\) and \(\left\|\mathbf{Q}\right\|\leq\epsilon_{2}\) for any small \(\epsilon_{2}\), we have \(\mathbf{v}\in Kernel\left(\mathbf{H}\left(\theta^{*}\right)\right)\). As a result,
\[span\{\mathbf{v}\}\oplus Range\left(\phi_{\mathbf{z}}\left(\mathbf{z}_{*} \right)\right)\subset Kernel\left(\mathbf{H}\left(\theta^{*}\right)\right)\]
since \(\mathbf{H}\left(\theta^{*}\right)\phi_{\mathbf{z}}\left(\mathbf{z}_{*}\right)=O\) due to \(\nabla L(\phi(\mathbf{z}))\equiv\mathbf{0}\) in a neighborhood of \(\mathbf{z}_{*}\). From the limit of (4.5) for \(j\rightarrow\infty\), the vector \(\mathbf{v}\) is orthogonal to \(Range\left(\phi_{\mathbf{z}}\left(\mathbf{z}_{*}\right)\right)\) and thus
\[nullity\left(\mathbf{H}\left(\theta^{*}\right)\right)\geq rank\left(\phi_{ \mathbf{z}}\left(\mathbf{z}_{*}\right)\right)+1\]
which is a contradiction to the semiregularity of \(\theta^{*}\). For the special case where \(\theta^{*}\) is an isolated semiregular zero of \(\nabla L(\theta)\) with dimension \(0\), the above proof applies with \(\tilde{\theta}_{j}=\theta_{j}\) so that (4.6) holds, implying a contradiction to \(nullity\left(\mathbf{H}\left(\theta^{*}\right)\right)=0\).
Lemma 11 implies that there exists a neighborhood \(\Omega_{2}\) of \(\theta^{*}\) such that for every \(\theta\in\Omega_{2}\cap(\nabla L)^{-1}(\mathbf{0})\), \(\theta\) is a semiregular zero of \(\nabla L\) in the same branch as \(\theta^{*}\). Therefore, the lemma holds for \(\Omega_{*}=\Omega_{1}\cap\Omega_{2}\).
## 5 Convergence analysis
### Gauss-Newton method
In this section, we will analyze the convergence properties of the Gauss-Newton method shown in (3.17). We assume that the variational problem has a semiregular zero \(\theta^{*}\) such that \(\nabla_{\theta}L(\theta^{*})=0\), and we also assume that \(\operatorname{rank}\boldsymbol{J}(\theta^{*})=r\leq m\). Using singular value decomposition, we can write \(J(\theta^{*})=U\Sigma V^{T}\), where \(J(\theta^{*})\) is an \(r\)-rank semi-positive definite matrix. In particular, we have [30]
\[\Sigma=diag\left(\left[\sigma_{1},\cdots,\sigma_{r},0,\cdots,0\right]\right),\text{ with }\sigma_{1}\geq\sigma_{2}\geq\cdots\geq\sigma_{r}>0,\,r\leq m. \tag{5.1}\]
Thus, the pseudo-inverse can be represented as
\[\boldsymbol{J}(\theta^{*})^{\dagger}=V\Sigma^{\dagger}U^{T},\text{ with }\Sigma^{\dagger}=diag\left(\left[\frac{1}{\sigma_{1}},\cdots,\frac{1}{\sigma_{r}},0, \cdots,0\right]\right). \tag{5.2}\]
Here, the pseudo-inverse is based on the rank-r projection of \(\boldsymbol{J}(\theta)\) denoted as \(\boldsymbol{J}_{\text{rank-r}}(\theta)\) in [34]. However, for simplicity, in our paper, we use the term \(\boldsymbol{J}(\theta)\) to represent the rank-r projection.
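A short sketch of this rank-\(r\) pseudo-inverse via the truncated SVD of (5.1)-(5.2) is given below, assuming a real matrix; the rank can be supplied or estimated from a relative tolerance.

```python
import numpy as np

def rank_r_pinv(J, r=None, rtol=1e-10):
    """Pseudo-inverse of the rank-r projection of J, cf. (5.1)-(5.2)."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    if r is None:
        r = int(np.sum(s > rtol * s[0]))          # numerical rank if r is not given
    s_inv = np.zeros_like(s)
    s_inv[:r] = 1.0 / s[:r]                       # invert only the r leading singular values
    return Vt.T @ np.diag(s_inv) @ U.T            # V Sigma^dagger U^T
```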
**Lemma 3**.: _Let \(\boldsymbol{J}(\theta)\) be the approximated Hessian defined in (3.3) and \(\theta^{*}\) be the stationary point. Then there exists a small open set \(\Omega_{*}\ni\theta^{*}\) and constants \(\epsilon,\zeta,\alpha>0\) such that for every \(\theta_{1},\theta_{2}\in\Omega_{*}\), the following inequalities hold:_
\[\|\boldsymbol{J}(\theta_{2})\boldsymbol{J}(\theta_{2})^{\dagger }-\boldsymbol{J}(\theta_{1})\boldsymbol{J}(\theta_{1})^{\dagger}\| \leq\zeta\|\theta_{2}-\theta_{1}\|, \tag{5.3}\] \[\|\nabla_{\theta}L(\theta_{2})-\nabla_{\theta}L(\theta_{1})- \boldsymbol{J}(\theta_{1})\left(\theta_{2}-\theta_{1}\right)\| \leq\epsilon\|\theta_{2}-\theta_{1}\|+\alpha\|\theta_{2}-\theta_{1}\|^{2}. \tag{5.4}\]
Proof.: From Taylor's expansion, we have
\[\nabla_{\theta}L(\theta_{2})-\nabla_{\theta}L(\theta_{1})- \boldsymbol{J}(\theta_{1})\left(\theta_{2}-\theta_{1}\right)\] \[=\mathbf{H}(\theta_{1})\left(\theta_{2}-\theta_{1}\right)- \boldsymbol{J}(\theta_{1})\left(\theta_{2}-\theta_{1}\right)+\mathcal{O}(\| \theta_{2}-\theta_{1}\|^{2})\] \[=\boldsymbol{Q}(\theta_{1})\left(\theta_{2}-\theta_{1}\right)+ \mathcal{O}(\|\theta_{2}-\theta_{1}\|^{2}).\]
Therefore, due to the smoothness of the Hessian with respect to \(\theta\) and Lemma 1, there exists a small neighborhood \(\Omega_{*}\) of \(\theta^{*}\) and a constant \(\alpha>0\) such that for every \(\theta_{1},\theta_{2}\in\Omega_{*}\), we have \(\|\boldsymbol{Q}(\theta_{1})\|\leq\epsilon\), and the following inequality holds:
\[\|\nabla_{\theta}L(\theta_{2})-\nabla_{\theta}L(\theta_{1})- \boldsymbol{J}(\theta_{1})\left(\theta_{2}-\theta_{1}\right)\|\leq\epsilon\| \theta_{2}-\theta_{1}\|+\alpha\|\theta_{2}-\theta_{1}\|^{2}. \tag{5.5}\]
for every \(\theta_{1},\theta_{2}\in\Omega_{*}\). On the other hand, Weyl's theorem guarantees that the singular values are continuous with respect to the matrix entries. Therefore, for a sufficiently small open set \(\Omega_{*}\), it holds that
\[\sup_{\theta\in\Omega_{*}}\|\mathbf{J}(\theta)\|=\sup_{\theta\in\Omega_{*}}\sigma_{1}(\mathbf{J}(\theta))\leq C\|\mathbf{J}(\theta^{*})\|\] \[\sup_{\theta\in\Omega_{*}}\|\mathbf{J}(\theta)^{\dagger}\|=\frac{1}{\inf_{\theta\in\Omega_{*}}\sigma_{r}(\mathbf{J}(\theta))}\leq C\|\mathbf{J}(\theta^{*})^{\dagger}\|\]
for all \(\theta\in\Omega_{*}\). By the smoothness of \(\mathbf{J}(\theta)\) with respect to \(\theta\) and the estimation in Lemma 8, we have
\[\|\mathbf{J}(\theta_{2})\mathbf{J}(\theta_{2})^{\dagger}-\mathbf{J}(\theta_{ 1})\mathbf{J}(\theta_{1})^{\dagger}\|\] \[\leq\|\mathbf{J}(\theta_{2})^{\dagger}\|\|\mathbf{J}(\theta_{2})-\mathbf{J}( \theta_{1})\|+\|\mathbf{J}(\theta_{1})\|\|\mathbf{J}(\theta_{2})^{\dagger}-\mathbf{J}( \theta_{1})^{\dagger}\|\] \[\leq\zeta\|\theta_{2}-\theta_{1}\|.\]
**Theorem 1**.: _(Convergence Theorem) Let \(L(\theta)\) be a sufficiently smooth target function of \(\theta\) and \(\mathbf{J}(\theta)\) be the approximated Hessian of \(L(\theta)\) defined in (3.3). Then for every open neighborhood \(\Omega_{1}\) of \(\theta^{*}\), there exists another neighborhood \(\Omega_{2}\ni\theta^{*}\) such that, from every initial guess \(\theta_{0}\in\Omega_{2}\), the sequence \(\{\theta_{k}\}_{k=1}^{\infty}\) generated by the iteration (3.17) converges in \(\Omega_{1}\). Furthermore, \(\{\theta_{k}\}_{k=1}^{\infty}\) has at least linear convergence with a coefficient \(\gamma\leq 2\epsilon\) when \(k\) is large enough._
Proof.: Firstly, let \(\Omega_{*}\) be the small open neighbourhood in Lemma 3 such that \(\|\mathbf{Q}(\theta)\|\leq\epsilon<\frac{1}{2}\). For any open neighbourhood \(\Omega_{1}\) of \(\theta^{*}\), there exists a constant \(0<\delta<2\) such that \(B(\theta^{*},\delta)\subset\Omega_{1}\cap\Omega_{*}\) and for any \(\theta_{1},\theta_{2}\in B(\theta^{*},\delta)\), it holds
\[\|\mathbf{J}(\theta_{2})^{\dagger}\|\left(\alpha\|\theta_{2}-\theta_{1}\|+\zeta\| \nabla_{\theta}L(\theta_{1})\|+\epsilon\right)\leq h<1. \tag{5.6}\]
On the other hand, there also exists \(0<\tau<\frac{\delta}{2}\) such that
\[\|\mathbf{J}(\theta)^{\dagger}\|\|\nabla_{\theta}L(\theta)\|\leq\frac{1-h}{2}\delta <\frac{\delta}{2}<1 \tag{5.7}\]
holds for any \(\theta\in B(\theta^{*},\tau)\). Let \(\Omega_{2}=B(\theta^{*},\tau)\), then for \(\forall\theta_{0}\in\Omega_{2}\), the iteration implies
\[\|\theta_{1}-\theta^{*}\|\leq\|\theta_{1}-\theta_{0}\|+\|\theta_{0}-\theta^{*} \|\leq\|\mathbf{J}(\theta_{0})^{\dagger}\|\|\nabla_{\theta}L(\theta_{0})\|+\tau<\delta. \tag{5.8}\]
Next we assume that \(\theta_{k}\in B(\theta^{*},\delta)\) for some integer \(k\geq 1\). From Lemma 3 we have the following estimation
\[\|\theta_{k+1}-\theta_{k}\| =\|\mathbf{J}(\theta_{k})^{\dagger}\nabla_{\theta}L(\theta_{k})\|\] \[=\|\mathbf{J}(\theta_{k})^{\dagger}\left(\nabla_{\theta}L(\theta_{k})- \mathbf{J}(\theta_{k-1})\left(\theta_{k}-\theta_{k-1}+\mathbf{J}(\theta_{k-1})^{ \dagger}\nabla_{\theta}L(\theta_{k-1})\right)\right)\|\] \[\leq\|\mathbf{J}(\theta_{k})^{\dagger}\left(\nabla_{\theta}L(\theta_{ k})-\nabla_{\theta}L(\theta_{k-1})-\mathbf{J}(\theta_{k-1})(\theta_{k}-\theta_{k-1}) \right)\|\] \[\quad+\|\mathbf{J}(\theta_{k})^{\dagger}\left(\mathbf{J}(\theta_{k})\mathbf{J }(\theta_{k})^{\dagger}-\mathbf{J}(\theta_{k-1})\mathbf{J}(\theta_{k-1})^{\dagger} \right)\nabla_{\theta}L(\theta_{k-1})\|\] \[\leq\|\mathbf{J}(\theta_{k})^{\dagger}\|\left(\alpha\|\theta_{k}- \theta_{k-1}\|+\zeta\|\nabla_{\theta}L(\theta_{k-1})\|+\epsilon\right)\|\theta_ {k}-\theta_{k-1}\|. \tag{5.9}\]
Since \(\theta_{k}\in B(\theta^{*},\delta)\), then it leads to
\[\|\theta_{k+1}-\theta_{k}\|<h\|\theta_{k}-\theta_{k-1}\|<\|\theta_{k}-\theta_{k-1}\| \tag{5.10}\]
so the convergence is guaranteed. Therefore, we obtain that \(\|\theta_{j}-\theta_{j-1}\|\leq h^{j-1}\|\theta_{1}-\theta_{0}\|\) for \(1\leq j\leq k+1\), and
\[\|\theta_{k+1}-\theta^{*}\| \leq\|\theta_{0}-\theta^{*}\|+\sum_{j=0}^{k}\|\theta_{k-j+1}- \theta_{k-j}\|\] \[\leq\|\theta_{0}-\theta^{*}\|+\sum_{j=0}^{k}h^{j}\|\theta_{1}- \theta_{0}\|\] \[<\frac{1}{1-h}\|\theta_{1}-\theta_{0}\|+\|\theta_{0}-\theta^{*}\|\] \[<\frac{1}{1-h}\frac{1-h}{2}\delta+\frac{1}{2}\delta=\delta,\]
which completes the induction. Thus we conclude that the sequence \(\{\theta_{k}\}_{k=0}^{\infty}\subset B(\theta^{*},\delta)\subset\Omega_{1}\) as long as the initial iterate \(\theta_{0}\in B(\theta^{*},\tau)=\Omega_{2}\).
Secondly, let us define \(\hat{\theta}=\lim_{k\to\infty}\theta_{k}\) so that \(\hat{\theta}\in\Omega_{1}\). By the smoothness of \(\nabla_{\theta}L(\theta)\) and the convergence property (5.10), there exists a constant \(\mu>0\) such that
\[\|\nabla_{\theta}L(\theta_{k-1})\| =\|\nabla_{\theta}L(\theta_{k-1})-\nabla_{\theta}L(\hat{\theta})\|\] \[\leq\mu\|\theta_{k-1}-\hat{\theta}\|\] \[\leq\mu\left(\|\theta_{k-1}-\theta_{k}\|+\|\theta_{k}-\theta_{k+1 }\|+\cdots\right)\] \[\leq\frac{\mu}{1-h}\|\theta_{k}-\theta_{k-1}\|. \tag{5.11}\]
Now let us combine (5.9) and (5.11) to find that
\[\|\theta_{k+1}-\theta_{k}\| \leq\beta\|\theta_{k}-\theta_{k-1}\|^{2}+\epsilon\|\theta_{k}- \theta_{k-1}\| \tag{5.12}\] \[=\left(\beta\|\theta_{k}-\theta_{k-1}\|+\epsilon\right)\|\theta_{ k}-\theta_{k-1}\| \tag{5.13}\]
for some constant \(\beta>0\).
In the case when \(k\) is not large enough, i.e., for \(k\leq k_{0}\) for some \(k_{0}\) such that \(\beta\|\theta_{k}-\theta_{k-1}\|\geq\epsilon\), we have
\[\|\theta_{k+1}-\theta_{k}\| \leq\left(\beta\|\theta_{k}-\theta_{k-1}\|+\epsilon\right)\|\theta_{ k}-\theta_{k-1}\|\] \[\leq\left(\beta\|\theta_{k}-\theta_{k-1}\|+\beta\|\theta_{k}-\theta_{ k-1}\|\right)\|\theta_{k}-\theta_{k-1}\|\] \[=2\beta\|\theta_{k}-\theta_{k-1}\|^{2}\] \[\leq(2\beta)^{1+2+4+\cdots+2^{k-1}}\|\theta_{1}-\theta_{0}\|^{2^ {k}}\] \[=(2\beta)^{2^{k}-1}\|\theta_{1}-\theta_{0}\|^{2^{k}}\]
On the other hand, it can be easily seen that
\[\|\theta_{k+1}-\hat{\theta}\| \leq\|\theta_{k+1}-\theta_{k+2}\|+\|\theta_{k+2}-\theta_{k+3}\|+\| \theta_{k+3}-\theta_{k+4}\|+\cdots\] \[\leq\left(h+h^{2}+h^{3}+\cdots\right)\|\theta_{k+1}-\theta_{k}\|\] \[\leq\frac{h}{1-h}\|\theta_{k+1}-\theta_{k}\|.\]
Therefore, when \(k\leq k_{0}\) with some \(k_{0}\), we have quadratic convergence as follows
\[\|\theta_{k+1}-\hat{\theta}\| \leq\frac{h}{1-h}\|\theta_{k+1}-\theta_{k}\| \tag{5.14}\] \[\leq\frac{h}{1-h}(2\beta)^{2^{k}-1}\|\theta_{1}-\theta_{0} \|^{2^{k}}. \tag{5.15}\]
As \(k\) grows sufficiently large with \(k\geq\bar{k}\) such that \(\beta\|\theta_{k}-\theta_{k-1}\|\leq\epsilon\), we can use the estimate (5.12) to obtain
\[(\beta\|\theta_{k}-\theta_{k-1}\|+\epsilon)\leq 2\epsilon<1.\]
Therefore
\[\|\theta_{k+1}-\theta_{k}\| \leq 2\epsilon\|\theta_{k}-\theta_{k-1}\| \tag{5.16}\] \[\leq(2\epsilon)^{k}\|\theta_{1}-\theta_{0}\| \tag{5.17}\]
Hence
\[\|\theta_{k+1}-\hat{\theta}\|\leq\frac{h}{1-h}\|\theta_{k+1}-\theta_{k}\|\leq \frac{h}{1-h}(2\epsilon)^{k}\|\theta_{1}-\theta_{0}\|. \tag{5.18}\]
The estimates (5.14) and (5.18) show that \(\{\|\theta_{k+1}-\hat{\theta}\|\}_{k=1}^{\infty}\) is a decreasing sequence that exhibits quadratic convergence when the number of iterations is small, and at least linear convergence with coefficient \(2\epsilon\) when the number of iterations is large. Since \(2\epsilon\) can be very small by construction of the iteration, the parameter sequence \(\{\theta_{k}\}_{k=1}^{\infty}\) converges rapidly in numerical experiments.
### Random Gauss-Newton method
By employing Monte Carlo approximation with random sampling points to approximate the integration of \(L(\theta)\), the Gauss-Newton method can be transformed into a random algorithm. Specifically, we define \(\xi:\Omega\rightarrow\Gamma\) as a random variable, where \(\Gamma\) is the set of all combinations of \(\tilde{N}+\tilde{n}\) indices that denote the sampling points on the domain and the boundary, selected from \(1,2,\cdots,N,N+1,\cdots,N+n\). The cardinality of \(\Gamma\) is given by \(|\Gamma|=\begin{pmatrix}N+n\\ \tilde{N}+\tilde{n}\end{pmatrix}\). Here, we assume that the total numbers of sampling points on the domain and boundary are \(N\) and \(n\), respectively. In each iteration, we draw \(\tilde{N}\) and \(\tilde{n}\) random samples from the domain and boundary, respectively. Therefore, the random Gauss-Newton method can be expressed as follows:
\[\theta_{k+1}(\xi_{k})=\theta_{k}(\xi_{k-1})-\mathbf{J}(\theta_{k};\xi_{k})^{ \dagger}\nabla_{\theta}L(\theta_{k};\xi_{k}).\]
Here, \(\theta_{k+1}(\xi_{k})\) represents the updated parameter vector at the \((k+1)\)-th iteration, which depends on the random variable \(\xi_{k}\).
Additionally, it's important to note that at each step \(k\), the sample points on both the domain and boundary are given by:
\[x_{s_{1}},x_{s_{2}},\cdots,x_{s_{\tilde{N}}},\;x_{s_{1}}^{b},x_{s_{2}}^{b}, \cdots,x_{s_{\tilde{n}}}^{b}.\]
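As an illustration of this sampling procedure, a minimal sketch of one possible implementation is given below; the helpers `grad_L` and `approx_J`, which return the sampled gradient and the sampled approximated Hessian built from the drawn points, are placeholders for the quantities defined in Section 3 and are not part of any library:

```python
import numpy as np

def random_gauss_newton(theta0, grad_L, approx_J, N, n, N_s, n_s, steps, seed=0):
    """Random Gauss-Newton iteration: at every step draw N_s interior and n_s
    boundary sample indices (a realization of xi_k), assemble the sampled
    gradient and approximated Hessian, and take a pseudo-inverse step."""
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    for _ in range(steps):
        idx_dom = rng.choice(N, size=N_s, replace=False)   # interior sample indices
        idx_bnd = rng.choice(n, size=n_s, replace=False)   # boundary sample indices
        g = grad_L(theta, idx_dom, idx_bnd)                # sampled gradient of L
        J = approx_J(theta, idx_dom, idx_bnd)              # sampled approximated Hessian
        theta = theta - np.linalg.pinv(J) @ g              # the rank-r pseudo-inverse of
                                                           # Section 3.3 could replace pinv here
    return theta
```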
**Lemma 4**.: _Let \(\mathbf{J}(\theta)\) be the approximated Hessian defined in (3.3) and \(\theta^{*}\) be the stationary point. Then there exist a small open set \(\Omega_{*}\ni\theta^{*}\) and constants \(\epsilon,\zeta,\alpha>0\) such that for every \(\theta_{1},\theta_{2}\in\Omega_{*}\), the following inequalities hold:_
\[\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\mathbf{J}(\theta_{2},\xi_{2})\mathbf{ J}(\theta_{2},\xi_{2})^{\dagger}-\mathbf{J}(\theta_{1},\xi_{1})\mathbf{J}(\theta_{1}, \xi_{1})^{\dagger}\|\right) \leq\zeta\mathbb{E}_{\xi}\left(\|\theta_{2}-\theta_{1}\|\right), \tag{5.19}\] \[\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\nabla_{\theta}L(\theta_{2}, \xi_{2})-\nabla_{\theta}L(\theta_{1},\xi_{1})-\mathbf{J}(\theta_{1},\xi_{1})\left( \theta_{2}-\theta_{1}\right)\|\right) \leq\epsilon\mathbb{E}_{\xi}\left(\|\theta_{2}-\theta_{1}\| \right)+\alpha\mathbb{E}_{\xi}\left(\|\theta_{2}-\theta_{1}\|\right)^{2}. \tag{5.20}\]
Proof.: From Taylor's expansion, we have
\[\nabla_{\theta}L(\theta_{2},\xi_{2})-\nabla_{\theta}L(\theta_{1}, \xi_{1})-\mathbf{J}(\theta_{1},\xi_{1})\left(\theta_{2}-\theta_{1}\right)\] \[=\nabla_{\theta}L(\theta_{2},\xi_{2})-\nabla_{\theta}L(\theta_{1},\xi_{2})+\nabla_{\theta}L(\theta_{1},\xi_{2})-\nabla_{\theta}L(\theta_{1}, \xi_{1})-\mathbf{J}(\theta_{1},\xi_{1})\left(\theta_{2}-\theta_{1}\right)\] \[=\mathbf{H}(\theta_{1},\xi_{2})\left(\theta_{2}-\theta_{1}\right)+ \mathcal{O}(\|\theta_{2}-\theta_{1}\|^{2})+\nabla_{\theta}L(\theta_{1},\xi_{2 })-\nabla_{\theta}L(\theta_{1},\xi_{1})\] \[-\left(\mathbf{J}(\theta_{1},\xi_{1})-\mathbf{J}(\theta_{1},\xi_{2})+\mathbf{ J}(\theta_{1},\xi_{2})\right)\left(\theta_{2}-\theta_{1}\right)\] \[=\mathbf{H}(\theta_{1},\xi_{2})\left(\theta_{2}-\theta_{1}\right)- \mathbf{J}(\theta_{1},\xi_{2})\left(\theta_{2}-\theta_{1}\right)+\mathcal{O}(\| \theta_{2}-\theta_{1}\|^{2})\] \[+\nabla_{\theta}L(\theta_{1},\xi_{2})-\nabla_{\theta}L(\theta_{1},\xi_{1})-\left(\mathbf{J}(\theta_{1},\xi_{1})-\mathbf{J}(\theta_{1},\xi_{2})\right) \left(\theta_{2}-\theta_{1}\right)\] \[=\mathbf{Q}(\theta_{1},\xi_{2})\left(\theta_{2}-\theta_{1}\right)+ \mathcal{O}(\|\theta_{2}-\theta_{1}\|^{2})+\nabla_{\theta}L(\theta_{1},\xi_{2 })-\nabla_{\theta}L(\theta_{1},\xi_{1})-\left(\mathbf{J}(\theta_{1},\xi_{1})-\mathbf{ J}(\theta_{1},\xi_{2})\right)\left(\theta_{2}-\theta_{1}\right).\]
Now taking norms and expectation from both sides and noting that
\[\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\nabla_{\theta}L(\theta_{1},\xi_{2})- \nabla_{\theta}L(\theta_{1},\xi_{1})\|\right)=0\]
and
\[\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\left(\mathbf{J}(\theta_{1},\xi_{1})-\mathbf{J}( \theta_{1},\xi_{2})\right)\left(\theta_{2}-\theta_{1}\right)\|\right)=0\]
we have
\[\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\nabla_{\theta}L(\theta_{2}, \xi_{2})-\nabla_{\theta}L(\theta_{1},\xi_{1})-\mathbf{J}(\theta_{1},\xi_{1})\left( \theta_{2}-\theta_{1}\right)\|\right)\] \[\leq\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\mathbf{Q}(\theta_{1},\xi_{2 })\left(\theta_{2}-\theta_{1}\right)\|\right)+\mathbb{E}_{\xi}\left(\mathcal{O }(\|\theta_{2}-\theta_{1}\|^{2})\right).\]
Therefore, due to the smoothness of the Hessian with respect to \(\theta\) and Lemma 1, there exist a small neighborhood \(\Omega_{*}\) of \(\theta^{*}\) and constants \(\epsilon,\alpha>0\) such that for every \(\theta_{1},\theta_{2}\in\Omega_{*}\), we have \(\mathbb{E}_{\xi_{2}}(\|\mathbf{Q}(\theta_{1},\xi_{2})\|)\leq\epsilon\), and the following inequality holds:
\[\mathbb{E}_{\xi_{2},\xi_{1}}\left(\|\nabla_{\theta}L(\theta_{2}, \xi_{2})-\nabla_{\theta}L(\theta_{1},\xi_{1})-\mathbf{J}(\theta_{1},\xi_{1})\left( \theta_{2}-\theta_{1}\right)\|\right) \tag{5.21}\] \[\leq\epsilon\mathbb{E}_{\xi}(\|\theta_{2}-\theta_{1}\|)+\alpha \mathbb{E}_{\xi}(\|\theta_{2}-\theta_{1}\|^{2}) \tag{5.22}\]
for every \(\theta_{1},\theta_{2}\in\Omega_{*}\). On the other hand, Weyl's theorem guarantees that the singular values are continuous with respect to the matrix entries. Therefore, provided the open set \(\Omega_{*}\) is sufficiently small, it holds that
\[\sup_{\theta\in\Omega_{*}}\mathbb{E}_{\xi}(\|\mathbf{J}(\theta,\xi)\|) =\sup_{\theta\in\Omega_{*}}\sigma_{1}(\mathbb{E}_{\xi}(\mathbf{J}(\theta,\xi)))\leq C \mathbb{E}_{\xi}(\|\mathbf{J}(\theta^{*},\xi)\|)\] \[\sup_{\theta\in\Omega_{*}}\mathbb{E}_{\xi}(\|\mathbf{J}(\theta,\xi)^{ \dagger}\|)=\frac{1}{\inf_{\theta\in\Omega_{*}}\sigma_{r}(\mathbb{E}_{\xi}(\mathbf{J}( \theta,\xi)))}\leq C\mathbb{E}_{\xi}(\|\mathbf{J}(\theta^{*},\xi)^{\dagger}\|)\]
for all \(\theta\in\Omega_{*}\). By the smoothness of \(\mathbb{E}_{\xi}(\mathbf{J}(\theta,\xi))\) with respect to \(\theta\) and the estimation in Lemma 8, we have
\[\mathbb{E}_{\xi_{2},\xi_{1}}(\|\mathbf{J}(\theta_{2},\xi_{2})\mathbf{J}( \theta_{2},\xi_{2})^{\dagger}-\mathbf{J}(\theta_{1},\xi_{1})\mathbf{J}(\theta_{1},\xi_{ 1})^{\dagger}\|)\] \[\leq\mathbb{E}_{\xi}(\|\mathbf{J}(\theta_{2},\xi_{2})^{\dagger}\|) \mathbb{E}_{\xi_{2},\xi_{1}}(\|\mathbf{J}(\theta_{2},\xi_{2})-\mathbf{J}(\theta_{1}, \xi_{1})\|)+\mathbb{E}_{\xi_{1}}(\|\mathbf{J}(\theta_{1},\xi_{1})\|)\mathbb{E}_{ \xi_{2},\xi_{1}}(\ \|\mathbf{J}(\theta_{2},\xi_{2})^{\dagger}-\mathbf{J}(\theta_{1},\xi_{1})^{ \dagger}\|)\] \[\leq\zeta\mathbb{E}_{\xi}(\|\theta_{2}-\theta_{1}\|).\]
**Theorem 2**.: _(Convergence Theorem) Let \(L(\theta)\) be a sufficiently smooth target function of \(\theta\) and \(\mathbf{J}(\theta)\) be the approximated Hessian of \(L(\theta)\) defined in (3.3). Then for every open neighbourhood \(\Omega_{1}\) of \(\theta^{*}\), there exists another neighbourhood \(\Omega_{2}\ni\theta^{*}\) such that, from every initial guess \(\theta_{0}\in\Omega_{2}\), the sequence \(\{\theta_{k}\}_{k=1}^{\infty}\) generated by the random Gauss-Newton iteration above converges in \(\Omega_{1}\). Furthermore, \(\{\theta_{k}\}_{k=1}^{\infty}\) has at least linear convergence with a coefficient \(\gamma\leq 2\epsilon\) when \(k\) is large enough._
Proof.: Firstly, let \(\Omega_{*}\) be the small open neighbourhood in Lemma 4 such that \(\mathbb{E}_{\xi}(\|\mathbf{Q}(\theta,\xi)\|)\leq\epsilon<\frac{1}{2}\). For any open neighbourhood \(\Omega_{1}\) of \(\theta^{*}\), there exists a constant \(0<\delta<2\) such that \(B(\theta^{*},\delta)\subset\Omega_{1}\cap\Omega_{*}\) and for any \(\theta_{1},\theta_{2}\in B(\theta^{*},\delta)\), it holds
\[\mathbb{E}_{\xi_{2}}(\|\mathbf{J}(\theta_{2},\xi_{2})^{\dagger}\|)\left(\alpha \mathbb{E}_{\xi}(\|\theta_{2}-\theta_{1}\|)+\zeta\mathbb{E}_{\xi_{1}}(\| \nabla_{\theta}L(\theta_{1},\xi_{1})\|)+\epsilon\right)\leq h<1. \tag{5.23}\]
On the other hand, there also exists \(0<\tau<\frac{\delta}{2}\) such that
\[\mathbb{E}_{\xi}(\|\mathbf{J}(\theta,\xi)^{\dagger}\|)\mathbb{E}_{\xi}(\|\nabla_{ \theta}L(\theta,\xi)\|)\leq\frac{1-h}{2}\delta<\frac{\delta}{2}<1 \tag{5.24}\]
holds for any \(\theta\in B(\theta^{*},\tau)\). Let \(\Omega_{2}=B(\theta^{*},\tau)\), then for \(\forall\theta_{0}\in\Omega_{2}\), the iteration implies
\[\mathbb{E}_{\xi}(\|\theta_{1}-\theta^{*}\|)\leq\mathbb{E}_{\xi}(\|\theta_{1}- \theta_{0}\|)+\mathbb{E}_{\xi}(\|\theta_{0}-\theta^{*}\|)\leq\mathbb{E}_{\xi _{0}}(\|\mathbf{J}(\theta_{0},\xi_{0})^{\dagger}\|)\mathbb{E}_{\xi_{0}}(\|\nabla_{ \theta}L(\theta_{0},\xi_{0})\|)+\tau<\delta. \tag{5.25}\]
Next we assume that \(\theta_{k}\in B(\theta^{*},\delta)\) for some integer \(k\geq 1\). From Lemma 4 we have the following estimation
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|)=\mathbb{E}_{\xi}(\| \mathbf{J}(\theta_{k},\xi_{k})^{\dagger}\nabla_{\theta}L(\theta_{k},\xi_{k})\|)\] \[=\mathbb{E}_{\xi_{0},\xi_{1},\cdots,\xi_{k-1}}\mathbb{E}_{\xi_{k} }\left(\|\mathbf{J}(\theta_{k},\xi_{k})^{\dagger}\left(\nabla_{\theta}L(\theta_{k},\xi_{k})-\mathbf{J}(\theta_{k-1},\xi_{k-1})\left(\theta_{k}-\theta_{k-1}+\mathbf{J}( \theta_{k-1},\xi_{k-1})^{\dagger}\nabla_{\theta}L(\theta_{k-1},\xi_{k-1}) \right)\right)\|\right)\] \[\leq\mathbb{E}_{\xi_{0},\xi_{1},\cdots,\xi_{k-1}}\mathbb{E}_{\xi_ {k}}\left(\|\mathbf{J}(\theta_{k},\xi_{k})^{\dagger}\left(\nabla_{\theta}L(\theta_{k},\xi_{k})-\nabla_{\theta}L(\theta_{k-1},\xi_{k-1})-\mathbf{J}(\theta_{k-1},\xi_{k-1 })(\theta_{k}-\theta_{k-1})\right)\|\right)\] \[\quad+\mathbb{E}_{\xi_{0},\xi_{1},\cdots,\xi_{k-1}}\mathbb{E}_{ \xi_{k}}\left(\|\mathbf{J}(\theta_{k},\xi_{k})^{\dagger}\left(\mathbf{J}(\theta_{k},\xi_{ k})\mathbf{J}(\theta_{k},\xi_{k})^{\dagger}-\mathbf{J}(\theta_{k-1},\xi_{k-1})\mathbf{J}( \theta_{k-1},\xi_{k-1})^{\dagger}\right)\nabla_{\theta}L(\theta_{k-1},\xi_{k-1 })\|\right)\] \[\leq\mathbb{E}_{\xi_{k}}(\|\mathbf{J}(\theta_{k},\xi_{k})^{\dagger}\|) \left(\alpha\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)+\zeta\mathbb{E}_{\xi_ {k-1}}(\|\nabla_{\theta}L(\theta_{k-1},\xi_{k-1})\|)+\epsilon\right)\mathbb{E}_{ \xi}(\|\theta_{k}-\theta_{k-1}\|). \tag{5.26}\]
Since \(\theta_{k}\in B(\theta^{*},\delta)\), then it leads to
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|)<h\mathbb{E}_{\xi}(\|\theta_{k}- \theta_{k-1}\|)<\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|) \tag{5.27}\]
so the convergence is guaranteed. Therefore, we obtain that \(\mathbb{E}_{\xi}(\|\theta_{j}-\theta_{j-1}\|)\leq h^{j-1}\mathbb{E}_{\xi}(\|\theta_ {1}-\theta_{0}\|)\) for \(1\leq j\leq k+1\), and
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta^{*}\|) \leq\mathbb{E}_{\xi}(\|\theta_{0}-\theta^{*}\|)+\sum_{j=0}^{k} \mathbb{E}_{\xi}(\|\theta_{k-j+1}-\theta_{k-j}\|)\] \[\leq\mathbb{E}_{\xi}(\|\theta_{0}-\theta^{*}\|)+\sum_{j=0}^{k}h^ {j}\mathbb{E}_{\xi}(\|\theta_{1}-\theta_{0}\|)\] \[<\frac{1}{1-h}\mathbb{E}_{\xi}(\|\theta_{1}-\theta_{0}\|)+ \mathbb{E}_{\xi}(\|\theta_{0}-\theta^{*}\|)\] \[<\frac{1}{1-h}\frac{1-h}{2}\delta+\frac{1}{2}\delta=\delta,\]
which completes the induction. Thus we conclude that the sequence \(\{\mathbb{E}_{\xi}(\theta_{k})\}_{k=0}^{\infty}\subset B(\theta^{*},\delta) \subset\Omega_{1}\) as long as the initial iterate \(\mathbb{E}_{\xi}(\theta_{0})\in B(\theta^{*},\tau)=\Omega_{2}\).
Secondly, let us define \(\mathbb{E}_{\xi}(\hat{\theta})=\lim_{k\to\infty}\mathbb{E}_{\xi}(\theta_{k})\) so that \(\mathbb{E}_{\xi}(\hat{\theta})\in\Omega_{1}\). By the smoothness of \(\mathbb{E}_{\xi}(\nabla_{\theta}L(\theta,\xi))\) and the convergence property (5.27), there exists a constant \(\mu>0\) such that
\[\mathbb{E}_{\xi_{k-1}}(\|\nabla_{\theta}L(\theta_{k-1},\xi_{k-1})\|) =\mathbb{E}_{\xi_{k-1}}(\|\nabla_{\theta}L(\theta_{k-1},\xi_{k-1}) -\nabla_{\theta}L(\hat{\theta},\xi_{k-1})\|)\] \[\leq\mu\mathbb{E}_{\xi}(\|\theta_{k-1}-\hat{\theta}\|)\] \[\leq\mu\mathbb{E}_{\xi}((\|\theta_{k-1}-\theta_{k}\|)+\mathbb{E}_ {\xi}(\|\theta_{k}-\theta_{k+1}\|)+\cdots)\] \[\leq\frac{\mu}{1-h}\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|). \tag{5.28}\]
Now let us combine (5.26) and (5.28) to find that
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|) \leq\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)^{2}+ \epsilon\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|) \tag{5.29}\] \[=(\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)+\epsilon) \,\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|) \tag{5.30}\]
for some constant \(\beta>0\).
In the case when \(k\) is not large enough, i.e., for \(k\leq k_{0}\) for some \(k_{0}\) such that \(\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)\geq\epsilon\), we have
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|) \leq(\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)+\epsilon) \,\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)\] \[\leq(\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)+\beta \mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|))\,\mathbb{E}_{\xi}(\|\theta_{k} -\theta_{k-1}\|)\] \[=2\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)^{2}\] \[\leq(2\beta)^{1+2+4+\cdots+2^{k-1}}\mathbb{E}_{\xi}(\|\theta_{1}- \theta_{0}\|)^{2^{k}}\] \[=(2\beta)^{2^{k}-1}\mathbb{E}_{\xi}(\|\theta_{1}-\theta_{0} \|)^{2^{k}}\]
On the other hand, it can be easily seen that
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\hat{\theta}\|) \leq\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k+2}\|)+\mathbb{E}_{\xi} (\|\theta_{k+2}-\theta_{k+3}\|)+\mathbb{E}_{\xi}(\|\theta_{k+3}-\theta_{k+4}\|)+\cdots\] \[\leq\left(h+h^{2}+h^{3}+\cdots\right)\mathbb{E}_{\xi}(\|\theta_{k +1}-\theta_{k}\|)\] \[\leq\frac{h}{1-h}\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|).\]
Therefore, when \(k\leq k_{0}\) with some \(k_{0}\), we have quadratic convergence as follows
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\hat{\theta}\|) \leq\frac{h}{1-h}\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|) \tag{5.31}\] \[\leq\frac{h}{1-h}(2\beta)^{2^{k}-1}\mathbb{E}_{\xi}(\| \theta_{1}-\theta_{0}\|)^{2^{k}}. \tag{5.32}\]
As \(k\) grows sufficiently large with \(k\geq\bar{k}\) such that \(\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)\leq\epsilon\), we can use the estimate (5.29) to obtain
\[(\beta\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|)+\epsilon) \leq 2\epsilon<1.\]
Therefore
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|) \leq 2\epsilon\mathbb{E}_{\xi}(\|\theta_{k}-\theta_{k-1}\|) \tag{5.33}\] \[\leq(2\epsilon)^{k}\mathbb{E}_{\xi}(\|\theta_{1}-\theta_{0}\|) \tag{5.34}\]
Hence
\[\mathbb{E}_{\xi}(\|\theta_{k+1}-\hat{\theta}\|) \leq\frac{h}{1-h}\mathbb{E}_{\xi}(\|\theta_{k+1}-\theta_{k}\|)\leq\frac{h}{1-h}(2\epsilon)^{k}\mathbb{E}_{\xi}(\|\theta_{1} -\theta_{0}\|). \tag{5.35}\]
The estimates (5.31) and (5.35) show that \(\{\mathbb{E}_{\xi}(\|\theta_{k+1}-\hat{\theta}\|)\}_{k=1}^{\infty}\) is a decreasing sequence that exhibits quadratic convergence when the number of iterations is small, and at least linear convergence with coefficient \(2\epsilon\) when the number of iterations is large. Since \(2\epsilon\) can be very small by construction of the iteration, the parameter sequence \(\{\mathbb{E}_{\xi}(\theta_{k})\}_{k=1}^{\infty}\) converges rapidly in numerical experiments.
## 6 Numerical experiments
In this section, we analyze the robustness and efficiency of the Gauss-Newton method by considering a differential equation with zero Neumann boundary conditions. The equation is given as follows:
\[-\Delta u+u=f,\text{ in }\Omega,\] \[\frac{du}{dn}=0,\text{ on }\partial\Omega.\]
where \(\Omega=(-1,1)\) and \(f(x)=\pi^{2}\cos(\pi x)+\cos(\pi x)\). Then the exact solution is \(u(x)=\cos(\pi x)\).
Firstly, we utilize Gauss-Legendre quadrature with 100 grid points to approximate the integration of \(L(\theta)\) in Eq. 3.1. The resulting energy loss versus iteration steps is presented in Fig. 1 for a network with one hidden layer of 100 neurons. For the adaptive step size approach, we initialize the step size of the Gauss-Newton method at 0.1 and gradually increase it by multiplying it by 5 every 10 iterations until it reaches 1. This adjustment is necessary because the Gauss-Newton method only exhibits local convergence and requires global optimization techniques to achieve global convergence. On the other hand, for the fixed step size case, we employ a trained initial guess obtained from 1000 iterations of the Adam algorithm to initialize the parameters of the neural networks.
This initialization ensures that the parameters are close to the local minimizer, allowing us to set the step size of the Gauss-Newton method to \(1\). As depicted in the figure, it is evident that the Gauss-Newton method achieves a significantly faster convergence rate compared to the Adam and GD methods, thanks to its super-linear convergence properties.
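For reproducibility, a minimal sketch of this experimental setup is given below, assuming the Ritz (variational energy) form of the problem above; the evaluation functions `u` and `du` stand for the neural network and its spatial derivative and are placeholders, not part of the method itself:

```python
import numpy as np

# Gauss-Legendre quadrature with 100 grid points on Omega = (-1, 1)
nodes, weights = np.polynomial.legendre.leggauss(100)

def energy_loss(theta, u, du):
    """Quadrature approximation of L(theta) = ∫ 0.5*(u'^2 + u^2) - f*u dx,
    the variational energy of -u'' + u = f with zero Neumann boundary data."""
    f = np.pi**2 * np.cos(np.pi * nodes) + np.cos(np.pi * nodes)
    integrand = 0.5 * (du(theta, nodes)**2 + u(theta, nodes)**2) - f * u(theta, nodes)
    return np.sum(weights * integrand)

def adaptive_step(k):
    """Step-size schedule for Gauss-Newton: start at 0.1 and multiply
    by 5 every 10 iterations, capped at 1."""
    return min(0.1 * 5 ** (k // 10), 1.0)
```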
Secondly, we proceed to apply the Gauss-Newton method to various neural network architectures and evaluate the resulting numerical errors. The computed errors measured in both \(L_{2}\) and \(H_{1}\) norms are presented in Table 1, demonstrating the consistency of the numerical error across different neural network structures. This consistency further highlights the robustness of the Gauss-Newton method.
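The reported \(L_{2}\) and \(H_{1}\) errors can be evaluated against the exact solution \(u(x)=\cos(\pi x)\) with the same quadrature rule; a minimal sketch follows, where the trained network `uh` and its derivative `duh` are again placeholders:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(100)  # quadrature on (-1, 1)

def l2_h1_errors(uh, duh):
    """L2 error and (full) H1 error of an approximate solution on (-1, 1)."""
    u_exact = np.cos(np.pi * nodes)
    du_exact = -np.pi * np.sin(np.pi * nodes)
    e = uh(nodes) - u_exact
    de = duh(nodes) - du_exact
    l2 = np.sqrt(np.sum(weights * e**2))
    h1 = np.sqrt(np.sum(weights * (e**2 + de**2)))
    return l2, h1
```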
Next, we assess the consistency of the Gauss-Newton methods for both the \(L_{2}\) minimization and variational problems. To conduct these tests, we employ one-hidden-layer neural networks with \(128\) nodes, which implies \(m=384\). We then set \(N=100\) and \(n=2\), resulting in a full-column-rank matrix \(G\). Fig. 2(a) & (b) illustrate the numerical solutions obtained using the two Gauss-Newton methods, which
| Hidden Layer Width | 32 | 64 | 128 | 256 | 512 |
| --- | --- | --- | --- | --- | --- |
| \(L_{2}\) Error | 6.33E-3 | 5.82E-3 | 3.97E-3 | 2.50E-3 | 2.38E-3 |
| \(H_{1}\) Error | 5.44E-2 | 5.17E-2 | 4.14E-2 | 3.55E-2 | 3.37E-2 |

Table 1: Errors corresponding to different widths of the hidden layer.
Figure 1: \(L(\theta)\) vs. iterations. **Left:** Adaptive Learning Rate with Random Initialization. The Gauss-Newton method starts with an initial learning rate of \(0.1\), which is then multiplied by \(5\) every \(10\) epochs until it reaches \(1\). The learning rates for ADAM and GD are set to \(10^{-3}\) and \(5\times 10^{-3}\), respectively. **Right:** Fixed Learning Rate with Initialization Near Local Minimizer. The learning rates for Gauss-Newton, ADAM, and GD are set to \(1\), \(10^{-3}\), and \(5\times 10^{-3}\), respectively. To initialize the parameters, we adopt an approach that involves using the ADAM algorithm for \(300\) iterations. After these initial iterations, the parameter values obtained from ADAM are utilized as the starting point for the optimization process in the other methods.
demonstrate identical results. We then proceed with \(N=500\), for which the matrix \(G\) is no longer of full column rank. As depicted in Fig. 2(c) & (d), the two Gauss-Newton methods yield different solutions.
Finally, we assess the effectiveness of the random Gauss-Newton method on a one-hidden-layer neural network with 128 nodes. In each iteration, we randomly select 200 sampling points. Additionally, we implement an adaptive learning rate strategy, where we begin with a larger learning rate and gradually decrease it over the course of iterations. To compare the performance of the three methods, we present the results in Fig. 3, depicting both the variational loss and L2 error metrics. The random Gauss-Newton method outperforms the other two methods in terms of efficiency and convergence.
## 7 Conclusions
In this paper, the Gauss-Newton method has been introduced as a powerful approach for solving partial differential equations (PDEs) using neural network discretization in the variational energy formulation. This method offers significant advantages in terms of convergence and computational efficiency. The comprehensive analysis conducted in this paper has demonstrated the superlinear convergence properties of the Gauss-Newton method, positioning it as another choice for numerical solutions of PDEs. The method converges to semi-regular zeros of the vanishing gradient, indicating its effectiveness in reaching optimal solutions.
Furthermore, we provide the conditions under which the Gauss-Newton method is identical for both variational and L2 minimization problems. Additionally, a variant of the method known as the random Gauss-Newton method has been analyzed, highlighting its potential for large-scale systems. The numerical examples presented in this study reinforce the efficiency and accuracy of the proposed Gauss-Newton method.
In summary, the Gauss-Newton method introduces a novel framework for solving PDEs through neural network discretization. Its convergence properties and computational efficiency make it a valuable tool in the field of computational mathematics. Future research directions may involve incorporating fast linear solvers into the Gauss-Newton method to further enhance its efficiency. This advancement would contribute to the broader utilization of the method and accelerate its application in various domains.
|
2304.09240 | The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future
Directions | The Metaverse offers a second world beyond reality, where boundaries are
non-existent, and possibilities are endless through engagement and immersive
experiences using the virtual reality (VR) technology. Many disciplines can
benefit from the advancement of the Metaverse when accurately developed,
including the fields of technology, gaming, education, art, and culture.
Nevertheless, developing the Metaverse environment to its full potential is an
ambiguous task that needs proper guidance and directions. Existing surveys on
the Metaverse focus only on a specific aspect and discipline of the Metaverse
and lack a holistic view of the entire process. To this end, a more holistic,
multi-disciplinary, in-depth, and academic and industry-oriented review is
required to provide a thorough study of the Metaverse development pipeline. To
address these issues, we present in this survey a novel multi-layered pipeline
ecosystem composed of (1) the Metaverse computing, networking, communications
and hardware infrastructure, (2) environment digitization, and (3) user
interactions. For every layer, we discuss the components that detail the steps
of its development. Also, for each of these components, we examine the impact
of a set of enabling technologies and empowering domains (e.g., Artificial
Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on
its advancement. In addition, we explain the importance of these technologies
to support decentralization, interoperability, user experiences, interactions,
and monetization. Our presented study highlights the existing challenges for
each component, followed by research directions and potential solutions. To the
best of our knowledge, this survey is the most comprehensive and allows users,
scholars, and entrepreneurs to get an in-depth understanding of the Metaverse
ecosystem to find their opportunities and potentials for contribution. | Hani Sami, Ahmad Hammoud, Mouhamad Arafeh, Mohamad Wazzeh, Sarhad Arisdakessian, Mario Chahoud, Osama Wehbi, Mohamad Ajaj, Azzam Mourad, Hadi Otrok, Omar Abdel Wahab, Rabeb Mizouni, Jamal Bentahar, Chamseddine Talhi, Zbigniew Dziong, Ernesto Damiani, Mohsen Guizani | 2023-04-18T18:58:14Z | http://arxiv.org/abs/2304.09240v1 | # The Metaverse: Survey, Trends, Novel Pipeline Ecosystem & Future Directions
###### Abstract
The Metaverse offers a second world beyond reality, where boundaries are non-existent, and possibilities are endless through engagement and immersive experiences using the virtual reality (VR) technology. Many disciplines can benefit from the advancement of the Metaverse when accurately developed, including the fields of technology, gaming, education, art & culture, socialization, commerce, and businesses. Nevertheless, developing the Metaverse environment to its full potential is an ambiguous task that needs proper guidance and directions. Existing surveys on the Metaverse focus only on a specific aspect and discipline of the Metaverse and lack a holistic view of the entire process. Moreover, most surveys refrain from providing detailed guidance about the development process of the metaverse, including its impact on technologies, businesses, existing challenges, and potential research directions due to their lack of a macro and micro perception of such a topic. To this end, a more holistic, multi-disciplinary, in-depth, and academic and industry-oriented review is required to provide a thorough study of the Metaverse development pipeline and fill the gap in existing Metaverse surveys. To address these issues, we present in this survey a novel multi-layered pipeline ecosystem composed of (1) the Metaverse computing, networking, communications and hardware infrastructure, (2) environment digitization, and (3) user interactions. For every layer, we discuss the components that detail the steps of its development. Also, for each of these components, we examine the impact of a set of enabling technologies and empowering domains (e.g., Artificial Intelligence, Security & Privacy, Blockchain, Business, Ethics, and Social) on its advancement. In addition, we explain the importance of these technologies to support decentralization, interoperability, user experiences, interactions, and monetization. Our presented study highlights the existing challenges for each component, followed by research directions and potential solutions. To the best of our knowledge, this survey is the most comprehensive and allows users, scholars, and entrepreneurs to get an in-depth understanding of the Metaverse ecosystem to find their opportunities and potentials for contribution.
Metaverse, Augmented Reality, Virtual Reality, Mixed Reality, AI, Networking, Communications, Edge Computing, Security, Privacy, Blockchain, Digital Twins, Avatars, Rendering, 3D Modeling, User-to-User and User-to-Business Interactions.
## 1 Introduction
The Second life is the expression used to describe what the Metaverse has to offer. The Metaverse is a shared virtual space benefiting from the new waves of technologies to gather all the aspects of the physical world into virtual entities and bring them to life [1]. It is the common space for combining all the attempts to empower virtualism, where a user, represented as an avatar, can create homes or other spaces and interact with the virtual environment and other users [2]. A verse, short for universe, refers to a virtual environment created by technologies such as virtual games or simulations. The Metaverse was first mentioned in a novel in 1992 [3], where it referred to a virtual world or verse for people to escape to. Due to the advancements in Virtual Reality (VR) technology, computer vision, computing power, networking infrastructure, and communications, the term Metaverse got resurrected [4]. With a focus on social connections and experiences, the Metaverse is empowered by the new generation of virtual and augmented reality (VR/AR) enabled through wearable devices [5]. Although people are already familiar with the concept of the virtual world through some applications, the Metaverse is offering instead a shared virtual space with a combined set of applications and advanced technologies in one place. The Metaverse can benefit from the Digital Twin (DT) concept [6], where a virtual replica of the physical world is constructed
and manipulated by reading data from sensors or Internet of Things (IoT) devices. Using DT, it is possible to understand and optimize the complexity of a physical entity.
During the recent COVID-19 pandemic, many things could have been easier if the Metaverse had been accessible to everyone [7]. Using an avatar, a user participates in different activities that mirror their preference to do things without leaving the room. Examples of practical Metaverse applications include virtual schools, teleworking in a company, or performing factory work. These environments empower higher quality education, ease of communications, and the ability to perform more complicated work in less effort and time. In Fig. 1, we illustrate a view of the Metaverse experience between three people sharing the same house but engaging in different virtual environments with various objectives. Building virtual schools in the Metaverse is promising for the future of education by designing a digital copy or DT of schools for teachers and students to join [8]. Students attend classes with instructors and colleagues virtually while being able to discover places and perform scientific experiments. This motivates students to learn and widen their capabilities through a live and engaging experience with the topic of study. Some demonstrations of use cases can be supported by visual aids with live interactions. For instance, demonstrations of virtual operations in medical schools help save time and effort for the professors and students while obtaining the needed experience [9]. In another context, a factory DT can be developed and deployed in the Metaverse using physical sensors and actuators that are bound to the factory [10]. Performing operations using the replicated factory's equipment in the virtual world is reproduced in the physical world in real-time. The verse can also be shared among employees who cannot attend the company physically and instead do their job virtually.
Despite the promising opportunities for businesses, most applications are still concepts and under development. There are many challenges to address before fully realizing the Metaverse. On the level of a single environment or entity within the Metaverse (i.e., one virtual verse), there still exist challenges in the computing and networking infrastructures that can affect the building and rendering of an environment in real-time [11]. Furthermore, security, privacy, and trust are the main concerns of the users to start accepting the idea of integration in the Metaverse [1]. In addition, different Metaverse environments can be built to resemble various user experiences depending on the interest and activity to perform. Subsequently, a user represented as an avatar should have access to the different virtual spaces and can interact with others and objects in real-time without delays. In order to address these requirements, the infrastructure should be ready with optimized architectures, sufficient resources, and advanced technologies. Synchronizing between various environments in a smart and secure manner is another major challenge. Thus, a clearly structured study is essential for the development pipeline of Metaverses while building and combining these environments into a globally distributed paradigm.
With many attempts to survey the current advancements in the Metaverse, a well-defined structure to develop, render, interact, and manage the content is still widely needed in the research community. Such a structure can highlight the main characteristics of the Metaverse, as well as the needs and challenges facing it in various sectors. Represented as a sequence or workflow of development tasks, the Metaverse can be defined as a pipeline ecosystem to identify each stage and its enabling domains. The literature comprises multiple Metaverse surveys in which some of them are addressing specific domains and others are multidisciplinary. The domain-specific surveys focus on the main enabling technologies of the Metaverse, such as Artificial Intelligence (AI) [12], Blockchain [13], [14], and wireless technologies (5G and 6G) [15]. Other work reviews the market and business advancement in the Metaverse in various countries and the applications offered [16]. Existing Metaverse multidisciplinary surveys focus on describing the concepts solely while ignoring the coherence of the enabling technologies and their challenges of deployment [17], [18]. Besides, there is no standard of structured and well-defined realization guidelines and development workflow or pipeline ecosystem for Metaverse in any of the existing surveys. Finally, the Metaverse offers opportunities for both the academic research and business communities, thus it is important to address and discuss each perspective with regards to the Metaverse pipeline ecosystem and existing challenges. All of the aforementioned problems can slow down and hinder the realization of the Metaverse.
### _Survey Contributions_
The purpose of this survey is to summarize all the efforts in a structured, well-defined, and sequential manner to allow the realization of the Metaverse by relying on experts in miscellaneous fields to engage in the process. Therefore, our main objective is to propose a Metaverse pipeline ecosystem that consists of several development stages with the integration of enabling technologies and empowering domains. Similar to the concept of any project development lifecycle, an environment should be developed after passing through several stages or a well-defined pipeline (i.e., workflow). Each part of that pipeline requires a combination of technologies that ensure the addressability of various user concerns and the delivery of the desired experiences. In this survey, we present a benchmark of a comprehensive pipeline ecosystem of Metaverse environments combined with a set of enabling technologies. Furthermore, we study what the businesses and academia offer as solutions in the current literature, and present them as test cases with state-of-the-art Metaverse-based architectures and solutions. Therefore, this survey offers for the first time a clear and well-structured realization guidelines and pipeline ecosystem for the Metaverse. In addition, a set of enabling technologies is integrated within each component of the pipeline, providing a multi-class classification of different scientific literature work and business solutions described by functionalities, applications, limitations, and future works. With this pipeline, we address most of the disciplines that are involved in the Metaverse through a holistic view of the ecosystem, as well as the possible technologies that are used or have the potential in augmenting this workflow and addressing existing Metaverse problems. The main objectives of this survey are summarized as follows:
* Provide an in-depth, holistic view, multi-disciplinary, academic, and industry-related review of the Metaverse development ecosystem.
* Tailor this survey to offer guidelines and benefit a wide range of audience, including field experts, businesses, and Metaverse users.
* Devise a multi-layered pipeline ecosystem composed of nine main stages while integrating enabling technologies, empowering domains, and social enablers within each stage.
* Present a multi-class classification of various literature works by reviewing the impact of each enabler on its development.
* Identify new opportunities in the Metaverse within each component by deducing challenges and existing issues that are linked to research directions.
We believe that this survey will constitute a major step towards standardizing the pipeline ecosystem and the integration of various enabling and advanced technologies at each stage for realizing the Metaverse. It will also offer opportunities, well-defined guidelines, and taxonomies, for investors, business owners, and researchers in order to find their fitting spot within the booming diverse Metaverse disciplines.
### _Survey Methodology and Organization_
In this survey, we collect the most recent surveys and reviews about the Metaverse in the past few years and study their contributions and limitations. In addition, most of the scientific papers that contribute to the Metaverse since 2018 are studied. Using the Metaverse keyword and the set of enabling technologies, we locate the articles using Google Scholar and Scopus. This survey contains hundreds of scientific articles that are grouped and classified depending on the contributions, disciplines, and use cases applied. Furthermore, we include a list of businesses and study their solutions offered for the Metaverse market as well as the technologies utilized. The methodology used in this survey is to identify the key enabling technologies and study their integration with a Metaverse pipeline ecosystem. This pipeline is sequentially ordered and organized as (1) Infrastructure, (2) Environment Digitization, and (3) User Interactions. The Infrastructure is composed of the hardware and equipment, frameworks, and platforms. Environment Digitization consists of avatar modeling, environmental rendering, and sessions. Furthermore, interactions from the user perspective are user-to-user, user-to-objects, and user-to-business in the Metaverse. For each layer and component, we devise a multi-layer classification of the existing literature and business efforts. Furthermore, we study the list of existing issues and challenges behind each component and outline the potential future research directions and solutions.
### _Survey Outline_
The remainder of this survey is organized as follows. Section II presents the comparative studies with the list of existing literature surveys. The Metaverse trends and projects are illustrated and analyzed in Section III. Section IV describes the Metaverse foundation through timelines of enabling technologies. Afterward, we present a novel Metaverse pipeline ecosystem in Section V. Then, we elaborate on the main three layers of the Metaverse foundation, i.e. Infrastructure, Environment Digitization, and User Interactions, in Sections VI, VII, and VIII, respectively. Lists of challenges and future directions extracted from the discussions and analysis of each Metaverse component are presented in Section IX. Finally, we conclude the survey in Section X. A detailed organization of our survey is depicted in Figure 2.
## 2 Comparative Study
In this section, we provide a review of the current Metaverse surveys. We highlight their points of discussion and analysis
Fig. 1: Multi-Disciplinary Metaverse Use-cases
in relation with the necessary Metaverse components of the development pipeline from a user or entrepreneur perspective. The Metaverse gained wide attention from its academic aspect since it became widespread. In terms of surveys, tens of diverse works were published in the last couple of years to summarize the recent advancements in this topic. Such surveys were addressing it from the perspective of various enabling technologies. Therefore, a comparison of surveys that do not address the same technologies cannot be conducted. In this context, we employ several metrics relevant to the Metaverse components and their enabling technologies. We rely on evaluating the literature work by referring to a double-edged metric system that can reveal the direction and focal points of these surveys. To further demonstrate the points of evaluation, we anticipate the following Metaverse components: (1) Hardware and Equipment, (2) Frameworks, Libraries, and Platforms, (3) Avatar and Object Modeling, (4) Environment Rendering, (5) Sessions and User Authentication, (6) User to User Interaction, (7) User to Business Interaction, and (8) User to Objects Interaction. In terms of relevant topics and enabling technologies, we use the following list to distinguish the contribution of the surveys: (A) Artificial intelligence, (B) Blockchain, (C) Networking, (D) Computing, (E) Business, (F) Privacy & Security, (G) Ethics, and (H) Sociopsychological aspect. The list presents the backbone of the Metaverse in terms of enabling technologies as well as social enablers. From this list, AI and Blockchain are the two pillars for achieving intelligence, security, transparency, and distributed storage. Furthermore, the Computing and Networking enablers form the Metaverse infrastructure to support and host the applications and technologies. In addition, the Business enabler forms a container of companies and products in the Markets supporting and offering solutions for the Metaverse. Moreover, the Privacy & Security are the crucial factors for the Metaverse success by protecting personal information, digital assets, and online reputation. Besides, social enablers are the Ethical and Sociopsychological implications behind developing and using the Metaverse. We adopt these components in our evaluation to demonstrate the Metaverse technologies and enablers for users and entreprene who are interested in particular topics. After identifying the comparison metrics (technologies and social enablers), we carefully study each relevant Metaverse survey in the literature and highlight their focal points and depth. Figure 3 shows in-depth feature extractions of the Metaverse surveys. Generally speaking, surveys address the Metaverse by summarizing and analyzing the literature works of how an enabling technology was able to assist/serve a certain component. To distinguish the surveys' level of contribution, we refer to the depth of discussion by pinpointing with sized circles None (_no circle_ ). Low (_small circle_ o), Medium (_medium circle_ 0), or High (_big circle_ O) to each paper regarding each combination of technology and component, as can be noticed in the figure. For instance, the work in [19] offers a Metaverse discussion regarding its networking aspect, and how Blockchain can aid in its development. From a macro point of view, they decently address the Hardware and equipment component, in addition to the Rendering and User-to-objects interactions in terms of multiple enabling technologies. 
Furthermore, other components, such as avatar modeling, authentication, and platforms, are slightly discussed. Similarly, [11] focuses on communications and networking in Metaverse. Precisely, their discussion comprises sensors, DT, edge computing, and Blockchain. Most of the provided information is limited to the authors' opinions. Nevertheless, reflecting back on our evaluation mechanism, their main contribution remains limited. From a Blockchain perspective, the survey in [13] provides a thorough discussion on Metaverse and the impact of Blockchain on other enabling technologies, including AI and the concept of DT. The discussion provides a deeper technical flavor than other surveys, however, the overall Metaverse ecosystem is not explored in their work. In parallel, a survey on security and privacy in Metaverse was formulated in [1]. The authors focus on detailing security threats in different domains and their countermeasures in the Metaverse paradigm. Nonetheless, their mature contribution was limited to the privacy and security aspects only, while the enablers of Metaverse were somehow ignored. A recent survey [20] approaches the Metaverse from a computational arts perspective to describe novel artworks in blended virtual-physical realities. The paper neglects however many important fundamentals of the Metaverse, thus, it does not qualify as an instructive survey about its enablers and advanced technologies. The survey in [7] examines the practices and ethics of the Metaverse while emphasizing its privacy and sociopsychological aspects of certain components. Nevertheless, their approach has no clear study of the technologies behind Metaverse. AI was
Fig. 2: Survey Outline
also addressed in several Metaverse surveys. To name a few, [14, 15], and [12] are reviewing the AI vision in Metaverse, with a prominent flavor. In [14], the authors tackled how Blockchain and AI fuse with the Metaverse. Their work considers many recent advancements in the literature. Nevertheless, they do not address other important aspects and technologies, as summarized in Figure 3. Secondly, the survey in [15] focuses on the role of AI in realizing the Metaverse while considering the networking requirements. However, they lack the mature AI discussion for components other than the platform and hardware, in addition to not bearing in mind the privacy and security aspects. The work in [12] provides an overall review of AI inside Metaverse. Although their AI discussion is covering many aspects of the Metaverse, the information is not deep enough for a reader to comprehend how the Metaverse functions. From a multidisciplinary point of view, there are other diverse Metaverse surveys approaching Metaverse, among which stand out the following works: [18, 17, 16]. These three surveys review the state-of-the-art technologies of the Metaverse. Nonetheless, the one thing that should be highlighted about these works is also their discontinued flow of information and the ambiguity to connect the Metaverse components into one complete and structured ecosystem due to addressing each component individually.
In summary, the current surveys in the literature lack a coherent architecture for identifying its components and the role of enabling technologies for each particular component. Hence, they are still limited in terms of contribution and are not consistent with the Metaverse workflow. Therefore, a refined pipeline ecosystem for defining the components of the Metaverse should be devised and discussed in order to
Fig. 3: Literature surveys’ focus of metaverse: the contribution level of each survey in certain domains
ease its realization and bring experts from multidisciplinary backgrounds.
## 3 Metaverse Trends & Projects in Academia and Industry: Statistical Analysis
In this section, we highlight and discuss the trend surrounding the hype as well as the rise of Metaverse. In the past couple of years, the Metaverse trend has taken an acute upward spiral. Several factors and enabling technologies have contributed to its rise, such as (1) Faster and cheaper computing, (2) better graphics, and (3) faster internet connectivity. However, with all trendy technologies, the future of Metaverse will be decided by the amount of disruption it will cause in mainstream domains, the newer arenas and paths it will pave, as well as the number of use-cases it will facilitate. The ultimate question for any up-and-coming technology remains: Will it improve the Quality of Life (QoL) of its users?
### _Market Growth & Trend Increase_
The Metaverse has seen tremendous market growth and trend increase in both industry and academic circles. It is worth noting that several of the industry leaders, who work largely on Metaverse-related projects, rely on academia. Some, even have their own academic research departments internally. A good example of this is Meta's internal research department [21] where it conducts academic research, as well as hosts Ph.D. programs such as 'Meta Research Ph.D. Fellowship Program'.
With more people throughout the world getting to know about the Metaverse and getting acquainted with its details and potential, it is expected that the trends regarding the Metaverse will keep on increasing. The marketing research and consulting firm Ipsos conducted research for the World Economic Forum [22] spanning across 29 countries in May 2022 and came up with the following. On average, 80% of the population knew what virtual reality was, 52% knew about the Metaverse (Figure 4), and 50% had positive feelings about dealing with and using extended reality in their everyday lives. The same research asked the participants about their perceptions of how the Metaverse and its applications will impact people's lives, and more than 50% believed that different Metaverse applications will change people's lives in the next 10 years. Furthermore, the research highlighted the percentage of people's beliefs about different domains in which the Metaverse will play a significant role. Virtual learning had the biggest result while virtual travel and tourism had the lowest perception rate. We show the results from the research for all the domains in Figure 5. From the statistics shown in this figure, we can realize that the vision for the Metaverse is multidisciplinary, which brings more complexity and challenges to meet the users' expectations. Specifically, each of these domains requires the use of different technologies to handle, render, and offer the needed experiences.
#### 3.1.1 Trend Increase in Industry
Despite the fact that the notion of Metaverse has been around for 30 years, it was only in the last 10 years that it started getting commercial attention. This reached its peak in October 2021, when one of the behemoths of modern social media companies, Facebook, announced that it would rebrand as Meta and focus on Metaverse-related applications. With such a big announcement from a company that has more than 2.8 billion users, people were curious to know what the Metaverse was really about. This can be easily seen in Figure 6, where over the past 5 years, interest in and searches for the Metaverse ranked close to 0, until the score suddenly jumped to 100 (the highest possible ranking) upon the news of Facebook's new strategy.
This does not mean that the Metaverse did not exist
Fig. 4: Metaverse Familiarity Throughout the World in 2022 [22]
Fig. 5: Human Perception of Metaverse Uses in Different Domains, May 2022 [22]
Fig. 6: Google trends for the keyword Metaverse search between years 2017 and 2022
before, or that major corporations did not take it seriously. On the contrary, the notion of Metaverse has been around for a while, especially on the gaming front. As a matter of fact, the gaming sector has been the front face of the Metaverse for a while, and its initial rise mainly came from games [12]. Games and platforms such as Fortnite, Minecraft, and Roblox existed way before the term Metaverse was commercialized. Furthermore, Fortnite was one of the first platforms that held a virtual concert in its gaming sphere, where DJ Marshmello performed a DJ-ing set in early 2019. Other platforms followed where major artists performed virtually in different Metaverse platforms put forward by different companies [23].
Given Facebook's nature as a social platform, their goal was to create social universes through 'Meta Horizons', which users can join using special headsets. These headsets are produced by the company and allow users to hang out with other users, meet new people, play games with each other, and participate in events. Meta's new strategy also attempts to change what a normal day at the office looks like. A more corporate and business aspect of their vision is geared towards a service dubbed 'Meta Horizons Workrooms'. Another tech giant, Microsoft, which has already been in the Metaverse loop for a while with its very own HoloLens mixed reality gear and its 'Microsoft Mesh' platform for virtual reality, purchased a very large gaming company named Activision Blizzard, which is the owner and developer of several successful games such as Call of Duty, World of Warcraft, and Guitar Hero. Microsoft's purchase was a big message to the entire industry that big firms were now spending millions of dollars and betting on the Metaverse's future. Microsoft's CEO's message was clear regarding the acquisition and the position of one of the biggest software companies; he has been quoted saying 'Metaverse is essentially about creating games.'
To become a major player in the Metaverse space, an entity needs to be highly technical, engineering-oriented, and have interdisciplinary knowledge of the technological enablers behind it. Expertise and know-how in multiple domains, such as physics, hardware engineering, software engineering, graphics, and AI, are required. To overcome such hurdles and make the Metaverse accessible to the general public with no technical knowledge, companies such as bit.country 1, Propel 2, Touchcast 3, and Metaversebooks 4 have started to provide services in the form of Metaverse-as-a-Service (MaaS), where users do not need to worry about deployment and technical arrangements to own their Metaverse, but only about content and moderation. These services include Metaverse engine deployment, hosting services for Metaverse needs, as well as technical support. We can consider such a service analogous to the cloud services offered by cloud providers, where companies no longer host or develop their own software or run their own servers. Instead, they subscribe to and utilize the services of SaaS and IaaS providers. From an economic point of view, MaaS can save large amounts of money for companies and entities that do not possess the digital expertise to develop their own Metaverses. It also allows smaller and mid-sized firms to enter the scene quickly.
Footnote 1: [https://bit.country/](https://bit.country/)
Footnote 2: [https://propel.xyz/maas](https://propel.xyz/maas)
Footnote 3: [https://touchcast.com/](https://touchcast.com/)
Footnote 4: [https://metaversebooks.com/](https://metaversebooks.com/)
In addition, with the advent of decentralization and Web 3.0, we are seeing more Metaverse-oriented systems and applications that work hand in hand with other Web 3.0-based enabling technologies such as blockchains [24]. For example, major commercial Metaverse platforms such as Decentraland 5 and The Sandbox 6 facilitate the purchase of virtual land, alongside unique virtual items such as paintings and wearables on their platforms, and offer their own tokens as a means for financial transactions and purchases [25].
Footnote 5: [https://decentraland.org/](https://decentraland.org/)
Footnote 6: [https://www.sandbox.game/en/](https://www.sandbox.game/en/)
The financial and media company Bloomberg expects the potential market of the Metaverse to reach around $800 billion by 2024, with online gaming as well as AR and VR hardware production taking around 50% of the market share. The rest of the market share will be distributed between entertainment, ads, live events, concerts, films, sports, and social media applications. This is a big surge from 2020, when the total market for Metaverse-related applications was around $400 billion [26].
#### 3.1.2 Trend Increase in Academia & Research
The trend for the Metaverse in academia has increased remarkably too. A quick search for the keyword 'Metaverse' in Google Scholar returns around 22,900 results (November 2022). What is worth noting is that the last couple of years saw a tremendous increase in the publication of academic works pertaining to the Metaverse topic. We searched for academic publications that contain the keyword 'Metaverse' on Google Scholar 7 in 5-year increments starting from the year 1990 and showcase our findings in Figure 7.
Footnote 7: [https://scholar.google.com/](https://scholar.google.com/)
We can notice from the graph in Figure 7 that in only the past couple of years, i.e., 2020 and onward, around 7,400 academic works were published, while in the 5 years before, this number was only around half of that, roughly 3,480 publications. Such a spike in the number of publications highlights the amount of interest the topic of the Metaverse has generated in academia. Furthermore, while searching for the keywords 'virtual reality', which is a major aspect of the Metaverse and considered one of its
Fig. 7: Google Scholar results for Metaverse keyword publications starting 1990
core components, we were able to locate more than 3,000,000 academic works, with 50,000+ works published in 2022 alone. As industry interest in the Metaverse increases, we believe that academia will follow suit and we will witness more publications around the topic.
In parallel, we are also witnessing a rise in academic conferences that target the Metaverse. While in the past, virtual reality or Metaverse-related conferences took place under the umbrella of other conferences, these days we can see the term 'Metaverse' being used in the name of the conference, and the entire conference targets the Metaverse and its related applications. Some of these conferences are organized by subdivisions of well-known academic bodies such as the IEEE (Institute of Electrical and Electronics Engineers), which is organizing the 'IEEE-TLI-Metaverse 2023: IEEE Special Issue on Metaverse and the Future of Education'8, dedicated to the topic of education in the realm of the Metaverse, as well as the 'International Conference on Metaverse Computing, Networking, and Applications'9, and the IEEE Metaverse Congress 10. Such conferences naturally produce several Metaverse-related publications, which further fuel research in academia, where researchers build on the work done by others.
Footnote 8: [https://ieee-edusociety.org/ieee-special-issue-metaverse-and-future-education](https://ieee-edusociety.org/ieee-special-issue-metaverse-and-future-education)
Footnote 9: [https://www.ieee-metacom.org/2023/](https://www.ieee-metacom.org/2023/)
Footnote 10: [https://engagestandards.ieee.org/IEEE-Metaverse-Congress-Series.html](https://engagestandards.ieee.org/IEEE-Metaverse-Congress-Series.html)
In addition to academic conferences, the number of industry, or industry-academia hybrid, conferences is also on the rise. Conferences that gather Metaverse industry experts and speakers are happening more frequently across the globe. For instance, The Economist is organizing a Metaverse Summit 11 as part of its Economist Impact initiative with more than 100 guest speakers. The event is happening both in person and virtually to secure a large number of attendees. Other events include the 'Global Metaverse Carnival' 12, 'Metaverse Global Congress' 13, and 'Augmented Enterprise Summit' 14. Such conferences are used as a bridging tool to connect different stakeholders, including industry experts, entrepreneurs, engineers, researchers, and academics, around the topics of the Metaverse.
Footnote 11: [https://events.economist.com/metaverse/](https://events.economist.com/metaverse/)
Footnote 13: [https://metaverse-club.net/](https://metaverse-club.net/)
Footnote 14: [https://www.sensorsconverge.com/](https://www.sensorsconverge.com/)
### _Metaverse Projects_
Companies, organizations, start-ups, and even cities have started to apply the Metaverse in several different domains by coming up with various projects and applications. In general, we have started seeing projects revolving around Metaverse to be focused in the following several domains: gaming, industrial applications, entertainment, art (museums, shows, galleries), finance (DeFi), e-commerce, smart cities, real-estate, healthcare, manufacturing, and education.
Current top leading Metaverse projects in the consumer space are usually environments that aggregate several domains and applications under one roof. These include gaming, socializing, decentralized finance, entertainment, and ownership. For example, Metaverse platforms like Axie Infinity, Decentraland, and Roblox make it possible for the user to purchase and own properties and land, and even rent them to others through tokens that run on blockchain technology. Users can also purchase and trade memorabilia such as paintings or wearables as NFTs (Non-Fungible Tokens), or even fantasy characters, through smart contracts and hold on to them or resell them later for a higher value. Concerts and art exhibitions are also possible in these environments. Such games also include immersive experiences; however, the majority of them at the moment can be played without any special AR or VR equipment.
The Metaverse has also found a place in the B2B space, where companies are investing heavily. Uses of the Metaverse in the B2B spectrum include training, product design, HR-related tasks such as employee hiring and onboarding processes, virtual guidance for hands-on labor work, as well as corporate gatherings and meeting events. The professional services company Accenture 15 has created its own Metaverse, called the Nth Floor, for onboarding new employees as well as for corporate events and team-building meetings. Nokia and Gartner joined forces and conducted a survey [27] about the most compelling use cases for B2B-related Metaverse applications, and the domain of training led the poll. The results of their survey are highlighted in Figure 8. VR-based training is becoming more practical since it can bring many people together from different geographical locations for training sessions while cutting down traveling time and expenses. The advantage of such training sessions using different kinds of augmented and virtual reality systems and gear is that, in addition to the regular training that would have happened in a 2D environment, participants can engage in hands-on, technical, multi-step training. Such training is safer, interactive, and can be more engaging, which makes it more fun. In addition, training can take place in difficult and technically challenging environments that are hard to recreate, such as rescue and fire-fighting missions. Companies such as Nokia and Walmart are already relying on augmented and virtual reality-based approaches for their staff training. Nokia developed an internal training platform based on augmented reality during the Covid19 pandemic, called Nokia Learning Space, to train
Fig. 8: Metaverse B2B Use Cases Survey Result by Nokia (extracted from [27])
staff on how to deploy Nokia equipment. Participants even received certifications at the end of the training session [27]. On the other hand, Walmart has purchased around 17,000 Oculus Go VR headsets to train around 150,000 employees in customer service and management [28]. Other industry leaders are also making extensive use of augmented and virtual reality. The engineering firm Bosch has developed its own system known as the Common Augmented Reality Platform (CAP), which allows quickly preparing and creating virtual content in augmented reality. Bosch claims that CAP applications save on average 15% of the time on tasks that involve repairs [29].
Another major domain where the Metaverse is being used in several applications is the health industry. A big use is still taking place in the training aspect of healthcare: 2D images of body parts are converted into 3D floating objects and used for training purposes to prepare better surgeons without risking real lives during a training session. Companies like Enhatch 16 have started to provide services known as Intelligent Surgeries, where a combination of enabling technologies such as robotics, navigation, and augmented reality makes surgeries more cost-effective and faster. Another major event where virtual reality was a core element in a healthcare procedure was the preparation for a complex conjoined-twin separation operation. The doctors and medical team taking part in the operation spent months using virtual reality projections of the twins generated from MRI and CT scans. Surgeons from different countries, using VR headsets, worked as if they were in the same room. The result was a successful operation that the surgeons attribute to the assistance of the VR [30].
Footnote 16: [https://www.enhatch.com/](https://www.enhatch.com/)
Cities have also entered the race toward the Metaverse. The city of Seoul in South Korea has been developing a Metaverse platform [31], making it the first municipal government to do so. Dubbed Metaverse Seoul, the platform will be built in stages over the course of the next couple of years. It will be used for tasks currently conducted by the city, including economic policies as well as civil complaints. The Seoul Fintech Lab will be reproduced in the Metaverse to assist in the economic domain. This will also help companies attract foreign investment. The city believes this domain will be rejuvenated with the help of the Metaverse after it shrank due to the Covid19 pandemic. The educational sector will also find a large place in Seoul's Metaverse plans, as it will include a virtual campus of the Seoul Open University, where immersive courses, lectures, and programs will be conducted. Tourist locations and landmarks will also be converted into virtual spots that people will be able to visit; tourists will be able to hop on virtual buses and tour the city. Seoul's most attractive concerts and festivals will also be part of the Metaverse. Parts of the public services for citizens will also be reproduced in the Metaverse, including consultancy, civil complaints, and reservation of public facilities. Furthermore, a Metaverse version of the mayor's office will be created and used as a communication platform between the residents and the city's representatives. The Metaverse will play a big role in urban management too: VR, AR, and XR technologies will be used to accommodate people with disabilities for their convenience and safety. Finally, the Seoul Metaverse will be big on events, conferences, and working spaces. It will be used as the main communication channel for events, and it will feature remote working environments. The city of Seoul is keen on making the Metaverse project become a reality. This shows how the Metaverse can be used by large cities to facilitate different aspects of their citizens' lives and increase their quality of life (QoL). Other than Seoul, several other cities such as Dubai [32], as well as several Chinese cities and provinces such as Shanghai, Zhejiang, Anhui, and Wuhan, have shown interest in having virtual versions of their cities in the Metaverse space. Although these cities do not have concrete timelines for their Metaverse ambitions at the time of writing, they are positioning themselves for such advancements in the future.
Projects have also been launched to govern the Metaverse and make it more interoperable and safe. Such an initiative was proposed during the World Economic Forum's yearly meeting in Davos [33]. Dubbed 'Defining and Building the Metaverse', the initiative will provide assistance in creating an ethical and inclusive Metaverse in which organizations across the public and private sectors, spanning civil society, academia, business, and regulators, can partake. The initiative will mainly focus on two areas. The first is the governance of the Metaverse; this part of the initiative will focus on how different technologies can be used to create environments in the Metaverse in a secure, safe, inclusive, and interoperable manner. The second area is value creation from the Metaverse, as well as identifying the risks and incentives that different stakeholders, such as individuals, businesses, and society in general, will face as the Metaverse progresses. The initiative will also highlight how the Metaverse can transform industries, how it can disrupt current value chains, and how it can aid in the creation of new assets and the protection of their rights accordingly. More than 100 entities such as Meta, Microsoft, Lego, Sony, Walmart, Stanford University, and NYU, across different industries and academia, have committed to partnering with the World Economic Forum on this initiative. Other initiatives, such as the Metaverse Standards Forum 17, are trying to unite different stakeholders, companies, and organizations in the realm of the Metaverse to make it more open and interoperable. The Forum itself will not create standards; however, it will provide resources and coordinate and consult on the creation of standards with the help of other standards organizations working toward a better and open Metaverse.
Footnote 17: [https://metaverse-standards.org/](https://metaverse-standards.org/)
As stated, art has also been finding a big role in the Metaverse, especially from the perspective of NFTs, which are digitally created pieces of art sold through blockchain-enabled technologies. This has led to several so-called mini-museums being created inside well-established Metaverse platforms such as Decentraland. Others have created their own Metaverse platforms that are purely dedicated to becoming digital museums, such as Musee Dezenl 18. This virtual museum is considered to be the first decentralized museum in the world. People can purchase
or rent frame space in the museum to exhibit and sell their work. The prominent auction house Sotheby's also launched its own Metaverse 19, which serves as a destination for NFT-based art sales. Furthermore, traditional brick-and-mortar museums are shifting to digital versions to regrow sales, which have been in decline due to Covid19 restrictions on one side, and to keep up with emerging trends such as AR and VR on the other. In fact, many museums are building their own AR and VR applications, or collaborating with existing ones, to offer their art in virtual or augmented reality. For example, the famous Louvre museum, which hosts Leonardo Da Vinci's Mona Lisa painting, has released an app in which you can enjoy the Mona Lisa in 360 degrees [34].
Footnote 19: [https://metaverse.sothebys.com/](https://metaverse.sothebys.com/)
A very important aspect for companies is to drive further sales of their products; promoting user purchases is the ultimate goal for such companies. With the emergence of the Metaverse, more companies are utilizing it to provide immersive shopping experiences and promote their products. With the advent of online selling and its evolution into live commerce, in which items are showcased via video broadcasts and presented using chat for potential buyers, the Metaverse is playing more of a complementary role, combining the advantages of live commerce with the immersive nature of its experiences. In this regard, the authors of [35] present a business model on how to incorporate the Metaverse with live commerce to provide a better purchasing experience for users. Shoppers can immerse themselves in the Metaverse and see items in 3-dimensional space from every possible angle. They can also try and test a product virtually on different surfaces and in different possible combinations with other products. Such advantages can further assist purchasers in making an informed decision before buying an item. Based on these, companies are developing newer applications and services for their potential customers to further grow their sales and provide better shopping experiences.
## 4 Advancement of Enabling Technologies
This section introduces and explores the technological developments that gave rise to the Metaverse. The Metaverse development workflow, or pipeline, employs these enabling technologies to augment the user experience. As presented in Figure 9, the most recent Metaverse developments have been made possible by technologies including AI, IoT, AR, VR, Blockchain, Three-Dimensional Modeling, and Edge Computing. It is critical to understand the history of the technologies that gave us the right tools and software to build the Metaverse. Each supporting technology's timeline and an overview are presented in this section and illustrated in Figure 9. For easier tracking, major technologies and sub-components are highlighted in bold.
### _Artificial Intelligence_
At a conference at Dartmouth College in 1956, the term "Artificial Intelligence" was formally used for the first time. Researchers then began contributing to the simulation of intelligent human activities using machines by learning from human behavior [36]. AI is a synthesis of numerous academic disciplines, leading to a wide range of internal sub-fields, components, and applications. With a more than 70-year history, AI has undergone a prolonged development process that produced numerous smart technologies, which can be employed to construct the Metaverse. AI and its approaches are heavily used in the Metaverse, and its development can be broken down into various parts, which are highlighted in the following paragraphs.
**Machine learning** (ML) [37] is a trendy name we hear everywhere. Businesses gain competitive and profitable advantages in markets by adopting ML algorithms. Many mathematicians in the 18th century developed and employed statistical and probability theories, such as the Bayes theorem, which is still in use today. The Turing test was introduced later, in 1950: researchers sought to establish whether a machine can think by demonstrating that it can respond to a query and convince a human. The term "perceptron" was initially used in 1957 to describe a method of creating a **neural network** [38] using a rotating resistor. Parallel Computing was originally utilized in 1986; as processing power increased, it aided in the adoption of Neural Network models that can handle large datasets. In the 1990s, Support-Vector Machines (SVMs) and Recurrent Neural Networks (RNNs) became popular. Afterward, **Deep Learning** (DL) [39] was first presented to describe new techniques in 2006. DL can offer many services to the Metaverse, such as Transformers, which help machines work with natural language. In the 2000s, kernel methods and Unsupervised Learning became widespread. In addition, ML algorithms and recent technological breakthroughs have made it possible for **Image Classification** [40] and **Computer Vision** (CV) [41] approaches to solve problems. Google began to assist in testing **Robot** cars on roadways after MIT initially presented a facial identification framework in 2001. Modern government and business sectors make extensive and sophisticated use of CV, which plays a significant role in improving humans' ability to interact with the virtual world and in gesture recognition in the Metaverse. Moreover, people began to see the value of employing a machine to translate across languages after World War II. **Natural Language Processing** (NLP) [42] has been studied and improved since this period to address more challenging issues like topic discovery and modeling, sentiment analysis, text-to-speech and speech-to-text conversion, and machine translation. In the beginning, NLP used a technique called "Bag-of-Words" to count the occurrences of each word. Following that, the "Word2Vec" and "FastText" algorithms were used. Alongside these breakthroughs, publications such as the Transformer model (2017) and "XLNet" (2019) were popular among scholars. Such techniques aid in the development of speech recognition, allowing users to voice-navigate the Metaverse. This can enhance machine understanding, creative collaboration, and realism in AI storytelling, among other things. With such advancements, **Expert systems** [43], which mimic human decision-making abilities and seek to resolve complicated problems using knowledge-based reasoning, started to be widely used to solve complex problems, especially together with **Fuzzy logic**, which is a method for representing and manipulating ambiguous information by evaluating how likely
the hypothesis is to be true.
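As an illustration of the simplest of the NLP techniques mentioned above, the following is a minimal Bag-of-Words sketch in Python; the sample sentences and the vocabulary handling are our own illustrative choices, not taken from the surveyed works.

```python
from collections import Counter

def bag_of_words(documents):
    """Build a shared vocabulary and represent each document
    as a vector of word counts (the Bag-of-Words model)."""
    tokenized = [doc.lower().split() for doc in documents]
    vocabulary = sorted({word for doc in tokenized for word in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts[word] for word in vocabulary])
    return vocabulary, vectors

# Illustrative documents (hypothetical examples)
docs = ["users explore the metaverse", "users build avatars in the metaverse"]
vocab, vecs = bag_of_words(docs)
print(vocab)  # shared word list
print(vecs)   # one count vector per document
```

Later techniques such as Word2Vec replace these sparse count vectors with dense learned embeddings, but the counting step above captures the starting point described in the text.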
To complete a task, AI relies on 5 key elements. (1) **Learning**, which consists of rote, unsupervised, and supervised learning. (2) **Reasoning**, often known as logic, can be either deductive or inductive. There are special purpose methods and general purpose approaches for (3) **problem-solving**. (4) **Perception**, which provides the AI agent with information about its surroundings, such as sensors and cameras. (5) **Knowledge representation** transfers the incoming information from sensors to a standard format to process. NLP is an example that translates the input language into a legitimate format to be processed.
As these techniques developed, applications of AI began to emerge. The first robot, "UNIMATE," which replaced humans on an assembly line in the industrial sphere in 1961, was one such application. A chess machine player from IBM named "DEEP BLUE" defeated the world champion in 1997. Apple developed "Siri" in 2011, and today we have "Alexa" by Amazon and many more sophisticated and intelligent AI applications that assisted in creating some Metaverse environments and continue to aid in improving upcoming Metaverses.
### _Blockchain_
Blockchain is a distributed and decentralized digital ledger. It operates entirely in a decentralized manner with no need for a central authority. As a result, Blockchain-based applications and architectural designs benefit from high levels of data availability, security, and privacy [44]. In 1991, the Blockchain technology concept began when research scientists W. Scott Stornetta and Stuart Haber were developing a workable technique to maintain the backup of digital information [45]. They described the use of a chain
Fig. 9: Overview of the Metaverse Enabling Technologies
to cryptographically secure blocks in order to preserve the accuracy of earlier data. Afterward, multiple advancements and techniques were proposed under Blockchain. Due to these advances, the Blockchain has emerged as one of the fundamental pillars of the Metaverse, allowing for the use of cryptocurrencies and NFTs to establish a virtual economy that is fully operational and allows for the buying and selling of any virtual good. In this paragraph, various aspects of how Blockchain has evolved and developed are covered.
The secure construction of decentralized blocks was a top priority for academics and developers. To gather information and documents into a single block, they created the **Merkle Trees** concept in 1992. Later, **Proof-of-Work** (PoW) [46] was adopted to safeguard these blocks against network abuses. The introduction of **Reusable Proof of Work** (RPoW) and the subsequent peer-to-peer e-cash system known as **Bitcoin** [47], proposed under the pseudonym Satoshi Nakamoto, marked the beginning of the 21st century as the "golden period" for the rise of Blockchain. The PoW method, which is used to send and verify transactions among decentralized nodes, was made popular by Bitcoin [47]. Consequently, Blockchain will allow Metaverse enterprises to provide their clients with integrated services that merge their 2D and 3D digital presences, transforming how clients engage and transact in a secure way. During that period, people began to create decentralized applications, and in 2014, **Ethereum** [48] was published as a cryptocurrency and a decentralized platform that helps users in the Metaverse exchange assets and funds. Moreover, with such advancement and the use of Blockchain, **smart contracts** [49] were introduced as one of Blockchain's most significant applications in 2015. People began to adopt Blockchain applications during that time and up until 2021, which prompted major corporations like Facebook and Amazon to develop Blockchain services and currencies. To address some limitations, **Ethereum 2.0** was introduced. In 2021 and 2022, **NFTs** saw the light [50], and owning digital assets became possible and trendy. Users of the Metaverse can sell their NFTs for fiat currency at any time or exchange them for cryptocurrencies to acquire other Metaverse objects. Many software platforms, such as "OpenSea", were introduced on the road to the **Metaverse**, whose adoption was further motivated by the COVID-19 epidemic. During this period, the culture of working from home was extended and became more convenient. Recently, in September 2022, Ethereum completed the Blockchain's transition from PoW to **Proof of Stake**20.
Footnote 20: [https://ethereum.org/](https://ethereum.org/)
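To make the Proof-of-Work idea above concrete, the following is a minimal, illustrative mining loop in Python; the block fields, difficulty target, and hash choice are simplifying assumptions for exposition, not the actual parameters of Bitcoin or Ethereum.

```python
import hashlib
import json

def mine_block(data, previous_hash, difficulty=4):
    """Search for a nonce so that the block's SHA-256 hash
    starts with `difficulty` zero hex digits (toy Proof-of-Work)."""
    nonce = 0
    while True:
        block = json.dumps({"data": data,
                            "prev": previous_hash,
                            "nonce": nonce}, sort_keys=True)
        digest = hashlib.sha256(block.encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest  # proof found
        nonce += 1

# Illustrative usage: mine one toy block on top of a dummy parent hash
nonce, block_hash = mine_block("alice pays bob 1 token", "00" * 32)
print(nonce, block_hash)
```

The loop shows why PoW is costly to produce but cheap to verify: a verifier only recomputes one hash, while the miner must try many nonces on average.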
Due to its developments and dependability, Blockchain is used to create and propose new cryptocurrencies. Blockchain-based smart contracts have recently become widely used to initiate processes and execute them automatically. Blockchain technology has been adopted by banks all around the world to boost productivity and cut expenses. It is currently utilized by several game companies and by supply chain operators like Walmart to monitor some of their products with the help of IBM's cloud servers. Moreover, it nowadays serves as a main pillar in addressing efficiency and security in the Metaverse.
### _Network infrastructure_
Networks and interconnected components attracted people's interest after computers became mainstream. The year 1969 was the first milestone, when ARPANET (Advanced Research Projects Agency Network) connected the first computer to the network [51]. Onward, **General-Purpose Mainframe** computers became widely available in the middle of the 20th century, which prompted the development of **server** computing that offers services to **client** computers. Early in the 1990s, **cloud and mobile** [52] computing became popular. A **World Wide Web** (www) proposal was produced in 1989, which sparked the creation of numerous www protocols and search engines. From then on, network infrastructure saw rapid advancements, ranging from **Ethernet** cabling to the development of **WiFi** in 1997. Since the Metaverse is being constructed on top of the internet, it will need the internet and advanced protocols in order to run and offer services to users. **Messaging protocols** have steadily improved, beginning with MQTT in 1999 and moving on to AMQP and STOMP in 2005. **Inter and Intra System Protocols**, such as UART, USART, USB, I2C, SPI, and CAN, have also become popular. The **networking and cellular**21 infrastructure also joined the advancement trend with the introduction of **1G** in the 1970s to offer voice services. It was later extended to serve voice, data, and low-speed mobile web services under the **2, 2.5, and 2.75** Generations. Then, **3G and 3.5G** were released to speed up streaming, video calling, and web surfing. In 2010 we saw the arrival of **4G**, which allows users to use services at fast speeds, with high-quality HD video conferencing and global roaming. We are currently in the 5G era, which provides extraordinarily fast mobile internet and supports the Internet of Things, autonomous driving, smart cities, and intelligent healthcare systems, paving the way for the creation of the 6G network-based Metaverse to further improve the experience. Moreover, part of the solution to sustaining the Metaverse lies in **edge and fog computing**, the latter first introduced by Cisco 22 in 2012. In essence, edge and fog computing [53] improve response times, save bandwidth, and minimize latency, making them ideal for many Metaverse use cases. Additionally, they place data and applications as near to consumers as feasible, providing them with the necessary local computing capacity while reducing network-based latency and congestion risk.
Footnote 21: [https://www.rfpage.com/evolution-of-wireless-technologies](https://www.rfpage.com/evolution-of-wireless-technologies)
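As a small illustration of the lightweight messaging protocols mentioned above, the sketch below publishes a sensor reading over MQTT; it assumes the Python paho-mqtt client (1.x API) and uses a hypothetical broker address and topic name.

```python
import json
import paho.mqtt.client as mqtt  # assumes the paho-mqtt 1.x client API

BROKER = "broker.example.org"      # hypothetical broker address
TOPIC = "metaverse/sensors/temp"   # hypothetical topic name

client = mqtt.Client()
client.connect(BROKER, 1883)       # default unencrypted MQTT port
payload = json.dumps({"device": "hmd-01", "temperature_c": 36.4})
client.publish(TOPIC, payload, qos=1)  # qos=1 requests at-least-once delivery
client.disconnect()
```

In an edge or fog deployment, the broker would typically sit close to the users, so that such small state updates avoid a round trip to a distant cloud data center.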
### _3D reconstruction_
3D reconstruction is the act of recreating the form and appearance of genuine items in three dimensions. It all started in the 15th century, when the famous artist Leonardo da Vinci drew the first accurate 3D drawing on paper. James Joseph Sylvester later created the matrix mathematical notation [54] in the 19th century. This was a key building block for the creation of computer software and 3D images, which enabled the development and expansion of Metaverse applications that rely heavily on 3D rendering and visualization. Various 3D reconstruction models and programs have been created over the years and improved.
Aspects of software and methods for 3D reconstruction will be covered in this paragraph.
People have been drawn to 3D modeling for ages. With the development of resources and technology, 3D software has become more prevalent. In the early 1960s, Sketchpad, the first 3D sketching program, was proposed and released. Following that, 3D graphics companies began to emerge in 1968 to create new tools for drawing 3D shapes, which led to the emergence of depth buffering, or Z-buffering, by Ed Catmull. Later, other software appeared, including Autodesk 3D, Adobe Photoshop, Redshift, Octane, and many others. Such technology can create a 3D scene from a series of 2D photos collected from various perspectives in seconds, which motivates the idea of having better 3D objects in the Metaverse. People started by forming the model using the **Rasterisation** technique, which treated the model as a polygon mesh with information embedded in the vertices. After that, **Ray Casting** was invented to address various issues and drawbacks of the earlier method. **Ray Tracing** was later proposed to accurately simulate object shadows. As those technologies were being developed, a number of **rendering engines** [55] were proposed, including Lumion, KeyShot, Corona Renderer, and V-Ray. The speed at which the Metaverse is being built can be greatly improved due to the advancements in this discipline, where it is now possible to create 3D spaces and objects by simply taking several 2D photographs.
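To give a flavor of the ray casting idea referred to above, the following is a minimal sketch that tests whether a single ray hits a sphere; the scene, the vector math, and the naming are our own illustrative assumptions rather than any particular engine's implementation.

```python
import math

def cast_ray(origin, direction, sphere_center, sphere_radius):
    """Return the distance along the ray to the first hit with a sphere,
    or None if the ray misses (basic ray casting primitive)."""
    # Vector from the ray origin to the sphere center
    oc = [o - c for o, c in zip(origin, sphere_center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - sphere_radius ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest intersection distance
    return t if t > 0 else None

# Illustrative usage: a ray shot along +z toward a sphere 5 units away
hit = cast_ray(origin=(0, 0, 0), direction=(0, 0, 1),
               sphere_center=(0, 0, 5), sphere_radius=1.0)
print(hit)  # 4.0: the ray enters the sphere four units from the camera
```

Ray tracing extends this primitive by recursively casting further rays from the hit point toward lights and reflective surfaces, which is what makes accurate shadows possible.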
Character animation was once limited to fairly rigid movements due to resource constraints [56]. Motion capture then enabled intricate facial rigs in games, adding a whole new level of realism to the experience. Additionally, developers began to give video game characters more motions and moves. Developments around 2014 brought a considerably more realistic feel to games by using animations beyond the usual walk, run, attack, and jump actions. Today's 3D animations have advanced to the point that, in a game, for instance, the player's movement feels as fluid as it would in a movie. Additionally, major corporations like Meta are working to create 3D objects with more realistic free motions and capabilities. Applications like VR, AR, MR, and the Metaverse are now practical due to the advancements made in this discipline.
### _Extended Reality_
Extended reality (XR) is the term for the immersive technology that combines _Augmented Reality_ and _Virtual Reality_. The extension of the physical world into virtual reality is driven by the digitalization of our day-to-day needs and wants. With XR's advanced technology, people can now achieve higher degrees of immersion of their characters in the Metaverse. The term virtual reality first appeared in a science fiction story in 1930, in which the writer predicted that a pair of goggles that users could wear would allow them to experience the fictional world through smell, touch, and taste [57]. To further describe VR, we can refer to it as a 3D environment entirely generated by computer technology [58]. Users can immerse their presence inside it using attachable devices capable of generating visual and sensible scenes. Some VR apps are in the form of hand painting, in which users can use their imagination to be creative, using the tips of their hands to draw and paint without the need for physical brushing and painting tools [59]. In addition, this technology allows users to interact with their surroundings through a narrative that can provide contextual information about a product or place [60]. Many other real-world applications are today benefiting from this technology, such as Beat Saber 2018 23, E-bay VR Commerce 2016 24, Quill 2016 25 and VR Chat 2014 26.
Footnote 23: [https://beatsaber.fandom.com](https://beatsaber.fandom.com)
Footnote 24: [https://www.ebay.com/](https://www.ebay.com/)
Footnote 25: [https://www.quill.org/](https://www.quill.org/)
Footnote 26: [https://help.vront.com/](https://help.vront.com/)
Footnote 27: [https://tinyurl.com/T2ARnasa](https://tinyurl.com/T2ARnasa)
Footnote 28: [https://www.safe.com/](https://www.safe.com/)
Footnote 29: [https://sketchar.io/](https://sketchar.io/)
Footnote 30: [https://www.apple.com/ca/augmented-reality/](https://www.apple.com/ca/augmented-reality/)
Footnote 31: [https://arr.google.com/ar/](https://arr.google.com/ar/)
Footnote 32: [https://pokemongloviec.com/](https://pokemongloviec.com/)
Footnote 33: [https://www.xda-developers.com/](https://www.xda-developers.com/)
In contrast to VR, augmented reality does not create a new reality; instead, it augments the physical reality to add more components to an existing reality and enrich the content we see. AR, since its existence back in 1990, has been seen in several aspects of the real world. One aspect of the usage is in location-aware geospatial computing, in which a system is used to compute and track a location of a moving target. Astronauts are using the technology to assist in repairs to space stations, such as the T2 augmented reality (T2AR) project 2016 27. Other uses include interior mapping to map and plan venues with FME AR 2020 28. Another AR application is sketching, where users can follow virtual lines to draw physical ones on a piece of paper using a smartphone application Sketchar 2017 29. In addition, AR is also used in creating emojis for users' faces on smartphone devices, as well as many other usages 30. Floating menus and icons have also taken part in augmented reality simulations, where users can interact with buttons and menus to perform some actions [61]. Other real-world examples of the usage of AR include the Google maps AR feature 2019 31, Pokemon Go 2016 32 and world lens AR translator 2010 33.
Companies like Qualcomm, Nvidia, Samsung, Intel, and AMD (1970-1995) are the major suppliers of chips and processors, providing us with different central processing units (CPU) and graphics processing units (GPU) that are critical to the performance of the virtual world. These advancements and the powerful processing units created are contributing to the creation of virtual environments. In addition to the processing units, the Voice over Internet Protocol (VoIP) was created in 1973, and the first software using this technology appeared in 1995 [64]. The adoption of VoIP enriches the communication layer connecting users throughout the Metaverse. Therefore, users can communicate over the voice channel using a microphone and audio device to speak and listen to others while in the Metaverse.
The reliability of the Metaverse is also measured by its security: identity systems that provide access only to legitimate personnel and help protect against adversaries' malicious activities. In addition, access to data storage is crucial to offer scalability and simplicity for individuals to access their digital information across the Metaverse. Furthermore, containerization and orchestration help developers build and deploy their software over multiple devices and operating systems to offer a service for all individuals. Some of the leading giants provide such services, such as VMware 34, AWS 35, Azure 36, Google cloud 37 and Alibaba cloud 38 (1999-2009). These services contribute to creating and managing the apps in the Metaverse. In addition to the infrastructural components of the Metaverse, individuals need IoT devices to connect and experience the sensations inside the virtual worlds. Extended reality IoT devices are numerous, and they are built to engage all the different senses of individuals. The user can now interact with the Metaverse through different devices, such as mobile input techniques, hand-based input, Head Mounted Displays (HMD), haptic and audio devices, brain-attachable sensors, holographic displays, and smart glasses. More details about these technologies and their usage are provided in Section 6.1.
Footnote 34: [https://www.vmware.com/](https://www.vmware.com/)
Footnote 35: [https://aws.amazon.com/](https://aws.amazon.com/)
Footnote 36: [https://azure.microsoft.com/](https://azure.microsoft.com/)
Footnote 37: [https://cloud.google.com/](https://cloud.google.com/)
Footnote 38: [https://www.alibabacloud.com/](https://www.alibabacloud.com/)
The aforementioned technological developments result in significant cooperation in creating more effective and dependable infrastructures, which serve as the main building blocks of the Metaverse creation.
## 5 Novel Metaverse Pipeline Ecosystem
In this survey, we present a unique pipeline ecosystem consisting of a stream of sequential workflows, which contributes significantly to this survey and elevates it to a wider and more comprehensive level. The proposed pipeline provides the necessary knowledge that allows users to fully grasp the Metaverse concept in a clear and direct manner and to fit it into their own perspective. The novelty of the devised pipeline is twofold. On the one hand, it lays out the building blocks behind the realization of the Metaverse concept and its translation into reality. On the other hand, it provides valuable guidelines for people from different domains (e.g., academic, industrial, and business) to find themselves within the Metaverse paradigm and allow their expertise to further enhance its foundation.
The pipeline represents the workflow sequencing between the Metaverse's connected entities. This sequential design creates a clear order for the Metaverse development process and decomposes it into phases that simplify its realization. The pipeline, illustrated in Fig. 10, comprises three layers/tiers. Starting from the bottom tier of the pipeline, the essential **Infrastructure** is set to host the Metaverse. The second tier encloses the steps for the Metaverse **Environment Digitization**. The top tier defines the **User Interactions** within the constructed environment. Such a design reflects the system's elegance while also allowing for simpler, but more efficient, system analysis.
Each layer is composed of a set of entities. The infrastructure layer is the class responsible for the collaboration between hardware and software. It contains all of the fundamental components required for Metaverse digitization and functioning. This layer is made up of three entities, i.e., **Hardware**, **XR Frameworks**, and **Platforms & Metaverse-as-a-Service (MaaS)**. Environment Digitization is the second tier, in charge of creating Metaverse worlds; it ranges from copying real-world objects to representing humans as avatars. This layer includes the following entities: **Avatar Modeling**, **Rendering**, and **Sessions**. The aforementioned tiers combined produce a Metaverse world that is ready for deployment and usage. Finally, the third tier comprises the interaction components. This layer represents the set of interactions that can happen among actors within the Metaverse. It incorporates three interactive scenarios: **User-User**, **User-Business**, and **User-Objects**.
Figure 11 presents an overview of each of the components in our multi-layered pipeline ecosystem while connecting them to some of their use cases within the Metaverse. In the following, we briefly describe each of the aforementioned entities:
* **Hardware**: a wide range of hardware and sensors can be used in the Metaverse. This component includes everything from cameras and microphones to more complex equipment like haptic feedback devices and motion controllers. The precise hardware and sensors used depend on the type of Metaverse experience being created; in general, the main objective is to create a realistic and immersive experience for the user. For this purpose, VR and AR are the two main components that tailor the Metaverse experience. VR allows users to be completely immersed in a computer-generated world, while AR brings the digital world into the real world in a more realistic way. Further, Computer Vision can be infused with the available frameworks to simulate real-world experiences inside the Metaverse or to enhance the quality of the experience. Moreover, hardware usage may differ based on the intended application. For instance, a VR game will use different hardware and sensors than a social VR platform. Such technologies help deliver a sense of realism to users.
* **XR Frameworks:** unlike the hardware entity, which provides the means to access the Metaverse, frameworks can be defined as the tools used within the Metaverse itself. In this context, XR frameworks play a major role in its development [65]. They provide a complete skeleton to build interactive solutions where users can directly engage and communicate with their environment. Furthermore, AI technologies have proven their capabilities in this domain. For instance, deep machine learning can enhance user tracking systems, improve rendering quality, and reduce bandwidth cost while also reducing latency, which is a major concern for users in the Metaverse.
* **Platforms & MaaS:** refers to the collection of technologies, software, and services that enables the realization of Metaverse projects and concepts. The Metaverse is expected to serve as a globally scaled interactive and immersive platform for individuals worldwide [66]. The new, fully decentralized version of the internet allows you to experience the replicated world in real life and vice versa. Metaverse platforms often stand for the big organizations/companies that work on financing and developing Metaverse projects to be realized [67]. These projects include Meta, Decentraland, The Sandbox, AXIE INFINITY, GALA, Enjin Coin, Metahero, and others. All the aforementioned projects serve as Metaverse key enablers by offering Metaverse-as-a-Service (MaaS) solutions.
* **Avatar Modeling:** one of the advantages of the Metaverse is the immersive experience it provides for its users to interact with each other. Thus, Metaverse individuals expect a level of realism when creating their avatars to represent themselves. In this context, Avatars can be customized to look like the person's real-world appearance, or they can be completely different depending on the platform. Multiple methods can be employed to create avatars. These methods include using photogrammetry or AI technologies. Each method has its benefits and drawbacks. For instance, photogrammetry is a popular method for creating realistic and accurate avatars. However, it can be time-consuming since it requires the average user to capture and provide all the required details manually. Furthermore, 3D software can be expensive and require prior experience to use effectively. In this regard, AI technologies offer flexibility by automatically mapping humans to shaped and skinned avatars while sacrificing some accuracy of the final results.
* **Rendering:** is the process of creating and displaying an environment in a three-dimensional representation. It gives the Metaverse a high degree of control over the environment. Rendering can be used immersively to visualize the virtual world or to simulate real-world experiences that are difficult to achieve otherwise, such as flying through space or visiting other planets. Examples include rendering a realistic representation of a location for an immersive virtual reality experience or creating a stylized environment for a specific application context. There are a variety of techniques that can be used for such tasks, which depend on the desired look and feel of the final product. For realistic environments, lighting and shading techniques can be used to create a believable atmosphere. For stylized environments, more abstract approaches may be taken to create a unique look depending on the targeted audience, such as color brightness and geometric shapes. One of the challenges in this context is the balance between realism and performance. Too much realism can be computationally expensive, while too little realism can ruin the experience. Finding the right balance is usually based on trial and error and depends on the project's specific goals. Another challenge is dealing with the vast amount of data required to create a realistic environment. This data can come from a variety of sources, such as photographs, satellite images, and on-site measurements. Organizing and
Fig. 10: Metaverse Pipeline
managing this data can be daunting, but it is necessary to create immersive and realistic environments.
* **Session Management:** sessions allow users to connect jointly to the Metaverse and explore the virtual worlds. There are a variety of session-based activities that users can participate in, ranging from simple socializing activities to more complex, cooperative, and interactive ones. In this context, session management is a crucial component that requires careful planning and design (a minimal sketch is given after this list). There are many factors to consider, such as security, performance, scalability, and resource management. For instance, managing sessions in a cluster of distributed servers might require specific considerations to allow a large number of users to interact smoothly without causing delays. Moreover, sessions also provide a way for a system to track a user's activity. The system provider can use this information for auditing and security purposes or to improve the user experience by understanding how users interact with the system.
* **User-User Interaction:** this type of interaction refers to the different means of communications between users. Users are active members and actors of the system. Everything users do in the Metaverse has a reaction and an effect elsewhere on the ecosystem. It is a fundamental guideline that designers must follow. User-user or peer-peer interaction is one pivotal aspect of the Metaverse experience. People can join the virtual world and interact with each other, similarly to our physical world. They are able to enjoy their daily digital life such as attending a virtual conference with other peers/avatars or simply chatting and gossiping together with friends and strangers. Users can also form parties to discover new areas or team up to overcome gaming challenges. In addition, boundaries and relationships come to life in the Metaverse. For instance, users can apply some restrictions like blacklisting other users or authorizing each other to enter their own properties/zones.
* **User-Business Interaction:** an interaction that represents the business services consumed by virtual users. The Metaverse is still in its infancy, but it has the potential to grow into a large and thriving market. Numerous businesses are already operating within the Metaverse, and many more are expected to enter the market in the future. Business owners supply their customers with different services, such as virtual real estate and online stores, where they can sell or purchase virtual goods. Several factors can contribute to the growth of the Metaverse economy. For instance, the Metaverse is being marketed as a substantial prospective contributor to business, entertainment, and educational advancement. Another factor is the increasing number of people who can access the Metaverse thanks to its availability, low hardware cost, and the recent improvement in the communi
Fig. 11: Metaverse Pipeline Layers and Components
cation technologies such as the presence of 6G. This accessibility is expected to increase the Metaverse's popularity, guiding more opportunities to join the market while making the Metaverse more attractive to businesses and individuals alike.
* **User-Object Interaction:** the meaning behind interacting with objects is generally to manipulate things inside the Metaverse. Objects in the Metaverse may represent a digital replica of real-world objects. Hence, these digital objects must abide by the same physical rules as they do in real life. The state of the digital objects should be modified by a user's input, such as being moved, molded, or any other events that may occur in real life to empower an immersive user experience. Digital entanglement of physical-virtual objects is also a critical part of this component. Through the latter, manufacturing, operations, and other physical experiences may take place virtually using digitally entangled IoT and enabling the concept of Digital Twins. Metaverse users can buy/own lands and assets using crypto-currency. Hence, many use cases for user-object interaction can be wrapped up in this component.
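Following up on the Session Management entity above, here is a minimal, illustrative sketch of an in-memory session registry in Python; the world names, the capacity limit, and the method names are hypothetical choices for exposition, not part of any specific Metaverse platform.

```python
import time
import uuid

class SessionManager:
    """Toy in-memory registry of user sessions in a virtual world."""

    def __init__(self, capacity=64):
        self.capacity = capacity  # max concurrent users per world (assumed limit)
        self.sessions = {}        # session_id -> session record

    def join(self, user_id, world):
        """Admit a user to a world if it is not full and return a session id."""
        if sum(s["world"] == world for s in self.sessions.values()) >= self.capacity:
            raise RuntimeError(f"world '{world}' is full")
        session_id = str(uuid.uuid4())
        self.sessions[session_id] = {"user": user_id,
                                     "world": world,
                                     "joined_at": time.time()}
        return session_id

    def leave(self, session_id):
        # Returning the record allows auditing how long the user stayed
        return self.sessions.pop(session_id, None)

# Illustrative usage
manager = SessionManager(capacity=2)
sid = manager.join("alice", "plaza")
print(manager.leave(sid))
```

In a deployed system this registry would be distributed and persisted, with authentication and rate limits, but the join/leave bookkeeping shown here is the core of what the Session Management entity tracks.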
In this layered pipeline ecosystem, each of the components is empowered by a set of enabling technologies (i.e., AI, Blockchain, Networking, and Computing) and empowering domains of the Metaverse (i.e., Privacy and Security, Business, Ethics, and Social-psychology) to realize its full potential for serving the Metaverse. A detailed taxonomy is presented in Figure 12. The taxonomy shows the components of the Metaverse, branching their relevant topics that are categorized by their appropriate technologies. Hence, in the following sections, we survey each of the pipeline components against the enabling technologies and empowering domains, including top-of-the-line academic and industrial solutions and efforts.
## 6 Metaverse Infrastructure
Hardware and equipment form the Metaverse infrastructure and the backbone for building immersive, secure, efficient, scalable, and cost-effective environments. Guaranteeing a well-designed and well-supported environment empowers the development of the additional layers in the Metaverse pipeline, including environment digitization and advanced applications for user interactions. Therefore, in this section, we survey the latest advancements for building a powerful and promising Metaverse Infrastructure, which forms the first layer in the proposed multi-layered Metaverse pipeline ecosystem. Our review includes the latest scientific papers and articles targeting one or more of the following infrastructural components: Hardware & Equipment, XR Frameworks, and Platforms & MaaS. In Figure 13, we refer to the different supporters of the Metaverse Infrastructure. Examples of such supporters include XR headsets, haptic devices, VR cameras, computing machines, cloud, edge, and fog devices, third-party services, and technological platforms.
### _Hardware & Equipment_
In this section, we focus on devices and equipment that are built to enable and facilitate the use and immersion into the Metaverse and its related applications. Different experiences in the Metaverse require different types of enabling hardware. For example, for non-immersive experiences, the Metaverse can be explored using traditional day-to-day computing devices (laptops, smartphones), software (web browsers, mobile apps), and traditional sensors (cameras, webcams, and microphones). For immersive experiences, a special type of hardware and equipment is needed that is dedicated and specially built for Metaverse-related applications. These tools provide an entrance into the realm of the Metaverse for the user.
#### 6.1.1 Categories of Metaverse Hardware
In general, Metaverse-related applications are resource hungry. This is true for networking-related resources such as bandwidth and latency, as well as for graphical and computing resources. For example, to have an avatar running inside a Metaverse, rates of at least 30 frames per second are needed to provide an enjoyable level of graphics [68]. At the same time, to avoid motion and graphical delays, which can cause dizziness and sensory disarray for the user, the software running on the device should match the hardware [69]. In the sequel, we present the main categories of hardware for serving the Metaverse, including Audiovisual, Hand-based Input, Contact Lenses & Glasses, and Wristbands. Each category is based on the type of machinery needed and the purpose it serves to support immersiveness.
#### Audiovisual
The primary sensory aspect of the Metaverse is audiovisual. At the moment, the majority of immersive Metaverse-related applications are entered using head-mounted displays (HMDs) such as the one in Figure 14a. Size, field-of-view, resolution, latency, audio quality, and battery life are some of the important factors that make one HMD device perform better than another. Although modern HMD models cater to Metaverse-related applications, the notion of HMDs was proposed long ago for different augmented-reality applications, such as two head-mounted monocular displays (HMMD) and a held-to-head binocular display (HHBD) in the domain of air-traffic control [70]. HMDs function by tracking the orientation of the head as it moves and proportionally adjusting the movement of the images and the environment inside the Metaverse. Many works in the literature highlight and discuss the notion and uses of HMDs. For example, in [71], the authors focus on the optical-engineering aspect of such devices and divide them into three categories of optical solutions, (1) macro-optics, (2) micro-optics, and (3) nano-optics, highlighting the characteristics of each. The authors of [72] provide a survey of HMDs in terms of requirements and goals to build interfaces that merge the real and virtual worlds. The authors also focus on the challenges, such as spatial, temporal, and visual realism, that make building such devices and interfaces a non-trivial process. In [73], the authors review how HMDs in VR can improve engineering education for students and prepare them for engineering careers. In [74], the authors study the practical aspects of the effectiveness, advantages, and limitations of conducting training with HMDs to advance professional skills and safety. Similarly, in [75] the authors measure the impact of VR in the domain of construction design and compare the use of HMDs against regular desktop monitors. The authors of [76] provide a practical contribution by comparing the eye-tracking latencies of currently commercially available HMDs, while the authors of [77] investigate how distance perception differs between users wearing an HMD and those without one.
By now, it is well known that the use of HMDs in immersive VR and AR can sometimes cause dizziness and headaches, known as cybersickness, which makes the experience unenjoyable for users of such head-mounts. In fact, cybersickness is one of the main reasons that drive users away from VR. The notion of cybersickness has been studied and reviewed extensively in the literature. In [78, 79], the authors review and highlight the work done on cybersickness caused by modern HMDs and its effects on the user, while the authors of [80]
Fig. 12: Taxonomy: The Metaverse Development Ecosystem
focus on cybersickness from the perspective of individual susceptibility. Finally, the authors in [81] study the effect of various virtual terrains perceived through an HMD, which could lead to cybersickness.
#### Hand-based Input
Other major hardware enablers for the Metaverse are hand-based input devices [85], [86]. As the name suggests, these devices are mainly controlled by the hands. Depending on the design and objective of the device, they can be held in the hand, such as handles or controllers (Figure 14b), or they can be based on haptic sensors and installed or worn on the hand (Figure 14c).
Haptic-based input devices add the feeling of physical presence: virtual objects can be touched and felt, and the user can sense changes happening in the virtual environment, such as moving objects, or take corresponding physical actions such as pressing a button. Such devices can facilitate several actions and interactions. For example, they can assist in (1) **navigation tasks**, such as panning, pointing, teleporting, and virtual walking. Haptic-based input devices also assist in (2) **interactions** by reproducing the exact hand-based tasks users perform in real life, such as sensing and real touch. They further aid in (3) **exploration**, allowing users to touch items in the environment and understand their nature and details, such as the shape of an object; exploration is mainly done through tactile or kinesthetic cues. They also facilitate the (4) **manipulation** of objects, such as changing the direction, orientation, or position of an object in the virtual-reality world, as well as the (5) **modification** of objects, which usually refers to altering properties other than position or direction [86] (e.g., physical properties). Furthermore, actuated haptic devices can help replicate social scenarios that elevate one's feelings and emotions through touch and interaction [87], and enable users to feel experiences similar to real life, such as tension, force, and resistance.
Hand-based haptic and sensory notions have been around for a while [88], [89], [90], [91], but with the advent of the Metaverse, more haptic-based and HMD VR concepts are being proposed to provide near real-life experiences for users in virtual-reality settings and in the Metaverse. For example, [92] proposed a novel system called 'Haptic Around', a hybrid system that can replicate multiple tactile sensations in VR. The system uses basic tools such as a hot-air blower, a fan, and a mist generator, together with lighting, to provide a fully immersive experience and mimic sensations such as desert heat or snow cold. Another idea is presented in [93], where the authors propose 'Altered Touch', a small form-factor fingertip-based haptic display that can deliver force, thermal, and tactile feedback and integrate with augmented reality. The display is then used to modify properties of real objects, such as soft, hard, hot, and cold sensations, in augmented reality. In [85] the authors propose 'Armstrong', an arm-anchored augmented 3D virtual User Interface (UI) that can be viewed through an HMD. Users can view the content of the virtual display with the HMD and interact with it, for instance by scrolling or pinching, using hand-based input devices. Research in this regard is also taking place in industry, in cooperation with universities. Meta has collaborated with Carnegie Mellon University on an open-source skin called ReSkin [94]. ReSkin can be categorized as a generic tactile skin that can mimic sources of contact and assist in haptic AI research. Producing ReSkin is quite inexpensive, and Meta is offering alongside it a much broader open-source ecosystem for the touch-processing domain, including tactile hardware, simulators, data sets, and benchmarks for the AI community to benefit from.
#### Contact Lenses & Glasses
Other types of input devices, such as AR contact lenses and glasses, allow users to bypass smartphone cameras or even an HMD to access the Metaverse [95]. Users only need to wear the lens the way they wear regular contact lenses. In addition to providing a gateway to the Metaverse, the lens also corrects the vision of its user, who can control it with simple eye movements. Another tool for augmented reality outside HMDs and hand-based sensors is smart glasses. A major advantage of these Metaverse enablers is that they move away from large form-factor head-mounted devices and provide similar services in a much smaller and more compact form factor [96]. In that sense, they are a pleasant upgrade of what users already use in their daily lives, such as contact lenses and glasses.
#### Wristbands
Furthermore, a newer category of hardware has emerged recently: wristbands. These are small devices, worn similarly to watches, that can detect
Fig. 13: Metaverse Infrastructure
and process slight finger movements and accordingly convert them into other types of inputs and commands. Such wristbands are being integrated more into AR systems where users can type without any actual physical keyboards, by just tapping on any surface, or play games without any physical controllers by just moving their fingers [97].
#### 6.1.2 A Review of Business Driven Hardware
Depending on the type of experience (i.e., AR, VR, MR), different types of gear provide different levels of accommodation. For example, immersive experiences such as attending concerts or visiting museums require VR headsets, while for online shopping and e-commerce AR experiences, other types of equipment might be more adequate. It is noteworthy that even software and web companies such as Microsoft and Meta have jumped on the bandwagon of producing Metaverse-related hardware. As time passes and more companies enter the Metaverse race, the challenge is no longer producing any Metaverse gear, but rather producing affordable and lightweight Metaverse gear that the mass market can purchase and use. Furthermore, in addition to producing hardware, companies are also trying to provide Metaverse-related environments and platforms for developers, so that apps, games, and other kinds of software can be written to drive a Metaverse app community, which will indirectly increase the usage of the produced hardware. To this end, we showcase in Table I some of the current Metaverse-related gear and equipment being worked on. For each device, we mention the company behind it, the product's name, its type, the subdomain of extended reality it serves, and its price. In the remainder of this section, we review the existing market products provided by major companies interested in developing hardware for the Metaverse. We highlight the name of the company in bold and italicize the product name.
#### HMD Devices
After purchasing the Metaverse headset-producing company **Oculus**, **Meta** revamped the headset line and currently offers the _Meta Quest_39 series of head-mounted devices in two forms: the _Meta Quest Pro_ and the _Meta Quest 2_ series. The difference lies in the hardware specifications: the Quest Pro uses thinner lenses than the Quest 2 and has a better resolution. However, both provide an immersive experience and an entrance to the Metaverse world. The devices can be used in end-user applications such as games and creative apps, as well as enterprise-level applications such as holding meetings and performing other work tasks in the Metaverse.
Footnote 39: [https://www.meta.com/quest/](https://www.meta.com/quest/)
**Lenovo** has been releasing Metaverse-related equipment and gear since 2018. Their latest release, called _ThinkReality VRX_40, offers solutions for immersive collaboration and training at the enterprise level. An interesting aspect of the device is that it supports an open ecosystem in which developers can write customized software for ThinkReality VRX headsets through the _Snapdragon Spaces_ XR developer program, using its _OpenXR_-based SDK (software development kit).
Footnote 40: [https://www.linrkealityvrx](https://www.linrkealityvrx)
Even though **Microsoft** is a software company at heart, it has also been manufacturing and offering hardware for a while, such as laptops, mice, keyboards, headphones, and webcams. It has also been manufacturing Metaverse-related equipment through its _Hololens_41 line of headsets. Currently, the Hololens comes in three different editions that cater to different target audiences. The base edition, the _Hololens 2_, can be used in regular environments, while the _Hololens 2 Industrial Edition_ and the _Trimble XR10 with HoloLens 2_ editions are geared more towards industrial settings. Microsoft also offers a development edition that developers can use to build MR-based applications for the device.
Footnote 41: [https://www.microsoft.com/hololens](https://www.microsoft.com/hololens)
**HP** has been producing computing hardware since its inception in 1939, so it is no surprise that it also produces Metaverse-related equipment. Its product, the _HP Reverb G2_42, is an HMD with hand controllers that can be utilized for VR-related applications in the domains of architecture and engineering, healthcare, education, and training.
Footnote 42: [https://www.hp.com/us-en/vr/reverb-g2-vr-headset.html](https://www.hp.com/us-en/vr/reverb-g2-vr-headset.html)
**Magic Leap** was founded in 2010 as a dedicated visual gear-producing company. It produces head-mounted equipment geared towards enterprise and industrial
Fig. 14: Examples of different hardware categories for enabling the Metaverse
Metaverse-related applications, called _Magic Leap 2_43. It has a more compact and simpler form factor than traditional head-mounted equipment, in the sense that it can be worn as eyewear, although it is bigger than regular eyeglasses. Magic Leap also provides tools for programmers to develop software for the device.
Phone maker **HTC** is also producing Metaverse-related headsets. Its series is named _Vive_44 and contains sub-ranges of products with varying hardware capabilities. HTC also has its own Metaverse called _Viverse_45, a full-fledged Metaverse where users can join and partake in events as well as purchase digital artwork.
#### Gloves, Glasses, Contact Lenses, & Wristbands
The company **HaptX**46 provides haptic gloves for touch and haptic sensory-related tasks in the Metaverse. Its _G1_ gloves line uses microfluidic technology geared toward industrial and mission-critical applications. HaptX also provides a library of tools in the form of an SDK with which developers can write applications for the Unreal and Unity engines for the Metaverse.
Footnote 43: [https://www.magicleap.com/magic-leap-2](https://www.magicleap.com/magic-leap-2)
Moreover, **Google** produces AR-related gear through its _Google Glass 2_47 product, which is geared towards enterprises. Unlike traditional HMDs, Glass 2 is lightweight and compact, uses a transparent display, and allows for hands-free work, which can prove useful in some industrial tasks.
Footnote 44: [https://www.vive.com/](https://www.vive.com/)
Footnote 45: [https://www.vive.com/](https://www.vive.com/)
Footnote 46: [https://haptx.com/](https://haptx.com/)
Similar to Google's Glass 2, **Epson** and **Vuzix** provide lines of AR enablers through eyeglasses. Epson's _Moverio_48 provides AR gateways and usability in different industries, while Vuzix 49 has several sub-range models which integrate with various _MDM_ (mobile device management) software to provide ease of use.
Footnote 47: [https://www.google.com/glass/start/](https://www.google.com/glass/start/)
Footnote 48: [https://tinyurl.com/nb49](https://tinyurl.com/nb49)*ytd
Footnote 49: [https://www.vuzix.com/](https://www.vuzix.com/)
Apart from HMDs and smartglasses, some companies such as **Mojo Vision**50 are producing smart contact lenses that can be worn on the eye and provide AR experiences for the user, similar to wearing a traditional contact lens. In addition to enabling AR experiences for the wearer, the _Mojo Lens_ also provides prescription features for the eye, in case the user has to wear contact lenses anyway.
Footnote 50: [https://www.mojo.vision/](https://www.mojo.vision/)
Footnote 51: [https://www.tapwithus.com/product/tap-vr/](https://www.tapwithus.com/product/tap-vr/)
Finally, the wristband from **TAP**, dubbed _TAP XR_51, provides a solution for the Metaverse that eliminates the need for physical or even virtual keyboards and controllers by letting the user tap on any surface, or simply point in a direction, and accepting that as an input command. Furthermore, **Meta** developed the _Metaverse Wristband_ [97], which tracks the communication between the nerves and the brain once worn; the purpose of this device is to understand and adapt to user movements.
Footnote 52: [https://www.playstation.com/ps-vr2/](https://www.playstation.com/ps-vr2/)
#### Gaming Consoles
**Sony**'s PlayStation game consoles have been major sellers in the electronic gaming industry, so it is no surprise that Sony has also joined the Metaverse hardware race and now offers its own VR headset, known as _PS VR 2_52. The VR headset, which also comes with controllers, is geared more towards gaming and integrates with the games on the PlayStation consoles. Game developers can develop their games and add immersive experiences for the players.
The company **Valve** has been developing some of the most played games in the industry since its formation such as Half-Life, Dota, and Counter-Strike. They also own and operate one of the biggest game portals in the world known as **Steam**. Similar to Sony, they have also been producing their own VR headset that accommodates their games to provide more immersive VR experiences to their
| Brand | Name | Device Type | Domain | Price |
|---|---|---|---|---|
| Meta | Meta Quest Pro | Headset/Controllers | VR | Starting $1,700.00 USD |
| Lenovo | ThinkReality VRX | Headset/Controllers | VR | To be announced later |
| Meta | Meta Quest 2 | Headset/Controllers | VR | Starting $400.00 USD |
| Sony | PS VR 2 | Headset/Controllers | VR | Starting $549.00 USD |
| Valve | Valve Index | Headset/Controllers | VR | Starting $730.00 USD |
| Microsoft | Hololens 2 | Headset | AR/MR | $3,500.00 USD |
| Microsoft | Hololens 2 Industrial | Headset | AR/MR | $4,950.00 USD |
| Microsoft | Trimble XR10 with HoloLens 2 | Headset | AR/MR | $5,199.00 USD |
| HP | Reverb G2 | Headset/Controllers | VR | $599.00 USD |
| Magic Leap | Magic Leap 2 | Headset/Smartglasses | AR | Starting $3,299.00 USD |
| HTC | Vive Flow | Headset | VR | Starting $500.00 USD |
| HTC | Vive Pro | Headset/Controllers | VR | Starting $1,370.00 USD |
| HTC | Vive Focus 3 | Headset | VR | Starting $1,300.00 USD |
| HTC | Vive Cosmos | Headset | VR | Starting $500.00 USD |
| HaptX | HaptX Gloves G1 | Haptics/Gloves | Hand-based | Starting $549.00 USD |
| Google | Glass 2 | Smartglasses | AR | $999.00 USD |
| Epson | Moverio 2 | Smartglasses | AR | $699.00 USD |
| Vuzix | M400 | Smartglasses | AR | Starting $1,800.00 USD |
| Vuzix | BLADE 2 | Smartglasses | AR | Starting $1,300.00 USD |
| Mojo Vision | Mojo Lens | Smart Contact Lenses | AR | To be announced later |
| TAP | TAP XR | Wristband | AR/VR | $249.00 USD |

TABLE I: Metaverse-Related Gear & Equipment
users. Their model is called _Valve Index_53 and presents an entire VR kit that comes equipped with a headset and hand controllers.
Footnote 53: [https://store.steampowered.com/valveindex](https://store.steampowered.com/valveindex)
#### 6.1.3 Hardware Enabling Technologies
In this section, we study the enabling technologies that support hardware development and growth in the virtual-world market as part of the Metaverse pipeline.
**AI.** With the rise of AI applications, its subdomains such as machine and deep learning (ML & DL) have found their way into hardware equipment for Metaverse-related applications such as HMDs. It is worth mentioning that several of the points discussed in this section are used in conjunction with each other; for example, when Federated Learning (FL) is used with Metaverse-related hardware, privacy is addressed alongside computing factors. Thus, many of these criteria are intertwined and work together to serve common goals for the users. For instance, these devices need access control and authentication. Works such as [98] and [99] use DL techniques for authentication and access control purposes: in [98], the authors use DL to provide better authentication mechanisms for the distorted iris regions in HMDs, while in [99] the authors use different DL techniques for iris recognition. The authors of [100] leverage the advantages of Neural Networks (NN) and propose a lightweight framework, dubbed SurgeonAssist-Net, which provides virtual assistance for a predefined set of surgical tasks; the proposed framework can be used on commercially available HMDs such as Microsoft's Hololens. The authors of [101] also make use of NNs in HMDs, this time in the domain of speech and sound feedback, especially for the deaf and people with hearing difficulties. Their approach, called 'HoloSound', relies on convolutional neural networks and utilizes deep learning to classify and visualize the identity and location of sounds while providing speech transcription for the users.
**User interface and interaction.** Given the wearable nature of these devices, the way they are designed and the way users interact with them make all the difference in their usability and adaptability. For example, hardware might need to be compact and have a small form factor; to achieve this, the internal components of these devices should be smaller, which is becoming easier as semiconductors shrink. Furthermore, smaller micro-electromechanical systems (MEMS) and compact, longer-lasting batteries are also a requirement for designing more user-friendly Metaverse gear [102]. The effects of user interfaces with HMDs are highlighted in [103], where the authors discuss the impact of various interfaces in the realm of interactive narratives on AR-related hardware such as HMDs. Several types of interfaces can be used in AR, such as Natural User Interfaces (NUI), Tangible User Interfaces (TUI), and Graphical User Interfaces (GUI). Providing users with enjoyable interfaces for AR/VR hardware has been the goal of several research works. For example, the authors of [104] propose an asymmetric interface for HMD (direct interaction) and non-HMD (multi-viewpoint interaction) users that provides a better experience within their virtual-reality environment. The authors of [105] propose a novel collaboration-based interaction method between users wearing HMDs in virtual-reality environments, achieved through a communication interface that allows the users to exchange information and feedback with their hands and feet.
**Computing & Power.** Given that HMDs aim to provide more immersive experiences, they render and contain multiple views per instance as opposed to 2D imaging, which requires more computing power from the HMD. The authors of [106] compare the power consumption of HMDs that offer 3D-rendered views with traditional 2D views, provide a statistical analysis of which HMD modules consume more power, and propose a couple of power-saving mechanisms. Since HMDs are getting smaller, their components are also shrinking, and several attempts to ease the computing and rendering burden on the internal components of HMDs (i.e., CPU, GPU) move work to software-based techniques. For example, the authors of [107] use foveated rendering, a method that reduces the rendering computation of the GPU inside an HMD. The authors of [108] try to overcome the problem of optical distortions at the GPU level in real time using a software-based approach in which a distortion map is built to correct the distortion and preserve image quality. Finally, in [109], the authors turn the process of asynchronous time warping (ATW), which reduces the motion-to-photon latency in HMDs, into a programmatic one by proposing a field-programmable gate array (FPGA) based approach that handles the temporal quality compensation without the need for any ATW.
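To make the idea behind foveated rendering more concrete, the minimal sketch below (ours, not taken from [107]) assigns a coarser shading rate to pixels farther from an assumed gaze point, which is the basic mechanism such techniques use to cut GPU work; the resolution, gaze position, eccentricity thresholds, and pixels-per-degree value are illustrative assumptions rather than parameters of any real HMD.

```python
import numpy as np

def foveated_shading_rates(width, height, gaze_xy, inner_deg=5.0, outer_deg=15.0,
                           pixels_per_degree=40.0):
    """Return a per-pixel shading-rate map: 1 = full rate, 2 = half, 4 = quarter.

    Pixels within `inner_deg` of the gaze point are shaded at full rate,
    pixels beyond `outer_deg` at quarter rate, and the ring in between at
    half rate. All numbers are illustrative assumptions.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    # Distance from the gaze point converted to visual degrees (flat-screen approximation).
    ecc_deg = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) / pixels_per_degree
    rates = np.full((height, width), 4, dtype=np.int8)   # periphery: quarter rate
    rates[ecc_deg <= outer_deg] = 2                       # mid ring: half rate
    rates[ecc_deg <= inner_deg] = 1                       # fovea: full rate
    return rates

if __name__ == "__main__":
    rates = foveated_shading_rates(1920, 1080, gaze_xy=(960, 540))
    full = np.mean(rates == 1)
    # Rough fraction of shading work relative to shading every pixel at full rate.
    relative_work = np.mean(1.0 / rates.astype(float) ** 2)
    print(f"full-rate pixels: {full:.1%}, approx. relative shading work: {relative_work:.2f}")
```

Even this toy version shows the effect: only a few percent of the pixels keep the full shading rate, so the estimated shading workload drops well below that of uniform rendering.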
**Security.** Just like any network-oriented component, HMDs and other Metaverse-related hardware and equipment are vulnerable to several security risks. These real-time devices create and capture large amounts of input that can be sensitive. For example, hackers can alter the perceived reality and inject poisoned data that can have a detrimental effect on the usability of an AR or VR application, such as disrupting a driver's focus on the road. Other attacks include displaying falsified information and inducing cybersickness [110]. Another important security aspect of Metaverse-related hardware is authentication. Recently, biometric authentication has come to the forefront given its low memory load [111]. The authors of [112] propose a novel authentication method for HMDs called 'OcuLock', a human visual system (HVS) based approach. The proposal works through an electro-oculography (EOG) driven HVS sensing framework and a fast authentication scheme in which different physiological and behavioral features are extracted to assist in the authentication. In [113], the authors present a different authentication approach called 'LookUnlock', which uses passwords made from spatial and virtual targets in the environment. This approach
can circumvent shoulder-surfing attacks and provide an additional layer of security for HMD users. Finally, the authors of [114] use iris and periocular recognition to authenticate users through the HMDs, which capture the eye image. By using score-level fusion for the iris and the periocular regions, the authentication performance is improved.
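As an illustration of the score-level fusion idea used in [114], the toy sketch below combines an iris match score and a periocular match score with a weighted sum and compares the fused score against a decision threshold; the weights, threshold, and example scores are made-up values for demonstration only, not those of the cited system.

```python
def fuse_scores(iris_score, periocular_score, w_iris=0.6, w_peri=0.4):
    """Weighted-sum score-level fusion of two normalized match scores in [0, 1].

    The weights are illustrative; a real system would tune them on
    enrollment data (e.g., to minimize the equal error rate).
    """
    return w_iris * iris_score + w_peri * periocular_score

def authenticate(iris_score, periocular_score, threshold=0.7):
    """Accept the user if the fused score exceeds an (assumed) decision threshold."""
    return fuse_scores(iris_score, periocular_score) >= threshold

if __name__ == "__main__":
    # Hypothetical match scores produced by separate iris and periocular matchers.
    print(authenticate(iris_score=0.82, periocular_score=0.65))   # True: fused score 0.75
    print(authenticate(iris_score=0.40, periocular_score=0.55))   # False: fused score 0.46
```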
**Privacy.** As these wearables become more ubiquitous and commercially affordable, users are increasingly concerned about their privacy during usage. On the hardware level, unlike attacks targeting the application layer, the concern is to keep the low-level code that makes the hardware run safe, secure, and easily patchable in case of a compromise. Attackers can use hardware-level attacks to take over a device, and later work their way into other layers of the hardware and siphon information. Furthermore, the amount of data captured by these devices makes them interesting platforms for different types of learning (i.e., ML, DL). However, these models would entail access to private data. Several works have tried to tackle this issue by proposing the usage of FL [115] with HMDs. For example, in [116], the authors propose a new framework that preserves the user's privacy and uses FL for online viewport prediction in 360-degree streaming. Unlike other learning scenarios, the collected and captured data does not leave the HMD for third-party storage and analysis; rather, the learning takes place on the HMD. In [117], the authors also use FL coupled with reinforcement learning (RL) to solve the multi-user AR output strategy problem. In their approach, the RL model is trained on individual AR devices to produce an output strategy, which is aggregated on federated servers.
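The core of the federated setups described above (e.g., [116]) is that raw data never leaves the headset: each HMD computes a local model update and only model weights are averaged on a server. The sketch below shows this federated-averaging step on a toy linear model; the data, model, and weighting scheme are our own illustrative assumptions, not the cited systems' implementations.

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression.

    The raw (features, labels) stay on the device; only the updated weights leave it.
    """
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * features.T @ (features @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: weight each client's model by its local sample count."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    global_w = np.zeros(2)
    # Three simulated HMDs, each holding its own private data.
    clients = []
    for n in (30, 50, 20):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + 0.1 * rng.normal(size=n)
        clients.append((X, y))
    for _ in range(10):
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = federated_average(updates, [len(y) for _, y in clients])
    print("estimated weights:", np.round(global_w, 2))  # close to [2.0, -1.0]
```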
**Ethics and Sociopsychology.** Various literature efforts have studied the ethical implications of this hardware, as it directly interacts and intersects with different social, behavioral, and cognitive aspects of humans. For example, the authors of [118] provide an overview of various ethical principles in research, especially pertaining to child development. The authors of [119] also discuss and explore the ethical and safety effects of using existing hardware in low-income high schools, especially in information technology and science classes. Moreover, immersive experiences through various AR/VR glasses and HMDs have had social and psychological impacts. For example, the authors of [120] provide a systematic review of the use of HMDs in the education of autistic individuals. The authors of [121] highlight the benefits of performing a virtual field trip using HMDs, compared to showing parts of the trip in regular 2D videos; the review reveals that immersive experiences via HMDs have more positive effects in terms of self-efficacy and interest among students, even a couple of weeks after the experiment. The authors of [122] examine the effects of VR in the domain of sport psychology practice. Their work discusses some of the practicalities of using VR technologies through HMDs and provides recommendations to stakeholders working in sports psychology on how they can use it in their practice.
### _XR Frameworks_
XR Framework is the second component in the Metaverse Infrastructure layer of the pipeline. An extended reality (XR) framework is a software development kit (SDK), toolkit, or engine used to create a decentralized Metaverse for augmented reality and virtual reality devices. It allows individuals to easily create and develop applications for the Metaverse by combining various open-source tools and engines. Individuals and business owners can use an XR framework to create 2D/3D environments, manage and coordinate interactions with XR devices, and establish networking connections using client-server technology, among other things. It helps to alleviate the burden of building these tools from scratch and can provide a range of benefits for those looking to develop applications for the Metaverse.
An XR framework typically includes hardware and software components that work together to create and manage immersive virtual experiences. When creating a system architecture to develop an XR framework, several development phases need to be considered to obtain a complete framework infrastructure that addresses the different needs of developers and artists creating Metaverse applications. For instance, one should consider which technologies the framework will address, where and with which technology the application will run, which hardware devices are required, and how users will visualize and interact with it. The technologies concerning the development of such an application need to address audio, video, text chat, client-server communication, graphical editors, and digital avatar creation. In addition to these hardware and software components, an XR framework may also include guidelines and standards for developers to follow when creating immersive virtual experiences within the framework. These guidelines may cover user experience, performance, security, and other issues. We have listed the XR tools, engines, and frameworks in Table II. The table includes a list of popular and widely used technologies and a brief description of their features and capabilities. Additionally, in Table III, we have included literature references proposed for further research and development in the field of XR.
**Communication and Networking.** In terms of networking and communication, frameworks should
maintain a smooth communication layer between users' interactions with the Metaverse and their interactions with each other in the virtual environment. XR devices demand intensive resources, and 5G may face performance limitations; developers should therefore be able to identify the network challenges and deliver AR/VR capability over the network. Some infrastructure technologies can be considered for this purpose, such as caching, multicasting, traffic engineering, and quality-of-service (QoS) optimizations. Other solutions, such as Unity Multiplayer Networking (MLAPI) [56] and the Photon Engine [65], can serve as client-server connections and maintain connectivity for the clients using Remote Procedure Calls (RPC). Such tools can enrich the networking layers, and developers use them widely. Furthermore, protocols should be adapted to update user positions with low latency and to receive audio/visual sequences with minimal delays. A work in [124] addresses the issue of transmission latency and provides a potential solution for the system to allocate resources efficiently for vehicle users in the virtual world.
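As a minimal, generic illustration of the kind of low-latency state-update loop such networking layers implement (plain Python, not Unity MLAPI or Photon code), a client can stamp each avatar pose update with a send time so that the receiving side can monitor end-to-end delay and flag updates that exceed a latency budget; the message format and the 20 ms budget are assumptions for demonstration purposes.

```python
import json
import time

LATENCY_BUDGET_S = 0.020  # assumed 20 ms end-to-end budget for pose updates

def make_pose_update(user_id, position, orientation):
    """Serialize a timestamped avatar pose update (a stand-in for an RPC payload)."""
    return json.dumps({
        "user": user_id,
        "pos": position,          # (x, y, z) in meters
        "rot": orientation,       # quaternion (x, y, z, w)
        "sent_at": time.time(),   # assumes synchronized clocks (or a single host)
    })

def handle_pose_update(raw_message):
    """Receiver-side handler: decode the update and check it against the budget."""
    msg = json.loads(raw_message)
    delay = time.time() - msg["sent_at"]
    if delay > LATENCY_BUDGET_S:
        print(f"warning: update from {msg['user']} late by {delay * 1e3:.1f} ms")
    return msg["user"], msg["pos"], msg["rot"]

if __name__ == "__main__":
    packet = make_pose_update("avatar-42", (1.0, 1.6, -2.5), (0.0, 0.0, 0.0, 1.0))
    time.sleep(0.005)  # simulated network delay
    print(handle_pose_update(packet))
```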
**AI.** AI has been constantly used in application development, and Metaverse creation using XR frameworks can benefit from the digital intelligence it provides. AI capabilities can enhance realism and immersion, facilitating natural language interaction between users and enabling intelligent agents for an immersive and interactive experience. Examples of potential AI applications for XR frameworks include chatbots, natural language processing, avatar creation, efficient resource optimization, and image and object detection. AI will be working behind the scenes in creating customized avatars to connect people from different cultures by minimizing the language barriers between them and helping with user interaction and object visualization in the virtual world. Specialized models in each field will need to be trained to adapt to the problem in every possible aspect, where intelligence is critical for developing the Metaverse. For instance, the work in [127] proposes a framework named metaAID that enriches language and semantic technologies in creating digital avatars and twins. It enables the Metaverse application's content creation while addressing the users' preferences and needs. The framework uses AI in the creation of intelligent agents. In addition, the proposed framework allows humans to customize and personalize the surrounding
| Tool/Engine/Framework | Usage |
|---|---|
| NVIDIA Holodeck [34] | Photorealistic collaborative design in VR |
| CoVAR [35] | Remote collaboration tool for AR/VR applications |
| Unity 3D engine [36] | Creating 2D/3D materials, rendering, physics |
| Microsoft Mesh [37] | Tools for spatial rendering, holoportation technology, and avatar creation |
| Spatial 3D technologies [38] | 3D modelling and data interop tools |
| Infinite Canvas [39] | Tools for development, engineering, and support for gaming |
| Roblox rendering 3D engine [30] | 3D development engine for texturing and environment rendering |
| Meta AR/VR developer tools [34] | Supports designing, building, and supporting AR and VR applications |
| Blender [62] | Tools for creating visual effects, animated films, 3D models, and motion graphics |
| SteamVR [38] | Decouples input logic from individual controllers and VR headsets for user social interaction |
| Unity Multiplayer Networking (MLAPI) [56] | Client-server connections and maintaining connectivity for the clients |
| Photon Engine [65] | Supports multiplayer game development across platforms |
| OpenMinded [66] | Open-source framework for tools, resources, and libraries related to privacy |
| A-Frame [145] | Open-source web framework for tools, strategies, and engines related to business implementations |

TABLE II: Emerging XR Tools, Engines, and Frameworks in the Market
| Work | Usage |
|---|---|
| Kim et al. 2020 [123] | Create a virtual environment in collaboration between its different entities using various XR devices |
| Chua et al. 2022 [124] | Allocate resources efficiently for vehicle users in the virtual world |
| Xu et al. 2022 [125] | Encryption model for the addresses of users, their devices, and services based on Blockchain technology |
| Kang et al. 2022 [126] | Privacy-preserving connection for the connected devices based on Blockchain |
| Zhu et al. 2022 [127] | MetaAID: enrich language and semantic technologies in creating digital avatars and twins |
| Yang et al. 2022 [128] | Protect the security and privacy of user data |
| Chu et al. 2022 [129] | Optimize resource utilization and improve the users' quality of service |
| Steinau et al. 2019 [130] | Systematically estimating and comparing data-centric procedures for business lifecycle processes |
| Canhoto et al. 2020 [131] | Map AI components in identifying the value of artificial intelligence and machine learning for businesses |
| Lee et al. 2021 [132] | E3XR: analyze the design of an XR system based on the ethics and learning theory of the human aspects |
| Gong et al. 2021 [133] | Improve future XR systems' usability and user acceptance |
| Rompanpa et al. 2021 [134] | Project Key: collaboration in creating high-fidelity XR experiences among different XR devices |
| Kern et al. 2021 [135] | OISS: 2D/3D drawing and sketching with XR devices |
| Cannavo et al. 2020 [136] | Facilitate decentralized ownership and management of virtual assets using Blockchain technology |
| Bouchar et al. 2022 [137] | Usage of AI for Metaverse intelligent agents to create, explore, and distribute content over the Metaverse |
| Yu et al. 2021 [138] | Usage of AI technology by recommending a personalized learning path or feedback on the user's performance |
| De et al. 2019 [139] | Security controls that can be used for XR frameworks to enrich security aspects of XR applications |
| Chang et al. 2022 [140] | Continuous authentication approach to authenticate users continuously while in the Metaverse |
| Cai et al. 2022 [141] | Studies the acceleration of the Metaverse development on data storage, processing, and streaming |
| Han et al. 2022 [142] | Propose tools to overcome the challenges of limited power sources and networking bandwidth speed |
| Dagher et al. 2018 [143] | Address privacy and security issues in the healthcare industry |
| Nair et al. 2022 [144] | Address the privacy risks associated with Metaverse applications |
| Bibri et al. 2022 [7] | Provide practices to maintain ethical and private Metaverse interactions |

TABLE III: Comprehensive Overview of XR Tools, Engines, and Frameworks Available in the Literature
environment based on their preferences. A different work in [137] discusses the usage of AI to build Metaverse intelligent agents that create, explore, and distribute content over the Metaverse, while another work in [138] discusses using an intelligent virtual-reality interactive system to learn how to brew pour-over coffee. The system is based on the ADDIE model, a widely used instructional design model covering Analysis, Design, Development, Implementation, and Evaluation, and it leverages AI technology to allow the VR environment to adapt to users by recommending a personalized learning path or providing feedback on the user's performance.
**Blockchain.** Blockchain technology can facilitate new immersive experiences and interactions in an XR framework. Specifically, Metaverse communication and transparent, secure tracking of user activity can be built on Blockchain technology, given the capabilities and features it provides for users' privacy and security. One work that considers Blockchain in XR frameworks is [125], which proposes an encryption model for the addresses of users, their devices, and services based on Blockchain technology; this adds anonymity to the connectivity between physical and virtual entities across the users and devices involved in the connection. The system leverages the distributed and secure nature of Blockchain to create and exchange virtual assets. Furthermore, the work in [136] proposes the usage of Blockchain and XR technologies to enable decentralized marketplaces for trading virtual goods and services within the XR environment, facilitate decentralized ownership and management of virtual assets, and enable the creation and distribution of immersive content.
**Security.** Introducing security techniques into any XR framework can help users verify and authenticate themselves when they need access to the Metaverse or while meeting other avatars. Authentication and recognition of users must be a top priority in any XR framework development to ensure that user data is protected from unauthorized access. Tools that can be used to protect against such malicious activity include authentication factors such as face and voice recognition and two-factor authentication using physical and behavioral biometrics. A work in [128] introduces an XR framework in which two-factor authentication based on a chameleon signature and biometric authentication is used to protect the security and privacy of the user. The paper discusses developing a secure authentication framework to ensure the traceability of avatars in the Metaverse: the authors propose a framework that combines Blockchain technology with biometric authentication to enable the traceability of avatars within the Metaverse. The framework uses Blockchain to store users' biometric data and facilitate the creation of unique identities for their avatars, which would allow avatars to be traced back to their real-world counterparts. Another work that discusses the security aspect of an XR framework is presented in [139], a study of XR security and system privacy. The authors identify several security controls that can support XR frameworks, including data encryption, access controls, firewalls, intrusion detection systems, secure transmission protocols, and secure boot and firmware update procedures; in addition, introducing Blockchain and AI technologies can boost the security of XR systems. Furthermore, the work in [140] uses a continuous authentication approach to authenticate users continuously while in the Metaverse. The system relies on an acoustic channel and authenticates users based on the shape of their ears and the sounds they emit while interacting in the XR environment, using ear-shape recognition and audio-processing techniques to verify users continuously in the background.
**Computing.** Edge and fog computing can boost the development of XR frameworks. Such technologies are used in XR frameworks to enable real-time processing of, and interaction with, virtual environments and objects, and to support the expansion of intelligent XR frameworks. A work in [141] shows how the acceleration of Metaverse development will result in excessive demands on data storage, processing, and streaming; the demands of such Metaverse applications will push the cloud deeper into the network and drive the integration of edge and fog computing. Furthermore, the work in [142] proposes a framework that provides various tools to overcome the challenges of limited power sources and networking bandwidth through multi-user motion prediction, an encode/decode architecture for creating collaborative content, and rendering facilitated by cooperation between local and remote tasks. In addition, to address the high demand for resources and computing power, a work in [129] proposes AI-powered admission control algorithms to optimize resource utilization and improve the users' quality of service.
**Business.** An XR framework for business can be used in marketing and sales to demonstrate the effectiveness of a product and engage further with customers. The XR framework can also be used for training and education to create a simulated environment where users can practice operations and procedures safely, and collaboration and teamwork benefit from the immersive experience. Some works in the literature propose a framework for systematically estimating and comparing data-centric procedures for business lifecycle processes, aiming at improving the efficiency and effectiveness of the business management process [130]; the system consists of a set of evaluation methods and criteria designed to be flexible and adaptable to different management processes. Another framework is proposed to map AI components in identifying the value of AI and machine learning for businesses [131]. The framework consists of identifying business objectives using AI and ML, assessing the risks and challenges, evaluating value creation and destruction, and developing strategies to mitigate potential risks. In addition, the Unity and Unreal engines offer visual editors, networking and communication tools, and physics engines that can help developers create immersive and interactive customer experiences to enrich business applications. Furthermore, A-Frame [145] is an open-source web framework that can be used for XR framework development, which offers a wide range of
tools for developers to enrich their business products. Such strategies, tools, and engines can be leveraged to integrate with the development of an XR business to facilitate business growth.
**Privacy.** Individuals should be able to assess how information is collected and for what reason, avoid any unnecessary data collection or use privacy-preserving tools when necessary, and integrate them into their framework development. In [126], the authors propose an XR framework based on Blockchain technology that uses FL to provide a privacy-preserving connection for the connected devices, exchanging model weights with a third party instead of private raw data and thereby enabling decentralized machine learning for the industrial Metaverse. Such techniques can be used when developing an XR framework to enrich its privacy and security capabilities. A work in [143] analyzes how a framework could address persistent privacy and security issues in the healthcare industry, in addition to interacting with the various needs of patients, providers, and third parties. Furthermore, a different work in [144] discusses the privacy risks associated with the Metaverse, showing how adversaries can gain access to personal data from a popular Metaverse application like VRChat. The Metaverse hosts a massive amount of personal data that can be collected and processed, including location, biometric, behavioural, and financial data; the authors highlight effective measures to protect user privacy in this emerging virtual space. In addition, OpenMinded [66] is an open-source framework that offers a variety of tools, resources, and libraries to create privacy-preserving XR applications that aim to protect the privacy of users.
**Ethics & Sociopsychology.** Metaverse ethics regulation can take place at the framework level by integrating regulations into the development process. Governments can adopt specific regulations for engines, tools, and frameworks to ensure that ethical procedures are followed when designing such systems. For instance, a framework introduced in [132], named E3XR, aims to analyze the design of an XR system based on ethics and learning theory regarding the human aspects. In addition, a study in [7] sheds light on the need to pay more attention to privacy and focuses on the ethical practices of the Metaverse and how it can affect surveillance in terms of data, location, and capitalism. Such ethical best practices must exist in any XR framework to maintain a high level of privacy for the user by providing moral rules to follow. Furthermore, several non-profit organizations and institutions [146, 147, 148] are already studying how emerging technologies such as XR affect the social and ethical considerations of humans, and they propose a wide range of resources, legal guides, and policies to enhance the ethical considerations of XR frameworks.
Some of these efforts are integrated into an XR framework's design and development process. A framework is developed in [133] to improve future XR systems' usability and user acceptance. Furthermore, several institutions [146, 147, 148] are also studying the sociopsychological impact of emerging technologies on users, such as how the technology can affect users' level of social presence, empathy, and social interaction.
### _Platforms & MaaS_
On top of the previous two components, the Hardware and the XR Frameworks, enterprises can start implementing platforms that offer the base or infrastructure for Metaverse applications, including Metaverse-as-a-Service (MaaS) solutions. These are virtual platforms and ready-to-use solutions that combine AR, VR, and many other technologies to produce virtual experiences. Platforms and MaaS form the third component in the Metaverse pipeline. This component comprises development and scripting tools, hosting and networking tools, as well as tools for monetization and commerce. MaaS companies may also offer supplementary services such as analytics, user interaction, and community management. This can enable new types of social interaction, entertainment, and commerce, and provide new chances for innovation and experimentation by motivating companies and people to develop and operate their own virtual worlds without investing in the infrastructure. Decentraland and other Metaverse platforms facilitate the construction, trading, and exploration of virtual environments. The growing gaming and entertainment sectors have played a significant role in the evolution of Illuvium, Roblox, and The Sandbox. Blocks of virtual real estate can be found on the Metaverse platform Bloktopia, and social networks can be found on ZEPETO. Major corporations are working to develop such platforms with the aid of MaaS so people can utilize the Metaverse for their employment, education, and social relationships. Metaverse platforms are also now widely used in the domains of education and medicine.
Several factors need to be taken into account when MaaS businesses offer their services and when deploying such enormous platforms: they must be efficient, trustworthy, safe, and, most significantly, targeted at user requests, since every platform must have a specific target audience. To achieve this, one must first specify the platform's intended users, the technology that must be utilized to fulfill the demands, the kind of computing power required to manage it, and the security and ethics measures that must be taken.
**AI.** Metaverse platforms employ AI algorithms to personalize the Metaverse experience for users by providing customized information, recommendations, and experiences, as well as by recommending virtual items for purchase based on their particular preferences and conduct. In [149], the author argues that NLP and data sharing technologies together with data visualization and sentiment analysis can enhance customer satisfaction and expectations in the Metaverse. In addition, AI can be used to automate repetitive tasks and streamline processes, such as content production, moderation, and analytics, to boost productivity [150]. Moreover, Metaverse platforms may face numerous risks and frauds in the present day; therefore, Metaverse platforms and MaaS use AI to detect and prevent security threats and fraud in the Metaverse by, for example, detecting and blocking dangerous bots and identifying and warning of suspicious behavior. AI
plays a crucial role in customizing Metaverse environments and typologies, for instance by adjusting the difficulty of a game or the layout of a virtual environment to the user's preferences. However, as with any other system, the use of technology can be advantageous in many ways but dangerous if certain factors are ignored. Implementing AI into Metaverse platforms and MaaS can be challenging, sophisticated, and knowledge-intensive. If AI is utilized to create lifelike avatars and objects, MaaS and platform owners must verify that these items are not exploited for malicious purposes, such as phishing and impersonation.
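To give a concrete flavor of the personalization logic described above, the toy sketch below scores virtual items by cosine similarity between a user's preference vector and item attribute vectors; the feature dimensions, catalogue, and profile values are invented for illustration and do not reflect how any particular platform implements its recommendations.

```python
import numpy as np

# Illustrative attribute axes: [gaming, social, art, fashion]
CATALOGUE = {
    "concert_ticket":  np.array([0.1, 0.9, 0.6, 0.2]),
    "avatar_outfit":   np.array([0.2, 0.5, 0.3, 0.9]),
    "racing_minigame": np.array([0.9, 0.3, 0.1, 0.1]),
    "gallery_pass":    np.array([0.0, 0.4, 0.9, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two attribute vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(user_profile, top_k=2):
    """Rank catalogue items by similarity to the user's (assumed) preference vector."""
    scores = {name: cosine(user_profile, vec) for name, vec in CATALOGUE.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

if __name__ == "__main__":
    # A user who mostly attends social and art events.
    profile = np.array([0.1, 0.8, 0.7, 0.2])
    for item, score in recommend(profile):
        print(f"{item}: {score:.2f}")
```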
**Blockchain.** Blockchain technology is predominantly employed by MaaS companies and Metaverse platforms to enable new forms of ownership, monetization, and governance. Blockchain facilitates the storage and transfer of avatars and assets between platforms in the Metaverse. Utilizing Blockchain technology lends credibility and trust to such transactions. By connecting the Metaverse, [151] is pursuing the ability to move or teleport avatars from one platform to another. As stated earlier, Blockchain technology adds a layer of security to platforms, allowing for the secure storage of assets on such platforms. MetaRepo is a new solution proposed in [152] that enables users to easily and securely store and utilize assets in the Metaverse. Using Blockchain and its designed structure, they intended to provide users with a novel way for interacting with other Metaverse universes without the need for additional verification and security checks. Moreover, Blockchain enables the implementation of smart contracts for the execution of functions and the automatic management of the transfer of digital assets [153].
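The sketch below mimics, in plain Python, what such a smart contract does at its core: it keeps an ownership record for digital assets, only executes a transfer when the caller actually owns the asset, and appends every executed operation to a hash-chained log so the history is tamper-evident. It is a conceptual illustration only, not Solidity code or the contract of any cited system.

```python
import hashlib
import json
import time

class AssetRegistry:
    """Toy stand-in for a smart contract managing ownership of virtual assets."""

    def __init__(self):
        self.owners = {}   # asset_id -> owner_id
        self.log = []      # hash-chained list of executed operations

    def mint(self, asset_id, owner_id):
        if asset_id in self.owners:
            raise ValueError("asset already exists")
        self.owners[asset_id] = owner_id
        self._append({"op": "mint", "asset": asset_id, "to": owner_id})

    def transfer(self, asset_id, sender, recipient):
        # The "contract rule": only the current owner may transfer the asset.
        if self.owners.get(asset_id) != sender:
            raise PermissionError("sender does not own this asset")
        self.owners[asset_id] = recipient
        self._append({"op": "transfer", "asset": asset_id,
                      "from": sender, "to": recipient})

    def _append(self, record):
        prev_hash = self.log[-1]["hash"] if self.log else "0" * 64
        record.update(prev=prev_hash, ts=time.time())
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.log.append(record)

if __name__ == "__main__":
    registry = AssetRegistry()
    registry.mint("parcel-7", "alice")
    registry.transfer("parcel-7", "alice", "bob")
    print(registry.owners)                 # {'parcel-7': 'bob'}
    print(len(registry.log), "log entries")
```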
**Networking.** The network plays a significant part in determining the quality of experience of every Metaverse platform and MaaS service. The emergence and development of 6G network infrastructure enables Metaverse systems to benefit from ultra-reliable, low-latency communications and support for huge numbers of connected devices; the Metaverse is now widely recognized as the fuel of the next-generation Internet [154]. The vast amount of user interactions with numerous streams must be gathered and handled in real time. Specifically, Metaverse platforms must take end-to-end latency requirements into account: certain Metaverse systems apply 7 to 15 ms end-to-end delay limits [155], and the delay must be less than 1 ms in critical applications such as surgery. The authors of [156] suggest a model for optimizing Metaverse applications end-to-end and offer dynamic control.
**Computing.** Computing power is essential to take into consideration when MaaS providers and Metaverse platforms deploy their services on servers. Intel states that the Metaverse requires a 1,000-fold increase in processing power68. With the development of computing and supercomputers, it is now feasible to have a fully functional Metaverse platform. However, standard and conventional computing power cannot support the adoption of the Metaverse, as Metaverse platforms require supercomputers whose processing speed is measured in floating-point operations per second (FLOPS) rather than the conventional gigahertz (GHz). Companies that host such platforms must guarantee the availability of power and cooling systems. This deployment has a significant influence on hardware and energy consumption [157], since the Metaverse requires extensive simulation and rendering tasks. Emulations in a number of Metaverse platforms, such as Second Life, are centralized and cloud-based [158]. In [159], the authors propose a hybrid fog-edge computing architecture that uses the computing capacity of edge devices to perform intensive computational tasks. In addition, [160] emphasizes the usage of mobile edge computing (MEC), which delivers resources close to endpoints in the Metaverse, comparing it to centralized mobile cloud computing.
Footnote 68: [https://www.thehindu.com/sci-tech/technology/intel-says-Metaverse-needs-a-1000-times-computing-power-boost/article37977295.ece](https://www.thehindu.com/sci-tech/technology/intel-says-Metaverse-needs-a-1000-times-computing-power-boost/article37977295.ece)
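As a rough, back-of-the-envelope illustration of why edge/MEC placement matters for the 7 to 15 ms budgets quoted above, the sketch below estimates the round-trip propagation delay to a nearby edge node versus a distant cloud region; the distances, the fiber propagation speed (about two-thirds of the speed of light), and the fixed processing overhead are assumed values chosen for illustration.

```python
SPEED_IN_FIBER_KM_PER_MS = 200.0   # roughly 2/3 of the speed of light in fiber

def round_trip_ms(distance_km, processing_ms=2.0):
    """Round-trip propagation delay plus an assumed fixed processing overhead."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS + processing_ms

if __name__ == "__main__":
    for name, km in [("edge node (~20 km)", 20), ("distant cloud region (~3000 km)", 3000)]:
        rtt = round_trip_ms(km)
        verdict = "within" if rtt <= 15 else "exceeds"
        print(f"{name}: ~{rtt:.1f} ms round trip, {verdict} a 15 ms budget")
```

Under these assumptions the edge node stays around 2 ms, while the distant region alone already exceeds the strictest budgets before any rendering or queuing delay is added, which is the intuition behind placing MEC resources close to the endpoints.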
**Business.** Companies responsible for Metaverse platforms and MaaS must limit the cost of their services by balancing the expected income with the cost incurred by users. In the Metaverse, it is possible to develop new company concepts, allowing users to integrate and profit from innovative and new types of businesses. In addition, modern firms are utilizing the Metaverse to improve their products and minimize costs. The authors of [161] explain why businesses should advertise in the Metaverse; the Nike-Roblox case study was examined to demonstrate the importance of advertising and communication in the Metaverse, as well as how this affects their business, in particular the interactions with regional and local markets. Moreover, Disney is constrained by its users and customers, and is thus partnering with the ZEPETO Metaverse to extend its consumer base by employing data-mining techniques [162]. Aside from that, large corporations are pursuing the usage of the Metaverse to lower costs by utilizing telelearning and creating virtual prototypes, which are less
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Founder** & **Tool/Engine/Framework** & **Usage** \\ \hline
Verizon & BlueJeans & Video Platform \\ \hline
Nicolas Julia and Adrien Montfort & Sorare & Fantasy Football Games \\ \hline
Animoca Brands & The Sandbox & Own and Monetize Assets \\ \hline
David Baszucki and Erik Cassel & Roblox & Multiplayer Online \\ \hline
Ari Meilich and Esteban Ordano & Decentraland & 3D Virtual World \\ \hline
Kieran and Aaron Warwick & Illuvium & Play-To-Earn Game \\ \hline
Robert Gryn & Metahero & Gaming, Profile Pictures, and Social Media \\ \hline
Michael Wagner & Star Atlas & Conquer and Gather Game \\ \hline
Ross Tavakoli & Bloktopia & Study and Meet New People \\ \hline
\end{tabular}
\end{table} TABLE IV: List of platforms and industrial solutions
expensive to create virtually than physically. The authors of [163] targeted mobility in the Metaverse, using scenarios of Metaverse-empowered advanced driver-assistance systems to illustrate how the Metaverse will shape the future of existing architectures. Furthermore, car manufacturers can use the Metaverse to simulate real-life crashes to improve their cars while spending far less money on such simulations.
**Security & Privacy.** Security and privacy issues related to data generated from the Metaverse are a major concern for users. Having access to such data raises many concerns regarding the manner in which these platforms utilize it [164]. AI models can be developed to infer unreported information about users, such as their political views or sexual orientation. Multiple levels of synthetic data dependability must be considered by platform owners. By eliminating data biases and ensuring data privacy, authenticating synthetic data can improve the fairness and safety of AI algorithms [165]. Such sensitive information must be secured and kept private. Implementing "Know Your Customer" (KYC) [166], in which businesses check the identities of their clients and identify potential hazards of money laundering or terrorism financing, is an effective way to improve the customer experience [167]. Implementing anonymization techniques to secure users and their data is an effective strategy to strengthen privacy. Furthermore, MaaS cooperation raises doubts regarding trust. Therefore, Metaverse platforms should implement security and trust safeguards through the service providers. In addition, before implementing a service or platform, owners must examine the security measures that a Metaverse platform enforces to protect the safety of users and the quality of their experience, as a large number of attacks target these platforms. For instance, adversaries can compromise wireless access to the Metaverse by altering the inputs to the learning algorithms used for user authentication [168].
**Ethics & Sociopsychology.** Ethical considerations are essential for maintaining a safe and clean environment for user interactions. The users of these platforms might experience misbehavior, spam, harassment, and conflicts with other users [169]. The software code of the Metaverse can constrain its shape, similar to the physical laws of nature. Online surroundings and user behavior are influenced by code [170]. Businesses and developers can select which features will be included on the web platform [171]. The personal boundary function is a significant example: it was introduced on the Horizon platform after an avatar was sexually harassed by a group of other avatars [172]. In addition, the platform must be founded on the principle of complete democratization in order to be open and fair. The authors of [173] emphasized the need for moderators to resolve community issues on the platform. In addition, they proposed an incentive structure to encourage beneficial user behavior. Users' inappropriate actions must be dealt with instantly, and misconduct on the platforms must be strictly regulated and punished.
## 7 Metaverse Environment Digitization
Building on top of the infrastructural developments, the next step is to start forming the virtual components and representations in the Metaverse, which we refer to as Environment Digitization. The Metaverse Environment Digitization layer, the second layer of the pipeline, is responsible for the creation and management of the virtual world's pieces, ranging from copies of real-world objects to human representations as avatars. The main components of this layer are Avatar Modeling, Rendering, and Sessions, and we detail each of them below. The first entry point for users in the Metaverse is their avatar representation. Rendering engines then render this avatar representation together with objects and surroundings. Consequently, sessions are a result of data generated from the rendered environment, which requires careful data handling, including storage and query operations. To this end, we study in this section existing advancements and state-of-the-art research and industrial achievements in each of the components forming the Metaverse environment digitization. In Figure 15, we present an overview of some of the characteristics of environment digitization addressed by existing research.
### _Avatar Modeling_
This section discusses the various technologies involved in creating Metaverse avatars as part of the Environment Digitization layer of the pipeline. Such a procedure is complex by nature due to the expected level of realism. The Metaverse allows its users to represent themselves as avatars while giving them new abilities to control and interact with their environment. Moreover, new types of motion, including leg and hand contraction, are introduced by the Metaverse, in which avatars should be able to replicate the physical user's motion one-to-one. Thus, we introduce a dedicated sub-pipeline, presented in Fig. 16, for the available Metaverse avatar methodologies, while Table V aggregates the references used in this section for each category. The pipeline considers the following: 1) Avatar Creation, 2) DT, 3) Interoperability, and finally, 4) Privacy and Security.
**Avatar creation.** Avatar creation is a significant aspect of the Metaverse, as it provides the capacity for a high level of realism, which affects the user experience. Users expect a level of self-representation and the capability of portraying themselves as they do in real life. In contrast, some others
Fig. 15: Environment Digitization
want an unorthodox creation of their avatars, given the necessary tools to complete this process. This domain includes three fundamentals: Rigging, Skinning, and Shaping.
_Rigging_: is the process of creating a skeletal structure for a 3D model. The skeleton is typically made up of bones connected by joints and can be used to control the model in the virtual environment.
_Skinning_: is creating a digital skin of avatars that can be deformed to match the motion of a real-world counterpart. This allows the avatar to mimic the appearance and movement of the user convincingly. Once the rigging is in place, the skinning process uses the skeleton as the foundation for the skinned mesh. The skinned mesh is typically created by starting with a basic shape followed by adding vertices to match the skeleton.
_Shaping_: is the process of modifying the avatar's look to match its motion. Once rigging and skinning are in place, shaping deforms the skinned mesh using the underlying skeleton as a guide.
Currently, avatar creation can be done by either a manual [174] or a model-based procedure. Manual creation is the most commonly used technique and can be categorized into three types: Photogrammetry [175, 176, 177], 3D Scanning, and Character Creation Toolkits. The final result of these processes is a complete avatar that includes the three principles mentioned above.
_Photogrammetry_: can be used to create realistic and accurate 3D models of users' shapes. It is done by taking a series of photographs from different angles and then using them as input to reconstruct a 3D model. The major limitation of Photogrammetry is its high dependency on the quality of the given images, which can be affected by many factors, including the type of camera used, angle, lighting, and distance from the subject. Time can also be a factor, as Photogrammetry can take a long time to produce a suitable model, depending on the complexity of the images.
_3D Scanning_: is a process of collecting digital data on the shape and appearance of a person in order to build its complementing avatar. This data can be collected using a device, such as a laser scanner, to record measurements and take them as input for the digital 3D model creation. Similar to Photogrammetry, 3D scanning is time-consuming, requires some level of expertise and is expensive for the typical Metaverse consumer.
_Character Creation Toolkit_: can be taken from two perspectives, either in the form of consumers or designers. Consumer kits are business models that build avatars for users in return for monetary value. In this case, users are given a minimum set of tools to hand-pick their avatars based on predefined assets 69 70 71. These toolkits usually include various customization options, such as face shape (eyes, nose, mouth, jaw), body style (skinny, fat, tall, short), and colours (hair, body, eyes). However, this technique has a major constraint in terms of realism, as it limits the possibilities of replicating the user's intentions. As for designers, creating avatars for the Metaverse requires a considerable amount of manual labor and a high level of expertise in avatar design domains 72 73.
Footnote 69: [https://xr-marketplace.com/en/makeavatar](https://xr-marketplace.com/en/makeavatar)
Footnote 70: [https://readyplayer.me/](https://readyplayer.me/)
Footnote 71: [https://makedfan.com/](https://makedfan.com/)
The main challenges in manual creation are the time, required knowledge, and equipment, which might not be convenient when targeting typical Metaverse consumers. Model-based techniques address these issues by employing AI-based technologies to create a highly realistic one-to-one avatar from a single image. A machine learning algorithm analyzes a collection of images to learn which features are common, including faces and shapes, and then creates a model that can be used to generate new avatars from a given image. This approach has several advantages:
* It can be used to create realistic avatars that look like the person they are meant to represent.
* It is possible to create remarkably expressive avatars showing a wide range of emotions.
* It takes less time and effort than the manual approach.
Different AI technologies are employed to address the aforementioned principles. For instance, most studies consider Linear blend skinning (LBS) [194], a technique used to deform a character's mesh based on the movement of its underlying bone structure. In LBS, a character's mesh is represented as a set of weighted points (vertices) that are connected by lines (edges). Each vertex is assigned a weight per bone, which determines how much it will be affected by that bone's movement. The bone structure is represented by a set of bones connected in a hierarchical structure, where each bone has a position and orientation in 3D space. When a bone moves, all the vertices connected to it also move according to their weights. In [178], the authors trained a model based on a
\begin{table}
\begin{tabular}{|l|l|} \hline
**Category** & **References** \\ \hline
Avatar Creation & [174, 175, 176, 177, 178, 179] \\ & [180, 181, 182, 183] \\ \hline
Digital Twin (DT) & [184, 185, 186, 187, 188] \\ \hline
Interoperability & [189, 190, 191, 192] \\ \hline
Privacy and Security & [193, 17] \\ \hline
\end{table} TABLE V: Avatar Modeling References
Fig. 16: Avatars in the Metaverse
generic 3D human body template. It uses a combination of linear blend skinning and pose-dependent blend shapes to represent the shape and pose of a human body. The model is created by fitting 3D scans of real people to a template, which yields a set of joints, joint angles, and shape parameters that can be used to animate the Metaverse's avatars. The work in [179] trains a facial shape and expression model from 4D scans using a deep convolutional neural network trained on a large dataset. The model can accurately map 3D coordinates onto facial features such as the eyes, nose, and mouth, and it is also capable of predicting facial expressions, including emotions such as surprise, from the scan data. In [180], the authors propose a modeling scheme for capturing hands, using 3D scans of various hand positions to train a model that captures hand geometry. Other works followed a similar architecture with incremental advances in accuracy and efficiency. For instance, in [182], the authors extended the SMPL model and combined it with FLAME [179] and MANO [180] to generate mesh geometry with higher fidelity from a single image. Conversely, the focus in [181] was on avatar clothing: the authors used 3D scans of clothed humans to train a model capable of generating a pose-aware, deformable clothed avatar. The authors in [183] propose an auto-rigging approach by matching morphable models to 3D scans. Once the rigged model is identified, the skeleton and then the skin are applied to the model, with results similar to handmade 3D meshes. Additionally, using morphable models gives the capability of reshaping the results to match the intended shape.
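To make the LBS deformation described above concrete, the following minimal numpy sketch blends per-bone rigid transforms using per-vertex skinning weights; it illustrates the general LBS formula only, not the exact formulation used by SMPL, FLAME, or MANO.

```python
# A minimal sketch of linear blend skinning (LBS): each bone's pose is given
# as a 3x3 rotation and a translation, and every vertex is a weighted blend
# of its per-bone transformed positions.
import numpy as np

def linear_blend_skinning(vertices, weights, rotations, translations):
    """Deform rest-pose vertices by a weighted sum of bone transforms.

    vertices:     (V, 3) rest-pose vertex positions
    weights:      (V, B) per-vertex bone weights, rows sum to 1
    rotations:    (B, 3, 3) per-bone rotation matrices
    translations: (B, 3) per-bone translations
    """
    # Transform every vertex by every bone: result has shape (B, V, 3).
    per_bone = np.einsum('bij,vj->bvi', rotations, vertices) + translations[:, None, :]
    # Blend the candidate positions using the skinning weights: shape (V, 3).
    return np.einsum('vb,bvi->vi', weights, per_bone)

# Toy usage: two bones, three vertices; the second bone is translated upward.
verts = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])       # skinning weights
rot = np.stack([np.eye(3), np.eye(3)])                    # identity rotations
trans = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(linear_blend_skinning(verts, w, rot, trans))        # middle vertex blends both bones
```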
**Digital Twin.** The DT represents the physical entity as a digital asset in the Metaverse context. According to [184], a DT can be driven either by one-to-one direct control from the physical entity or by a model-based simulation trained on the interactions of the physical entities. Regarding direct control, the state of the art focuses on sensor-based technologies and pose estimation from image stream input. Section 6.1 depicts some of the sensor-based avatar motions. Regarding pose estimation technologies, the challenge can be summarized as predicting the 3D keypoints from an image and projecting them onto the avatar. [185] provides a survey of the existing approaches to pose-estimation-based avatar control, detailing the advantages and disadvantages of each work. For instance, in [186], the authors provide a scheme able to predict the 3D keypoints of an image even when it is incomplete or truncated. [187] addresses another problem: keypoints with unequal visibility should not have an equal effect during the training procedure. In [188], rather than basing motion on a single image, the authors build a context in which prediction is based on a fusion of images taken from multiple views. By contrast, model-based motion does not require the existence of physical entities once the predictive model is trained. Such a technology can be used to simulate virtual interactions between the Metaverse's avatars as predictive models of the physical world, or to improve the in-world interaction between users' avatars and their environment.
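As a rough illustration of the visibility weighting in [187] and the multi-view fusion in [188], the sketch below averages per-view 3D keypoint predictions weighted by their confidence; the keypoints and confidence values are illustrative, not the output of any specific pose-estimation model.

```python
# A minimal sketch of confidence-weighted fusion of keypoints predicted from
# multiple camera views. All numbers are illustrative assumptions.
import numpy as np

def fuse_keypoints(views, confidences):
    """Fuse per-view 3D keypoint predictions into one skeleton.

    views:       (N, K, 3) predicted 3D keypoints from N views for K joints
    confidences: (N, K) per-view, per-keypoint visibility/confidence in [0, 1]
    """
    conf = confidences[..., None]                         # (N, K, 1)
    weights = conf / np.clip(conf.sum(axis=0), 1e-8, None)
    return (weights * views).sum(axis=0)                   # (K, 3)

# Two views of a two-joint skeleton; the second view barely sees joint 1.
views = np.array([[[0.0, 1.0, 0.0], [0.0, 0.5, 0.0]],
                  [[0.1, 1.0, 0.0], [5.0, 5.0, 5.0]]])     # joint 1 in view 2 is unreliable
conf = np.array([[0.9, 0.9],
                 [0.8, 0.05]])
print(fuse_keypoints(views, conf))   # joint 1 stays close to the reliable view
```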
**Interoperability.** No matter where Metaverse users are located or what technology they use, virtual applications should allow free movement between the different worlds. Interoperability is essential to this task, as it allows users to keep their avatars while efficiently transferring them between digital worlds. For instance, in [189], the author emphasizes using interoperability to seamlessly transfer avatars and access any environment without compromising or changing existing credentials. However, interconnecting multiple virtual worlds suffers from differing standards for identity management, currency, modeling, and communication protocols. Blockchain technologies can address all of these issues. In this context, blockchain-based enabling technology supports the interoperability of user avatars across Metaverses, bypassing hardware and software limitations. However, placing all available technologies under one blockchain network is far-fetched. As such, the usage of cross-chain technologies is crucial for enabling interoperability in the Metaverse. For instance, [190, 191, 192] discuss the benefits of blockchain interoperability, including improved scalability, interoperability of avatars, and improved security. Additionally, it enables the development of more complex transactions, as well as the ability to access a wider range of environments.
**Security & Ethics.** Technologies and advancements in the Metaverse's avatar allow for intricate and deep representation of the physical users on their avatar counterparts. For instance, recent technologies can accurately project users' expressions on their avatars by either their context (spoken or written) or from a stream of image input. As such, these technologies open opportunities for various security risks, fraud, identity theft, and psychological manipulation. For instance, a user can deploy an ML agent to analyze the avatar's facial expressions and interactions to predict the physical user's personality. As such, an outcome of interaction can be predicted beforehand. Additionally, since an avatar is the identity of its users in the Metaverse, stealing an avatar is equivalent to stealing the original user's identity, which can be employed for impersonation and manipulation schemes. Thus, avatar authentication has a major impact on the Metaverse. In this case, avatar interactions, such as voice, text, and behaviors, can be used for identification. However, it is challenging without having a custom scheme or authority of control over the created avatars. In this case, adversaries can create a model of the physical entities capable of accurately simulating the physical users [193, 17]. Consequently, such authority requires additional personal information that breaches users' privacy.
### _Rendering_
The second component of the Environment Digitization layer in the pipeline is rendering. Rendering is a crucial component of computer graphics that transforms 3D models into images and animations, thereby playing a key role in developing immersive Metaverse environments. Various techniques and machine learning models can be used to create multi-dimensional models, adjusting the lighting and shadows on objects and surfaces. Despite its importance, rendering can be computationally expensive, and there is a trade-off between texture quality and the models' performance. The primary goal of rendering is to achieve real-time realism and stylization for an immersive experience. The techniques aim to create vivid views, showcasing the features of the visual context and enabling customization of the 3D environments to meet the user's preferences; this customization enhances the user experience by providing a personalized space [195].
Modeling, animating, and composing differ from rendering in that they focus on creating geometric shapes or objects in a 3D scene, or on moving specific frames to create animations and compositions, which are created by combining multiple layers of images. To create 3D models and images, algorithms, mathematical equations, and AI tools are used while taking into account material, lighting, brightness, contrast, and other visual elements. The output of these models can be used in VR/AR games, movies, and applications.
Various tools, techniques, and software are used to create high-quality images and animations to improve the visual aspect of the Metaverse environment. One such technique is generating high-quality environment maps for use in front-facing AR applications, as demonstrated in [196]. Another approach, proposed in [197], uses real-time AR technology for environmental acquisition and rendering on mobile devices. Other tools for modeling and environment simulation include Unity3D [198], Auto-CAD [199], Blender [200], ZBrush [201], and others. The development of computer graphics has seen significant growth with the use of rendering techniques, particularly in the movement from 2D to 3D environments and in the Metaverse. Advances in hardware, such as the CPU and GPU, have enabled high-performance rendering and the creation of high-quality models, images, and videos. In [202], a technique for two-field lighting reconstruction is proposed for generating high-quality environmental maps for a limited field of view (FoV) captured from a camera. Such methods can address the challenges of mobile devices' limited sensing capability and mobility-induced noise. Popular rendering techniques for environments include ray tracing, rasterization, and global illumination. The Metaverse allowed individuals with disabilities to freely move within and explore its virtual environment [203]. Responsive subtitling is also provided to offer a personalized experience for each user. In [204], a method to improve occlusions in the blending-based method by predicting visibility and utilizing modular image information is proposed. In addition to the visual aspect, the acoustic aspect is also essential for providing an immersive experience for users. For example, [205] proposes a positional audio playback with extended coverage and resolution in rendering, using a flexible configuration for loudspeakers and encoding and rendering audio scenes in Ambisonic format.
**Realistic and Stylized Environments.** Rendering in the Metaverse can be divided into two main approaches: realistic and stylized environments. _Realistic environments_ aim to simulate the physical structure of objects and environments realistically, including the behavior of light, lighting, shading, and texture. Creating a realistic environment involves data capturing, preprocessing, modeling, and rendering. Realistic environments are typically used in video games, real-world simulations, surgeries, and architectural simulations. In [206], a rendering-aware learning scheme for efficient loss computation and photo-realistic virtual object rendering is proposed. _Stylized environments_ aim to create a unique environment for each simulation to provide an immersive experience for the user. Such environments do not have to follow realistic physics, allowing for custom color palettes, shading, and ambient environments. The creation of stylized environments involves specialized software for 3D modeling, with artists creating initial shapes and appearances tailored to a custom-made scene. Stylized techniques are used in sci-fi-related games, dream-like simulations, and fantasy environments.
**AI.** AI is a major player in rendering, especially for simulating light reflections, studying the physics of surfaces, improving image quality, and generating virtual content, in addition to applying styles and completing missing parts of images. First, ray tracing is a rendering technique that uses AI to simulate how light is reflected in the Metaverse as it interacts with objects; examples of existing solutions applying AI-assisted ray tracing are Nvidia's RTX platform and Microsoft DirectX Ray Tracing (DXR) [207]. In addition, Deep Learning Super Sampling (DLSS) is a promising rendering application that uses AI and has the potential to improve rendering engines in the Metaverse. DLSS uses AI to upscale low-resolution images, which can reduce the computation required for rendering scenes in the Metaverse while producing realistic, high-quality results; Nvidia's RTX platform utilizes DLSS [208]. Moreover, AI has been adopted to perform style transfer between images, so it can be applied to produce effects and stylize objects with reduced computation [209]. Furthermore, neural rendering is another application of AI for improving the rendering engines supporting the Metaverse: AI is used to complete missing parts with realistic results from incomplete or low-quality images [210].
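The following minimal PyTorch sketch illustrates the general idea behind DLSS-style neural super-sampling: render at low resolution, upscale with classic interpolation, and let a small network predict a sharpening residual. The tiny, untrained network here is purely illustrative and is not Nvidia's actual DLSS implementation.

```python
# A minimal sketch of neural super-sampling: bilinear upscale plus a small
# learned residual. Network architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUpscaler(nn.Module):
    def __init__(self, channels=3):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )

    def forward(self, low_res):
        # Classic bilinear upscaling gives a blurry baseline...
        base = F.interpolate(low_res, scale_factor=2, mode='bilinear',
                             align_corners=False)
        # ...and the learned residual would restore detail after training.
        return base + self.refine(base)

frame = torch.rand(1, 3, 270, 480)      # a low-resolution rendered frame
print(TinyUpscaler()(frame).shape)      # torch.Size([1, 3, 540, 960])
```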
**Networking and Computing.** 3D simulation is more complex in the Metaverse, where many different 3D objects that change depending on the situation need to be rendered, and rendering a detailed 3D space requires substantial resources. Right now, none of the products on the mass market can perform that74. The main idea behind resource management revolves around resource allocation, workload balance, task scheduling, and QoS to achieve performance improvements [211]. One solution to the complexity problem is outsourcing rendering tasks to external servers and sending the result back as a video stream to the user's device. Adopting this strategy can be beneficial for reducing costs for consumers. However, a delay problem in rendering while using wireless VR devices persists because of the intensive computation and communication involved. The work in [212] proposed a mobile augmented reality (MAR)-based connection model for the Metaverse and a communication resource allocation algorithm based on outer approximation (OA) to achieve the best utility; such a solution reduces communication and computation costs. Furthermore, fog servers will also help
to solve the latency problem, because these servers can be located closer to the user to minimize network latency.
**Blockchain.** Rendering tasks are highly dependent on both central and graphical processing units. A server that handles rendering tasks will be unable to maintain a sustainable rendering speed as demand increases. One solution is to integrate a decentralized network of computers to achieve cumulative rendering power; one such network is the Render Network. It is a distributed GPU platform empowered by the Blockchain that enables sharing GPU power across users. By relying on such a platform, users can join the network and offer their idle GPU resources in exchange for monetary profit. Render calls the clients needing GPU power "Creators" and the providers available on the network "Node Operators." When creators need something rendered, they send their files to the network, and a job is created. The Node Operators are then given these tasks, and those who render are rewarded with Render Tokens (RNDR)75. The key advantage for creators is the low overall cost of generating their extremely complex files, because they do not need to buy, maintain, and optimize the computing resources themselves. Moreover, in the Blockchain context, stakeholders need to access and hold assets in different virtual worlds. Data interoperability across these virtual worlds is limited due to the different environments in which they are built. It is possible to exchange data between two or more blockchains located in distinct virtual worlds using a cross-chain protocol. Users can migrate more easily between these virtual worlds thanks to cross-chain protocols enabling the Blockchain's interoperability. Such a protocol allows the exchange of possessions like avatars, NFTs, and payments between virtual worlds, and it will provide the groundwork for widespread Metaverse adoption [13].
Footnote 75: [https://www.gemini.com/cryptopedia/render-network-3d-rendering-software-render-token-rdr-token](https://www.gemini.com/cryptopedia/render-network-3d-rendering-software-render-token-rdr-token)
**Security and Privacy.** The main objective of the Metaverse is to provide a shared environment that simulates the physical world, allowing personalized digital assets to be rendered publicly in front of multiple users. Nevertheless, such a shared environment makes it a potential target for security and privacy attacks and increases the risk of malicious activities such as hacking, fraud, and abuse. Thus, it is essential to have strong security measures in place to prevent such incidents and guarantee a safe and secure experience for all users.
Access control checks during asset rendering are a traditional way of ensuring security and privacy in a virtual environment. In this context, integrating an access control mechanism limits the visibility and control of information and assets to authorized entities only. More elaborate access control also includes authorization levels, which benefit applications in which the shared environment is a critical aspect, such as virtual teaching spaces [213], social apps, and other entertainment platforms. Typical access control integration is done through an Access Control List [214]. However, this mechanism has serious drawbacks, such as a lack of delegation capabilities and reliance on a central authority, which might incur performance and scalability bottlenecks.
An improved version of access control is proposed in [215]. The authors present a spatial mechanism that exploits the simulated locations of entire worlds in Metaverse applications to limit access based on boundaries. Such boundaries include access limitation to either a specific place inside a region or the whole region. The main advantage of this approach is shifting from a basic access check to conditional access based on the object's location, where access is granted or denied by traversing the sequence of boundaries in which the object is located. However, considering only the object's location creates issues when virtual places are rendered at different altitudes [213]. In [216], boundaries are accessed through ID cards, keys that can open areas within the Metaverses. Such keys can be shared with others, allowing access delegation to multiple parties. This methodology follows object-capability security, as each such object is unique and can be interacted with in a specific way. In addition to the previous approaches, the authors presented a visual implementation of their work by simulating their mechanism in a real virtual environment. Furthermore, patents regarding data privacy have been registered. For instance, [217] proposes a blocklisting mechanism that allows users to ignore others. Its main advantage is an avoidance mechanism that depends on the location of blocked users: rather than denying a user access to a specific location, the system notifies the primary user of the probability that the blocked user is present in a location. Moreover, [218] is another patent that protects the Metaverse universe from malicious users and the damage they cause, such as damaging virtual world properties or harming account reputations. A rollback mechanism is therefore proposed to restore the virtual world to a previous state prior to the damage.
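A minimal sketch of the boundary-based idea in [215] and the key-based delegation in [216] is shown below: access to an object is granted only if the requester holds the key for every nested boundary enclosing it. The boundary names and keys are illustrative assumptions, not taken from any real platform.

```python
# A minimal sketch of nested, boundary-based access control in a virtual world.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Boundary:
    name: str
    parent: Optional["Boundary"] = None
    required_key: Optional[str] = None      # None means publicly accessible

def enclosing_boundaries(boundary):
    """Walk outward from the object's boundary up to the world root."""
    chain = []
    while boundary is not None:
        chain.append(boundary)
        boundary = boundary.parent
    return chain

def can_access(user_keys, obj_boundary):
    """Grant access only if every enclosing boundary's key is held."""
    return all(b.required_key is None or b.required_key in user_keys
               for b in enclosing_boundaries(obj_boundary))

world = Boundary("world")
campus = Boundary("virtual-campus", parent=world, required_key="campus-pass")
lab = Boundary("research-lab", parent=campus, required_key="lab-key")

print(can_access({"campus-pass"}, lab))              # False: lab key missing
print(can_access({"campus-pass", "lab-key"}, lab))   # True
```

Sharing a key string with another user corresponds to the delegation described in [216], since holding the key is all that is required to cross the matching boundary.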
### _Sessions_
The last component of the Environment Digitization layer in the Metaverse pipeline is Sessions. This section focuses on the allocation, collection, security, and usability of Metaverse session data. Sessions capture the user's engagement in the Metaverse and are where the user's sensitive data is handled; therefore, session management must be treated carefully. Sessions are created when launching the application, prior to logging in. They manage data such as object state, which represents the position and status of environment objects, as well as the user's personal data. As a result, controlling Metaverse session data is critical for various reasons, including customer experience, data and security regulation, and marketing effectiveness. Depending on the session activities, we can differentiate between two kinds of sessions. A private session belongs to a specific user and takes on a more personal tone. In contrast, public sessions involve multiple users who can join the same environment and interact with each other and with the environment's objects. In such cases, if the state of an object has been manipulated by one user, the updated object state should be propagated to all the other users. Thus, sessions play
a significant role in the real-time synchronization between users.
**AI.** Session structure development and optimization have been studied in the literature using AI from different angles. By applying AI for decision-making, communication, computing, and caching resources can be collaboratively optimized [219]. Moreover, AI can play a major role in session data access management in general: through resource optimization and data learning, AI can influence the user experience and increase user engagement. Dealing with dynamic data sizes, such as session data, usually requires a combination of resource allocation across cloud, fog, and edge computing technologies [220]. In terms of session optimization, AI plays a significant role by categorizing data and then distributing sessions to multiple destinations/servers based on the classification results; observing and analyzing the data traffic of sessions is therefore essential. Depending on the data priority level, AI contributes to resource allocation, which is beneficial in stream processing scenarios [221]. Another problem with session data is scalability: sessions are dynamic in terms of the number of joiners, which directly drives the session's resource needs. AI can contribute to resource allocation through on-demand scalability [222].
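As a simple illustration of the priority-driven placement just described, the sketch below classifies sessions by a latency budget and expected audience size and routes them to an edge, fog, or cloud tier; the thresholds, tier names, and session features are illustrative assumptions, not taken from any specific system.

```python
# A minimal sketch of priority-based session placement across computing tiers.
def place_session(latency_budget_ms, expected_users):
    """Pick a hosting tier for a session from simple traffic features."""
    if latency_budget_ms <= 20:
        return "edge"        # latency-critical: keep close to the user
    if expected_users > 100:
        return "cloud"       # large public sessions need elastic capacity
    return "fog"             # everything else sits in between

sessions = [
    {"id": "private-room", "latency_budget_ms": 15, "expected_users": 2},
    {"id": "concert",      "latency_budget_ms": 80, "expected_users": 5000},
    {"id": "study-group",  "latency_budget_ms": 50, "expected_users": 12},
]
for s in sessions:
    print(s["id"], "->", place_session(s["latency_budget_ms"], s["expected_users"]))
```

In a real deployment, the hand-written rules above would be replaced by a learned classifier trained on observed session traffic, which is the role the cited works assign to AI.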
**Blockchain.** Critical aspects in the Metaverse are related to the session data storing, privacy, and integrity. The considerable amount of personal data generated from sessions raises a lot of privacy and security concerns [144]. Blockchain might be a suitable solution for session data threats due to its unique characteristics such as data integrity, transparency, and decentralization [126, 13]. Moreover, blockchain can facilitate communication between sessions from different spaces and platforms. Therefore, applying cross-chain for better data flow and transmission enables the communication between various types of Blockchain and offers better scalability [223].
**Networking & Communication.** The Metaverse relies heavily on session networking and communication. The immersive experience requires stable connectivity, which is prone to numerous unexpected communication issues. The rising complexity and volume of session data for new Metaverse applications, in particular, pose a tremendous barrier to data sharing security [224]. To ensure the quality of the user experience in session communication, ultra-high capacity and reliability are needed from the wireless system, which the existing 5G system cannot provide. A revolution in networking and communication is required to attain the Metaverse vision. For instance, 6G wireless technology appears to be a potential answer due to its ubiquitous connectivity, ultra-low latency, ultra-high capacity and dependability, and tight security [11].
**Business.** User data are particularly valuable due to the immersive nature of the technology and the amount of time it is used. Metaverse platforms capture considerably more sensitive information about consumers [225], and this level of detail is tremendously beneficial to brands. Through session data, business owners analyze users' behavior to place targeted material in front of people in the Metaverse. For example, billboards along a virtual street or a non-playable character standing on the sidewalk enjoying the product enhance product positioning and marketing on a grand scale. Moreover, businesses may investigate intimate and intrusive aspects of their consumers' lives [226]. In addition, similar to social media, business owners may share data with marketers, who could subsequently display related product advertisements [227]. Furthermore, this data might be utilized to feed applications' algorithms to keep users on the platform for a longer time [228].
**Privacy & Security.** The privacy and security of session data must be protected. Despite the benefits of session data and the improvements they may bring to the Metaverse, several forms of data in the Metaverse must be protected depending on the user's activity and type. When the user is a business, the size and type of data to deal with are substantially larger and different from those of a regular user, who only has personal data [229, 230]. Metaverse content, personal data, analytical data, and qualitative data are all examples of Metaverse data that should be secured [164, 231]. Some basic measures can be applied, such as data classification and continuous updates of the terms of service. Additionally, more novel and reliable methods can be applied, such as federated learning, which is a promising solution for data privacy [232].
**Sociopsychological & Ethics.** Since session data are very sensitive, companies should use all the information gained from sessions ethically [233, 7]. From a sociopsychological perspective, the disclosure of sensitive session data, such as the places visited and activities performed in the Metaverse, can negatively affect the user's psychological attitude and may erode the user's trust in the Metaverse. It can impact the user mentally and emotionally [234], since users trust the data collectors to secure their data while respecting their privacy.
## 8 Metaverse User Interactions
Using the Metaverse infrastructure and building effective rendering engines, the last layer of the Metaverse pipeline is building user applications to empower immersiveness and engagement. These applications present different types of interactions involving the user. In this section, we complete our proposed architecture with the last layer, i.e., the Interaction layer. The main types of interactions in the Metaverse from a user perspective are categorized as (1) user-to-user interactions, (2) user-to-business interactions, and (3) user-to-object interactions. In Figure 17, we present an overview of existing applications characteristics presented in the literature, including digital entanglement, digital stalking, business transactions management, as well as security measures.
### _User-User Interactions_
As part of the user interactions layer in the Metaverse pipeline, we present the User-User interactions. A user-to-user interaction mainly refers to the interaction that involves
\begin{table}
\begin{tabular}{|l|l|} \hline
**Category** & **References** \\ \hline
AI & [220, 219, 221, 222] \\ \hline
Blockchain & [144, 13, 126, 223] \\ \hline
Networking \& Communication & [224, 11] \\ \hline
Business & [225, 226, 227, 228] \\ \hline
Privacy \& Security & [229, 230, 164, 231, 232] \\ \hline
Sociopsychological \& Ethics & [233, 7, 234] \\ \hline
\end{table} TABLE VI: Sessions References
a user - represented by his/her avatar - with one or numerous others. These interactions can take the form of various virtual social encounters. Below, we name a few types:
* _Socializing_: Users in the Metaverse can socialize with each other in many ways, including chatting, forming groups, and attending virtual events and parties. Such a category is the main key to enabling most of the Metaverse applications.
* _Gaming_: Gaming is a popular form of user-to-user interaction in the Metaverse. As virtual reality games have already been embraced by users in the gaming world, it is very likely that these games will be integrated into the Metaverse to allow a smooth experience for Metaverse gamers. A variety of games will be offered through the Metaverse, ranging from first-person shooter (FPS) games to massively multiplayer online role-playing games (MMORPGs).
* _Virtual events_: Events and concerts can take place virtually without diminishing the user experience. Avatars can interact at virtual concerts, conferences, or exhibitions in the same manner they do physically. This can entail many advantages such as reducing costs and increasing user accessibility, in addition to allowing a customized experience for each user individually.
**AI**. AI is one of the main technological pillars that can enhance the user experience of user-user interaction. First of all, it enables a sophisticated personalization experience where avatars can be recommended virtual events or training sessions to attend according to their preferences. In addition, it can ease communication between linguistically diverse users. The authors in [235] expect that the AI technologies utilized in today's applications for automated translation (e.g., Facebook) can be adopted seamlessly in the Metaverse to facilitate communication. Some recent research took this aspect a step further by addressing the problem of socializing with people with hearing loss: the authors in [236] proposed that avatars can be trained to mimic the gestures of the speaking person using an AI system (Figure 18).
**Blockchain.** Furthermore, Blockchain can provide the Metaverse with diverse advantages in this component. Many works emphasize the importance of integrating Blockchain within the Metaverse to enable powerful features such as data security, privacy, and interoperability in multiple user interaction scenarios [12]. To demonstrate, Blockchain can help offer cross-game compatibility, where users can interact with other users while preserving their progress across games [237]. For instance, the work in [238] devised a framework that offers interoperability using Blockchain across multiple games and chains. The authors claim that their framework (Figure 19) is capable of facilitating next-generation Blockchain games. They emphasize deploying avatars on smart contracts to enable their growth as the game progresses, in addition to utilizing the Genesis interface to allow games to share the avatar while facilitating an asynchronous gaming experience.
**Networking.** The need for connectivity for interactions rises as the users connecting to the networks form a highly dynamic, distributed, and ultra-massive network [15]. Most MMORPGs and virtual events will require a solid networking infrastructure where users can feel the presence of others regardless of where they connect from. The state of the art emphasizes the role of these networking technologies in enabling an immersive experience for users through Beyond 5G and 6G. For instance,
Fig. 19: Interoperable Blockchain Gaming Framework Across Multiple Games and Chains [238]
Fig. 17: Metaverse User Interactions
Fig. 18: Avatar mimicking a letter [236]
Holographic Telepresence was addressed in several works, where researchers studied how a 6G latency of 0.1 ms, backed by terabits of bandwidth per second, is needed to enable such an experience [239].
**Computing.** Moreover, any social event in the Metaverse requires computing resources depending on the level of realism the application seeks. According to [240], the minimum requirements of most VR games are an Intel i5-4590 processor (or AMD Ryzen 5 1500X), 8 GB of RAM, and an Nvidia GeForce GTX 1060 (or AMD Radeon RX 400 series) in order to provide an adequate session in which users can interact.
**Business.** Virtual events are gaining a lot of attention lately, and companies are racing to attract viewers and boost their visibility. Many artists, singers, and performers are collaborating with VR companies to advertise online. For instance, one of the biggest virtual events, in 2019, was a concert inside Fortnite featuring the artist _Marshmello_ [241]. This virtual event gathered 10.7M players to see the artist, and the official recap on YouTube had acquired 62M views as of February 2023. It is worth mentioning that the event featured colorful effects and holograms that are not yet possible in our physical world (Figure 20). Moreover, the famous Canadian singer _Justin Bieber_ collaborated with the virtual entertainment company 'Wave' to perform his first live show as an avatar [242].
**Ethics & Social.** Many threats that already exist in our physical world are likely to carry over into the digital one, and threats resulting from interactions among people can be amplified in the Metaverse. Virtual stalking, for instance, is a violation of the user's personal life: stalkers with mental health disorders might track the victim's avatar to spy on them and collect data for malicious purposes. The effects of such crimes and harassment may be amplified in the physical world even when they take place in the Metaverse [1].
While a lot of research has been done on the previously mentioned enabling technologies, there is an urgent need to further study the relationship between user-to-user interactions and the social enablers, including users' privacy and security, and ethical and psychosocial aspects, as the current literature is limited. We address some of these issues in Section IX.
### _User-Business Interactions_
The second type of interaction we study as part of the Metaverse pipeline is User-Business Interactions. In this section, we describe the set of services offered by businesses to users represented as digital avatars in the Metaverse. For instance, a user can request to ride a car, purchase land or a house, buy clothes, ask for a real-time language translator, or attend universities and get trained. In the current literature, there are no prior attempts to categorize user-to-business interactions based on the types of applications. Therefore, we conducted a literature review to categorize the groups of applications tailored for offering services to avatars in the Metaverse based on the purpose of the applications and the underlying technology used. The resulting categories are (1) Acquisition, (2) Employment, (3) Education, and (4) Entertainment. In the sequel, we present the set of services provided in the Metaverse for each of these categories. Afterward, we describe a common set of enabling technologies that are shared across the different applications offered by businesses.
* _Acquisition:_ Purchase a property, such as land, building, house, apartment, car, clothes, etc., and convert it to a digital asset.
* _Employment_: Attend the virtual twin of the companies and perform the tasks required. From one perspective, the avatar is part of the business providing services to other avatars in the Metaverse. From another perspective, the avatar works for a company or opens a business that is handled by a service provider. In this case, the job of the service provider is to offer the infrastructure needed to manage a proper work environment such as handling tasks and organizing meetings, delivering supplies, and facilitating coordination and communications among employees and customers.
* _Education and training:_ Another form of user-business interaction will be offered for education and training purposes. Avatars will be able to attend virtual classes and training sessions. Furthermore, they can work cooperatively to resolve assignments and discuss projects.
* _Entertainment:_ Attend a sporting event, concert, casino, cinema, or theater virtually from the comfort and safety of one's own home.
Business applications simplify the experience and relieve virtual users from worries about the underlying complexity, trust, privacy, and security. Services offered by businesses to avatars are created, maintained, and optimized through a set of enabling technologies that are customized for the Metaverse environment.
**AI.** AI is at the core of most of the services provided by businesses in the Metaverse. The volume of data generated from the various user interactions in the Metaverse is enormous and can be employed to reinforce knowledge and enhance intelligence by personalizing experiences and services to improve user satisfaction. Such data is the target of many business owners, who aim to increase their market value by investing in the Metaverse and developing applications that exploit the richness of AI solutions. Digital Twin (DT) is one of the
Fig. 20: Marshmello’s Virtual Concert inside Fortnite
driving forces behind building applications in various Metaverse environments [243]. In this context, DT is used as a service offered by businesses. For instance, the authors in [244] and [245] provide an example of how to build a DT of a university inside the Metaverse. More precisely, the authors described existing efforts in the business sector to create virtual universities for improving learning quality and overcoming spatial and temporal limitations to perceive knowledge. Bright opportunities exist to improve students' immersiveness in learning through learning by doing, either through virtual simulations or a deeper view of their heritage and culture by visiting historical places and sites and traveling through time. Furthermore, the authors in [246] present a DT for building manufacturers and automated supply chains, which offer services for customers digitally and deliver the products to their homes. In terms of using AI for growing business and serving users, the possibilities are endless and include building virtual robots inside the Metaverse to serve virtual users, either by offering chatbots, real-time translation, tour guides, event management, and tasks automation on-demand [12].
**Blockchain.** Multiple businesses in the Metaverse attempt to use Blockchain as part of their services for transparent management and increased value by gaining the trust of users [247]. As part of the AI ecosystem, it is important to use trustworthy data, which can be guaranteed through the use of Blockchain [248]. An important aspect to offer a business to users is to allow them to pay for it. Through Blockchain, users can use the digital wallet of cryptocurrencies (e.g., Bitcoin, Ethereum, Dogecoin, etc) to pay for services and transfer funds safely and securely [249]. The Blockchain in the Metaverse is also used for holding the ownership of digital assets such as lands, houses, or other digital objects. The work in [250] studies the importance of using Blockchain in the Metaverse for businesses while focusing on the importance of cyberattacks detection and improvement of intra and inter-organizational communications among manufacturers through the Blockchain [251].
Moreover, businesses are more likely to target a variety of interconnected Metaverse environments, or parallel Metaverse, to offer their services. For the user's convenience, the same subscriptions, digital wallets, ownerships, and benefits should be shared across multiple environments. To this end, it is important to utilize the cross-chain concept to facilitate teleportation and maintain high standards of security and transparency. For instance, the authors in [126], propose the use of cross-chain-empowered federated learning solutions to improve the security and privacy in the industrial Metaverse.
**Networking.** The increasing volume of data generated through IoT devices and users' interactions with business services raises the need for faster network connectivity to support real-time application responses [141]. For instance, real-time digital asset rendering and physical-world synchronization for supporting DT require more network resources [11]. Furthermore, data aggregation for processing and conducting intelligent inference requires networking support, and it is vital to maintain seamless transitions or teleportation of the same service across multiple Metaverse environments when the user moves [154]. With the development of 5G, these network requirements are somewhat relieved, due to the higher internet speed and scalability offered compared to previous generations [252]. Since a high data rate for rendering is required, and computation is most likely not performed on the headsets or locally, the use of fog and edge computing becomes handy. Supporting immersiveness, immediacy, consistency, and reliable API requests between the users and applications is possible through 5G [253].
**Computing.** Computing requirements are tremendous in the Metaverse due to the need for data processing, AI solutions, XR, and content creation in general. Suitable computing infrastructures that businesses must adopt include cloud computing, fog and edge computing, and computing first network (CFN) [253]. While cloud computing offers massive computing power, allowing fast processing of large volumes of data and handling a vast number of API and service calls, networking delay is the main issue behind relying solely on cloud computing to support the Metaverse [16]. Consequently, edge and fog computing can complement cloud computing power by offering additional computing resources near the user, eliminating the issue of high delays for virtual rendering and real-time response [19]. However, fog and edge computing have limited computing power to support the Metaverse, especially when the number of users increases and interoperability across different environments is relied on more heavily. Finally, a business can resort to CFN, which improves computing and networking availability by leveraging more servers from within the edge computing layer but at different locations, thus empowering distributed edge computing [254].
**Privacy and Security.** Businesses must deal with malicious users attempting to steal users' data and attack services and applications in the Metaverse [1]. Privacy and security risks are connected to each of the enabling technologies supporting the Metaverse. Through attacks on authentication mechanisms, malicious users might attempt to impersonate an avatar and steal access credentials through simple techniques such as email phishing or by imitating voice, behavior, and appearance [231]. Because the huge volume of data from users' interactions is mainly transferred over the network for analysis and AI model updates, such data can be accessed by attackers to steal valuable information or tamper with it.
**Business.** Business in the virtual world has a different feel following all the advancements in technologies, most importantly AR, MR, and AI [255]. Marketing and branding can leverage AI with the help of the data generated from interactions and the existence of intelligent virtual assistants. To this end, more opportunities exist for businesses to scale and grow by offering users more flexibility to check products in AR mode. While the user is in a virtual shopping store, the computing machines in the background hold the history of data related to all interactions and the time spent looking at certain categories, allowing the
development of more robust marketing. All these factors lead businesses to invest millions of dollars in developing Metaverse applications to widen their market and benefit from the hype of virtual technology and intelligence [225].
**Ethical & Social.** Ethical and social aspects are of immense importance in the Metaverse, where businesses can develop and serve applications without taking precautions regarding the impacts on human lives [256]. A major issue is related to data collection, where concerns are raised about the amount of data collected, the sensitivity of such data, and the privacy implications [172]. Furthermore, aligned with the Metaverse objective of offering immersiveness and a near-real-life experience, some applications offered by businesses can have a negative impact on users' lives (e.g., the gaming industry). Furthermore, lack of awareness and cyberbullying are major concerns that can grow with the circumstances and freedom of behavior provided in the virtual world [257]. From another perspective, when utilized for the correct purposes, the Metaverse is a great opportunity for expanding businesses and providing opportunities for users to develop economies and earn a living by making money as they work [256].
The Metaverse offers a golden opportunity for businesses to grow and increase their profit and market value by embracing the new advancement in virtual technologies and intelligence. Despite the growing trend towards building more applications for serving users in the Metaverse, it is challenging to account for the concerns related to trusting AI, securing user data and transactions, mitigating security issues and vulnerability, and respecting ethical guidelines and social implications on human lives.
### _User-Object Interactions_
Finally, another type of interaction as part of the User Interactions layer of the Metaverse pipeline is User-Object Interactions. User-to-object interactions represent any sort of interaction where the avatar is in contact with virtual models or a digital environment in the Metaverse. These interactions can take various forms in the Metaverse, such as owning objects, using objects, and having DTs of IoT devices. Furthermore, object rendering and manipulation can be affected by the rendering features used and the types of applications, such as having non-static environments, a custom field of view or viewport, and offering advanced texturing and sensations. Users are assumed to be able to manipulate any of these digital objects as they do in the physical world in order to provide immersion. Below, we discuss these categories and highlight the research efforts toward enabling this type of interaction.
* _Owning objects_: This is where users can exclusively possess and control items in the Metaverse. Items may include real estate, cars, pictures, or cosmetics.
* _Using objects_: This category refers to the user's ability to manipulate an item's state by triggering a certain action, or to use the item for other purposes, such as relocating the user.
* _DT IoT devices_: Some devices integrated within the Metaverse can represent a real-world IoT device that could be controlled in a similar manner. Digital smartwatches are an example where a user can track their progress while completing a task (e.g., gaming, health-related, or work-related tasks) inside the Metaverse. Strong interaction occurs when such devices exhibit real-time manipulation on both physical and digital ends.
* _Custom field of view_: Personalized features and objects based on user preferences, such as manipulating the colors and shapes of surrounding objects to help colorblind users, or to adjust stress and depression levels.
* _Textures and sensations_: Capture tactile features to virtualize the sense of touch and play back sensations following the user's interaction with the object. An example is being able to sense the texture of clothes in fashion stores inside the Metaverse.
* _Non-static environments_: Environments that change over time as a result of object manipulation and user interaction. Specifically, the virtual world should resemble physical life; therefore, objects are affected by metabolism and metamorphosis depending on the environment and surrounding conditions.
**AI.** In the Metaverse, the environment is composed of virtual objects, that are either built manually using advanced visualization software or constructed using AI. Following the advancements in AI, it is now possible to build meshes of different objects in the Metaverse [12]. These objects (even human body parts) are rendered and manipulated by the users, thus the resulting shape, rotation, direction, and movement can also be handled using AI solutions through an automated and intelligent workflow. Nvidia proposed GANverse3D, which takes images of objects in the physical world and transforms them into 3D virtual objects with impressive accuracy [258]. There also exist other attempts to handle object manipulation through shape update and texture learning using generative models [259]. In the same context of object rendering and manipulation in the Metaverse, DT requires synchronization between the physical and virtual worlds depending on the action applied to the object in either direction. Using AI and DT, it is possible to achieve this synchronization through state analysis, movement prediction, task learning and autonomous completion, risk reduction, and predictive maintenance [260]. Furthermore, businesses focus on the development of DT tools and frameworks, which are empowered by AI for object modeling and simulation. For instance, DT is utilized in the manufacturing industry through the Siemens Digital Twin tool 76. This tool offers means for simulation, modeling, data analytics, and visualization. Similarly, Microsoft Azure Digital Twin77 offers a cloud-based DT creation and integration for object modeling and simulation. Besides, the Digital Twin software by GE 78 is another platform offering DT solutions for managing models of industrial assets.
Footnote 76: [https://www.plm.automation.siemens.com/global/en/our-story/glossary/digital-twin/24465](https://www.plm.automation.siemens.com/global/en/our-story/glossary/digital-twin/24465)
Footnote 77: [https://azure.microsoft.com/en-us/products/digital-twins/](https://azure.microsoft.com/en-us/products/digital-twins/)
**Blockchain.** Providing immutability is also one of the main pillars of a reliable environment. Blockchain is envisioned to be integrated within the core of the Metaverse, even at the level where users are interacting with objects. NFTs, on the one hand, are derived from smart contracts and appear as tradeable objects preserved on Ethereum. They offer immutability, decentralization, and interoperability, allowing objects to be utilized across platforms [25]. Furthermore, many concerns arose from the centralization of different objects, such as games and their rules, where privacy, latency, and rule manipulation were not accepted to be handled by central authorities. These concerns motivated the research community to investigate a decentralized computation and token management infrastructure empowered by Blockchain [261]. The authors also claimed that the proposed framework can give further consistency to the NFT trading system across players and games.
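As a simple illustration of how ownership of an NFT-backed object could be verified on-chain, the following Python sketch queries the standard ERC-721 `ownerOf` function through web3.py (assuming a recent web3.py release); the RPC endpoint, contract address, and token identifier are hypothetical placeholders, and a production system would add error handling and cross-chain support.

```python
# Minimal sketch: checking who owns an NFT-backed Metaverse object via the ERC-721 standard.
# The RPC URL, contract address, and token id are hypothetical placeholders (web3.py v6 assumed).
from web3 import Web3

RPC_URL = "https://mainnet.example-rpc.io"                      # hypothetical RPC endpoint
NFT_CONTRACT = "0x0000000000000000000000000000000000000001"     # placeholder contract address
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
contract = w3.eth.contract(address=Web3.to_checksum_address(NFT_CONTRACT), abi=ERC721_ABI)

def owner_of(token_id):
    """Return the address that currently owns the given virtual object (token)."""
    return contract.functions.ownerOf(token_id).call()

if __name__ == "__main__":
    print("Object #42 is owned by:", owner_of(42))
```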
**Networking.** Creating a seamless and integrated experience inside the Metaverse is a step toward achieving immersion. A DT of a certain object may truly reflect its real-world counterpart if it is digitally entangled. A user-to-object interaction provides a sense of sight and control to the digital replica of a certain IoT device. A DT network relies on advanced telecommunication technologies to achieve seamless connectivity among multiple objects and their digital replicas [262]. The entanglement of such devices must rely on several factors to operate properly. For instance, the physical device needs to be connected through either (1) Bluetooth 5, (2) Wi-Fi 6, or (3) LoRa in order to transmit data. In parallel, using high-speed network links (e.g., optical fiber) is essential, while relying on optimal data compression techniques in order to enable real-time data communication and reduce latency [263]. In such a manner, altering the state of the object, digitally or physically, may result in real-time changes to its counterpart. Figure 21 reflects how IoT devices can be connected through the network.
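To make the entanglement idea concrete, the sketch below shows one possible way a physical IoT device and its digital twin could exchange state over MQTT using the paho-mqtt client; the broker address and topic names are hypothetical, and the callbacks assume the paho-mqtt 1.x API rather than any specific DT framework.

```python
# Minimal sketch (assumes the paho-mqtt 1.x callback API); broker and topics are placeholders.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"                      # hypothetical MQTT broker
STATE_TOPIC = "metaverse/dt/smartwatch-42/state"        # physical device -> digital twin
COMMAND_TOPIC = "metaverse/dt/smartwatch-42/commands"   # digital twin -> physical device

# In-memory digital-twin state mirrored from the physical device.
twin_state = {"heart_rate": 0, "steps": 0}

def on_connect(client, userdata, flags, rc):
    # Listen for state updates from the physical device once connected.
    client.subscribe(STATE_TOPIC)
    # The virtual side can also push a command back to the physical device.
    client.publish(COMMAND_TOPIC, json.dumps({"vibrate": True}))

def on_message(client, userdata, msg):
    # Mirror every published reading into the virtual replica.
    twin_state.update(json.loads(msg.payload.decode("utf-8")))
    print("Twin state synchronized:", twin_state)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()  # blocking network loop; a real service would run this in its own thread
```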
**Computing.** Rendering more objects results in the need for more computing power to achieve real-time immersive interaction between users and the surrounding environment. The creation of the environment at first requires powerful machines to render the whole scene, which is usually done by cloud computing. Following user-to-object interactions, fog and edge computing can be used to offer sub-rendering as a consequence of object manipulation [264]. Furthermore, fog and edge computing can be used to empower personalized experiences, which require separate computing power for the user that usually resides nearby. There exist some efforts in the literature to pre-render scenes before users ask for them, such as loading the scene by predicting the next angle from the field of view or viewport the user would look at from the headset [116]. This mechanism can reduce the delay the user must wait for the environment to be rendered. In addition, considering different preferences or customized fields of view for users raises the burden on computing requirements to handle the increasing volume of requests [265]. Moreover, data generated following object manipulation, whether shared or owned by users, requires real-time ingestion and inference using AI. Hence, more computing and storage are needed to handle the data.
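A toy version of the viewport-prediction idea mentioned above is sketched below: the next head-yaw angle is linearly extrapolated from recent samples and the corresponding scene tile is prefetched. The tiling scheme and the prefetch call are illustrative placeholders, not an actual engine API.

```python
# Minimal sketch: linear extrapolation of head yaw to prefetch the next scene tile.
# Tile naming and the prefetch function are hypothetical placeholders.

def predict_next_yaw(yaw_history, dt=0.1):
    """Extrapolate the next yaw angle (degrees) from the two most recent samples."""
    if len(yaw_history) < 2:
        return yaw_history[-1]
    velocity = (yaw_history[-1] - yaw_history[-2]) / dt  # degrees per second
    return yaw_history[-1] + velocity * dt

def tile_for_yaw(yaw, tile_width=30):
    """Map a yaw angle to a coarse scene tile index (hypothetical tiling scheme)."""
    return int((yaw % 360) // tile_width)

prefetched = set()

def prefetch(tile):
    # Placeholder for an asynchronous fetch/render request to a nearby edge server.
    if tile not in prefetched:
        prefetched.add(tile)
        print(f"Prefetching tile {tile} before the user looks at it")

# Example usage with a short history of head-yaw samples (degrees).
history = [10.0, 14.0, 19.0]
prefetch(tile_for_yaw(predict_next_yaw(history)))
```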
**Business.** Trading virtual objects is something very common nowadays. There exist businesses that function on the idea of advertising and trading for NFTs which can vary from virtual real estate to in-game items. One of the well-known companies is _NBA Top Shot_79 which offers a platform to trade officially licensed highlights from the NBA. A lot of users are already supporting the NFT concept to the extent that some items are being sold for millions of dollars. As of 2021, an artist named _Pak_ sold his artwork for $91.8M, the most expensive NFT to ever be purchased [266].
Footnote 79: [https://nbatopshot.com/](https://nbatopshot.com/)
**Ethics & Social.** Users and businesses of the Metaverse should follow certain rules when it comes to object creation, rendering, and personalization. For instance, personalizing the experience for a user based on certain preferences could affect that person's personality and their demand for the same objects or experiences in real life [172]. If every user in the Metaverse has the power to control the objects, then there is no control over the whole environment due to the lack of proper policies. Proper control over the types of objects and the manipulation permitted should take place, which requires a careful study of the implications on the psychology of users in the long run [256].
**Security & Privacy.** Just as in real life, the Metaverse is assumed to offer freedom for interacting with objects while conserving the physical laws as much as possible. However, owned objects in the Metaverse should not be stolen or sabotaged by others. NFTs can empower such a feature while preserving the state of the object in case it is modified by an unauthorized user. Furthermore, Federated Learning can also be applied in this field to study how users react to object manipulation, and to extract knowledge from their sensors in a secure manner.
Fig. 21: Digitally entangled IoT device
## 9 Existing Challenges & Research Directions
The world is not yet ready for the full adoption of the Metaverse. The literature still requires a careful study of limitations and requirements to elevate the user experience and achieve the Metaverse objectives. Following our detailed literature review of each of its components, we study in this section the existing challenges facing academia and industry in realizing the Metaverse. In Figure 22, we summarize the list of challenges per component in the proposed pipeline and multi-layered Metaverse architecture.
For each set of challenges per component, we present a detailed study of potential impactful and effective research directions. Our presented directions are a result of a thorough study of the current demands to empower immersiveness and realism in the Metaverse ecosystem. In Figure 23, we summarize these directions.
In the sequel, each section contains a list of challenges followed by potential research directions.
### _Hardware & Equipment_
Despite the major benefits that hardware and equipment bring to end users, entities, and organizations, there are some challenges that need to be tackled in the future. One of those challenges is lowering the production costs of these devices so that they can be more affordable to the public, especially since Metaverse-related applications rely heavily on these devices to provide richer experiences to end users. Another challenge is to fit more computing and networking power, richer graphical capabilities, and more features into these devices while keeping the device size as compact as possible. Furthermore, existing hardware still faces many limitations in supporting technological advancements. These limitations include the field of view, rendering resolution, and battery life [267]. Some hardware also still lacks support for advanced features that empower immersiveness, such as eye tracking and haptic feedback capabilities. In terms of compatibility and cross-platform support, most hardware and equipment are not compatible when deployed to support cross-platform applications across different Metaverse environments. In addition, these devices require constant
Fig. 23: Future directions per component in the Metaverse ecosystem
Fig. 22: List of challenges per component in the Metaverse ecosystem
updates to cope with the technological and software-based advancements to adjust and upgrade their capacities. Such upgrades require constant maintenance and are expensive and time-consuming.
Existing Hardware and Equipment for supporting the creation, development, and deployment of the Metaverse are still in their infancy. Research is required to develop new display technologies, such as holographic and light-field displays, to improve realism and immersiveness using VR and AR technologies [268]. In addition, research on new technologies for building input devices is required, such as gloves or full-body suits. In addition, accessibility should be addressed by research on new techniques to support people with disabilities to use the Metaverse. This is possible through improving haptic feedback and utilizing eye-tracking technologies. It is also important to support voice recognition and control by improving its accuracy and responsiveness. Moreover, sensors on hardware and equipment require additional improvement to maintain stable accuracy by improving calibration algorithms and utilizing machine learning. Furthermore, the sensors' range and flexibility should be improved, which is important for motion-capturing devices. Range and flexibility can be increased by improving the speed and quality of the signal processing algorithms while incorporating and optimizing the use of additional sensors [269]. Concerning manufacturing, hardware and equipment production must adhere to sustainability and eco-friendly requirements. Moreover, the energy utilization of devices should be further studied by investigating the use of low-power components, such as processors and displays. It is also possible to reduce the energy usage on the devices by resorting to the cloud or fog for faster computing and developing power management software on these devices.
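As a minimal example of the data-driven sensor calibration mentioned above, the sketch below fits a linear gain-and-offset correction with least squares; the raw readings and reference values are illustrative numbers, and a real device would use richer models and periodic re-calibration.

```python
# Minimal sketch: fitting a linear calibration (gain and offset) for a drifting sensor
# using least squares; the readings and references below are illustrative values only.
import numpy as np

raw_readings = np.array([0.9, 2.1, 3.2, 3.9, 5.2])   # raw sensor output
reference    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # ground-truth values from a test rig

# Fit reference ~= gain * raw + offset.
gain, offset = np.polyfit(raw_readings, reference, deg=1)

def calibrate(raw):
    """Apply the learned linear correction to a raw sensor reading."""
    return gain * raw + offset

print(f"gain={gain:.3f}, offset={offset:.3f}, calibrated(3.2)={calibrate(3.2):.3f}")
```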
### _XR Frameworks_
Several challenges negatively affect the creation and adoption of XR frameworks. The use of these frameworks is constrained by the limited features and capabilities they offer. These limitations manifest as the inability to access and use the XR experience, since XR frameworks may not operate with all devices and operating systems, causing communication or interaction delays. In addition, the overall complexity of XR frameworks would make users less likely to use or interact with the XR experience. Furthermore, in terms of communication, the XR experience's reliance on networked communication to enable real-time interactions could impair its quality, restricting its ability to gather and analyze data in real time [270]. These challenges limit the possibility of integration between the XR frameworks and any of the pipeline components related to the Metaverse development ecosystem. This includes limited support for rendering and user interactions. In addition, applications offered or supported by these frameworks are exposed to various limitations affecting user immersiveness. For instance, existing XR frameworks have poor tracking and spatial understanding, leading to imperfect rendering and processing of the surroundings. On the other hand, XR frameworks are subject to performance issues leading to latency or drops in frame rates. This can be a result of poor development quality or the lack of compatibility across multiple hardware and equipment, platforms, and Metaverse environments. With regard to existing guidelines, there are no standards that can regulate the deployment, development, and compatibility of the XR frameworks, which may lead to fragmentation problems. Additionally, the XR frameworks may communicate private information, proprietary information, and location data across the network, exposing them to security risks, including hacking and data leaks, which may affect the user's confidence in using the XR experience. Finally, there is a challenging trade-off that existing XR frameworks should study, which is the balance between improving performance and realism. In other words, improving and optimizing the performance of these frameworks could reduce the immersiveness, quality of rendering, spatial understanding, and realism, and vice-versa.
In order to address the challenges discussed for the XR frameworks, several measures and potential directions can be taken to lessen them and suggest solutions that ensure a positive experience for both developers and end users. An important direction is to adopt standards and protocols ensuring that various XR frameworks and systems may communicate and interact with one another, through developing and creating open-source libraries or APIs. Another direction is to create XR experiences that are simple and intuitive for users to understand by incorporating user feedback into the design and development process. Furthermore, creating new networking protocols and technologies that are more effective and resistant to latency and bandwidth limitations could improve network performance [154]. Moreover, the XR framework should offer solutions for the increased complexity of supported applications through dynamic scaling approaches and improved compatibility measures. Besides, real-time movement and tracking synchronization are vital for supporting interoperability, which is possible by developing standards to be adopted across framework creators. In addition, the security and privacy of users' data should be maintained by implementing security mechanisms that limit adversaries' access, such as encryption and authentication procedures, and that ensure compliance with security legislation. Finally, AI models and algorithms should be created that are transparent and accountable, and that take diversity and inclusion into account during the data collection and training processes [271]. AI-based solutions should be developed to support social interactions by developing advanced natural language models and tools. Therefore, collaboration among specialists from different fields is vital in improving XR framework development and usage.
### _Platforms and MaaS_
As a result of the size and complexity of Metaverse Platforms, their deployment and development are being negatively impacted by a variety of factors. The interactions amongst avatars necessitate tight latency
and network dependability requirements. 6G is indeed a crucial deployment solution for the Metaverse [272]. However, the communication protocols and optimization techniques must be thoroughly explored to match the anticipated needs. When speaking of Metaverse platforms, enormous computing power and resources are necessary. The availability and efficiency of existing algorithms to compile such voluminous amounts of generated data are challenged by these factors. In addition, improvements in the performance of distributed computing are required to improve the operation and efficiency [273]. Besides, users will be subject to security and privacy risks concerning identity theft and cyberbullying within the platforms [274]. Moreover, a high degree of trust must be maintained between users and the platform to provide a positive and secure user experience. Numerous aspects emphasize the necessity for strong procedures to regulate the conduct of avatars in the Metaverse to prevent criminal and immoral activities, which is still missing and requires careful investigation. In terms of regulation, there is no content monitoring in existing platforms to avoid harmful or offensive behaviors in the Metaverse. Besides, there is a need for a monetization model that controls the platforms and MaaS providers. In this context, it is challenging to balance profitability with user experience and fairness. Finally, there is no support for interoperability between different Metaverse platforms [275]. In addition, the integration between various platforms within the same complex Metaverse environment is not yet supported.
The back end forming the backbone of Metaverse platforms and services requires flexibility and scalability in terms of computing and networking performance. Therefore, the Quality of Service (QoS) can be improved by integrating the use of cloud and fog computing that utilize containers and micro-services architectures for seamless on-demand deployment [276, 277]. Through a distributed on-demand fog and edge computing architecture, the computing and networking load between Metaverse users and the cloud is reduced, relaxing resource utilization. Furthermore, it is important to develop resource management solutions, ensuring effective and efficient dynamic and scalable service placement, host selection, and horizontal and vertical resource scalability [278, 279, 280]. Resource management for supporting the Metaverse requires proactive and demand-driven decisions. As a potential direction, Reinforcement Learning (RL) and reward shaping solutions can be developed on top of an environment modeling design to meet the resource management requirements for Metaverse platforms and MaaS. Moreover, networking protocols should be offered and adapted to the platforms and services to ensure packet delivery by developing advanced traffic engineering, network slicing, and quality-based routing mechanisms. Furthermore, reducing latency and improving content delivery through caching mechanisms are of immense importance. Therefore, additional resources must be dedicated to developing Content Delivery Network (CDN) solutions for faster and more reliable delivery, by caching the content closer to users [281]. Using CDN, requests are routed to the nearest server in the network that has a copy of the content. On another end, applications on platforms and algorithms running behind the services can be improved by utilizing Quantum Computing. Applications of Quantum Computing as tools to support the Metaverse platforms and MaaS developments include: (1) optimization towards rendering; (2) physics and environment simulation to study and improve the user experience; (3) advancement and security in computing and networking architectures; and (4) Quantum AI to optimize the machine learning solutions and autonomous agents for a more immersive experience. Moreover, providers of the Metaverse platforms and services should manage content creation and offer curation tools to find and share relevant content in a more user-friendly manner. This is possible by offering solutions for digital asset management, content discovery, social sharing, and collaborative curation. With regards to controlling user behaviors, research can focus on developing gamification solutions that can incentivize virtual users or avatars to promote positive behavior and engage in the environment. Furthermore, new research directions should focus on user identification and building trust in cross-platform environments, where users can move between platforms to immerse themselves in different experiences [1]. This is possible through (1) developing distributed cross-chain solutions, (2) utilizing a reputation mechanism, (3) utilizing multi-factor authentication (MFA), and (4) partnering with trusted identity providers, such as government agencies or banks.
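To illustrate how RL could drive scaling decisions in this setting, the sketch below uses tabular Q-learning over a discretized load signal to choose between adding, holding, or removing servers. The load dynamics and reward shaping are toy assumptions for illustration, not a model of a real Metaverse platform.

```python
# Minimal sketch of tabular Q-learning for horizontal scaling decisions driven by a coarse
# load level; the load model and reward shaping are illustrative assumptions only.
import random

ACTIONS = [-1, 0, 1]           # remove a server, do nothing, add a server
LOAD_LEVELS = range(5)         # discretized load: 0 (idle) .. 4 (overloaded)
q_table = {(s, a): 0.0 for s in LOAD_LEVELS for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def reward(load, servers):
    # Penalize overload heavily and idle servers mildly (hypothetical reward shaping).
    return -10.0 if load >= 4 else -0.5 * servers

def step(load, servers, action):
    # Toy dynamics: adding servers lowers the observed load level.
    servers = max(1, servers + action)
    load = max(0, min(4, random.randint(2, 4) - (servers - 1)))
    return load, servers

load, servers = 3, 1
for _ in range(2000):
    action = (random.choice(ACTIONS) if random.random() < epsilon
              else max(ACTIONS, key=lambda a: q_table[(load, a)]))
    next_load, servers = step(load, servers, action)
    best_next = max(q_table[(next_load, a)] for a in ACTIONS)
    q_table[(load, action)] += alpha * (
        reward(next_load, servers) + gamma * best_next - q_table[(load, action)]
    )
    load = next_load

print("Preferred action at high load:", max(ACTIONS, key=lambda a: q_table[(4, a)]))
```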
### _Avatars_
Various challenges are yet to be solved when considering avatar modeling for the Metaverse. For instance, realism is one of the primary goals of avatar creation, in which a regular, non-skilled Metaverse user expects a high-quality representation of their body, skin, face, and voice tone. However, until now, none of the mentioned avatar modeling approaches could generate a highly flexible and representative model of its users. Even though manual avatar creation can achieve some level of realism, it requires a fair amount of knowledge, time, and experience, which is absent for the average user. Creating avatars using AI techniques has the potential to produce realistic results, but it also presents several challenges that must be solved. Such techniques still need to be improved in critical areas when considering the interactive virtual environment expected from a Metaverse application [282]. Thus, we highlight two key challenges: realism and interaction. Realism is affected by the narrow shape variation that limits the creation of highly realistic avatars for users. It affects the accuracy of generating correctly shaped avatars, especially considering users' weight. In this regard, avatar creation models should be able to correctly represent their users while considering the limitations in terms of collected information. Moreover, facial reconstruction and expression are affected, and none of the available approaches can achieve realism when considering facial representations. Furthermore, these models are limited in their interaction capabilities, posing constraints on users' movements and engagement with the environment. Various approaches aim to solve a distinct aspect of each part of the model. However, such a domain still lacks a combinatory solution that integrates multiple models into one. Furthermore, the creation of animations
and lifelike expressions, such as realistic movements and emotions, are not yet supported in existing avatar creation and animation solutions, which limits the user experience and engagement with others.
With regards to cross-platforms and as a result of interpolation across environments, the support for diversity in the avatar representation is important and not yet supported. For instance, a diverse representation of identity depending on the culture is necessary, which requires a large set of libraries and assets to create the personalized model representation. Furthermore, moving from one platform or environment to another raises compatibility issues for the avatar creation or rendering tools, which has not been addressed in the literature. In the same context, synchronizing avatars' movements and interactions across platforms is needed. In addition to the technical challenges of creating tailored avatar models, security and ethics pose major concerns. For instance, creating and animating avatars often involves the collection and use of personal data, such as images, sensory data, and behaviors. Access to this sensitive information can make users hesitant to enter the Metaverse, as they may be concerned about the publicity of their raw data. Finally, from the social and ethical aspects, there is no study that presents effective prevention measures to avoid harmful avatar behaviors, hate speech, or virtual harassment [283].
In the future, it is expected that users might care more about their appearances in the Metaverse than about those in the real or physical world. The first impressions of users entering the Metaverse are driven by their look and appearance as avatars. Therefore, it is necessary to address the existing challenges related to avatar creation, movement, and management to enable the basic requirements for users as part of their Metaverse experience and onboarding. A research direction concerning avatar creation and modeling is to use machine learning generative models for more customizable and realistic avatars [284]. This includes learning models for creating faces, textures, clothing, and voice tone adjustments. Existing advancements in convolutional and recurrent neural network design are a promising start for building such solutions. Furthermore, 3D scanning of the physical appearance of the user can be used for realistic avatar creation [285]. The first step in 3D scanning for avatar creation is to scan the physical appearance using photogrammetry software or simply smartphones. The second step involves modeling, where scanned users are transformed into 3D digital representations subject to customization and adjustments. The last two steps include texturing (skin, hair, and clothes) and animation of the avatars to simulate realistic movements and interactions. Animation can be performed with the support of AI that utilizes motion capture to develop procedural animation systems, lip-syncing with mouth movements, and facial animation and expressions. In terms of animation, using advanced device sensors involves transmitting the sense of touch in addition to visual or audio feedback. Conveying such sensations requires further integration and support of haptic, tactile, and temperature sensors in the platforms. In addition, using biometric data, avatars become more realistic with automated personalization features [286]. For instance, facial features can be accurately represented on the avatar face by using face recognition mechanisms, which result in real-time face animation. Moreover, the user's voice can be analyzed for identity verification and for the development of self-learning and adaptive AI models that create avatars speaking like the real user while controlling expressions and mouth movements. Furthermore, biometric sensors integrated in input devices can be used to capture physical states or movements. For instance, reading the heart rate can be reflected in the avatar state in the Metaverse. In terms of security and privacy aspects, it is important for regulation on personal data acquisition to be updated accordingly. Additionally, applications should implement robust security measures to protect personal data, as well as be transparent about how the data is being used [287]. Finally, ethical and social implications must be carefully investigated through a set of rules and regulations governing avatar design and behaviors in the Metaverse.
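As a small illustration of reflecting biometric input in the avatar state, the sketch below maps a heart-rate sample to hypothetical animation parameters; the thresholds and field names are assumptions chosen for illustration only, not part of any existing avatar framework.

```python
# Minimal sketch: mapping a biometric reading (heart rate) to avatar animation parameters.
# Thresholds and avatar-state fields are illustrative assumptions.

def avatar_state_from_heart_rate(bpm):
    """Return hypothetical animation parameters derived from a heart-rate sample."""
    if bpm < 60:
        mood, breathing_rate = "calm", 0.2
    elif bpm < 100:
        mood, breathing_rate = "neutral", 0.5
    else:
        mood, breathing_rate = "excited", 0.9
    # Normalize bpm into a 0..1 intensity used to scale gesture animations.
    intensity = min(max((bpm - 40) / 140, 0.0), 1.0)
    return {"mood": mood, "breathing_rate": breathing_rate, "gesture_intensity": intensity}

print(avatar_state_from_heart_rate(72))    # resting user
print(avatar_state_from_heart_rate(130))   # user in an intense activity
```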
### _Rendering_
Despite all the benefits that rendering engines provide to users and Metaverse platform providers, there still exist challenges and problems related to performance, quality, and consistency. One of the main challenges related to rendering engine performance is the limited capability to render large and complex environments, such as large cities or natural landscapes, in real time [288]. Real-time rendering is essential for producing sensory images while forming a continuous flow rather than discrete events. Real-time rendering also entails the construction of 3D worlds that communicate with avatars, thus requiring the environment to quickly reflect the consequences of such interactions. Due to the complexity of existing rendering engines, latency increases, thus affecting the immersiveness and quality of experience for Metaverse users. In addition, there is a scalability problem. Numerous users will coexist in the virtual worlds of the Metaverse and interact with each other there. Real-time user interactions and the rendering of 3D environments will require computation-intensive calculations in addition to high-performance communication networks [19]. In the same context, rendering high-fidelity textures and assets without exceeding the computing and networking limitations of the infrastructure is challenging. Besides, simulating realistic lighting is challenging and computationally expensive. Lighting involves simulating the behavior of light when affected by different atmospheric conditions, requiring accurate simulations of materials and surfaces [289]. This increase in complexity also applies to simulating reflections, shadows, and physics in general, which are not fully supported in existing rendering engines. Therefore, there is a major challenge in balancing the trade-off of realism versus performance and scalability. Moreover, there is a challenge in achieving consistency in rendering across various platforms and applications, requiring careful integration and compatibility with other components of the development pipeline. Finally, there is a privacy challenge in the emergence of AR/VR rendering because data can be collected in new modalities. For instance, eye-tracking may
be captured when using HMDs. While this information may be crucial for enhancing the efficiency of rendering, companies can also utilize it to determine consumer attention spans to better promote their products.
To address the challenges posed by existing rendering engines in the context of the Metaverse, we present a list of directions that can improve the quality, speed, and performance of rendering solutions. To address the need for realistic rendering of light in the Metaverse, the ray tracing rendering technique can be used and augmented to support the high complexity of Metaverse environments. Ray tracing is computationally expensive but results in realistic light simulation by respecting the underlying physics of different materials and surfaces [290]. To address the issue of performance and increased delays in rendering engines for the Metaverse, research should focus on advancing existing hardware technology and consider performing distributed rendering. More specifically, multiple computing machines can be used to render a single scene, which helps reduce the amount of time and resources required for rendering. Another direction can be developed by relying on AI techniques to support scene rendering in the Metaverse [291, 292]. In this regard, AI can be used to optimize the rendering settings and resources required without sacrificing quality. Besides, AI and heuristics can be used to cache content for streaming by studying virtual user behavior and predicting the need for rendering, to act proactively and avoid latency. Furthermore, rendering engine performance can be improved by investigating the use of procedural generation in the context of the Metaverse, where content is generated on the fly [293]. For instance, procedural generation can be used to generate textures and models, which reduces the need for large storage or high computation power. Moreover, photogrammetry can be employed to create 3D models of objects with enhanced quality textures and details. As a result, effective resource allocation is necessary to maximize service delivery performance for a large number of users at the edge.
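To ground the ray-tracing discussion, the sketch below implements the basic ray-sphere intersection test that sits at the core of such renderers; the scene values are illustrative, and a complete engine would add shading, materials, reflections, and acceleration structures.

```python
# Minimal sketch of the core ray-tracing primitive: intersecting a ray with a sphere.
# Scene values are illustrative; this is not a full renderer.
import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None if the ray misses."""
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c              # direction is assumed to be unit length, so a = 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0      # nearest of the two intersection points
    return t if t > 0 else None

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, 1.0])   # already normalized
hit = intersect_sphere(origin, direction, np.array([0.0, 0.0, 5.0]), 1.0)
print("hit distance:", hit)             # expected: 4.0
```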
### _Sessions_
Despite the importance that sessions support the Metaverse providers and users, some problems related to session resource optimization, scalability, data collection, and usability still need to be addressed. In this regard, several Metaverse questions arise related to the session timing, resource allocation and release, and the time the data should be stored, used, and maintained on the servers. One of the main challenges affecting sessions and data management in the context of Metaverse is related to performance and scalability issues. Particularly, as millions of concurrent users join the same Metaverse environment, which already requires extensive resources concerning the prior pipeline components (e.g., rendering, XR frameworks), session management increases the burden on resource utilization, including computation and networking resources. Consequently, the infrastructure faces a constant increase in the volume of data and user traffic from environments as a result of a set of user interactions and activities. With the increase in the volume of generated data, another challenge arises related to data representation and modeling. Therefore, scalable, efficient, and effective storage, processing, and analysis mechanisms still need to be investigated, especially in the context of the Metaverse. In addition, Metaverse environments are dynamic and require fast adaptation to changes, thus a dynamic change in the session management mechanism to read, store, process, and analyze the information. Furthermore, the literature is still missing an effective data-sharing mechanism inside the Metaverse environment, where various entities cooperate to achieve a task.
The availability of several platforms that provide Metaverse services, and the possibility of moving between these platforms (interoperability), raises concerns related to the mechanism used to move data between sessions across different platforms [294]. In this context, cross-platform session management is challenging to manage and synchronize the session state in the different environments effectively and in real time. To this end, session management mechanisms must be compatible with different environments, platforms, and Metaverse services. Additionally, there is no common standardization or governance over the session management mechanisms in the Metaverse. For instance, there is no agreement on how data can be managed and stored in a single or cross-platform environment. However, users should be given control over their session data, in addition to requiring consent to access or analyze it.
Finally, session data storage and exchange are subject to increasing security and privacy risks in the Metaverse, which requires further investigation and elaborated protections and countermeasures.
In the context of sessions and data management for the complex Metaverse environment handling millions of concurrent users, some technological contributions need to be explored to address existing challenges. The first direction would include investigating a set of distributed data storage mechanisms for managing sessions in highly loaded environments. In this context, data will be stored across distributed servers to improve accessibility and redundancy. Consequently, data retrieval can be improved as a result of parallel queries from distributed storage. In this regard, effective distributed storage and retrieval mechanism should be investigated, where AI mechanisms can be effective. Solutions to achieve distributed storage are to use distributed (1) cloud computing, (2) edge and fog computing, (3) Content Delivery Networks (CDN), and (4) Blockchain-based storage. Cloud computing offers automatic scaling and load balancing of storage and data analysis compared to on-premise servers (i.e., edge and fog servers). Furthermore, edge and fog servers offer reduced network utilization and latency, as well as fast analysis and reduced loads on external servers. To this end, a hybrid architecture is most likely to be the effective approach; however, several proposals should be designed per solution to obtain its full potential (i.e., divide and conquer strategy). Additionally, CDN can be used to deliver session-related information to users to reduce latency and computation on the Metaverse infrastructure. Besides, Blockchain is a distributed ledger technology that uses distributed storage by
nature and requires a consensus mechanism for approving the addition of data in the form of transaction blocks. Thus, the use of Blockchain for managing sessions would improve distributed storage and retrieval, in addition to adding a robust security layer to protect user data [271]. The role of AI is major in managing the computing infrastructure of each of these solutions. In this regard, AI can be used for automatically managing resources on demand by studying the change in loads and acting proactively to assign storage and distribute analysis tasks while performing intelligent vertical and horizontal scaling and load balancing [278]. Furthermore, AI is the primary tool adopted to create personalized experiences by offering user sessions that adapt to user preferences. For instance, reinforcement learning and recurrent neural networks can be used to create adaptable models that study the change in preferences over time. Besides, serverless Metaverse services bring additional flexibility and performance improvement due to the ability to create and delete instances on the fly, assign additional storage capacity, and reduce the cost of management [295]. Hence, serverless computing and storage architectures are promising and should be properly investigated.
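One common building block for spreading session records across distributed storage nodes is consistent hashing, sketched below with a minimal ring of virtual nodes; the node names and virtual-node count are illustrative assumptions rather than a specific deployment.

```python
# Minimal sketch of consistent hashing to distribute session records across storage nodes;
# node names are placeholders and the virtual-node count is an illustrative choice.
import bisect
import hashlib

def _hash(key):
    return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, vnodes=100):
        # Each physical node is placed on the ring many times to smooth the distribution.
        self._ring = sorted((_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self._ring]

    def node_for(self, session_id):
        """Pick the storage node responsible for a given session identifier."""
        idx = bisect.bisect(self._keys, _hash(session_id)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["edge-1", "edge-2", "cloud-1"])   # hypothetical node names
print(ring.node_for("session-alice-0042"))
print(ring.node_for("session-bob-9001"))
```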
To further improve the efficiency and cost of data storage, data compression mechanisms can be employed [296]. Data compression, when effective, is guaranteed to reduce latency and improve the performance of data flow management and transmission. On another note, quantum computing combined with AI (i.e., Quantum AI) is promising to reduce and optimize the computation through faster processing of session data.
Finally, advanced networking architecture is required to support data management and transmission. Therefore, the 6G network perspective and contributions can be used to examine the resource heterogeneity problem between different devices, while supporting session synchronization and reliability.
### _User-to-User Interactions_
User-to-user interactions still lack the maturity to express straightforward guidelines on how to interact with others [2]. There are still many questions to be asked regarding this matter. First of all, uncontrolled behaviors of avatars inside the Metaverse are one of the main challenges, such as toxic behaviors or harassment. For example, stalking in real life can be facilitated, as the stalker would be able to collect information about the victim's daily life. On the other hand, digital stalking is a topic of no less importance. Digital stalkers might follow the victim's avatar and keep track of their virtual activities and behaviors. Furthermore, the concept of blacklisting people in the virtual world is important. Currently, there is no specific guideline for application providers to follow on how to restrict people from encountering others. Besides, some of the concerns are also relevant to Voice Propagation [205]. Specifically, voice propagated between avatar representations of users is subject to multiple existing issues, including (1) quality of audio, (2) latency, (3) spatial audio, and (4) accessibility. Spatial audio should allow the avatar to hear from the direction of the speaker. Furthermore, accessibility is not yet addressed in the literature, which would allow users with an impairment or disability to have access to this feature. Furthermore, voice propagation is subject to privacy and security concerns, as conversations might be interrupted by others. In terms of guidelines and regulations, there is no common scheme for defining the types of applications or interactions in the Metaverse, which should ensure meaningful interactions so that users benefit from their presence in the Metaverse. The quality of these interactions between users is not currently monitored by any tool or solution. Furthermore, cultural and language barriers must be avoided to expand the Metaverse's capabilities and overcome communication limitations. Furthermore, there is no existing mechanism for managing user disputes, allowing reporting and addressability. Finally, there is a major trade-off of freedom versus moderation and control, which should be heavily studied to achieve a well-balanced interaction between users.
To address the existing challenges entailed by user-to-user interactions in the Metaverse, there is a need for collaboration among multiple companies and regulators. Furthermore, efforts are required to state the guidelines for such a type of interaction, especially when hosting an environment for multicultural users. Empowering social presence among avatars in the Metaverse is of immense importance for achieving immersiveness and engagement in the virtual space. Social presence can be achieved in different ways. One of them is implementing spatial audio to improve interactions and create a sense of presence [297]. There are several steps to implement spatial audio, which include (1) capturing data, (2) creating a virtual audio environment, (3) assigning audio sources, (4) simulating sound propagation, and (5) rendering the audio stream in the virtual space. In addition, social network analysis and optimization can be achieved using AI technologies for learning from and improving past and ongoing users' interactions in the environments, thus leading to an increased quality of experience [298]. Furthermore, utilizing avatars that mimic emotions and expressions enriches the social presence experience for Metaverse users. Another important research direction is to investigate non-verbal communication techniques, such as text or sign language, for voice propagation accessibility.
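A highly simplified form of the spatial-audio pipeline described above is sketched below, combining inverse-distance attenuation with stereo panning from the speaker's bearing; the geometry convention and the attenuation model are illustrative assumptions rather than a production audio engine.

```python
# Minimal sketch: distance attenuation and stereo panning for spatial voice rendering.
# Geometry convention and attenuation model are simplified illustrative assumptions.
import math

def spatialize(listener_pos, listener_yaw, speaker_pos):
    """Return (left_gain, right_gain) for a speaking avatar relative to the listener.

    Convention assumed here: yaw 0 means the listener faces +y, and +x is to their right.
    """
    dx = speaker_pos[0] - listener_pos[0]
    dy = speaker_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dy)
    gain = 1.0 / (1.0 + distance)                   # simple inverse-distance attenuation
    bearing = math.atan2(dx, dy) - listener_yaw     # angle of the speaker from straight ahead
    pan = math.sin(bearing)                         # -1 = hard left, +1 = hard right
    left = gain * (1.0 - pan) / 2.0
    right = gain * (1.0 + pan) / 2.0
    return left, right

# A speaker standing two meters to the listener's right is panned almost fully right.
print(spatialize((0.0, 0.0), 0.0, (2.0, 0.0)))
```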
Tracking and sensing technologies can also improve user-user interactions and immersiveness by achieving accurate and responsive rendering feedback during communications. Investigating advanced tracking and sensing technologies leads to improving body tracking, gesture recognition, eye tracking, and haptic feedback.
Another method to improve user-to-user interactions is by creating virtual agents or assistants and utilizing natural language generation (NLG) [299]. Specifically, virtual assistants support avatars and guide them to navigate the Metaverse and perform actions. NLG and agents can also be used for real-time translation to overcome the barriers of multiple cultures and languages. NLG can be used to generate natural-sounding speech based on the interaction context, thus empowering more engagement and dynamicity.
### _User-to-Business Interactions_
Businesses face challenges concerning interactions occurring in the Metaverse following the services and applications
offered to users in the virtual space [300]. First, the computing and networking infrastructure is limited per the Metaverse environment. With many businesses sharing the same environment, it is challenging to assign and scale computing resources for each application per business while considering the fairness factor. Besides, the marketplace composed of businesses should be trustworthy for customers to ensure smooth buy and sell operations. In particular, applications should be tested for various issues that may arise from using AI, Blockchain, overloading networking, and computing resources, and most importantly, ensuring the maximum possible protection against attacks and malicious users. Furthermore, no solution can let businesses verify users' identities for building trust during business interactions. Moreover, in virtual economies, there is no mechanism that ensures a fair and transparent pricing system and currency exchange rates. Besides, one of the challenges facing user-to-business interaction is the lack of a reliable and efficient payment processing mechanism, including a transaction tracking and management lifecycle to support users in receiving a higher quality of experience from businesses in the Metaverse.
Every business must account for being responsible for damaging or exposing user information and details, by following guidelines to make sure that the minimum requirements are met before deployment in the Metaverse. Henceforth, one of the main challenges arising from this type of interaction is to achieve and control a balance between protecting user privacy and security versus the need for personalized business services and targeted advertisements for promoting products [301]. No single entity owns the Metaverse; however, multiple mini-environments are more likely to be developed for different experiences, where multiple businesses can share applications. To this end, there are currently no guidelines for businesses to follow when deciding to join the Metaverse to offer their customers an immersive virtual experience by promoting their products.
Following the list of challenges facing user-to-business interactions, we present a list of research directions that can potentially address some of these challenges. Starting with the infrastructure limitations, there is no clear mechanism for identifying the available computing and networking infrastructure and how it should be used by each business. Henceforth, a clear study of the management and scheduling of such resources is essential, while considering fairness and trust among applications and customers. Furthermore, the Metaverse is hungry for resources, thus providing fair allocation and innovative coordination between the physical and virtual worlds for sharing network and computing resources is necessary. More specifically, computing resources should be allocated fairly to cope with the increase in demands as users start relying more on the Metaverse to engage socially and perform daily tasks [277]. Moreover, the use of cloud, fog, edge, and CDN resources and technologies would essentially improve the support for businesses in the Metaverse. Through business-to-business collaboration, various challenges can be conquered.
From a user perspective, a research direction should consider building and managing the virtual economy of businesses, as well as transactions in the Metaverse [302]. One step is to motivate the creation of virtual showrooms for shopping, browsing virtual products, and hosting events for brand management. Furthermore, virtual assistants can be created through the help of AI with customized user experience to help create a trusted bond and relationship between customers and businesses [303]. Those assistants can also be used as customer support agents for managing users' complaints. Virtual marketing and immersive advertisement strategies are also important for supporting virtual businesses and building a powerful economy. This also entails using the Metaverse for product design and prototyping [304]. Powerful rendering engines are also promising for managing the complete supply chain in the Metaverse, which has a positive impact on businesses. For instance, supply chain management in the Metaverse can include tracking inventory and performing required logistics in real-time. Furthermore, the Metaverse can help businesses in expediting training and onboarding sessions for new employees while facilitating team building and enabling remote work opportunities [305].
On another note, the use of Blockchain technology by businesses empowers secure and transparent digital commerce to motivate the actions of buying, selling, and trading virtual goods. Moreover, interoperability raises additional challenges, including ways to teleport required data across various environments, establish reliable connectivity with distant servers, monitor and maintain access, and standards for financial exchange. In this regard, the Blockchain can be used to address some of the interoperability challenges by employing a cross-chain technology with a consortium Blockchain between partners.
### _User-to-Object Interactions_
This type of interaction entails challenges that could hinder the user experience, including reliability, response time, accessibility, interoperability, and security. First, user-object interactions increase the demand on the computing and networking resources of the Metaverse. As the number of users inside the same environment rises, interactions with objects increase while still expecting a reliable and real-time response. Due to the increase in resource demands, the underlying infrastructure can become overloaded and might not account for all the requests, leading to service unavailability and a degraded quality of experience. Furthermore, users might request personalized experiences that will increase the load of rendering separate objects and sub-environments per single user or group of users. Besides, the issue of scalability and increased resource usage arises when creating a DT of objects inside the Metaverse to replicate real-world objects in real time and vice-versa [306]. Examples of existing DT frameworks and tools from the industry are the 3D Experience platform by Dassault Systèmes 80 and Ansys Twin Builder 81, which help in creating and managing DTs through data modeling, simulations, and data analysis. In this scenario
and following the features offered by these frameworks, DT requires an increase in network usage to read and transfer data between the virtual and physical worlds in real time, in addition to the computational resources needed to maintain a real-time rendering scheme for the object.
Due to the immersiveness requirement, manipulating objects should be a natural and intuitive action, such as picking up or moving objects, which requires additional research efforts in the computer graphics field [307]. Similarly, object physics and interactions should be natural and close to real life, which is also challenging to achieve, especially across multiple platforms. In many environments, achieving consistent object manipulation is challenging. Furthermore, object interaction should be supported for different types, sizes, or weights of objects forming an interactive environment. In terms of hardware support and sensors, latency in input devices and a decreased accuracy of object tracking degrade the system performance, especially as the number of concurrent interactions increases. In addition, there is a need for a mechanism for managing ownership and sharing objects in collaborative settings. To this end, there is a challenging balance between guaranteeing freedom for users with regard to interactions with objects, and the increase in control with additional limiting guidelines. Achieving a balance for this trade-off is problematic.
On another end, there will be various Metaverse owners, which leads to the problem of interoperability [294]. With regards to user-object interactions, owning digital assets raises security and theft concerns due to their exposure to the public internet and management by various operators. Furthermore, sharing the owned objects across environments is not straightforward and requires some guidelines and regulations. In the same context, guaranteeing accessibility to objects and features by anyone and everyone is mandatory, mandating a set of rules and regulations which have not yet been studied and investigated.
Promising potential solutions to the existing limitations resulting from user-to-object interactions are manifold. First, developing on-demand fog and edge computing layers next to users leads to rendering real-time personalized experiences, thus accounting for DT requirements. Consequently, fewer network and computing resources are consumed by transferring the load to neighboring servers. Furthermore, it is essential to integrate advanced, fast, and scalable AI techniques to decide on the right time and place to deploy the on-demand fog servers and perform the required resource scheduling [278, 308]. In addition, AI solutions can advance rendering capabilities by pre-rendering interactions with objects, studying users' behaviors, and predicting the next needed object manipulation inside the Metaverse. Besides resource management and proactive decision-making, AI has great potential in providing assistance for users to facilitate user-to-object interactions. This includes the role of AI in processing voice commands and providing personalized interaction experiences. Furthermore, an improvement in sensory technology combined with the power of AI can improve the accuracy of object tracking to reduce latency in manipulation. Furthermore, combined with Blockchain, AI can be used to create a smart distributed ownership management system, adding an additional layer of security and a transaction management mechanism [271]. Additionally, utilizing swarm robots with AI is another promising solution for elevating interactions with objects, where robots can collaborate to process and handle users' commands and feedback. Furthermore, gamification can be utilized in synergy with AI in order to motivate users through rewards to interact more with their surroundings, thus increasing engagement inside the Metaverse.
Achieving real-time synchronization of objects and environment manipulation between the virtual and physical worlds, in addition to studying the surroundings in the case of augmented reality, requires additional research. Potential research directions consist of developing mechanisms for (1) accurate tracking, (2) real-time mapping, and (3) dynamic object placement [309]. For advancing tracking accuracy, techniques such as computer vision and sensor fusion can be used. Moreover, real-time mapping can be achieved using advanced sensors that can capture detailed information about the environment, including LiDAR and depth cameras.
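As a minimal example of sensor fusion for tracking, the sketch below uses a complementary filter to blend gyroscope and accelerometer readings into a single tilt estimate; the sample values and blend factor are illustrative assumptions, and real trackers typically fuse more axes and sensors.

```python
# Minimal sketch of a complementary filter fusing gyroscope and accelerometer readings
# to track a single tilt angle; sample data and the blend factor are illustrative.
import math

def complementary_filter(angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Blend the integrated gyro rate (short-term) with an accelerometer tilt estimate (long-term)."""
    gyro_angle = angle + gyro_rate * dt                         # integrate angular velocity
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))    # gravity-based tilt estimate
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle

angle = 0.0
samples = [   # (gyro deg/s, accel_x in g, accel_z in g) -- illustrative readings
    (5.0, 0.02, 0.99),
    (5.0, 0.05, 0.99),
    (0.0, 0.09, 0.99),
]
for gyro, ax, az in samples:
    angle = complementary_filter(angle, gyro, ax, az, dt=0.01)
print(f"estimated tilt: {angle:.3f} degrees")
```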
Another promising track of research is the use of generative design to enhance user-to-object interaction [310, 311]. Using the generative design, the following features are guaranteed: (1) customization, (2) efficiency, (3) realism, and (4) innovation. In particular, generative models empower personalized user experiences by providing a tool for customizing objects in relation to preferences. Besides, efficiency is guaranteed by optimizing the design of the objects in the Metaverse, leading to less resource utilization for rendering and post-interaction manipulation. In addition, generative models enable realistic object design with an increased margin of innovation by providing a wider range of implementing designs. Therefore, generative design is promising and can push the boundaries of efficiency, realism, and innovation.
Furthermore, it is essential to integrate the use of Blockchain, integrated and shared across Metaverse environments, where users can easily share and transfer object ownership (e.g., using the cross-chain concept). Furthermore, a set of regulations and guidelines must be outlined, shared, and adopted by all Metaverse owners or managers to make sure that all objects, features, interactions, and services are available to everyone and not limited to a group of users.
Finally, with the increase in diversity and need for additional object designs, it is essential to share a public repository including large libraries of design and manipulation options. Moreover, the Metaverse can be used to provide training for new users joining and teach them about the existing types of interactions and experiences.
## 10 Conclusion
The Metaverse is a virtual space that allows people to become masters of their own worlds and businesses, giving them the chance to embark on new potentials and widen their visions. In this survey, we present a detailed study of the Metaverse development ecosystem by devising a novel multi-layered
pipeline ecosystem composed of: (1) Infrastructure, (2) Environment Digitization, and (3) User Interaction. Furthermore, we present a multi-level classification of each component within each layer of the pipeline with respect to a set of the most recent state-of-the-art literature contributing to its development and success. The Infrastructure layer of the pipeline comprises a study of Hardware, XR Frameworks, and Platforms and MaaS. With regard to Environment Digitization, the components are as follows: Avatar Modeling, Rendering, and Session Management. Finally, the User Interaction layer is the application layer of the Metaverse where users can interact with their surroundings; it is composed of User-User Interaction, User-Business Interaction, and User-Object Interaction. The advancements within each component are investigated against a set of technological and social enablers, including AI, Blockchain, computing, networking and communications, privacy & security, ethics, sociopsychology, and business aspects. Furthermore, our survey presents a list of existing challenges behind each component, followed by desirable criteria that help the research community devise academic and business-oriented solutions.
We believe that this survey is the first to comprehensively present the full development ecosystem of the Metaverse against a set of technologies, empowering domains, and social enablers for both academic and business aspects. In addition, our survey is the first to devise future directions for each challenge entailed by the components of the Metaverse ecosystem.
|
2301.06975 | Vision Based Machine Learning Algorithms for Out-of-Distribution
Generalisation | There are many computer vision applications including object segmentation,
classification, object detection, and reconstruction for which machine learning
(ML) shows state-of-the-art performance. Nowadays, we can build ML tools for
such applications with real-world accuracy. However, each tool works well
within the domain in which it has been trained and developed. Often, when we
train a model on a dataset in one specific domain and test on another unseen
domain known as an out of distribution (OOD) dataset, models or ML tools show a
decrease in performance. For instance, when we train a simple classifier on
real-world images and apply that model on the same classes but with a different
domain like cartoons, paintings or sketches then the performance of ML tools
disappoints. This presents serious challenges of domain generalisation (DG),
domain adaptation (DA), and domain shifting. To enhance the power of ML tools,
we can rebuild and retrain models from scratch or we can perform transfer
learning. In this paper, we present a comparison study between vision-based
technologies for domain-specific and domain-generalised methods. In this
research we highlight that simple convolutional neural network (CNN) based deep
learning methods perform poorly when they have to tackle domain shifting.
Experiments are conducted on two popular vision-based benchmarks, PACS and
Office-Home. We introduce an implementation pipeline for domain generalisation
methods and conventional deep learning models. The outcome confirms that
CNN-based deep learning models show poor generalisation compare to other
extensive methods. | Hamza Riaz, Alan F. Smeaton | 2023-01-17T15:58:29Z | http://arxiv.org/abs/2301.06975v1 | # Vision Based Machine Learning Algorithms for Out-of-Distribution Generalisation
###### Abstract
There are many computer vision applications including object segmentation, classification, object detection, and reconstruction for which machine learning (ML) shows state-of-the-art performance. Nowadays, we can build ML tools for such applications with real-world accuracy. However, each tool works well within the domain in which it has been trained and developed. Often, when we train a model on a dataset in one specific domain and test on another unseen domain known as an out of distribution (OOD) dataset, models or ML tools show a decrease in performance. For instance, when we train a simple classifier on real-world images and apply that model on the same classes but with a different domain like cartoons, paintings or sketches then the performance of ML tools disappoints. This presents serious challenges of domain generalisation (DG), domain adaptation (DA), and domain shifting. To enhance the power of ML tools, we can rebuild and retrain models from scratch or we can perform transfer learning. In this paper, we present a comparison study between vision-based technologies for domain-specific and domain-generalised methods. In this research we highlight that simple convolutional neural network (CNN) based deep learning methods perform poorly when they have to tackle domain shifting. Experiments are conducted on two popular vision-based benchmarks, PACS and Office-Home. We introduce an implementation pipeline for domain generalisation methods and conventional deep learning models. The outcome confirms that CNN-based deep learning models show poor generalisation compare to other extensive methods.
Keywords:Vision Machine Learning, Domain Generalisation, Domain Adaptation, Domain Shifting, Domain Specific Learning
## 1 Introduction
The field of machine learning (ML) has created tremendous success stories by solving many complex problems like object classification, detection, segmentation and reconstruction in videos, natural language processing (NLP), medical image analysis, robotics, and many more. These developments in ML algorithms and databases, and the fusion of various fields of ML help researchers achieve
high-level goals. The majority of current applications are built on what we call traditional ML where we usually have millions of example datapoints with labels to train a model under supervised learning (SL).
Since 2011, deep learning (DL), a sub-domain of ML that automatically extracts features from data, has been used to handle various supervised and unsupervised learning problems, as described in [1]. However, pure DL-based models are static systems with several known problems: they tend to overfit, commonly require huge datasets, inherit data biases, and show limited potential for generalisation and domain adaptation [2].
Previous works illustrate that ML tools frequently fail to generalise when processing out of distribution (OOD) data. The main reasons for wanting to design and analyse such generalised tools are applications like vision-based autonomous systems and medical imaging [2]. For example, when only a few conditions change during inference in image processing, such as light variations, shapes, locations, or the pose of objects, models perform poorly because they did not encounter similar variations during training and thus did not learn how to perform under such unpredictable circumstances [3, 4, 5]. The work in [6] describes the collapse of ML tools on OOD data, which happens when ML models learn spurious correlations instead of capturing the real factors behind such variations in data. These spurious correlations can be racial biases, texture statistics, and object backgrounds.
Many methods have been developed in the literature to tackle domain generalisation. For instance, one of the first solutions tried was to increase the size of the training dataset with the same tasks but in different environments. The goal of a domain generalisation algorithm is to learn invariances and features that hold across all possible domain shifts.
Before we come to the contributions of this article, it is important to understand the difference between domain generalisation and domain adaptation. When algorithms process samples of data from different distributions, ML algorithms suffer from a common problem called domain shift. This introduces two further major issues, namely domain generalisation (DG) and domain adaptation (DA). DG deals with the comparatively hard situation where several different but related domains are given, and the purpose of a machine learning algorithm is to learn a model which can generalise to unseen test data. The main goal of DG is to learn a representation of the training data that has the potential to perform well in unseen domains by leveraging multiple source domains during training.
The idea behind DA is different from DG in that DA aims to maximise the performance of algorithms or models on a given target domain using existing training source domains. The main difference between DA and DG is that DA has access to the target domain data, which implies that it can see that data, while DG cannot see anything from the target domain during training. This makes DG more challenging than DA, but more realistic and more favourable in practical applications. There are many generalisation-related research topics or solutions such as domain adaptation, meta-learning, transfer learning, covariate shift, lifelong learning, and zero-shot learning.
This article presents a direct comparison study between domain-specific and domain-generalised methods for vision-based applications, especially classification. To achieve our goal, we have implemented a pipeline with 9 well-known domain generalised algorithms and 7 domain-specific models. The comparison study is conducted on two popular benchmarks, namely PACS and Office-Home, from which some sample images are shown in Figure 1. We trained and tested the 16 models by using fine-tuning. Our research shows the learning curves of the methods for both benchmarks. The results section considers accuracy as a measure of generalisation for supervised learning benchmarks.
Figure 1: Sample images of the same classes across all domains in the PACS and Home-Office datasets.
The rest of the article is structured as follows: Section 2 covers related work. Section 3 introduces brief details of the algorithms we use. Section 4 describes the benchmarks. Section 5 describes our proposed method, including the experiments and the formulation of DG. Results are presented in Section 6 and discussed in Section 7, while Section 8 presents conclusions and future work.
## 2 Related Work
To handle domain generalisation, a few methods have been proposed to select hyperparameters so that a model can maximise its performance on OOD data [8]. The parameters are updated with respect to a function which calculates the relatedness between the different domains. Similarly, in [9] the authors illustrate ways to select models based on an algorithm-specific regularisation. These previous articles address a specific kind of problem only, whereas the scope of this article is to analyse vision-based domain generalisation across a variety of up-to-date methods.
In the literature, domain generalisation has many available algorithms, which can be classified into data manipulation, representation learning and learning-strategy based algorithms [7]. As noted in [7], data generation and adversarial training are also common ways to optimise ML tools for OOD. In this regard, the well-known ImageNet challenge [10] has also been extended with different kinds of OOD settings, where authors created new benchmarks and then tried to solve them with new approaches [11, 12, 13]. Nevertheless, these methods did not consider solving domain generalisation in vision-based applications with the most up-to-date approaches as explained in Domainbed [6]. Furthermore, [11, 12, 13] introduce variations of the ImageNet challenge for the domain generalisation paradigm and try a few domain-specific methods, but they leave out domain generalisation frameworks entirely. Table 1 provides a clear picture in this regard.
The most related state-of-the-art research papers are [6] and [7]. In [6], the authors implemented a framework named Domainbed which supports various domain generalised methods for analysing vision-based domain generalisation. Moreover, Domainbed offers variety in model selection, training schemes and hyperparameters, using ResNet as the backbone model for domain generalisation frameworks. The analysis provided by the authors of these papers was insightful. However, [6] does not include domain-specific methods and does not provide training and inference analysis for domain-specific models.
Similarly, [7] conducted the same kind of study but improves on the limitations of Domainbed in implementation and coding flexibility. However, neither of these works discussed the effect or performance of traditional deep learning methods for OOD generalisation, which is also the scope of our research. In this paper, we perform an analysis of typical deep learning methods and then compare their performance with domain generalised methods. Table 1 therefore describes the gaps in other works and the improvements contributed by our article.
## 3 Implemented Algorithms
Our paper uses an implementation of 16 algorithms in total, including 9 domain generalised and 7 conventional deep learning. This section briefly enumerates each of them.
### Up-to-Date Algorithms for Vision-Based Generalisation
* Empirical risk minimisation (ERM) is a simple method which actually minimises the total sum of all errors in the given domains [6].
* Group distributionally robust optimisation (DRO) [14] also performs ERM but puts more weight on the domains with larger errors; it can be seen as an extension of simple ERM (a minimal sketch contrasting it with plain ERM is given after this list).
* Inter-domain mixup (Mixup) [15, 16] uses ERM on linear interpolations of data in domains.
* Domain adversarial neural networks (DANN) [17] explore features with distribution matching in the external domains.
* Class-conditional DANN also known as C-DANN [18], is the extension of DANN and instead of matching features in the data distributions, it matches conditional distributions across the domains and their respective data labels.
* Deep CORAL [19] utilises the matching between the mean and covariance of features across the distributions.
* Maximum mean discrepancy (MMD) [20] measures the alignment of distributions across all the domains and uses adversarial feature learning to match the aligned distributions.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Articles & Model & Training & Multiple & Domain Gen- & Domain \\ & Selection & Framework & Datasets & realisation & Specific \\ & & Selection & & Frameworks & Frameworks \\ \hline Gulrajani and Lopez-Paz [6] & ✓ & ✓ & ✓ & ✓ & ✗ \\ Wang _et al._[7] & ✓ & ✓ & ✓ & ✓ & ✗ \\ Hendrycks _et al._[11] & ✓ & ✗ & ✓ & ✗ & ✗ \\ Hendrycks and Dietterich [12] & ✓ & ✗ & ✗ & ✗ & ✓ \\ Hendrycks _et al._[13] & ✓ & ✓ & ✓ & ✓ & ✓ \\ Ours & ✓ & ✓ & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison with previous articles for domain generalisation and domain specific methods, where ✓ indicates that an article has the details and ✗ means the article is missing that point
* Invariant risk minimisation (IRM) [21] learns a linear classifier on top of representation matching techniques.
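To make the first two variants concrete, the following minimal PyTorch-style sketch (our own illustration, not the authors' released code) computes one loss per source domain and combines them either by plain averaging (ERM) or by exponentially up-weighting the domains with larger error (GroupDRO); the step size `eta`, the batch structure, and the function names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def erm_loss(model, domain_batches):
    # Plain ERM: average the classification loss over all source domains equally.
    losses = [F.cross_entropy(model(x), y) for x, y in domain_batches]
    return torch.stack(losses).mean()

class GroupDROLoss:
    """Minimal sketch of GroupDRO: keep one weight per source domain and
    up-weight the domains that currently have larger loss."""

    def __init__(self, num_domains, eta=0.01):
        self.q = torch.ones(num_domains) / num_domains  # per-domain weights
        self.eta = eta                                   # weight step size (assumed value)

    def __call__(self, model, domain_batches):
        losses = torch.stack([F.cross_entropy(model(x), y) for x, y in domain_batches])
        # Multiplicative update: domains with larger loss receive larger weight.
        self.q = self.q.to(losses.device)
        self.q = self.q * torch.exp(self.eta * losses.detach())
        self.q = self.q / self.q.sum()
        return (self.q * losses).sum()
```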
### Conventional Deep Learning Algorithms for Vision-Based Applications
We implemented 7 well-known deep learning networks including AlexNet, VGGNet16, ResNet18, ResNet50, InceptionV3, DenseNet, and SqueezeNet, each of which is covered in an overview article [22]. These networks are popular in computer vision applications, so we omit further details on them.
## 4 Benchmarks Used
To carry out experiments on domain generalisation, we consider only vision-based benchmarks.
\begin{table}
\begin{tabular}{l c c c c} \hline
**Datasets** & _Domains_ & _Classes_ & _Samples_ & _Descriptions_ \\ \hline Office-Caltech & 4 & 10 & 2,533 & Caltech, Amazon, Webcam, DSLR \\ Office-31 & 3 & 32 & 4,110 & Amazon, Webcam, DSLR \\ PACS & 4 & 7 & 9,991 & Art, Cartoon, Photos, Sketches \\ VLCS & 4 & 5 & 10,729 & Caltech101, LabelMe, SUN09, VOC2007 \\ Office-Home & 4 & 65 & 15,588 & Art, Clipart, Product, Real World \\ Terra Incognita & 4 & 10 & 24,788 & Wild animal images recorded at four different locations L100, L38, L43, L46 \\ Rotated MNIST & 6 & 10 & 70,000 & Rotated handwritten digits \\ DomainNet & 6 & 345 & 586,575 & Clipart, Infograph, Painting, Quickdraw, Real, Sketch \\ \hline \end{tabular}
\end{table}
Table 2: Benchmarks used in supervised learning
We restrict this work to vision-based benchmarks in order to narrow the scope: if we were to work with benchmark datasets across multiple areas such as vision, robotics and language processing, then our results would carry the open question of how much they depended on the particular areas chosen.
Table 2 presents a summary of some of the open benchmarks used in the literature for evaluating supervised learning. For our initial experiments, we use two of these, PACS and Office-Home benchmarks. In the case of reinforcement learning, RoboSuite, DMC-Remastered, DMC-GB, DCS, KitchenShift, NaturalEnvs MuJoCo, CausalWorld, RLBench, Meta-world and many others are commonly used benchmarks as described in [23].
Of the two benchmarks we use, one is a relatively simple dataset (PACS) with 4 different domains including images presented as Art, Cartoon, Photos, and Sketches. Each domain has 7 classes and there are 9,991 samples in total. The second benchmark is Office-Home, also consisting of images. This also has 4 domains namely Art, Clipart, Product, and Real World with 65 classes in each domain. The main idea behind choosing the first is to work on a benchmark which could have comparatively less complex classification tasks so that we can observe the behaviours of domain-specific and domain generic models. We select Office-Home as a second benchmark because of the higher number of classes (supervised tasks) which adds complexity into the tasks for each domain.
## 5 Experimental Methods
To investigate our research questions we create a pipeline consisting of both types of algorithms, and Figure 2 provides an overview of our approach. Figure 2 contains three blocks: selection criteria, training and analysis. In the first block, we choose the dataset or benchmark on which we want to try the proposed pipeline. This block also includes the model selection step in which we identify which type of learning method our pipeline will use, either conventional deep learning such as VGGNet or ResNet, or one of the more recent vision-based domain generalised methods such as ERM, DRO, etc. The training block is the second block and includes data pre-processing according to the selected conditions, and training and validation of models. The third block, analysis, saves the best model according to checkpoints and early stopping. It also measures generalisation in the form of accuracy and loss metrics and compares performance by computing learning curves.
### Formulation of DG and Experiments
In domain generalisation, let us assume we are given \(N\) training (source) domains, \(S_{train}=\{S^{i}|i=1,...,N\}\), where \(S^{i}=\{x_{j}^{i},y_{j}^{i}\}\) denotes the \(i\)-th domain. The joint distributions of the domains differ, with \(D_{XY}^{i}\neq D_{XY}^{j}\), \(N\geq j\neq i\geq 1\). The objective of domain generalisation is to learn a robust and comparatively generalised predictive function \(f:X\to Y\) by using the \(N\) training domains to achieve minimum error on an unseen test domain \(D_{X}\to S_{test}\) where
\(S_{test}\) cannot be accessed in training and \(D_{XY}^{test}\neq D_{XY}^{i}\). Therefore, the model's goal is to minimise the loss function \(L\) on \(S_{test}\)
\[\min_{f}\ \mathbb{E}_{(x,y)\in S_{test}}[L(f(x),y)]\]
where \(\mathbb{E}\) is the expectation and \(y\) are the labels. Figure 3 presents a graphical representation of domain generalisation.
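As a concrete illustration of this setting, the following sketch builds a leave-one-domain-out split for PACS with torchvision so that the held-out target domain \(S_{test}\) is never touched during training. The directory layout (`<root>/<domain>/<class>/<image>`), the domain names and the image size are assumptions; this only illustrates the objective above, not necessarily the exact protocol used in the experiments that follow.

```python
import torch
from torchvision import datasets, transforms

# Assumed PACS layout on disk: <root>/<domain>/<class>/<image>
DOMAINS = ["art_painting", "cartoon", "photo", "sketch"]
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def make_splits(root, target_domain):
    """Return (training data pooled from all source domains, unseen target-domain data)."""
    sources = [d for d in DOMAINS if d != target_domain]
    train_sets = [datasets.ImageFolder(f"{root}/{d}", transform=preprocess) for d in sources]
    target_set = datasets.ImageFolder(f"{root}/{target_domain}", transform=preprocess)
    # The target domain is excluded from training, matching the DG objective above.
    return torch.utils.data.ConcatDataset(train_sets), target_set
```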
### Domain generalisation model experiments
Here we look at the model pipeline for the training and the inference of domain generalisation. The current version of our approach has been executed on two benchmarks, PACS and Office-Home, and for each of these we use 9 models commonly found in the recent literature: GroupDRO, ANDMask, Mixup, MMD, DANN, CORAL, VREx, RSC, and ERM. Experiments were performed using PyTorch as a backend with an Nvidia RTX 3090 GPU with 24 GB of memory.
Figure 2: Overview diagram summarising our work

Experiments were conducted with original images from the benchmarks without any additional data augmentation and in the pre-processing phase only resizing of the images was implemented. Each model has its own set of hyperparameters but the common parameters are batch size 32, epochs 120, momentum 0.9, learning rate 0.01, weight decay 0.0005, input size (3, 224, 224), and baseline model ResNet-18. From each source domain of each of the two datasets, PACS and Office-Home, models utilise 80% of the data in training and validation, and keep 20% of the data as the unseen or target domain.
### Domain-specific (DL) model experiments
Our domain-specific pipeline has different settings from the domain generalisation pipeline; it supports the same two benchmarks as well as 7 domain-specific models, namely AlexNet, VGGNet16, ResNet18, ResNet50, InceptionV3, DenseNet121, and SqueezeNet. In this system, we use the same PyTorch environment with the same Nvidia RTX 3090 GPU with 24 GB of memory. These models use a fine-tuning technique in which pre-trained weights are used as a feature extractor and the last fully-connected layer is re-initialised and trained.
For these experiments, a model does training and validation in one domain and then performs inference in another target domain. For example, in the case of PACS, models explore the domain of "art painting" (with 1,638 samples) in training and in validation, and then use 20% of the target domain's "cartoon images" (with 410 samples). Similarly, for the Office-Home dataset, models use the domain of "clipart" (with 3,492 samples) as the source, and then images categorised as "real world" (with 873 samples) as the target.
During training, models do not have any access to the target domain, and initially models use only one domain as the source. The common hyperparameters are batch size 64, epochs 120 with early stopping at 20, momentum 0.9, learning rate 0.0001, weight decay 0.0005, input size (3, 224, 224), and cross-entropy loss.
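The following sketch shows what the fine-tuning setup described above could look like in PyTorch: an ImageNet-pretrained ResNet18 whose final fully-connected layer is re-initialised for the source domain's classes, with the optimiser settings taken from the common hyperparameters listed above. The freezing policy and the helper names are our own assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetuned_resnet18(num_classes, freeze_backbone=True):
    # Load ImageNet-pretrained weights and re-initialise only the final layer.
    model = models.resnet18(pretrained=True)
    if freeze_backbone:                      # whether to freeze the backbone is an assumption
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head, trained from scratch
    return model

model = build_finetuned_resnet18(num_classes=7)  # e.g. 7 classes per PACS domain
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-4, momentum=0.9, weight_decay=5e-4,
)
criterion = nn.CrossEntropyLoss()
```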
Figure 3: A graphical representation of domain generalisation
## 6 Results
Table 3 presents the results for 9 domain generalisation frameworks and 7 conventional deep learning or domain-specific models shown in blue font.
In Table 3, the columns marked "Validation" and "Target" represent the accuracy figures for the validation set and for the unseen or target testing set respectively, with results presented for the two benchmarks. Well-trained models will have adequately high validation accuracy, and target accuracy generally follows validation accuracy. Depending on the data distribution, a model can have different validation and target accuracy values, but for a balanced dataset an accuracy figure close to 90% can be considered good enough to deploy in some application domains.
Table 3 has five columns and the rows include the names of the models used and the validation and target accuracy figures for both datasets. The first 9 models belong to the domain generalisation methods and the remaining 7 models in blue relate to domain-specific methods. Overall, if we examine the PACS dataset, domain generalisation methods have clearly higher validation and target accuracy compared to domain-specific methods, and VREx shows the best performance compared to the others. In the case of domain-specific models, InceptionV3 performs better than the others. Similarly, for the Office-Home dataset, both types of models show similar behaviour.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{PACS} & \multicolumn{2}{c}{Office-Home} \\ \hline Models & Validation & Target & Validation & Target \\ \hline GroupDRO & 0.95 & 0.73 & 0.82 & 0.52 \\ ANDMask & 0.95 & 0.72 & 0.81 & 0.44 \\ Mixup & 0.97 & 0.72 & 0.83 & 0.53 \\ MMD & 0.94 & 0.69 & 0.82 & 0.52 \\ DANN & 0.94 & 0.73 & 0.83 & 0.51 \\ CORAL & 0.95 & 0.77 & 0.84 & 0.55 \\ VREx & 0.97 & 0.80 & 0.76 & 0.49 \\ RSC & 0.97 & 0.77 & 0.83 & 0.50 \\ ERM & 0.97 & 0.78 & 0.84 & 0.57 \\ \hline AlexNet & 0.74 & 0.45 & 0.56 & 0.30 \\ VGGNet16 & 0.80 & 0.47 & 0.50 & 0.23 \\ ResNet18 & 0.86 & 0.51 & 0.65 & 0.52 \\ ResNet50 & 0.89 & 0.57 & 0.70 & 0.62 \\ InceptionV3 & 0.90 & 0.55 & 0.68 & 0.66 \\ DenseNet121 & 0.86 & 0.44 & 0.62 & 0.35 \\ SqueezeNet & 0.80 & 0.50 & 0.54 & 0.29 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Experiments with domain generalisation and domain-specific methods
In the case of domain-specific models, they perform close to the performance levels of the domain generalisation methods for the PACS dataset, but we need to consider that during training they use a single domain and are tested on another single domain, so results on the remaining domains will be different.
On the other hand, the Office-Home benchmark has more complex tasks than PACS therefore domain-specific models perform comparatively poorly. From this we can conclude that with more variation in tasks and domains, the performance of conventional deep learning methods is not stable and fluctuates rapidly.
Even though we try to explain domain generalisation with the help of accuracy metrics, there is no absolute way to measure the performance of domain generalisation. For example, a model can sometimes have low scores during training and validation but still generalise better, because it is more stable towards unseen data domains.
Figure 4: Accuracy analysis for the PACS and the Office-Home benchmarks. The length of the curves indicates the stopping points.

We now present a more detailed analysis of accuracy and loss for both benchmarks. Figure 4 represents the validation and testing accuracy across the datasets. Figure 4(a) highlights the accuracy curves for PACS and Figure 4(b) presents accuracy for Office-Home. The graphs show information for the 7 conventional deep learning models. In Figure 4(a), AlexNet shows the lowest accuracy, while ResNet50 and InceptionV3 show the highest and almost the same accuracy level, around 90% in the validation case. Moreover, in the case of testing/unseen accuracy, AlexNet and VGGNet have almost the same but lowest accuracy among the others, and ResNet50 clearly outperforms the other models. From Figure 4(a), for the PACS benchmark, the key information we can extract is that less deep or smaller models show low accuracy relative to larger models that give high accuracy. Therefore, we can also say that larger models have better generalisation properties for out-of-distribution datasets.
Figure 4(b) illustrates the accuracy analysis for the Office-Home benchmark: among the validation curves, InceptionV3 and ResNet50 show the highest results while VGGNet performs poorly. On the other hand, looking at Table 3, even though ResNet50 reaches 70% validation accuracy and InceptionV3 68%, these numbers alone do not allow us to say that ResNet50 has better generalisation overall. The reason is again the testing (target) figures in Table 3, which show that on the target or unseen domains it is InceptionV3 that has the highest accuracy. The testing accuracy curves in Figure 4(b) follow trends similar to the validation curves, but one vital observation is that the gap between InceptionV3 and ResNet50 increases. Therefore, for Office-Home, InceptionV3 has better domain generalisation than the other models.
Figure 5(c) shows the validation and testing losses for the PACS benchmark, where AlexNet performs worst and InceptionV3 performs best in both cases. Similar to the accuracy pattern on Office-Home, Figure 5(c) also shows a widening gap between the losses of ResNet50 and InceptionV3. Figure 5(d) illustrates the losses for Office-Home. As with the accuracy analysis of Office-Home, for the validation loss VGGNet gives high losses, while ResNet50 and InceptionV3 are close to each other and have the lowest losses. On unseen data, VGGNet shows relatively stable behaviour overall, while AlexNet crosses VGGNet at around 40 epochs and becomes the higher-loss network. Correspondingly, ResNet50 performs well at the start, but at around 35 epochs InceptionV3 crosses ResNet50 and becomes the lowest-loss network. Hence, based on the loss curves, InceptionV3 has lower losses than the other networks, supporting our hypothesis above that InceptionV3 has better domain generalisation ability than the other conventional models.
## 7 Discussion
The main purpose of this article is a performance analysis of popular domain generalisation benchmarks. It contains experiments for both conventional domain-specific deep learning methods and recent domain generalisation training frameworks. The results section, especially Table 3, Figure 4 and Figure 5, conveys the vital message that domain-specific models perform poorly most of the time when we measure generalisation with accuracy and loss metrics. Moreover, this article tries to highlight another indicator using the graphs in Figure 4 and Figure 5, namely the gaps between validation and testing accuracy. Higher gaps mean the target model has poor domain generalisation and lower gaps mean comparatively higher generalisation.
Another outcome we can extract from the findings of this article is that, even though domain-specific models are less effective than domain generalisation frameworks, larger models have better domain generalisation. In Table 3, ResNet50 has the best domain generalisation results among the domain-specific models (shown in blue) for both benchmarks, PACS and Office-Home. Furthermore, models with skip connections, such as ResNets and DenseNets, are better for domain generalisation than models without skip connections, such as AlexNet and VGGNets.
## 8 Conclusions and Future Plans
This paper presents important information in the form of a summary of the performances of various vision-based machine learning tools and connects these results with the emerging areas of domain generalisation and domain adaptation. The two base pipelines presented help us to understand that for the PACS and Office-Home benchmarks, domain-specific methods perform poorly.
Figure 5: Loss analysis of conventional models for the PACS (top) and the Office-Home (bottom) benchmarks.
The work here successfully demonstrates that in the field of supervised learning, domain generalisation learning is better than domain-specific learning for some kinds of benchmarks. Our evaluation is performed on relatively complex benchmarks and by determining their accuracy, we try to explain generalisation.
Other ways for measuring domain generalisation have been proposed in the literature like measuring the gap between source and target domains, which is one of our future research directions. Furthermore, we will extend our experiments to cover attention based vision transformers as it would be insightful to introduce an attention mechanism for such benchmarks. Meanwhile our present results are for benchmarks which have 4 domains, and as next steps we will increase the number of domains by using benchmarks like DomainNet which has 345 classes. To test domain generalisation for OOD we will create our own testing benchmark.
This article has achieved its aims partially as we were able to test the findings for only supervised learning with vision based benchmarks. Hence, it would be interesting to explore other areas and applications like unsupervised learning and audio benchmarks. Besides such concerns, our work highlights important future research directions to explore domain generalisation.
## Acknowledgments
HR and this publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 18/CRT/6183. For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any author accepted Manuscript version arising from this submission. AS is part-funded by Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289_P2 (Insight SFI Research Centre for Data Analytics), co-funded by the European Regional Development Fund.
|
2305.14886 | How Graph Convolutions Amplify Popularity Bias for Recommendation? | Graph convolutional networks (GCNs) have become prevalent in recommender
system (RS) due to their superiority in modeling collaborative patterns.
Although improving the overall accuracy, GCNs unfortunately amplify popularity
bias -- tail items are less likely to be recommended. This effect prevents the
GCN-based RS from making precise and fair recommendations, decreasing the
effectiveness of recommender systems in the long run.
In this paper, we investigate how graph convolutions amplify the popularity
bias in RS. Through theoretical analyses, we identify two fundamental factors:
(1) with graph convolution (\textit{i.e.,} neighborhood aggregation), popular
items exert larger influence than tail items on neighbor users, making the
users move towards popular items in the representation space; (2) after
multiple times of graph convolution, popular items would affect more high-order
neighbors and become more influential. The two points make popular items get
closer to almost users and thus being recommended more frequently. To rectify
this, we propose to estimate the amplified effect of popular nodes on each
node's representation, and intervene the effect after each graph convolution.
Specifically, we adopt clustering to discover highly-influential nodes and
estimate the amplification effect of each node, then remove the effect from the
node embeddings at each graph convolution layer. Our method is simple and
generic -- it can be used in the inference stage to correct existing models
rather than training a new model from scratch, and can be applied to various
GCN models. We demonstrate our method on two representative GCN backbones
LightGCN and UltraGCN, verifying its ability in improving the recommendations
of tail items without sacrificing the performance of popular items. Codes are
open-sourced \footnote{https://github.com/MEICRS/DAP}. | Jiajia Chen, Jiancan Wu, Jiawei Chen, Xin Xin, Yong Li, Xiangnan He | 2023-05-24T08:35:43Z | http://arxiv.org/abs/2305.14886v1 | # How Graph Convolutions Amplify Popularity Bias for Recommendation?
###### Abstract
Graph convolutional networks (GCNs) have become prevalent in recommender system (RS) due to their superiority in modeling collaborative patterns. Although improving the overall accuracy, GCNs unfortunately amplify popularity bias -- tail items are less likely to be recommended. This effect prevents the GCN-based RS from making precise and fair recommendations, decreasing the effectiveness of recommender systems in the long run.
In this paper, we investigate how graph convolutions amplify the popularity bias in RS. Through theoretical analyses, we identify two fundamental factors: (1) with graph convolution (_i.e.,_ neighborhood aggregation), popular items exert larger influence than tail items on neighbor users, making the users move towards popular items in the representation space; (2) after multiple times of graph convolution, popular items would affect more high-order neighbors and become more influential. The two points make popular items get closer to almost users and thus being recommended more frequently. To rectify this, we propose to estimate the amplified effect of popular nodes on each node's representation, and intervene the effect after each graph convolution. Specifically, we adopt clustering to discover highly-influential nodes and estimate the amplification effect of each node, then remove the effect from the node embeddings at each graph convolution layer. Our method is simple and generic -- it can be used in the inference stage to correct existing models rather than training a new model from scratch, and can be applied to various GCN models. We demonstrate our method on two representative GCN backbones LightGCN and UltraGCN, verifying its ability in improving the recommendations of tail items without sacrificing the performance of
popular items. Codes are open-sourced 1.
Footnote 1: [https://github.com/MEICRS/DAP](https://github.com/MEICRS/DAP)
correct the already-deployed GCN models. In contrast, another line of work revises the inference stage in a post-hoc way. For example, [13] performs personalized re-ranking on the generated candidates of the RS model to suppress popular items. However, both lines address the general issue of popularity bias in recommendation, leaving untouched how GCN models suffer from and amplify the popularity bias.
Towards this research gap, we propose a new problem to solve -- rectifying the popularity bias of GCN models in the inference stage. Such a solution can be used to revise already-trained GCNs and is thus easier to deploy in practice than training a new model. Given a GCN model, we first cluster the node representations to automatically discover the highly-influential nodes. Then, the amplification of popularity bias for each node within its cluster is estimated based on the prior theoretical analyses. Thereafter, the amplification effect in the node representation can be intervened on to control the bias. This post-hoc method can be easily deployed in practice to correct existing GCN models and promote the recommendations of tail items without sacrificing the performance of popular items. To summarize, this work makes the following contributions:
* Providing in-depth theoretical analyses to interpret the popularity bias amplification problem in GCN-based recommenders;
* Developing a new method working at each graph convolution layer in the inference stage to correct the popularity bias for GCN;
* Conducting extensive experiments on three real datasets to demonstrate the effectiveness of our method on LightGCN and UltraGCN backbones.
## 2 Preliminaries
Suppose that there are a set of users and a set of items \(\mathcal{U}=\{u_{1},u_{2},\cdots,u_{M}\}\), \(\mathcal{I}=\{i_{1},i_{2},\cdots,i_{N}\}\) in a dataset \(D\). Let \(y_{ui}=1\) be the positive label if the user \(u\) has interacted with the item \(i\), otherwise \(y_{ui}=0\). We can construct a user-item bipartite graph \(\mathcal{B}=(\mathcal{V},\mathcal{E})\) based on the interaction history, where \(\mathcal{V}\) consists of the set of user and item nodes, and \(\mathcal{E}\) denotes the set of edges. If \(y_{ui}=1\), there is an edge between the user \(u\) and the item \(i\).
Recently, many studies opt for powerful GCNs to learn user and item node representations [14; 15]. Particularly, we introduce LightGCN [6], which is neat and well represents the GCN-based recommenders. One graph convolution block of LightGCN can be expressed as:
\[\mathbf{e}_{u}^{(l)}=\sum_{i\in\mathcal{N}_{u}}\frac{1}{\sqrt{d_{u}}\sqrt{d_{ i}}}\mathbf{e}_{i}^{(l-1)},\quad\mathbf{e}_{i}^{(l)}=\sum_{u\in\mathcal{N}_{i}} \frac{1}{\sqrt{d_{i}}\sqrt{d_{u}}}\mathbf{e}_{u}^{(l-1)}, \tag{1}\]
where \(d_{u}\) (\(d_{i}\)) is the degree of user \(u\) (item \(i\)) in the graph \(\mathcal{B}\), \(\mathcal{N}_{u}\) (\(\mathcal{N}_{i}\)) is the one-order neighbor nodes of the user \(u\) (item \(i\)), and \(\mathbf{e}^{(0)}\) is an ID embedding of a user or an item. After stacking several graph convolution layers, LightGCN combines the embeddings obtained at each layer to form the final representation \(\mathbf{e}\) of a node.
\begin{table}
\begin{tabular}{l|l} \hline \(\mathcal{U}\), \(\mathcal{I}\) & User set, item set \\ \hline \(\mathcal{N}_{u}\), \(\mathcal{N}_{i}\) & The one-order neighbors of user \(u\) or item \(i\) \\ \hline \(d_{u}\), \(d_{i}\) & The degree of user \(u\) or item \(i\) \\ \hline \(\mathbf{e}_{u}^{(l)}\), \(\mathbf{e}_{i}^{(l)}\) & The embedding of user \(u\) or item \(i\) at the \(l\)-th graph convolution layer \\ \hline \(\mathcal{L}^{ui}\) & The individual loss term of an interaction \((u,i)\) \\ \hline \(\mathcal{C}_{p}^{(l)}\) & The set of nodes in the \(p\)-th cluster obtained by using Kmeans given the embeddings \(\mathbf{E}^{(l)}\) \\ \hline \(\mathbb{H}_{r}^{(l)}\) & The set of nodes \(\{j\in\mathcal{C}_{p}^{(l)},d_{j}>d_{r}\,|\,r\in\mathcal{C}_{p}^{(l)}\}\) \\ \hline \(\mathbb{L}_{r}^{(l)}\) & The set of nodes \(\{j\in\mathcal{C}_{p}^{(l)},d_{j}<d_{r}\,|\,r\in\mathcal{C}_{p}^{(l)}\}\) \\ \hline \(\boldsymbol{\hat{\vartheta}}_{H_{r}}^{(l)}\) & The pooled representation of \(\mathbb{H}_{r}^{(l)}\) after normalization \\ \hline \(\boldsymbol{\hat{\vartheta}}_{L_{r}}^{(l)}\) & The pooled representation of \(\mathbb{L}_{r}^{(l)}\) after normalization \\ \hline \end{tabular}
\end{table}
Table 1: Main notations used in the paper.
Thereafter, the model prediction is defined as the inner product of user and item final representations, _i.e._, \(\hat{y}_{ui}=\mathbf{e}_{u}^{\top}\mathbf{e}_{i}\). Another representative work is UltraGCN [14] which skips infinite layers of graph convolution. We also conduct experiments on this model to verify the generality of our method.
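For concreteness, the sketch below shows how the propagation of Eq. (1) is typically realised with a pre-computed, symmetrically normalized sparse adjacency matrix, followed by the inner-product prediction; the uniform averaging of layer embeddings and the function names are assumptions made for illustration rather than the authors' code.

```python
import torch

def light_gcn_propagate(emb0, norm_adj, num_layers=3):
    """Sketch of LightGCN propagation (Eq. (1)).

    emb0     : (M+N, D) concatenated user/item ID embeddings, i.e. E^(0)
    norm_adj : (M+N, M+N) sparse bipartite adjacency with entries 1/sqrt(d_u * d_i)
    Returns a final representation obtained by averaging all layer embeddings
    (uniform layer weights are assumed here).
    """
    embs = [emb0]
    for _ in range(num_layers):
        embs.append(torch.sparse.mm(norm_adj, embs[-1]))  # one graph convolution
    return torch.stack(embs, dim=0).mean(dim=0)

def predict(user_emb, item_emb):
    # Model prediction: inner product of the final user and item representations.
    return (user_emb * item_emb).sum(dim=-1)
```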
To optimize model parameters, prior work usually frames recommendation as a supervised learning task and utilizes a pointwise or a pairwise loss for model training, which can be summarized by the following formula: \(\mathcal{L}=\sum_{(u,i)\in D}\mathcal{L}^{ui}\), where \(\mathcal{L}^{ui}\) is the individual loss term of an interaction \((u,i)\). Without loss of generality, we investigate the popularity bias amplification problem in GCNs based on the BCE loss [16] in the next section. The formal formulation of the BCE loss is
\[\mathcal{L}^{ui}=-[y_{ui}\ln\sigma(\hat{y}_{ui})+(1-y_{ui})\ln(1-\sigma(\hat{ y}_{ui}))]. \tag{2}\]
## 3 Methodology
In this part, we attempt to analyze and resolve the amplified popularity bias of GCNs.
### Popularity Bias Amplification in GCNs
To understand why GCNs amplify the popularity bias, we conduct theoretical analyses and empirical experiments on GCNs: (1) we start with defining the influence between nodes based on the training loss; (2) we prove that popular items with higher degrees exert larger influence on neighbor users than tail items with lower-degrees; (3) we reveal that popular items commonly have higher probabilities of being recommended by users after representation updating and graph convolution in GCNs.
Concisely, we take the \(\mathbf{e}=\mathbf{e}^{(L)}\) at \(L\)-th graph convolution layer in Eq. (1) as the final representation of each node. Next, we give the definition of the influence of a user-item pair loss on their neighbors exploiting the concept of influence functions [17, 18, 19]:
**Definition 1** (Influence of an observed interaction on a node's representation learning): Suppose that \((u,i)\) is an observed interaction, _i.e._, there is an edge between node \(u\) and node \(i\), some other node \(k\) is reachable from node \(u\) or \(i\), then the influence of an interaction \((u,i)\) on node \(k\) is defined as \(\frac{\partial\mathcal{L}^{ui}}{\partial\mathbf{e}_{k}}\).
Without loss of generality, we mainly consider \(y_{ui}=1\) in BCE loss and have
\[\begin{split}\frac{\partial\mathcal{L}^{ui}}{\partial\mathbf{e}_{k}}&=-\frac{\partial\ln\sigma(\hat{y}_{ui})}{\partial\mathbf{e}_{k}}=-\frac{\partial\ln\sigma(\hat{y}_{ui})}{\partial\sigma(\hat{y}_{ui})}\cdot\frac{\partial\sigma(\hat{y}_{ui})}{\partial\hat{y}_{ui}}\cdot\frac{\partial\hat{y}_{ui}}{\partial\mathbf{e}_{k}}\\ &=-\frac{1}{\sigma(\hat{y}_{ui})}\cdot\sigma(\hat{y}_{ui})(1-\sigma(\hat{y}_{ui}))\cdot\frac{\partial\hat{y}_{ui}}{\partial\mathbf{e}_{k}}\\ &=-[1-\sigma(\hat{y}_{ui})]\frac{\partial\hat{y}_{ui}}{\partial\mathbf{e}_{k}}\\ &=-\lambda_{ui}\frac{\partial\hat{y}_{ui}}{\partial\mathbf{e}_{k}},\end{split} \tag{3}\]
where \(\hat{y}_{ui}\) is the prediction between the user \(u\) and the item \(i\) and \(0<\lambda_{ui}<1\).
**Definition 2** (Influence of a node on another node's representation learning): Suppose that a node \(i\) can reach a neighbor node \(k\) on a graph3. The influence of the loss for the target node \(i\) on the node \(k\) is defined as \(\frac{\partial\mathcal{L}_{i}}{\partial\mathbf{e}_{k}}\), where \(\mathcal{L}_{i}=\sum_{j\in\mathcal{N}_{i}}\mathcal{L}^{ij}\).
Footnote 3: Without loss of generality, we ignore whether a node is a user or an item.
In fact, the influence provides fine-grained scrutiny of the updating information of each node through the lens of gradients. We then have the following lemma.
**Lemma 1**: If node \(i\) with degree \(d_{i}\) can reach node \(k\) after stacking \(L\) layers of graph convolution, then the influence of node \(i\) on node \(k\) follows
\[\mathbb{E}\left(\frac{\partial\mathcal{L}_{i}}{\partial\mathbf{e}_{k}}\right)\propto-d_{i}^{\frac{3}{2}}\boldsymbol{\vartheta}_{i}, \tag{4}\]
where \(\mathbb{E}(\cdot)\) is the expectation, \(\mathbf{\vartheta}_{i}=\mathbb{E}\left(\sum\limits_{p=1}^{\Phi_{j}}\prod\limits_{l= L-1}^{1}\frac{1}{\sqrt{d_{p^{\prime}}}}\mathbf{e}_{j}\right)\) represents the expectation on paths starting from a neighbor node \(j\) of the node \(i\) to the node \(k\), \(\Phi_{j}\) is the set of all \((L-1)\)-length paths from the node \(j\) to the node \(k\), \(p^{L-1}\) and \(p^{1}\) are the node \(j\) and the node \(k\), respectively.
Proof.: According to Eq. (3) and Eq. (1), we obtain
\[\begin{split}\frac{\partial\mathcal{L}_{i}}{\partial\mathbf{e}_{k}}&=\sum\limits_{j\in\mathcal{N}_{i}}\frac{\partial\mathcal{L}^{ij}}{\partial\mathbf{e}_{k}}=-\sum\limits_{j\in\mathcal{N}_{i}}\lambda_{ij}\frac{\partial\hat{y}_{ij}}{\partial\mathbf{e}_{k}}=-\sum\limits_{j\in\mathcal{N}_{i}}\lambda_{ij}\frac{\partial\mathbf{e}_{i}}{\partial\mathbf{e}_{k}}\mathbf{e}_{j}\\ &=-\sum\limits_{j\in\mathcal{N}_{i}}\lambda_{ij}\sum\limits_{p=1}^{\Phi}\prod\limits_{l=L}^{1}\frac{1}{\sqrt{d_{p^{\prime}}}}\mathbf{e}_{j},\end{split} \tag{5}\]
where \(p^{L}\) is the node \(i\), \(\Phi\) is the set of all \(L\)-length random paths on the graph from nodes \(i\) to \(k\), and we omit the transpose symbol of the partial derivative for brevity. Further, we have
\[\begin{split}\mathbb{E}\left(\frac{\partial\mathcal{L}_{i}}{ \partial\mathbf{e}_{k}}\right)&=-\mathbb{E}\left(\sum\limits_{j \in\mathcal{N}_{i}}\lambda_{ij}\sum\limits_{p=1}^{\Phi}\prod\limits_{l=L}^{1} \frac{1}{\sqrt{d_{p^{\prime}}}}\mathbf{e}_{j}\right)\\ &\propto-\sqrt{d_{i}}\mathbb{E}\left(\sum\limits_{j\in\mathcal{ N}_{i}}\sum\limits_{p=1}^{\Phi_{j}}\prod\limits_{l=L-1}^{1}\frac{1}{\sqrt{d_{p^{ \prime}}}}\mathbf{e}_{j}\right)\\ &\approx-d_{i}^{\frac{3}{2}}\mathbb{E}\left(\sum\limits_{p=1}^{ \Phi_{j}}\prod\limits_{l=L-1}^{1}\frac{1}{\sqrt{d_{p^{\prime}}}}\mathbf{e}_{j} \right)=-d_{i}^{\frac{3}{2}}\mathbf{\vartheta}_{i}.\end{split} \tag{6}\]
Finish the proof.
We visualize the results of \(d^{\frac{3}{2}}\|\mathbf{\vartheta}\|\) and \(\|\mathbf{\vartheta}\|\) in log scale in Figure 2. Items are evenly divided into several groups in ascending order of their degrees. For each item group, we show its average \(\ln(d^{\frac{3}{2}}\|\mathbf{\vartheta}\|)\) and \(\ln(\|\mathbf{\vartheta}\|)\) in terms of the two-hop neighbor nodes (_i.e.,_ \(L=2\) in Lemma 1) when training LightGCN. As we see, \(\ln(d^{\frac{3}{2}}\|\mathbf{\vartheta}\|)\) enlarges as degree increases. Compared to \(\ln(d^{\frac{3}{2}}\|\mathbf{\vartheta}\|)\), \(\ln(\|\mathbf{\vartheta}\|)\) is relatively smaller and flat across different degrees. This illustrates that the degree of nodes plays a crucial role in the influence. Following the assumption in [19], we posit \(\|\mathbf{\vartheta}_{i}\|=\phi\) for any node \(i\).
**Conclusion 1:** If nodes \(r\) and \(s\) with \(d_{r}>d_{s}\) both can reach node \(t\) after \(L\)-hop neighborhood aggregation in a graph, we could have
\[\left\|\mathbb{E}\left(\frac{\partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}} \right)\right\|>\left\|\mathbb{E}\left(\frac{\partial\mathcal{L}_{s}}{ \partial\mathbf{e}_{t}}\right)\right\|. \tag{7}\]
It suggests that the nodes with higher degrees exert larger influence on L-hop neighbor nodes than lower-degree nodes in the training stage of GCN-based models. In other words, the popular items dominate the updating information of neighbor users. As a result, popular items would make reachable users get closer to them in the representation space. Based on the results in the upcoming lemma, we further prove that the popular items tend to have higher probabilities of being recommended by users.
Figure 3: The average aggregation weight of one-order neighbor users in each item group. Items are sorted into groups in ascending order of their degrees.

Figure 2: Average \(\|\mathbf{\vartheta}\|\) and \(d^{\frac{3}{2}}\|\mathbf{\vartheta}\|\) in each item group. Items are sorted into groups in ascending order of their degrees.
**Lemma 2**.: Suppose that the items \(r\) and \(s\) could reach the user \(t\) by stacking \(L\) layers of graph convolution, and \(d_{r}>d_{s}\). After \(L-1\) rounds of graph convolution, the expectation of prediction difference between the two items with regard to the user \(t\) is \(\mathbb{E}\left[\mathbf{e}_{t}^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})\right]\). By performing the \(L\)-th graph convolution and after the representation of user \(t\) updates the gradients \(\left(\frac{\partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial \mathcal{L}_{s}}{\partial\mathbf{e}_{t}}\right)\), the prediction difference becomes larger, i.e.,
\[\mathbb{E}\left\{\left[\mathbf{e}_{t}-\left(\frac{\partial\mathcal{L}_{r}}{ \partial\mathbf{e}_{t}}+\frac{\partial\mathcal{L}_{s}}{\partial\mathbf{e}_{t }}\right)\right]^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})\right\}\geq\mathbb{E} \left[\mathbf{e}_{t}^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})\right]. \tag{8}\]
Further, after \(\mathbf{e}_{r}\) and \(\mathbf{e}_{s}\) update, the prediction difference continues to enlarge,
\[\mathbb{E}\left\{\left[\mathbf{e}_{t}-\left(\frac{\partial \mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial\mathcal{L}_{s}}{ \partial\mathbf{e}_{t}}\right)\right]^{\top}(\mathbf{e}_{r}^{\prime}-\mathbf{e }_{s}^{\prime})\right\} \tag{9}\] \[\geq\mathbb{E}\left\{\left[\mathbf{e}_{t}-\left(\frac{\partial \mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial\mathcal{L}_{s}}{ \partial\mathbf{e}_{t}}\right)\right]^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s}) \right\}.\]
where \(\mathbf{e}_{r}^{\prime}\) and \(\mathbf{e}_{s}^{\prime}\) are the representations of the items \(r\) and \(s\) after aggregating the user \(t\), respectively.
Proof.: After the influence of the items \(r\) and \(s\) propagates to the user \(t\) by stacking \(L\) graph convolution layers, this changes the representation of user \(t\) from \(\mathbf{e}_{t}\) to \(\mathbf{e}_{t}-\left(\frac{\partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial\mathcal{L}_{s}}{\partial\mathbf{e}_{t}}\right)\). Assume that the influence \(\mathbb{E}\left(\frac{\partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}}\right)=-\nu_{1}d_{r}^{\frac{3}{2}}\boldsymbol{\vartheta}_{r}\) (\(\nu_{1}>0\)) and \(\frac{\boldsymbol{\vartheta}_{r}}{\left\|\boldsymbol{\vartheta}_{r}\right\|}=\nu_{2}\frac{\mathbf{e}_{r}}{\left\|\mathbf{e}_{r}\right\|}\) (\(\nu_{2}>0\) for the local homogeneity); likewise, \(\mathbb{E}\left(\frac{\partial\mathcal{L}_{s}}{\partial\mathbf{e}_{t}}\right)=-\nu_{1}d_{s}^{\frac{3}{2}}\boldsymbol{\vartheta}_{s}\) and \(\frac{\boldsymbol{\vartheta}_{s}}{\left\|\boldsymbol{\vartheta}_{s}\right\|}=\nu_{2}\frac{\mathbf{e}_{s}}{\left\|\mathbf{e}_{s}\right\|}\). Now we calculate the prediction difference between the items \(r\) and \(s\) on the user \(t\),
\[\mathbb{E}\left\{\left[\mathbf{e}_{t}-\left(\frac{\partial\mathcal{ L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial\mathcal{L}_{s}}{\partial \mathbf{e}_{t}}\right)\right]^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})\right\} \tag{10}\] \[= \mathbb{E}[\mathbf{e}_{t}^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})]\] \[+\nu_{1}\mathbb{E}\left[d_{r}^{\frac{3}{2}}\boldsymbol{\vartheta} _{r}^{\top}\mathbf{e}_{r}-d_{r}^{\frac{3}{2}}\boldsymbol{\vartheta}_{r}^{\top} \mathbf{e}_{s}+d_{s}^{\frac{3}{2}}\boldsymbol{\vartheta}_{s}^{\top}\mathbf{e }_{r}-d_{s}^{\frac{3}{2}}\boldsymbol{\vartheta}_{s}^{\top}\mathbf{e}_{s}\right]\] \[= \mathbb{E}[\mathbf{e}_{t}^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})]+ \nu_{1}\nu_{2}\mathbb{E}\left[d_{r}^{\frac{3}{2}}\frac{\left\|\boldsymbol{ \vartheta}_{r}\right\|}{\left\|\mathbf{e}_{r}\right\|}\mathbf{e}_{r}^{\top} \mathbf{e}_{r}-d_{r}^{\frac{3}{2}}\frac{\left\|\boldsymbol{\vartheta}_{r}\right\| }{\left\|\mathbf{e}_{r}\right\|}\mathbf{e}_{r}^{\top}\mathbf{e}_{s}\right.\] \[+ d_{s}^{\frac{3}{2}}\frac{\left\|\boldsymbol{\vartheta}_{s}\right\| }{\left\|\mathbf{e}_{s}\right\|}\mathbf{e}_{s}^{\top}\mathbf{e}_{r}-d_{s}^{ \frac{3}{2}}\frac{\left\|\boldsymbol{\vartheta}_{s}\right\|}{\left\|\mathbf{e }_{s}\right\|}\mathbf{e}_{s}^{\top}\mathbf{e}_{s}\right]\] \[= \mathbb{E}[\mathbf{e}_{t}^{\top}(\mathbf{e}_{r}-\mathbf{e}_{s})]\] \[+\nu_{1}\nu_{2}\phi\mathbb{E}\left[d_{r}^{\frac{3}{2}}(\left\| \mathbf{e}_{r}\right\|-\frac{\rho}{\left\|\mathbf{e}_{r}\right\|})-d_{s}^{ \frac{3}{2}}(\left\|\mathbf{e}_{s}\right\|-\frac{\rho}{\left\|\mathbf{e}_{s} \right\|})\right],\]
where \(\rho=\mathbf{e}_{r}^{\top}\mathbf{e}_{s}\) and \(\rho\leq\left\|\mathbf{e}_{r}\right\|\left\|\mathbf{e}_{s}\right\|\). Since the magnitude of the node representation increases as its degree increases [20], for \(d_{r}>d_{s}\), there is \(\left\|\mathbf{e}_{r}\right\|>\left\|\mathbf{e}_{s}\right\|\) generally. Let \(\left\|\mathbf{e}_{r}\right\|=\kappa\left\|\mathbf{e}_{s}\right\|\) with \(\kappa>1\). Therefore, we have
\[d_{r}^{\frac{3}{2}}(\left\|\mathbf{e}_{r}\right\|-\frac{\rho}{ \left\|\mathbf{e}_{r}\right\|})-d_{s}^{\frac{3}{2}}(\left\|\mathbf{e}_{s}\right\| -\frac{\rho}{\left\|\mathbf{e}_{s}\right\|}) \tag{11}\] \[= (d_{r}^{\frac{3}{2}}\kappa-d_{s}^{\frac{3}{2}})\left\|\mathbf{e} _{s}\right\|-(d_{r}^{\frac{3}{2}}-d_{s}^{\frac{3}{2}}\kappa)\frac{\rho}{\left\| \mathbf{e}_{r}\right\|},\]
since \(d_{r}^{\frac{3}{2}}\kappa-d_{s}^{\frac{3}{2}}>d_{r}^{\frac{3}{2}}-d_{s}^{ \frac{3}{2}}\kappa\) and \(\left\|\mathbf{e}_{s}\right\|\geq\frac{\rho}{\left\|\mathbf{e}_{r}\right\|}\), thus Eq. (11) \(>0\). Based on this, we derive the expression of (8).
Furthermore, after the items \(r\) and \(s\) aggregate the information of the user \(t\), we obtain
\[\mathbf{e}_{r}^{\prime}=\mathbf{e}_{r}+\omega_{rt}\left[\mathbf{e}_{t}- \left(\frac{\partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial \mathcal{L}_{s}}{\partial\mathbf{e}_{t}}\right)\right]=\mathbf{e}_{r}+\omega_{ rt}\widetilde{\mathbf{e}}_{t}, \tag{12}\]
where \(\omega_{rt}\) is the weight of aggregation. Likewise,
\[\mathbf{e}_{s}^{\prime}=\mathbf{e}_{s}+\omega_{st}\left[\mathbf{e}_{t}-\left( \frac{\partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial \mathcal{L}_{s}}{\partial\mathbf{e}_{t}}\right)\right]=\mathbf{e}_{s}+ \omega_{st}\widetilde{\mathbf{e}}_{t}. \tag{13}\]
Now we calculate the rating difference again,
\[\begin{split}&\mathbb{E}\left\{\left[\mathbf{e}_{t}-\left(\frac{ \partial\mathcal{L}_{r}}{\partial\mathbf{e}_{t}}+\frac{\partial\mathcal{L}_{s }}{\partial\mathbf{e}_{t}}\right)\right]^{\top}(\mathbf{e}^{\prime}_{r}- \mathbf{e}^{\prime}_{s})\right\}\\ =&\mathbb{E}\left[\tilde{\mathbf{e}}^{\top}_{t}[( \mathbf{e}_{t}-\mathbf{e}_{s})+(\omega_{rt}-\omega_{st})\tilde{\mathbf{e}}_{t} ]\right]\\ =&\mathbb{E}\left[\tilde{\mathbf{e}}^{\top}_{t}( \mathbf{e}_{r}-\mathbf{e}_{s})+(\omega_{rt}-\omega_{st})\|\tilde{\mathbf{e}}_{ t}\|^{2}\right],\end{split} \tag{14}\]
when \(\omega_{rt}-\omega_{st}\geq 0\), the expression in (9) holds. In Figure 3, we visualize the average aggregation weight \(\omega\) of one-order neighbor users for each item group when training LightGCN. From the results, it can be observed that \(\omega\) generally becomes larger as the degree increases. Therefore, the expectation of the rating difference is enlarged for \(d_{r}>d_{s}\) after graph convolution. This completes the proof.
**Conclusion 2:** The theoretical analyses show that the gap in the prediction scores between popular items and tail items _w.r.t._ users enlarges with deeper layers of graph convolution. This indicates that popular items become more influential by affecting more high-order neighbor users. As a consequence, popular items are more likely to be over-recommended as GCNs go deeper. This reveals how GCN-based models amplify the popularity bias, providing theoretical support for the phenomenon shown in Figure 1.
### Our Method -- DAP
In this section, we propose our method DAP (**De**bias the **A**mplification of **P**opularity) to alleviate the amplification of popularity bias in GCNs at the inference stage.
From Lemma 2, the popularity bias amplification comes from the updating of node representation and neighborhood aggregation after graph convolution when training GCN backbone models. Taking LightGCN as an example, we can quantify the bias after graph convolution at each layer in a unified form as
\[\begin{split}\mathbf{e}^{(l)}_{v}&=\sum_{j\in N_{v} }\frac{1}{\sqrt{d_{v}}\sqrt{d_{j}}}\mathbf{e}^{(l-1)}_{j}\\ &=\sum_{j\in N_{v}}\frac{1}{\sqrt{d_{v}}\sqrt{d_{j}}}(\mathbf{ \hat{e}}^{(l-1)}_{j}+\alpha^{(l-1)}_{H_{j}}\boldsymbol{\theta}^{(l-1)}_{H_{j} }+\alpha^{(l-1)}_{L_{j}}\boldsymbol{\theta}^{(l-1)}_{L_{j}})\\ &=\mathbf{\hat{e}}^{(l)}_{v}+\alpha^{(l)}_{H_{v}}\boldsymbol{ \theta}^{(l)}_{H_{v}}+\alpha^{(l)}_{L_{v}}\boldsymbol{\theta}^{(l)}_{L_{v}}, \end{split} \tag{15}\]
where \(\mathbf{\hat{e}}^{(l)}_{v}\) is the ideal representation of the node \(v\) at the \(l\)-th layer, and \(\alpha^{(l)}_{H_{v}}\boldsymbol{\theta}^{(l)}_{H_{v}}\) and \(\alpha^{(l)}_{L_{v}}\boldsymbol{\theta}^{(l)}_{L_{v}}\) are the biases that come from higher-degree and lower-degree neighbors, respectively. Specifically, there are two amplification effects: (1) the higher-degree neighbors have a large influence and dominate the updating of the target node's representation, which tends to push the target node close to the higher-degree neighbors in the representation space; (2) the representations of the lower-degree neighbors are influenced by the target node, leading to biased learning; after graph convolution, such bias is further aggregated into the target node. These two amplification effects need to be estimated and intervened upon.
In addition, in order to estimate the bias of each node, we employ clustering algorithms to group node representations into clusters. The clustering can automatically discover the highly-influential nodes, as they are close to each other in the representation space. For each node, we estimate its amplification effect within its cluster and then intervene on the bias in its representation. The specific debiasing process is as follows.
Given a well-trained GCN-based backbone model, we can obtain the 0-th layer representations \(\mathbf{E}^{(0)}\in\mathbb{R}^{(M+N)\times D}\) (\(D\) is the embedding size) of all nodes. Then, these representations are fed into the next layer of graph convolution. As discussed earlier, the bias appears after graph convolution, and nodes in the same cluster are the most likely to affect each other. Thus, \(Kmeans\) is employed to group nodes in the representation space. For the \(l\)-th layer node representations \(\mathbf{E}^{(l)}\) of all nodes, \(Kmeans\) automatically divides them into \(P\) clusters, _i.e._,
\[\{\mathcal{C}_{1}^{(l)},\mathcal{C}_{2}^{(l)},\cdots,\mathcal{C}_{P}^{(l)}\}= Kmeans(\mathbf{E}^{(l)}), \tag{16}\]
where \(P\) is a hyper-parameter of \(Kmeans\). For a node \(v\), we therefore know which cluster it belongs to. To intervene on the amplified bias effect \(\mathbf{b}_{v}^{(l)}\) after the \(l\)-th graph convolution, we adopt the following strategy: for the node \(v\) with degree \(d_{v}\) in the cluster \(\mathcal{C}_{p}^{(l)}\), we obtain a set of higher-degree nodes \(\mathbb{H}_{v}^{(l)}=\{j\in\mathcal{C}_{p}^{(l)},d_{j}>d_{v}\}\) and a set of lower-degree nodes \(\mathbb{L}_{v}^{(l)}=\{j\in\mathcal{C}_{p}^{(l)},d_{j}<d_{v}\}\). For these two parts \(\mathbb{H}_{v}^{(l)}\) and \(\mathbb{L}_{v}^{(l)}\) of node \(v\), their normalized pooling representations (e.g., mean pooling, or degree-weighted average pooling) \(\mathbf{\hat{\mathbf{\theta}}}_{H_{v}}^{(l)}\in\mathbb{R}^{1\times D}\) and \(\mathbf{\hat{\mathbf{\theta}}}_{L_{v}}^{(l)}\in\mathbb{R}^{1\times D}\) can be computed, respectively. Thereafter, the amplification bias \(\mathbf{b}_{v}^{(l)}\) of node \(v\) after the \(l\)-th layer of graph convolution is estimated by
\[\mathbf{b}_{v}^{(l)}=\alpha\mathcal{M}(\mathbf{e}_{v}^{(l)},\mathbf{\hat{\mathbf{ \theta}}}_{H_{v}}^{(l)})\mathbf{\hat{\mathbf{\theta}}}_{H_{v}}^{(l)}+\beta\mathcal{M}( \mathbf{e}_{v}^{(l)},\mathbf{\hat{\mathbf{\theta}}}_{L_{v}}^{(l)})\mathbf{\hat{\mathbf{\theta }}}_{L_{v}}^{(l)}, \tag{17}\]
where \(\alpha\) and \(\beta\) are hyper-parameters that control how strongly we intervene on the bias in the final representation at the \(l\)-th layer, since the popularity bias may not be completely harmful [21]. The larger the values of \(\alpha\) and \(\beta\), the greater the bias attributed to the node. \(\mathcal{M}\) is a similarity calculation function (_e.g._, cosine similarity) for measuring how strongly the node \(v\) is affected by the different parts.
After the above operations, we intervene the bias effect and revise the representation of the node \(v\) at \(l\)-th layer graph convolution, _i.e._,
\[\mathbf{\hat{\mathbf{e}}}_{v}^{(l)}=\mathbf{e}_{v}^{(l)}-\mathbf{b}_{v}^{(l)}. \tag{18}\]
Then the revised representation \(\mathbf{\hat{\mathbf{e}}}_{v}^{(l)}\) is fed into the next layer of the GCN and we obtain the node representations at the \((l+1)\)-th layer, \(\mathbf{E}^{(l+1)}\). In an iterative manner, we can obtain the ideal representations at each layer. The representations rectified at the different layers are then assembled in the same manner as in the original model to obtain the final representations.
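A minimal sketch of one layer of this inference-time correction (Eqs. (16)-(18)) is given below, assuming mean pooling for \(\mathbf{\hat{\mathbf{\theta}}}_{H_{v}}^{(l)}\) and \(\mathbf{\hat{\mathbf{\theta}}}_{L_{v}}^{(l)}\) and cosine similarity for \(\mathcal{M}\); the function names and the use of scikit-learn's KMeans are our own illustrative choices, not the authors' exact code.

```python
import numpy as np
from sklearn.cluster import KMeans

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def dap_debias_layer(E, degrees, P=50, alpha=0.3, beta=0.1):
    """Intervene on the amplified popularity bias at one layer (Eqs. (16)-(18)).

    E       : (N, D) node representations after the l-th graph convolution
    degrees : (N,) node degrees in the user-item interaction graph
    """
    labels = KMeans(n_clusters=P, n_init=10, random_state=0).fit_predict(E)   # Eq. (16)
    E_hat = E.copy()
    for v in range(E.shape[0]):
        members = np.where(labels == labels[v])[0]
        higher = members[degrees[members] > degrees[v]]    # H_v: higher-degree cluster mates
        lower = members[degrees[members] < degrees[v]]     # L_v: lower-degree cluster mates
        b_v = np.zeros(E.shape[1])
        if len(higher) > 0:
            theta_h = E[higher].mean(axis=0)
            theta_h /= np.linalg.norm(theta_h) + 1e-12     # normalized mean pooling
            b_v += alpha * cosine(E[v], theta_h) * theta_h
        if len(lower) > 0:
            theta_l = E[lower].mean(axis=0)
            theta_l /= np.linalg.norm(theta_l) + 1e-12
            b_v += beta * cosine(E[v], theta_l) * theta_l  # Eq. (17)
        E_hat[v] = E[v] - b_v                              # Eq. (18)
    return E_hat   # fed into the (l+1)-th graph convolution
```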
## 4 Experiments
In this section, we conduct experiments to evaluate the performance of our proposed DAP, aiming at answering the following research questions:
* **RQ1**: Does DAP outperform other debiasing methods?
* **RQ2**: How do higher-degree part, lower-degree part and the hyper-parameters affect the recommendation performance?
* **RQ3**: Can DAP mitigate the popularity bias?
### Experiments Settings
We conduct experiments on three real-world datasets; Table 2 lists their statistics. In order to guarantee a fair comparison, we follow the settings of LightGCN [6] and randomly split the data into training and test sets. The test set is called Overall. Since our DAP is expected to mitigate the popularity bias and improve the performance on tail items, we additionally split a subset of tail items from the whole test set, named the Tail test set, in contrast to the Overall counterpart. In addition, we randomly split 20% of the data from the training set as the validation set for tuning the hyper-parameters. Note that the same splitting strategy is applied to the validation set.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Dataset & \#Users & \#Items & \#Interactions & Density \\ \hline Gowalla & 29,858 & 40,981 & 1,027,370 & 0.00084 \\ \hline Yelp2018 & 31,668 & 38,048 & 1,561,406 & 0.00130 \\ \hline Amazon-book & 52,643 & 91,599 & 2,984,108 & 0.00062 \\ \hline \end{tabular}
\end{table}
Table 2: Dataset description
#### 4.1.1 Compared Methods
To evaluate the debiasing performance on recommendation, we implement our DAP on the GCN-based recommender models LightGCN and UltraGCN to explore how DAP improves the recommendation performance for GCNs. In addition, several methods for mitigating the popularity bias and for improving tail node representations are compared:
* **BFGCN**[22]: This is a novel graph convolution filter for the user-item bipartite network to improve long-tail node representations.
* **UltraGCN**[14]: This is a state-of-the-art method that achieves the best performance on the three datasets. It is an ultra-simplified formulation of GCN which skips explicit message passing and directly approximates the limit of infinite graph convolution layers.
* **IPSCN**[23]: This method adds max-capping and normalization on IPS [24] value to reduce the variance of IPS. IPS eliminates popularity bias by re-weighting each item according to its popularity.
* **CausE**[25]: It requires a large sample of biased data and a small sample of unbiased data. CausE adds a regularizer term on the discrepancy between the item vectors used to fit the biased sample and their counterpart representations that fit the unbiased sample. Because there is no unbiased data in our datasets, we adopt the sampling method in [10] and obtain 20% unbiased data from the training set.
* **DICE**[10]: DICE is a method to handle the popularity bias problem by learning causal embeddings. It is a framework with causal-specific data to disentangle interest and popularity into two sets of embeddings.
* **MACR**[9]: This is a state-of-the-art method to eliminate the popularity bias by counterfactual reasoning. It performs counterfactual inference to remove the effect of item popularity.
* **BxQuAD**[13]: BxQuAD is a typical post-hoc method for improving tail item recommendations. It suffers from a drop in recommendation accuracy because it suppresses popular items. In this paper, we adopt the Binary-xQuAD method of the original paper and set the hyper-parameter \(\lambda=0.9\).
* **Tail**[26]: This method learns a neighborhood translation from head nodes, which can be further transferred to tail nodes to enhance their representations. It is devised for node classification and we transfer it to the field of recommendation.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} Dataset \\ \end{tabular} } & \multicolumn{3}{c|}{Gowalla} & \multicolumn{3}{c|}{Yelp2018} & \multicolumn{3}{c}{Amazon-book} \\ \cline{2-13} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Tail} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Tail} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Tail} & \multicolumn{2}{c}{Overall} & \multicolumn{2}{c}{Tail} \\ \hline \multirow{2}{*}{Models} & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG \\ \hline LightGCN & 0.1820 & 0.1546 & 0.0434 & 0.0191 & 0.0627 & 0.0516 & 0.0091 & 0.0046 & 0.0414 & 0.0321 & 0.009 & 0.0051 \\ BFGCN & 0.1083 & 0.0805 & 0.0468 & 0.0245 & 0.0389 & 0.0311 & 0.0124 & 0.0076 & 0.0276 & 0.0211 & 0.0097 & 0.0059 \\ LightGCN-IPSCN & 0.1325 & 0.1132 & 0.0477 & 0.0213 & 0.0473 & 0.0391 & 0.0136 & 0.0077 & 0.0285 & 0.0221 & 0.0118 & 0.0069 \\ LightGCN-CausE & 0.1334 & 0.1137 & 0.0485 & 0.0225 & 0.0492 & 0.0405 & 0.0141 & 0.0085 & 0.0299 & 0.0230 & 0.0127 & 0.0078 \\ LightGCN-DICE & 0.1337 & 0.1138 & 0.0493 & 0.0241 & 0.0350 & 0.0409 & 0.0132 & 0.0073 & 0.0348 & 0.0264 & 0.0121 & 0.0074 \\ LightGCN-MACR & 0.1188 & 0.0928 & 0.0478 & 0.0219 & 0.0343 & 0.027 & **0.0233** & 0.0126 & 0.0269 & 0.0204 & 0.0108 & 0.0065 \\ LightGCN-Tail & 0.1647 & 0.1391 & 0.0628 & 0.0319 & 0.057 & 0.0466 & 0.0154 & 0.0095 & 0.0369 & 0.0283 & 0.0151 & 0.0094 \\ LightGCN-BxQuAD & 0.1378 & 0.1130 & 0.0689 & **0.0360** & 0.0545 & 0.0431 & 0.0209 & 0.0123 & 0.0389 & 0.0304 & 0.0164 & **0.0108** \\ \hline LightGCN-DAP-o & **0.1834** & **0.1564** & 0.0538 & 0.0245 & **0.0634** & **0.0521** & 0.0137 & 0.0073 & **0.0436** & **0.0339** & 0.0134 & 0.0079 \\ LightGCN-DAP-t & 0.1672 & 0.1427 & **0.0708** & 0.0354 & 0.0562 & 0.0461 & 0.0218 & **0.0129** & 0.0414 & 0.0328 & **0.0166** & 0.0102 \\ \hline improve & 0.77\% & 1.16\% & 23.96\% & 28.27\% & 1.12\% & 0.97\% & 50.55\% & 58.70\% & 4.83\% & 5.61\% & 48.89\% & 54.90\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison between our method DAP and other counterparts on the Overall and Tail test sets. The ‘improve’ is the relative improvement of LightGCN-DAP-o over LightGCN.
We report the all-ranking performance _w.r.t._ two metrics: Recall and NDCG cut at 20.
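For reference, the two metrics can be computed per user as in the following standard sketch (the function names are ours); the reported numbers are averages over all test users.

```python
import numpy as np

def recall_at_k(ranked_items, relevant_items, k=20):
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items) if relevant_items else 0.0

def ndcg_at_k(ranked_items, relevant_items, k=20):
    relevant = set(relevant_items)
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked_items[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0

# both metrics are averaged over all test users to produce the numbers in Tables 3 and 4
```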
#### 4.1.2 Hyper-parameter Settings
For a fair comparison, all compared GCN-based models are implemented with 3 layers except for UltraGCN. We optimize all models with Adam [27] with a batch size of 4096. For our method, the number of clusters \(P\) is searched in \(\{1,5,10,20,30,\cdots,70\}\). Note that we keep the same \(P\) in each layer when operating \(Kmeans\). The hyper-parameters \(\alpha\) and \(\beta\) in Eq. (17) are tuned in the range of [0, 2.0] with a step of 0.1.
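The search space can be written compactly as in the sketch below; expanding the ellipsis of the \(P\) grid in steps of 10 is our assumption.

```python
import itertools

param_grid = {
    "P": [1, 5, 10, 20, 30, 40, 50, 60, 70],          # same P at every layer
    "alpha": [round(0.1 * i, 1) for i in range(21)],   # 0.0 ... 2.0, step 0.1
    "beta": [round(0.1 * i, 1) for i in range(21)],
}
configs = [dict(zip(param_grid, values))
           for values in itertools.product(*param_grid.values())]
# each configuration is evaluated on the validation set at inference time (no retraining needed)
```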
### Recommendation Performance (RQ1)
We compare all methods on the Overall and Tail test sets in Tables 3 and 4, where the hyper-parameters of DAP-t and DAP-o are tuned to their best on the Tail and Overall validation sets, respectively. The improvements reported in Tables 3 and 4 are calculated by comparing LightGCN-DAP-o (UltraGCN-DAP-o) with LightGCN (UltraGCN). In general, our DAP significantly boosts both GCN methods on the Tail test set. The main observations are as follows:
* In all cases, our DAP-o brings performance gains in Recall and NDCG for LightGCN on both the Overall and Tail test sets, while the other baselines only boost LightGCN on the Tail test set. These comparison methods mainly rely on suppressing popular items in exchange for the promotion of tail items. In contrast, our method revises the node representations by intervening on the popularity bias based on theoretical analyses of GCNs, and is therefore more applicable to GCN-based backbones.
* In terms of the performance on the Tail test set, our DAP-t achieves a significant improvement over LightGCN: the average improvements of LightGCN-DAP-t over LightGCN on the three datasets are 95.71% on Recall and 121.92% on NDCG. At the same time, the performance on the Overall test set only drops slightly. Although some competitive baselines outperform DAP-t on some metrics, DAP demonstrates a stronger overall ability across the different test sets.
* In order to further verify the effectiveness of our method compared to LightGCN, we show the performance comparison at each layer in Figure 4. Overall, it can be seen that DAP boosts LightGCN
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{
\begin{tabular}{c} Dataset \\ \end{tabular} } & \multicolumn{3}{c|}{Gowalla} & \multicolumn{3}{c|}{Yelp2018} & \multicolumn{3}{c}{Amazon-book} \\ \cline{2-13} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Tail} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Tail} & \multicolumn{2}{c|}{Overall} & \multicolumn{2}{c|}{Tail} \\ \cline{2-13} & \multicolumn{2}{c|}{Recall} & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG & Recall & NDCG \\ \hline UltraGCN & 0.1862 & 0.1579 & 0.0447 & 0.0213 & 0.0676 & 0.0554 & 0.0127 & 0.0074 & 0.0682 & 0.0556 & 0.0436 & 0.0297 \\ UltraGCN-IPSCN & 0.1345 & 0.1123 & 0.0451 & 0.0208 & 0.0401 & 0.0324 & 0.0144 & 0.0087 & 0.0442 & 0.0356 & 0.0458 & 0.0317 \\ UltraGCN-CausE & 0.1408 & 0.1177 & 0.0449 & 0.0209 & 0.0411 & 0.0329 & 0.0151 & 0.0096 & 0.0459 & 0.0369 & 0.0463 & 0.0320 \\ UltraGCN-DICE & 0.1424 & 0.1201 & 0.0512 & 0.0247 & 0.0516 & 0.0417 & 0.0157 & 0.0096 & 0.0545 & 0.0423 & 0.0491 & 0.0343 \\ UltraGCN-MACR & 0.1311 & 0.1078 & 0.0517 & 0.0252 & 0.0387 & 0.0323 & **0.0248** & **0.0141** & 0.0501 & 0.0398 & 0.0488 & 0.0335 \\ UltraGCN-Tail & 0.1788 & 0.1521 & 0.0634 & 0.0321 & 0.0618 & 0.0501 & 0.0167 & 0.0102 & 0.0599 & 0.0499 & 0.0531 & 0.0378 \\ UltraGCN-BxQuAD & 0.1482 & 0.1289 & 0.0694 & 0.0361 & 0.0591 & 0.0482 & 0.0218 & 0.0136 & 0.0623 & 0.0517 & **0.0547** & 0.0386 \\ \hline UltraGCN-DAP-o & **0.1868** & **0.1580** & 0.0551 & 0.0271 & **0.0678** & **0.0555** & 0.0135 & 0.0079 & **0.0688** & **0.0562** & 0.0462 & 0.0316 \\ UltraGCN-DAP-t & 0.1701 & 0.1483 & **0.0714** & **0.0362** & 0.0607 & 0.0493 & 0.0237 & 0.0135 & 0.0625 & 0.0520 & 0.0543 & **0.0391** \\ \hline improve & 0.32\% & 0.06\% & 5.59\% & 6.57\% & 0.30\% & 0.18\% & 6.30\% & 6.76\% & 0.88\% & 1.07\% & 5.96\% & 6.40\% \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance comparison between our method DAP and other counterparts on the Overall and Tail test sets. The ‘improve’ is the relative improvement of UltraGCN-DAP-o over UltraGCN.
stably layer by layer. In particular, on Amazon-book, where LightGCN degrades in performance as the graph convolution goes deeper, DAP shows no accuracy drop. This indicates the effectiveness of our debiasing method for improving the representations of nodes.
* We also implement our DAP on UltraGCN. Because UltraGCN directly uses an infinite number of graph convolution layers, we can only debias its final representations. In Table 4, compared with other baselines, our method shows a similar trend to Table 3, which validates its effectiveness. It should be noted that the improvement on the Tail test set is relatively small compared to that on LightGCN. This is mainly because we cannot debias at each layer of graph convolution and obtain the most ideal representations.
To conclude, DAP can effectively improve the performance of tail items for GCN backbones and outperforms baselines in general.
### Ablation Study (RQ2)
#### 4.3.1 Performance of Variants.
To evaluate the amplification effect brought by higher-degree nodes and lower-degree nodes, we design five variants of DAP implemented on LightGCN, including: (1) DAP-kh: this variant is derived by setting \(\beta=0\) in Eq. (17), aiming to evaluate the amplification effect of higher-degree nodes; (2) DAP-kl: this variant is obtained by setting \(\alpha=0\) for evaluating the effect of lower-degree nodes; (3) DAP-nh: this variant treats the one-order neighbors of the target node as its cluster instead of using \(Kmeans\) and sets \(\beta=0\) for exploring the effect of higher-degree neighbors; (4) DAP-nl: this variant is the opposite of DAP-nh, using one-order lower-degree neighbors (_i.e.,_\(\alpha=0\)); (5) DAP-m: this variant is derived by removing the similarity calculation function \(\mathcal{M}\) in Eq. (17) and does not consider the effect of the lower-degree part (_i.e._, \(\beta=0\)).
Figure 4: Performance comparison between LightGCN and LightGCN-DAP with different layers of graph convolution on the Overall test set.
For each variant, we need only tune \(\alpha\) or \(\beta\). For example, we adjust \(\alpha\) for DAP-kh from 0 to 1 on the Gowalla dataset. Figure 5 shows the results on the Overall test set and we have the following observations: (1) on the three datasets, DAP-kh achieves the best performance, which reflects that higher-degree nodes have a greater impact on other nodes than lower-degree ones. (2) DAP-kh and DAP-kl perform better than DAP-nh and DAP-nl on the three datasets, indicating that estimating the amplified bias only among one-order neighbors is not enough and may introduce noise. \(Kmeans\) can automatically help discover the highly-influential nodes, and in this way the amplified bias can be estimated more accurately. (3) Comparing DAP-kh and DAP-m, we find that DAP-kh outperforms DAP-m, which reflects that the similarity function \(\mathcal{M}\) captures the relation strength among nodes and thus helps estimate the amplified bias well.
#### 4.3.2 Effect of Different Hyper-Parameters \(\alpha\) and \(\beta\).
Tuning \(\alpha\) and \(\beta\) is important for the performance at the inference stage. We plot the performance under different \(\alpha\) and \(\beta\) settings on the three datasets in Figure 5. It can be observed that the performance on the Overall test set first increases gradually and then decreases; LightGCN corresponds to the special case \(\alpha=0\) and \(\beta=0\). That is to say, the popularity bias is not completely harmful. Directly eliminating all the bias is not reasonable in recommendation, which conforms with the finding of [21].
#### 4.3.3 Effect of Different Hyper-Parameter \(P\).
Table 5 reports our experimental results on the three datasets _w.r.t._ the hyper-parameter \(P\). As can be seen, the performance first increases as \(P\) grows and then decreases. For the different datasets, when \(P\) is too small or too large, the popularity bias among
Figure 5: Ablation study of DAP with different hyper-parameters \(\alpha\) and \(\beta\) on the Overall test set.
nodes cannot be captured accurately, and therefore the performance is unsatisfactory. \(Kmeans\) is a simple clustering tool; more advanced unsupervised clustering methods could be explored to further improve the performance.
### Alleviating Popularity Bias (RQ3)
We have already discussed in the introduction that GCNs amplify the popularity bias. In this part, we verify that our debiasing framework can mitigate this amplification. The result of TR@20 (the ratio of recommended tail items cut at 20) is shown in Figure 6. As can be seen, while the TR@20 of LightGCN significantly decreases with more graph convolution layers, DAP restrains the continuous worsening of the popularity bias and gradually improves TR@20 on the three datasets. This means that more tail items are recommended as GCNs go deeper, and thus the popularity bias is effectively alleviated by DAP. In addition, combining Figures 4 and 6 leads to the conclusion that DAP not only promotes the overall performance, but also alleviates the popularity bias at each layer.
## 5 Related work
In this section, we review the research topics related to our work: recommendation debiasing and GCNs debiasing in classification.
### Recommendation Debiasing
Recommendation debiasing is a recently emerged research topic [1] focusing on various biases in recommendation, for example, popularity bias [28, 29, 30], exposure bias [31], position bias [32], etc. Many methods have been explored to analyze and alleviate the popularity bias in recommender systems. For example, [12, 24] propose inverse propensity score (IPS) methods that reweight the interactions to debias the popularity bias in the training loss. However, these methods are difficult to tune because estimating the propensity score is challenging. [33] proposes another method which combines data imputation and IPS to jointly learn an unbiased recommender. However, this is a brute-force way of improving long-tail items, with a huge drop in recommendation accuracy. Other empirical approaches such as adversarial learning [34], meta learning [35], and transfer learning [36] are
\begin{table}
\begin{tabular}{c|c c|c c|c c} \hline Dataset & \multicolumn{2}{c|}{Gowalla} & \multicolumn{2}{c|}{Yelp2018} & \multicolumn{2}{c}{Amazon-book} \\ \hline P & Recall & NDCG & Recall & NDCG & Recall & NDCG \\ \hline
1 & 0.1819 & 0.1547 & 0.0629 & 0.0517 & 0.0423 & 0.0329 \\ \hline
5 & 0.182 & 0.1548 & 0.063 & 0.0518 & 0.0436 & 0.0339 \\ \hline
10 & 0.1823 & 0.1552 & 0.0633 & 0.0521 & 0.0435 & 0.0339 \\ \hline
30 & 0.1831 & 0.1563 & 0.0628 & 0.0518 & 0.0435 & 0.0338 \\ \hline
50 & 0.1834 & 0.1564 & 0.0626 & 0.0517 & 0.0428 & 0.0334 \\ \hline
70 & 0.1833 & 0.1564 & 0.0623 & 0.0515 & 0.0426 & 0.0331 \\ \hline \end{tabular}
\end{table}
Table 5: Effect of \(P\) over DAP for the three datasets on the Overall test set.
Figure 6: DAP can effectively alleviate the popularity bias.
developed without estimating the propensity weight. Ranking adjustment is another line of research to solve the popularity bias issue [13, 37].
Recently, methods based on causal inference have been widely applied to solve bias issues in recommendation, for example, MACR [9] and DICE [10]. In addition, [11] proposes a causal framework to leverage the popularity bias in recommendation. Other causal methods for learning an unbiased recommender [25, 38, 39, 40, 41] have been proposed to tackle various bias issues. We refer the readers to a systematic survey for more details [2].
### GCNs Debiasing in Classification
Recently, GCN-based models have demonstrated promising performance and advancements on classification tasks. However, some recent studies also reveal various issues of GCNs including over-smoothing, vulnerability, and degree-related biases [19, 42, 43, 44, 45]. The over-smoothing issue can be treated as a kind of bias in GCNs, which means the node representations are inclined to become indistinguishable, and it degrades the performance with many graph convolution layers. Several methods have been proposed to tackle this problem. [46] proposes GCNII, which extends the vanilla GCN model with initial residual connections and identity mapping to prevent over-smoothing. [45] proposes two methods, adding a regularizer to the training objective and optimizing the graph topology based on the model predictions. Nevertheless, [47] argues that the over-smoothing issue only happens after a large number of iterations and that the current results with several layers of graph convolution are relatively far from the ideal over-smoothing situation. For recommendation, one node can reach all the other nodes by stacking 7 layers [7]. In this regard, we can treat the node clustering issue with shallow graph convolution layers and over-smoothing with a large number of layers differently.
Besides, [19] points out that GCNs are biased towards higher-degree nodes, achieving higher accuracy on them than on lower-degree nodes. The authors analyze this issue and argue that nodes with low degrees tend to have very few labeled nodes, which results in sub-optimal performance on low-degree nodes. Therefore, [19] proposes a method exploiting pseudo labels to enhance the representations of low-degree nodes. [48] also proposes a training algorithm based on self-training that adds confident data with virtual labels to the labeled set to enlarge the training set. [26] proposes a method which learns a neighborhood translation from head nodes to tail nodes. In this way, the representations of tail nodes can be enhanced, improving the performance on tail nodes.
In this work, we analyze the bias existing in GCN-based recommenders, whereby high-degree nodes and low-degree nodes influence each other. It is different from the bias in classification introduced in [19], and our method differs from these methods based on producing pseudo labels and translation.
## 6 Conclusion and future work
In this paper, we first theoretically analyzed how GCN-based recommenders amplify the popularity bias. We showed that popular items tend to dominate the updating information of neighbor users in the training stage, which pulls user positions closer to popular items. After multiple rounds of neighborhood aggregation, popular items become more influential by affecting more high-order neighbor users. Based on the above insights, we propose a simple yet generic debiasing framework, DAP. Our method is applied in each graph convolution layer in the inference stage by intervening on
the amplification effect on nodes in the representation space. Extensive experiments on the three real-world datasets justify the effectiveness of DAP. Our method could promote the recommendation performance on tail items and alleviate the popularity bias for GCN backbone models.
In the future, we will explore and theoretically analyze more problems hidden in graph-based recommendation methods. In addition, various biases also exist in recommender systems that are harmful to users and need to be solved, such as position bias and exposure bias. It will be meaningful to propose a universal solution to solve various biases.
|
2307.04234 | Extreme N-emitters at high-redshift: signatures of supermassive stars
and globular cluster or black hole formation in action? | [Abridged] Using the JWST/NIRSpec observations from CEERS we found an extreme
N-emitter, CEERS-1019 at z=8.6782 showing intense NIV and NIII emission. From
the observed rest-UV and optical lines we conclude that it is compatible with
photoionization from stars and we determine accurate abundances for C, N, O,
and Ne, relative to H, finding a highly supersolar ratio log(N/O) =
-0.18+/-0.11, and normal log(C/O) = -0.75+/-0.11 and log(Ne/O) = -0.63+/-0.07,
for its low metallicity, 12+log(O/H)= 7.70+/-0.18. We also analyze other
N-emitters from the literature. All show strongly enhanced N/O ratios and two
of them normal C/O. Massive star ejecta from WR stars are needed to explain the
galaxies with enhanced C/O (Lynx arc and Mrk 996). On the other hand,
supermassive stars (>1000 Msun, SMS) in the ``conveyer-belt model'' put forward
to explain globular clusters (GCs), predict a high N/O and small changes in
C/O, compatible with CEERS-1019, the Sunburst cluster, SMACS2031, and GN-z11.
Based on the chemical abundances, possible enrichment scenarios, compactness,
and high ISM density, we suggest that CEERS-1019, SMACS2031, and the Sunburst
cluster could contain proto-GCs. Finally, we propose that some N-emitters
enriched by SMS could also have formed intermediate-mass black holes, and we
suggest that this might be the case for GN-z11. Our observations and analysis
reinforce the suggested link between some N-emitters and proto-GC formation,
which is supported both by empirical evidence and quantitative models.
Furthermore, the observations provide possible evidence for the presence of
supermassive stars in the early Universe (z>8) and at z~2-3. Our analysis also
suggests that the origin and nature of the N-emitters is diverse, including
also objects like GN-z11 which possibly host an AGN. | R. Marques-Chaves, D. Schaerer, A. Kuruvanthodi, D. Korber, N. Prantzos, C. Charbonnel, A. Weibel, Y. I. Izotov, M. Messa, G. Brammer, M. Dessauges-Zavadsky, P. Oesch | 2023-07-09T17:24:53Z | http://arxiv.org/abs/2307.04234v2 | Extreme N-emitters at high-redshift: signatures of supermassive stars and globular cluster or black hole formation in action?
###### Abstract
Context:Recent JWST spectroscopic observations of the \(z=10.6\) galaxy GN-z11 have revealed a very peculiar UV spectrum showing intense emission lines of nitrogen, which are generally not detected in galaxy spectra. This observation indicates a super-solar N/O abundance ratio at low metallicity, resembling only the abundances seen in globular cluster (GC) stars. This discovery suggests that we might be seeing proto-GCs in formation or possibly even signatures of supermassive stars.
Aims:To examine if other objects with strong N iv and/or Nv Nii emission lines (N-emitters, hereafter) exist and to better understand their origin and nature, we have examined available JWST spectra and data from the literature.
Methods:Using the NIRSpec/JWST observations from CEERS we found an extreme N-emitter, CEERS-1019 at \(z=8.6782\) showing intense N iv \(\lambda 1486\) and N iii \(\lambda 1750\) emission. From the observed rest-UV and optical lines we conclude that it is compatible with photoionization from stars and we determine accurate abundances for C, N, O, and Ne, relative to H. We also (re-analyze other N-emitters from the literature, including three lensed objects at \(z=2.3-3.5\) (the Sunburst cluster, SMACS2031, and Lynx arc) and a low-redshift compact galaxy, Mrk 996. We compare the observed abundance ratios to observations from normal star-forming galaxies, predicted with yields from massive stars and predictions from supermassive stars (SMS with \(\sim 10^{4}-10^{5}\) M\({}_{\odot}\)).
Results:For CEERS-1019 we find a highly supersolar ratio \(\log({\rm N/O})=-0.18\pm 0.11\), and abundances of \(\log({\rm C/O})=-0.75\pm 0.11\) and \(\log({\rm Ne/O})=-0.63\pm 0.07\), which are normal compared to other galaxies at the low metallicity (\(12+\log({\rm O/H})\)= \(7.70\pm 0.18\)) of this galaxy. The three lensed N-emitters also show strongly enhanced N/O ratios and two of them normal C/O. The high N/O abundances can be reproduced by massive star winds assuming a special timing and essentially no dilution with the ambient ISM. Alternatively, these N/O ratios can be explained by mixing the ejecta of SMS with comparable amounts of unenriched ISM. Massive star ejecta (from WR stars) are needed to explain the galaxies with enhanced C/O (Lynx arc, Mrk 996). On the other hand, SMS in the "conveyer-belt model" put forward to explain globular clusters, predict a high N/O and small changes in C/O, compatible with CEERS-1019, the Sunburst cluster, SMACS2031, and GN-z11. Based on the chemical abundances, possible enrichment scenarios and other properties, such as their compactness and high ISM density, we discuss which objects could contain proto-GCs. We suggest that this is the case for CEERS-1019, SMACS2031, and the Sunburst cluster. Enrichment in the Lynx arc and Mrk 996 is likely due to normal massive stars (WR), which implies that the star-forming regions in these objects cannot become GCs. Finally, we propose that some N-emitters enriched by SMS could also have formed intermediate mass black holes, and we suggest that this might be the case for GN-z11.
Conclusions:Our observations and analysis reinforce the suggested link between some N-emitters and proto-GC formation, which is supported both by empirical evidence and quantitative models. Furthermore, the observations provide possible evidence for the presence of supermassive stars in the early Universe (\(z>8\)) and at \(z\sim 2-3\). Our analysis also suggests that the origin and nature of the N-emitters is diverse, including also objects like GN-z11 which possibly host an AGN.
## 1 Introduction
Long known as the most distant spectroscopically-confirmed galaxy (Oesch et al. 2016), GN-z11 has recently lead to new exciting and intriguing results, after the first spectra of this galaxy were obtained with the JWST. Indeed, the JWST/NIRSpec observations of Bunker et al. (2023) allowed to confirm a very high redshift of this source (\(z=10.60\)) and showed the presence of hydrogen, carbon, oxygen, magnesium, and neon emission lines in the rest-UV and rest-optical spectrum, often seen in star-forming galaxies at low-redshift and detected at \(z\sim 4-8\) in other JWST spectra (see e.g., Schaerer et al. 2022; Cameron et al. 2023; Nakajima et al. 2023; Tang et al. 2023). Most surprisingly, however, the spectrum of GN-z11 revealed the presence of strong N iii] \(\lambda 1750\) and N iv] \(\lambda 1486\) lines (Bunker et al. 2023), which are very rarely detected in galaxies (see e.g., Barchiesi et al. 2023). Furthermore, the object is found to be very compact (Tacchella et al. 2023), which could indicate the presence
of massive compact star clusters or point to an active galactic nucleus (AGN) (Bunker et al., 2023; Tacchella et al., 2023; Charbonnel et al., 2023; Maiolino et al., 2023).
The discovery of the peculiar emission line spectrum has triggered a series of papers discussing in particular their origin and the nature of GN-z11. Bunker et al. (2023) first suggested that the strong N emission lines may imply an unusually high N/O abundance. They also discussed whether the emission would be powered by star formation or photoionization from an AGN, without reaching clear conclusions on this issue. The quantitative analysis of the emission line spectrum of GN-z11 by Cameron et al. (2023a) confirmed the high N/O abundance, with a lower limit of four times solar, finding also possibly a less extreme C/O ratio, and a metallicity (O/H), which is sub-solar, although not well constrained. Using a suite of photoionization models, Senchyna et al. (2023) inferred the N/O abundance with a lower uncertainty and constrained the metallicity to \(12+\log({\rm O/H})=7.84^{+0.06}_{-0.05}\), confirming in particular a large overabundance of N/O \(\approx 3\times\) solar.
The finding of an exceptionally high N/O abundance at low metallicity (typically ten times the normal N/O value at this O/H) has triggered different speculations about the sources and processes explaining this enrichment. The scenarii discussed include enrichment from massive stars winds (WR stars) or AGB stars, i.e. relatively "classical scenarii", or more "exotic" options such as pollution from PopIII star-formation, tidal disruption of stars from encounters with black holes, ejecta from very massive stars formed through collisions in dense clusters, and supermassive stars (see: Cameron et al., 2023; Watanabe et al., 2023; Senchyna et al., 2023; Charbonnel et al., 2023; Nagele & Umeda, 2023). Supermassive stars, for example, have been invoked by Charbonnel et al. (2023) and Nagele & Umeda (2023) since very strong enrichment of N and low metallicity is difficult to explain and requires fairly fined-tuned conditions with classical scenarios (see also Cameron et al., 2023; Watanabe et al., 2023). Furthermore, such stars (with masses \(M_{\star}\gtrsim 1000\) M\({}_{\odot}\)) have been proposed to form by runaway collisions in very dense stellar clusters, and they could explain the long-standing problem of multiple stellar populations and peculiar abundance patterns observed in globular clusters (GC), as discussed by Gieles et al. (2018) and Denissenkov & Hartwick (2014). If correct, this would probably represent the first observational evidence of supermassive stars, which are also of great interest, for example for understanding the seeds of supermassive black holes (e.g., Portegies Zwart & McMillan, 2002; Woods et al., 2019; Trinca et al., 2023, and references therein).
Not only the abundance ratios observed in GN-z11 resemble those of GC stars. Its compactness and high ISM density also indicate conditions expected in young very massive clusters, which could be proto-GCs (Senchyna et al., 2023; Charbonnel et al., 2023). GN-z11 might thus also be the first high-redshift object where the long sought-for peculiar abundance patterns characterizing GCs are observed (e.g., Renzini, 2017; Gratton et al., 2019, and references therein). These exciting and surprising findings obviously pose the question of the uniqueness of GN-z11, beg for more examples, and call for a better understanding of similar objects, if they exist.
Indeed, although very rare, other galaxies showing emission lines of N iii] \(\lambda 1750\) or N iv] \(\lambda 1486\) in the UV (referred to as N-emitters subsequently) are known, as pointed out by Senchyna et al. (2023) and found in the compilation of Barchiesi et al. (2023). Apart from objects clearly identified as AGN, the Lynx arc, a lensed \(z=3.36\) galaxy identified for N iv] \(\lambda 1486\) and He ii \(\lambda 1640\) emission is probably the first N-emitter studied in detail (Fosbury et al., 2003; Villar-Martin et al., 2004). From photoionization modeling Villar-Martin et al. (2004) derive a high N/O ratio and sub-solar metallicity. Another strongly lensed object at \(z=2.37\), the multiply-imaged compact star cluster in the Sunburst arc which has extensively been studied in recent years (e.g. Rivera-Thorsen et al., 2019; Vanzella et al., 2022), shows N iii] \(\lambda 1750\) emission, as shown in the high S/N spectrum of Mestric et al. (2022). Pascale et al. (2023) have shown that N/O is also elevated (\(\sim 4\times\) solar) at a metallicity \(\sim 1/5\) solar. Finally, in the low-redshift Universe, Mrk 996 uniquely stands out as the only galaxy showing strong N iii] \(\lambda 1750\) emission in the UV (see Mingozzi et al., 2022), and this blue compact dwarf galaxy has long been known as very peculiar, showing e.g. a high electron density, the presence of strong emission lines from WR stars in the optical, and a high N/O abundance, at least in its core (e.g. Thuan et al., 1996; James et al., 2009; Telles et al., 2014).
Here we present a detailed analysis of the \(z=8.68\) galaxy CEERS-1019 observed with NIRSpec/JWST by the public CEERS survey (Finkelstein et al., 2017). This object has previously been studied by several authors (Tang et al., 2023; Nakajima et al., 2023; Larson et al., 2023), but none of these have analysed the carbon and nitrogen abundance and its rest-UV spectrum. Only very recently, Isobe et al. (2023) have analyzed the UV spectrum in detail. Similarly to GN-z11, this galaxy exhibits a very peculiar rest-UV spectrum, making it clearly an N-emitter. Showing numerous emission lines of H, C, N, O, Ne, and the auroral [O iii]\(\lambda 4363\) line, it allows us to accurately determine the chemical abundances of these elements and offers thus a unique opportunity to study the second N-emitter in the early Universe and to enlarge the sample of these rare objects. We also analyze the other known N-emitters and compare their properties to those of CEERS-1019 and GN-z11. Finally, we confront the observed abundance patterns with predictions from normal massive stars and with predicted enrichment patterns from supermassive stars.
The paper is structured as follows. In Sect. 2 we describe the observational data, reduction, and measurements used in this work. We then discuss the nature of the ionizing source of CEERS-1019 (Sect. 3). The chemical abundances and other physical properties of CEERS-1019 are derived in Sect. 4. In Sect. 5 we compare the abundance ratios of CEERS-1019 to other N-emitters and normal star-forming galaxies, and we present different chemical enrichment scenarios to explain them. We also discuss the possible link between CEERS-1019 and proto-GCs. The main results of our work are summarized in Sect. 6. Throughout this work, we assume concordance cosmology with \(\Omega_{\rm m}=0.274\), \(\Omega_{\rm\Lambda}=0.726\), and \(H_{0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\).
## 2 CEERS-1019: a new strong N emitter at high redshift
CEERS-1019 (\(\alpha\), \(\delta\) [J2000] \(=215.0354^{\circ}\), \(52.8907^{\circ}\)) was initially identified as a \(z_{\rm phot}\simeq 8.6\) dropout galaxy by Roberts-Borsani et al. (2016) and spectroscopically confirmed at \(z_{\rm spec}=8.683\) by Zitrin et al. (2015) through strong Ly\(\alpha\) emission (see also Mainali et al., 2018 and Witten et al., 2023). It is one of the most distant Ly\(\alpha\) emitters known and is thought to reside in an overdense region and ionized bubble, boosting substantially its Ly\(\alpha\) transmission (Larson et al., 2022; Leonova et al., 2022; Whitler et al., 2023). Mainali et al. (2018) also report a tentative detection of N v \(\lambda 1240\) (\(4.6\sigma\)), suggesting a hard ionizing spectrum of this source.
Recently, much deeper spectroscopy of CEERS-1019 was reported and analyzed by Tang et al. (2023), Nakajima et al.
(2023), and Larson et al. (2023) using NIRSpec, along with NIRCam and MIRI imaging. Although with some discrepancies, these works derived important physical properties of CEERS-1019 such as its stellar mass (\(\log(M_{\star}/M_{\odot})\approx 8.7-10.1\)), gas-phase metallicities (\(12+\log(\)O/H\()\simeq 7.6-8.0\)), ionizing indicators (e.g., O\(32\simeq 13-18\)), among others. Interestingly, Larson et al. (2023) reported a tentative (2.5\(\sigma\)) detection of a broad component in H\(\beta\) that could be related to AGN activity (the presence of an AGN will be further discussed in Section 3). Here, we re-analyze the available JWST data of CEERS-1019.
### JWST NIRSpec and NIRCam observations
JWST/NIRSpec spectra are available for CEERS-1019 as part of the Cosmic Evolution Early Release Science (CEERS1; Finkelstein et al. 2023) program. These observations include both low-resolution PRISM and medium-resolution grating (G140M/F100LP, G235M/F170LP, and G395M/F290LP), providing spectral resolution of \(R\simeq 100\) and \(R\simeq 1000\), respectively, and a spectral coverage \(\simeq 1-5\mu\)m. Standard 3-shutter slits and a 3-point nodding pattern were used. The total exposure time for each medium-resolution grating was 3107 seconds, split into three individual exposures of 14 groups each. Deeper observations were obtained with the low-resolution PRISM, with a total exposure time of 6214 seconds. Both PRISM and medium-resolution observations were obtained with an aperture position angle PA \(\simeq 89.32\) deg (see Figure 1).
Footnote 1: [https://ceers.github.io/](https://ceers.github.io/)
Data reduction was performed using the official JWST pipeline 2 for Level 1 data products and msaexpr3 for Levels 2 and 3. Bias and dark current are subtracted followed by the correction of the \(1/f\) noise and the "snowball" events. We use the calibration reference data system (CRDS) context nvs1_1063.pmap to correct spectra for flat-field and implement the wavelength and photometric calibrations. 2D spectra of each slitlet are then drizzle-combined and the background is subtracted following the three-shutter dither pattern. Finally, 1D spectra are extracted using the inverse-variance weighted kernel following Horne (1986). Figure 1 shows the NIRSpec spectra of CEERS-1019.
Footnote 2: [https://jwst-pipeline.readthedocs.io/](https://jwst-pipeline.readthedocs.io/)
Footnote 3: [https://github.com/gbramer/msaexpr](https://github.com/gbramer/msaexpr)
CEERS-1019 was also observed with _JWST_/NIRCam with the F115W, F150W, F200W, F277W, F356W, F410M, and F444W filters with exposure times of \(\sim 3000\) seconds (Finkelstein et al. 2023). NIRCam images were reduced using the grizli reduction pipeline (Brammer 2023), which includes procedures for masking the "snowball" artifacts and minimizing the impact of \(1/f\) noise. Photometry of CEERS-1019 is performed using SExtractor (Bertin & Arnouts 1996) in dual mode. For each NIRCam filter, we use the point-spread functions (PSFs) provided by G. Brammer within the grizli PSF library,4 which are based on models from webbpsf (Perrin et al. 2014). Images are then PSF-matched to F444W, which has the largest PSF within the NIRCam filters. We measure the flux of CEERS-1019 in each filter using a circular aperture of \(0.16^{\prime\prime}\) radius (4 pix) and apply an aperture correction derived in F444W using the "FLUX_AUTO" measured in a Kron-like aperture with default Kron parameters. Then, we scale all fluxes to total fluxes based on the encircled energy of the circularized Kron aperture on the F444W PSF from webbpsf (see Weibel et al. in prep. for more details). As shown in the bottom left panel of Figure 1, CEERS-1019 shows a complex morphology with three compact clumps.
Footnote 4: [https://github.com/gbrammer/grizli-psf-library](https://github.com/gbrammer/grizli-psf-library)
### Emission line measurements
As shown in Figure 1, CEERS-1019 presents intense nebular emission in the rest-frame UV and optical. As a first step, we determine the systemic redshift of CEERS-1019 using well-detected (\(\geq 10\sigma\)) and uncontaminated (i.e., not blended) emission lines detected in the G395M spectrum. Using the centroids of [Ne iii] \(\lambda\)3869, H\(\gamma\), H\(\beta\), and [O iii] \(\lambda\lambda\)4959,5007 we derive the mean value and scatter of \(z_{\rm sys}=8.6782\pm 0.0006\).
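A minimal sketch of this step is given below; the rest-frame wavelengths are approximate and the observed centroids are placeholders inserted only to make the snippet runnable, not the measured values.

```python
import numpy as np

# approximate rest-frame wavelengths (Angstrom) and hypothetical observed centroids
rest = {"[NeIII]": 3869.86, "Hgamma": 4341.69, "Hbeta": 4862.69, "[OIII]5007": 5008.24}
obs = {"[NeIII]": 37453.5, "Hgamma": 42019.9, "Hbeta": 47062.3, "[OIII]5007": 48470.6}

z = np.array([obs[k] / rest[k] - 1.0 for k in rest])
print(f"z_sys = {z.mean():.4f} +/- {z.std():.4f}")   # mean and scatter over the lines
```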
Several rest-frame UV lines are detected with high significance (\(\geq 5\sigma\)) in the deep PRISM spectrum, such as N iv] \(\lambda\)1486, C iv \(\lambda\)1550, O iii] \(\lambda\)1666, and C iii] \(\lambda\)1909. This contrasts with the shallower medium-resolution G140M spectrum that shows only Ly\(\alpha\) and N iv] at \(\geq 3\sigma\). Thus we use the much higher signal-to-noise ratio (S/N) PRISM spectrum to measure the fluxes of the rest-frame UV lines. We fit simultaneously several Gaussian profiles to account for the emission of N iv], C iv, O iii], N iii], and C iii], and a power-law in the form of \(f_{\lambda}\propto\lambda^{\beta}\) to fit the continuum level between \(1.3-2.1\mu\)m (\(\lambda_{0}=1300-2200\)A). Since these lines are not resolved in the PRISM spectrum,6 we fixed the line widths of each line to the expected instrumental resolution at their corresponding wavelengths (\(R\simeq 30-45\))7. We repeat the fit 500 times while bootstrapping the spectrum according to its \(1\sigma\) error, and consider the standard deviation of each parameter as its \(1\sigma\) uncertainty. Table 1 summarizes our flux measurements. Along with Ly\(\alpha\), N iv] is found to be the strongest emission line in the rest-UV, stronger than C iv and C iii] by a factor \(\simeq 1.8\) and \(\simeq 1.5\), respectively. We also infer a steep UV slope of \(\beta_{\rm UV}^{\rm spec}=-2.11\pm 0.09\) from the spectrum, which is consistent with the photometric one (\(\beta_{\rm UV}^{\rm phot}=-2.11\pm 0.15\)) using the apparent magnitudes in the F150W and F200W filters (F150W \(=25.25\pm 0.08\) and F200W \(=25.29\pm 0.07\)).
Footnote 6: N iv] \(\lambda\)1486 presents an observed line width FWHM \(=394\pm 95\) km s\({}^{-1}\) in the medium-resolution G140M spectrum.
Footnote 7: [https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters](https://jwst-docs.stsci.edu/jwst-near-infrared-spectrograph/nirspec-instrumentation/nirspec-dispersers-and-filters)
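The line-plus-continuum fitting and bootstrap error estimate described above can be illustrated with the simplified single-line sketch below; the pivot wavelength, initial guesses, and variable names are our own assumptions, not the actual fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(lam, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def model(lam, amp, mu, sigma, a, p):
    # unresolved emission line on top of a power-law continuum f_lambda ~ lambda^p
    return gauss(lam, amp, mu, sigma) + a * (lam / 1.5e4) ** p

def fit_line(lam, flux, err, mu0, R=40, n_boot=500):
    """Fit one line with its width fixed to the instrumental resolution; bootstrap the errors."""
    sigma0 = mu0 / R / 2.355
    results = []
    for _ in range(n_boot):
        perturbed = flux + np.random.normal(0.0, err)            # resample within 1-sigma errors
        popt, _ = curve_fit(
            lambda l, amp, a, p: model(l, amp, mu0, sigma0, a, p),
            lam, perturbed, p0=[perturbed.max(), np.median(perturbed), -2.0])
        results.append(popt[0] * sigma0 * np.sqrt(2.0 * np.pi))  # integrated Gaussian flux
    return np.mean(results), np.std(results)
```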
Flux measurements of rest-optical lines are obtained using the G395M spectrum, which presents a similar depth as the PRISM spectrum but with a much higher resolution. Optical lines are fitted separately over relatively narrow spectral windows (100 A, rest-frame) and a constant is assumed for the continuum level. The width of the lines is set as a free parameter. In total, we detect up to ten optical emission lines with high significance (Table 1), including Balmer lines that are useful for the determination of dust attenuation.
To account for wavelength-dependent slit losses and absolute flux calibration, we derive the synthetic photometry of the NIRSpec spectra (PRISM and gratings) through each NIRCam filter bandpass and match it to that obtained from the observed photometry. In this process, we use a wavelength-dependent polynomial function yielding scaling factors for the slit-loss correction ranging from approximately 2.0 (F150W) to 3.6 (F444W).
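A simplified version of this synthetic-photometry rescaling could look like the following sketch, where the photon-weighted bandpass average and the low-order polynomial are generic choices standing in for the exact procedure used here.

```python
import numpy as np

def synthetic_flux(lam, flam, lam_filt, trans):
    """Photon-weighted mean flux density of a spectrum through one filter bandpass."""
    t = np.interp(lam, lam_filt, trans, left=0.0, right=0.0)
    return np.trapz(t * flam * lam, lam) / np.trapz(t * lam, lam)

def slit_loss_correction(pivot_wavelengths, f_phot, f_synth, deg=2):
    """Low-order polynomial in wavelength fitted to the photometric/synthetic flux ratios."""
    ratio = np.asarray(f_phot) / np.asarray(f_synth)
    poly = np.poly1d(np.polyfit(pivot_wavelengths, ratio, deg))
    # evaluating the returned polynomial at each wavelength rescales the 1D spectrum
    return poly
```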
Using fluxes and equivalent widths of the detected Balmer lines H\(\beta\), H\(\gamma\), and H\(\delta\), we iteratively derive the dust attenuation \(E(B-V)=0.12\pm 0.11\) using the Reddy et al. (2016) attenuation curve and following the methodology of Izotov et al. (1994), which accounts for the internal extinction and underlying hydrogen stellar absorption. Other important lines, such as those that are sensitive to the electron temperature (\(T_{e}\), [O iii]\(\lambda\lambda\)4363) and density (\(n_{e}\), N iv] \(\lambda\lambda\)1483, 1486 and [O ii]\(\lambda\lambda\)3727, 3729) are
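As an illustration of the Balmer-decrement step, the sketch below uses a Calzetti (2000) curve and a Case B intrinsic ratio H\(\gamma\)/H\(\beta\) \(\approx 0.47\) as stand-ins for the Reddy et al. (2016) curve and the full iterative treatment of Izotov et al. (1994) used in the actual analysis.

```python
import numpy as np

def k_calzetti(lam_um):
    """Calzetti (2000) attenuation curve for ~0.12-0.63 micron (a stand-in choice here)."""
    x = 1.0 / lam_um
    return 2.659 * (-2.156 + 1.509 * x - 0.198 * x**2 + 0.011 * x**3) + 4.05

def ebv_from_balmer(f_hgamma, f_hbeta, r_int=0.468):
    """E(B-V) from the observed Hgamma/Hbeta ratio; r_int ~ 0.47 for Case B at T_e ~ 1e4 K."""
    k_hg, k_hb = k_calzetti(0.4340), k_calzetti(0.4861)
    return 2.5 / (k_hg - k_hb) * np.log10(r_int / (f_hgamma / f_hbeta))

# usage: ebv_from_balmer(F_Hgamma, F_Hbeta) with the measured Balmer fluxes from Table 1
```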
also detected and are analyzed in more detail in Section 4. For the N iv] and [O ii] doublets we fit two Gaussian profiles with similar widths and use the expected separation between the two transitions. We find line ratios of \(F_{1483}/F_{1486}=0.50\pm 0.22\) and \(F_{3727}/F_{3729}=0.98\pm 0.27\) for the N iv] and [O ii] doublets, respectively.
We also check for the presence of spectral features that are usually associated with Wolf-Rayet (WR) stars. The so-called blue bump around 4600-4700A, encompassing the emission from N iii \(\lambda\lambda 4640\), C iii \(\lambda 4650\), and He ii \(\lambda 4686\), is detected neither in the G395M nor the PRISM spectra. We derive a 3\(\sigma\) upper limit relative to H\(\beta\) of He ii/H\(\beta\leq 0.26\). Similarly, the rest-UV He ii \(\lambda 1640\) line is not detected. Despite its low resolution, the PRISM spectrum clearly suggests no emission at the expected position of He ii, while the close O iii] emission is well detected (see Figure 1).
## 3 The nature of the ionizing source: star formation versus AGN
We now discuss the nature of the ionizing source of CEERS-1019, building upon the recent findings by Mainali et al. (2018) and Larson et al. (2023), who suggest a possible AGN activity. In their study, Mainali et al. (2018) reported the detection of N v \(\lambda 1242\) emission with an integrated flux of \((2.8\pm 0.6)\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) with a narrow profile FWHM \(<90\) km s\({}^{-1}\) (unresolved in the MOSFIRE spectrum). However, the G140M spectrum does not exhibit any significant emission around the expected position of N v \(\lambda\lambda 1238,1242\) (Figure 2, top left). By considering the flux uncertainty around 1.2\(\mu\)m from the G140M error spectrum and assuming an unresolved line width of FWHM \(=352\) km s\({}^{-1}\), we infer a 3\(\sigma\) limit of \(1.44\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\). This limit stands well below the reported value of Mainali et al. (2018). Furthermore, according to Morton (1991), N v \(\lambda 1238\) is expected to be twice as strong as N v \(\lambda 1242\) under standard conditions. Hence, considering the reported flux of Mainali et al.
Figure 1: Overview of the _JWST_ observations of CEERS-1019 at \(z=8.6782\). Top: 1D and 2D low-resolution NIRSpec/PRISM spectra (black) and 1\(\sigma\) uncertainty (grey). Vertical dashed lines (green) mark the position of well-detected nebular emission lines. The blue line is the best fit for several UV emission lines and continuum. X-axis in the bottom and top panels refer to the observed (\(\mu\)m) and rest-frame wavelengths (Å), respectively. Bottom right: NIRSpec/G395M medium-resolution spectrum of CEERS-1019. Bottom left: _JWST_ NIRCam cutout of CEERS-1019 in the F200W filter. CEERS-1019 is composed of three resolved clumps. The inferred positions of the NIRSpec MSA shutters are overlaid in blue.
(2018) for N v \(\lambda\)1242, we would expect 11.6\(\sigma\) and 5.8\(\sigma\) detections for N v \(\lambda\)1238 and \(\lambda\)1242, respectively. These expected detections, however, are incompatible with our observations.
Larson et al. (2023) reported a 2.5\(\sigma\) detection of a broad (\(\simeq 1200\) km s\({}^{-1}\)) component in H\(\beta\) using the medium-resolution NIRSpec G395M spectrum. This broad component is not seen in stronger, forbidden lines like [O iii] \(\lambda\lambda\)4960, 5008, from which they suggest conditions similar to the broad line region (BLR) of an AGN. Using our own reduction of the G395M spectrum and fitting a dual-component Gaussian profile to H\(\beta\), we find a 2.2\(\sigma\) detection for the broad component (Figure 2, top middle). Clearly, deeper observations of H\(\beta\) (or H\(\alpha\) with MIRI) are needed to unambiguously confirm the presence and nature of the broad component in H\(\beta\), as already discussed and suggested by Larson et al. (2023). Indeed, if a single Gaussian profile is used to fit the H\(\beta\) profile, a good fit is also found without penalizing significantly the residuals (Figure 2, top right). In this case, we find FWHM(H\(\beta\)) = \(452\pm 68\) km s\({}^{-1}\) which differs only by 1.2\(\sigma\) from the nominal FWHM = \(369\pm 16\) km s\({}^{-1}\) obtained for the much brighter [O iii] \(\lambda\)5008 line.
If the existence of this broad component can be confirmed and attributed to the BLR, it would be expected that high-ionization semi-forbidden lines such as N iv], C iv, or C iii], which probe high-density regimes (\(n_{e}\approx 10^{9}\) cm\({}^{-3}\)), would display similarly broad Doppler widths as observed in type-1 AGNs (e.g., Paris et al. 2011). However, these lines appear narrow in CEERS-1019, especially N iv], which exhibits a high-significance detection and an intrinsic FWHM \(\simeq 160\) km s\({}^{-1}\) after correcting for instrumental broadening. Thus, our results suggest that the aforementioned semi-forbidden lines are unlikely to originate from the broad line region. Instead, the properties of these lines, such as the narrow widths and the N iv] line ratio \(F_{1483}/F_{1486}=0.50\pm 0.22\) (implying densities \(n_{e}\approx 10^{4-5}\) cm\({}^{-3}\), see Section 4.3), are consistent with narrow line regions of AGN or H ii regions. In the following, we discuss these two scenarios.
The lower panels of Figure 2 present several diagnostic diagrams using different UV nebular lines: C iii]/He ii versus C iv/He ii, [O iii]/He ii, and N v/N iv]. Photoionization models of star-forming galaxies from Gutkin et al. (2016) and narrow-line regions of AGN from Feltre et al. (2016) are shown in blue and red, respectively. In the right panel of Figure 2 we show models of star-forming galaxies from the updated BOND grid using Cloudy (Ferland et al. 2017), which also includes N iv] and is available from the 3MdB8(Morisset et al. 2015). These models encompass a wide range of parameters, including the ionizing parameter (\(-4.0\leq log\ U\leq-1.0\)), hydrogen number density (\(10^{2}\leq n_{\rm H}/{\rm cm}^{3}\leq 10^{4}\)), and the power law index of the ionizing spectrum (\(-2.0\leq\alpha\leq-1.2\)). We have selected models with metallicities within the range \(0.05\leq Z/Z_{\odot}\leq 0.20\), which corresponds to the inferred metallicity for CEERS-1019 (12+log(O/H) \(=7.70\pm 0.18\), as indicated in Table 2). As illustrated in this figure, the position of CEERS-1019 (indicated by the blue circle) aligns with the predictions of star-forming models in all diagnostic diagrams. Clearly, the absence of He ii and N v, which probe energies \(>54\) eV and \(>77\) eV, respectively, places CEERS-1019 far away from the region occupied by AGN models. It is worth noting that Isobe et al. (2023) suggested recently that the high N iv]/N iii] ratio observed in CEERS-1019 is hardly reproduced by star formation models, pointing to an AGN contribution. However, the 3MdB photoionization models used here do predict very high ratios even well above the observed N iv]/N iii] = \(5.1\pm 2.2\), although requiring fairly high ionization parameters (\(\log(U)\lesssim-2\)).
Footnote 8: [https://sites.google.com/site/mexicanmillionmodels/](https://sites.google.com/site/mexicanmillionmodels/)
Other spectral features observed in CEERS-1019, such as the intense N iv] emission compared to other UV lines (N iv]/C iv \(\simeq 1.8\), N iv]/C iii] \(\simeq 1.5\), N iv]/N v \(\geq 2.6\)) and narrow profiles (FWHM \(\simeq 160\) km s\({}^{-1}\) for N iv]), differ from those observed in AGNs, even those showing unusually strong Nitrogen lines (e.g., Bentz et al. 2004; Jiang et al. 2008). The so-called Nitrogen-loud QSOs exhibit much weaker N iv] compared to other lines (e.g., N iv]/C iv \(\simeq 0.02-0.38\), Batra & Baldwin 2014, Dhanda et al. 2007) and, as expected, they present very broad Doppler widths (FWHM \(\simeq 1500-6000\) km s\({}^{-1}\), Jiang et al. 2008). Similarly, some type-2 AGNs also present N iv] emission (e.g., Hainline et al. 2011; Alexandroff et al. 2013), but notably weaker compared to other high-ionization lines (N iv]/C iv \(\simeq 0.15\), N iv]/C iii] \(\simeq 0.34\), or N iv]/N v \(\simeq 0.30\); Hainline et al. 2011). An exception may be GS-14, a type 1.8 AGN at \(z\simeq 5.55\) recently analyzed by Ubler et al. (2023). GS-14 exhibits broad components in Hydrogen and Helium lines (FWHM \(\simeq 3400\) km s\({}^{-1}\), Ubler et al. 2023) as well as narrow N iv] emission (FWHM \(\simeq 430\) km s\({}^{-1}\), Vanzella et al. 2010, Barchiesi et al. 2023), but it also shows clear nebular emission in N v \(\lambda\)1240 and O vi \(\lambda\)1033 (Grazian et al. 2020; Barchiesi et al. 2023), which are not detected in CEERS-1019.
In contrast, the spectrum of CEERS-1019 resembles those of other, yet also rare, star-forming galaxies with intense emission in Nitrogen lines. Examples such as the Lynx arc (Fosbury et al. 2003; Villar-Martin et al. 2004), SMACS-2031 (Christensen et al. 2012; Patricio et al. 2016), Mrk 996 (James et al. 2009; Mingozzi et al. 2022), and the Sunburst cluster show narrow and prominent N iv] and/or N iii] lines suggestive of high electron temperatures and densities like CEERS-1019 (see Section 4) and without any hint of AGN activity. The bottom panels of Figure 2 also show the location of these strong N-emitters, all consistent with star-forming models like CEERS-1019. The case of GN-z11, another strong N-emitter reported by Bunker et al. (2023), appears to be ambiguous, consistent with both models of AGN and star formation, as already discussed in Bunker et al. (2023) and Maiolino et al. (2023). In conclusion, our results suggest that, regardless of the presence of an AGN whose confirmation awaits deeper data, the high-ionization lines observed in CEERS-1019 are consistent with stellar photoionization.
\begin{table}
\begin{tabular}{l c c}
\hline\hline
Line & Flux & Grism/Prism \\
 & [\(\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\)] & \\
\hline
Ly\(\alpha\) \(\lambda\)1215 & \(2.38\pm 0.49\) & G140M \\
N v \(\lambda\)1240 & \(<1.44\) (\(3\sigma\)) & G140M \\
N iv] \(\lambda\lambda\)1483, 1486 & \(3.75\pm 0.40\) & Prism \\
C iv \(\lambda\lambda\)1548, 1550 & \(2.10\pm 0.42\) & Prism \\
He ii \(\lambda\)1640 & \(<1.20\) (\(3\sigma\)) & Prism \\
O iii] \(\lambda\lambda\)1661, 1666 & \(1.64\pm 0.32\) & Prism \\
N iii] \(\lambda\)1750 & \(0.73\pm 0.30\) & Prism \\
C iii] \(\lambda\lambda\)1907, 1909 & \(2.43\pm 0.36\) & Prism \\
[O ii] \(\lambda\lambda\)3727, 3729 & \(1.29\pm 0.14\) & G395M \\
[Ne iii] \(\lambda\)3869 & \(1.08\pm 0.16\) & G395M \\
H8 + He i \(\lambda\)3889 & \(0.24\pm 0.08\) & G395M \\
[Ne iii] + H7 \(\lambda\)3968 & \(0.60\pm 0.11\) & G395M \\
H\(\delta\) \(\lambda\)4101 & \(0.34\pm 0.11\) & G395M \\
H\(\gamma\) \(\lambda\)4340 & \(1.10\pm 0.20\) & G395M \\
[O iii] \(\lambda\)4363 & \(0.42\pm 0.12\) & G395M \\
H\(\beta\) \(\lambda\)4861 & \(2.14\pm 0.22\) & G395M \\
[O iii] \(\lambda\)4959 & \(4.50\pm 0.24\) & G395M \\
[O iii] \(\lambda\)5007 & \(14.05\pm 0.28\) & G395M \\
\hline
\end{tabular}
\end{table}
Table 1: Flux measurements of CEERS-1019.
## 4 Observational and derived physical properties of CEERS-1019
### 4.1 ISM properties and element abundances
The rich set of emission lines detected from the rest-frame UV-to-optical spectrum allows us to determine the electron temperature and density in the gas and the detailed abundances of numerous elements including H, C, N, O, and Ne. The derived quantities are summarized in Table 2.
### 4.2 Electron temperature
To derive physical conditions and element abundances we follow the prescriptions of Izotov et al. (2006). Briefly, these authors adopt the classical three zone model of the H ii region with electron temperatures \(T_{e}\)(O iii) for the high-ionization zone, and \(T_{e}\)(O ii) for the low-ionization zone. The intermediate-ionization zone is not used here, since no such lines are detected.
The electron temperature \(T_{e}\)(O iii) is derived both from the ratio of [O iii] line fluxes \(\lambda 4363/\lambda(4959+5007)\) and from the UV-to-optical line ratio \(\lambda 1666/\lambda 5007\). The former (rest-optical) ratio is determined from the medium-resolution spectrum, the latter from the PRISM spectrum. In both cases we obtain \(T_{e}\approx 18000\) K, consistent within 1 \(\sigma\), and with uncertainties between 1151 and 3252 K. Subsequently, we adopt the electron temperature from the optical line ratio (\(T_{e}=18849\pm 3252\) K) with the larger uncertainty, which is primarily due to the low S/N detection of [O iii] \(\lambda 4363\). The electron temperature in the low-ionization region is derived from relations obtained from the photoionization models of Izotov et al. (2006).
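As an illustrative cross-check of this value (not the Izotov et al. 2006 prescription itself), the classical [O iii] ratio-temperature relation of Osterbrock & Ferland (2006) can be inverted numerically for the fluxes of Table 1; the result depends slightly on the adopted density and atomic data, so it is only indicative.

```python
# Minimal sketch: invert the classical [O iii] temperature diagnostic,
#   F(4959+5007)/F(4363) = 7.90 exp(3.29e4/T) / (1 + 4.5e-4 n_e / sqrt(T)),
# for the fluxes listed in Table 1. Assumes n_e = 1e4 cm^-3 (illustrative).
import numpy as np
from scipy.optimize import brentq

f4363, f4959, f5007 = 0.42, 4.50, 14.05        # 1e-18 erg s^-1 cm^-2 (Table 1)
obs_ratio = (f4959 + f5007) / f4363            # ~44

def balance(T, n_e=1e4):
    model = 7.90 * np.exp(3.29e4 / T) / (1.0 + 4.5e-4 * n_e / np.sqrt(T))
    return model - obs_ratio

T_e = brentq(balance, 5e3, 5e4)
print(f"T_e(O iii) ~ {T_e:.0f} K")             # ~1.9e4 K, close to 18849 +/- 3252 K
```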
### 4.3 Electron density
Several density indicators exist in the observed spectral range, but few can be used here in practice. In the UV, the C iii] \(\lambda 1909\), Si iii] \(\lambda\lambda 1883,1892\), and N iv] \(\lambda\lambda 1483,1486\) doublets are density estimators. However, the PRISM spectrum is of insufficient resolution to resolve any of these doublets. Si iii] \(\lambda\lambda 1883,1892\) is not detected, and C iii] \(\lambda 1909\) has too low S/N in the medium-resolution spectrum. Although of fairly low S/N, the N iv] \(\lambda 1486\)
Figure 2: Star formation and AGN diagnostics. The top left panel shows the G140M spectrum of CEERS-1019 (in black and 1\(\sigma\) uncertainty in grey) around the expected positions of N v \(\lambda\lambda 1238,1242\) (marked with vertical lines), which are not detected (\(1.44\times 10^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) at 3\(\sigma\)). Ly\(\alpha\) emission is also marked. The top middle and right panels show the best fits (green) to the H\(\beta\) emission line using dual-component (middle, narrow and broad components in blue and red, respectively) and single-component (right) Gaussian profiles. The bottom panels show star formation (blue) and AGN (red) photoionization models using several rest-frame UV lines. The position of CEERS-1019 (dark blue circle) aligns with the predictions of star-forming models in all diagnostic diagrams. The location of other known star-forming galaxies with strong Nitrogen emission, GN-z11 (Bunker et al., 2023), the Lynx arc (Villar-Martin et al., 2004), SMACS2031 (Patricio et al., 2016), and Mrk 996 (Mingozzi et al., 2022) are also marked with different symbols as indicated in the legend.
doublet is detected with a ratio \(\lambda 1483/\lambda 1486=0.50\pm 0.22\), which indicates a fairly high electron density of \(n_{e}\approx 10^{4-5}\) cm\({}^{-3}\) (Kewley et al. 2019). In the optical, the [O ii] \(\lambda 3727\) doublet is clearly detected, but not resolved from the medium-resolution spectra. Our measured line ratio \(\lambda 3726/\lambda 3729=0.98\pm 0.23\) is consistent within the uncertainties with that obtained by Larson et al. (2023) (\(0.639\pm 0.255\)), and compatible with \(n_{e}>10^{3}\) cm\({}^{-3}\) (Kewley et al. 2019).
The two density estimates could indicate a density gradient between the low- and high-ionization regions, but are also compatible with a single, relatively high density of \(n_{e}\approx 10^{4-5}\) cm\({}^{-3}\), whose origin we discuss below. In any case, the most important point to take away from this is that the electron density, although high, is lower than the critical densities of all the relevant emission lines used for the subsequent abundance determinations. This holds for the (semi-)forbidden lines of [O iii] at 1666, 4363, 4959, 5007 (with critical densities \(n_{\rm crit}\geq 6.9\times 10^{5}\) cm\({}^{-3}\)), the two components of the C iii] \(\lambda 1909\) doublet (\(n_{\rm crit}=8.7\times 10^{4}\) cm\({}^{-3}\) for 1907 and \(10^{9}\) cm\({}^{-3}\) for 1909), C iv \(\lambda 1550\) (\(n_{\rm crit}=2\times 10^{15}\) cm\({}^{-3}\)), N iii] \(\lambda 1750\) (a multiplet whose components have \(n_{\rm crit}\geq 10^{9}\) cm\({}^{-3}\)), N iv] \(\lambda 1486\) (\(n_{\rm crit}=3\times 10^{9}\) cm\({}^{-3}\)), and [Ne iii] \(\lambda 3869\) (\(n_{\rm crit}=1\times 10^{8}\) cm\({}^{-3}\)) (see e.g. Hamann et al. 2002; Dere et al. 2019). Only the [O ii] \(\lambda 3727\) doublet, whose components have relatively low critical densities of \(n_{\rm crit}=1(4)\times 10^{3}\) cm\({}^{-3}\) for 3729 (3726), is therefore affected by the high density inferred for CEERS-1019, whereas all other lines can safely be used to determine abundances, to which we now proceed.
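To make the critical-density argument explicit, the sketch below evaluates the approximate suppression factor \(1/(1+n_{e}/n_{\rm crit})\) of a collisionally excited line relative to its low-density emissivity (a two-level simplification), using the critical densities quoted above and an illustrative density of \(10^{4}\) cm\({}^{-3}\).

```python
# Sketch: approximate collisional de-excitation suppression, 1/(1 + n_e/n_crit),
# in a two-level approximation; n_e and the n_crit values follow the text.
n_e = 1e4  # cm^-3, lower end of the density range inferred above (illustrative)

n_crit = {                      # critical densities quoted in the text [cm^-3]
    '[O iii] 4363/4959/5007': 6.9e5,
    'C iii] 1907':            8.7e4,
    'C iii] 1909':            1e9,
    'N iii] 1750':            1e9,
    'N iv] 1486':             3e9,
    '[Ne iii] 3869':          1e8,
    '[O ii] 3726':            4e3,
    '[O ii] 3729':            1e3,
}
for line, nc in n_crit.items():
    print(f"{line:24s} fraction of low-density emissivity ~ {1/(1 + n_e/nc):.2f}")
# Only the [O ii] doublet is strongly suppressed at this density.
```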
### 4.4 Ionic and total metal abundances
The electron temperature \(T_{\rm e}\)(O iii) is used to obtain abundances of ions O\({}^{2+}\), N\({}^{3+}\), N\({}^{2+}\), C\({}^{3+}\), C\({}^{2+}\), and Ne\({}^{2+}\); the temperature in the low-ionization region, \(T_{\rm e}\)(O ii), to derive the ionic abundance of O\({}^{+}\). Ionic abundances are derived following Izotov et al. (2006) for the optical lines, and comparing different methods for the UV lines. For C, N, and O, the observations provide two ionization stages, hence the ionic abundances will be close to the total abundances, and we neglect further ionization corrections. For Ne\({}^{2+}\) we use the ionization correction factor (ICF) following Izotov et al. (2006). The results are listed in Table 2.
We derive a total oxygen abundance of \(12+\log({\rm O/H})=7.70\pm 0.18\), which is dominated by the ionic abundance of O\({}^{2+}\)/H\({}^{+}\) (see Table 2). Given the high density, [O ii] \(\lambda 3727\) could be suppressed and hence the O\({}^{+}\)/H\({}^{+}\) abundance underestimated. However, in view of the high excitation observed from lines with high critical densities, it is likely that O\({}^{2+}\) is the dominant ionization stage over the majority of the H ii region, and hence the derived O/H should be close to the correct value.
With available line detections the N/O abundance can be determined in different ways. First we use only the UV lines to compute the ionic abundance ratio (N\({}^{2+}\)+N\({}^{3+}\))/O\({}^{2+}\) using the expressions from Villar-Martin et al. (2004) (V+04) and Hamann et al. (2002) (H+02), assuming the low-density regime. Then we determine N/H from the UV and optical line ratio (N and H\(\beta\)) and use O/H determined from the optical lines. Both methods, marked as "UV only" and "UV+opt" respectively, yield values compatible within the errors, and consistent with a high N/O abundance \(\log\) (N/O) \(\approx-0.15\pm 0.17\).
Similarly, for C/O we use the expressions from Villar-Martin et al. (2004), Perez-Montero & Amorin (2017) (PM17), and Izotov et al. (2023) (I+23), using either only the rest-UV lines or a combination of the UV and optical lines. As seen from Table 2, the ionic abundance ratios derived in this manner are compatible within uncertainties. For the total C/O abundance we adopt \(\log({\rm C/O})=-0.75\pm 0.11\) as our default value. The C/O ratio is therefore clearly subsolar, and in fact very similar to the average of normal star-forming galaxies at the same O/H (see below).
Finally we also derive the Neon abundance from the [Ne iii] \(\lambda 3869\) and H\(\beta\) lines and applying an ICF from the oxygen lines, following Izotov et al. (2006). We find an abundance ratio of \(\log({\rm Ne/O})=-0.63\pm 0.07\), somewhat higher than the average value of \(\log({\rm Ne/O})=-0.78\pm 0.01\) determined for normal star-forming galaxies by Guseva et al. (2011) at the same metallicity.
Although the abundances derived here assume low densities they are not altered by density effects at the density derived for CEERS-1019, as already discussed above. Most importantly, the critical densities for the N iii] \(\lambda 1750\), N iv] \(\lambda 1486\), and O iii] \(\lambda 1666\) lines involved in the (N\({}^{2+}\)+N\({}^{3+}\))/O\({}^{2+}\) ratio derived from the UV are all very high (\(n_{\rm crit}>10^{9}\) cm\({}^{-3}\)), which further shows that this important ionic abundance ratio can be determined accurately.
Taken together, the derived abundances of CEERS-1019 show that this object has a "metallicity" (O/H) of approximately 1/10 solar (assuming a solar value of \(12+\log({\rm O/H})=8.69\); Asplund et al. 2009, 2021), an exceptionally high N/O abundance, and a normal C/O abundance, when compared to galaxies of similar metallicity (see Fig. 5). The interpretation of these abundances and their implications will be discussed below (Sect. 5).
### 4.5 Comparison with other studies and caveats
ISM properties and abundances of CEERS-1019 have been determined by several other studies, with which we now briefly compare our results.
Larson et al. (2023) argue that the [O ii] \(\lambda 3727\) doublet can be deblended, from which they infer an electron density of \(n_{e}=(1.9\pm 0.2)\times 10^{3}\) cm\({}^{-3}\). From inspection of the C iii] \(\lambda 1909\) doublet they suggest that the density could be higher than \(n_{e}>10^{4}\) cm\({}^{-3}\). The density inferred here from the N iv] \(\lambda 1486\)
\begin{table}
\begin{tabular}{l r}
\hline\hline
Property & Quantity \\
\hline
\(n_{e}\) [cm\({}^{-3}\)] & \(10^{4}-10^{5}\) \\
\(T_{\rm e}\)(O iii) – UV [K] & \(17839\pm 1151\) \\
\(T_{\rm e}\)(O iii) – opt [K] & \(18849\pm 3252\) \\
\(T_{\rm e}\)(O ii) [K] & \(14937\pm 944\) \\
\(12+\log({\rm O}^{+}/{\rm H}^{+})\) & \(6.73\pm 0.14\) \\
\(12+\log({\rm O}^{2+}/{\rm H}^{+})\) & \(7.68\pm 0.18\) \\
\(12+\log({\rm O/H})\) & \(7.70\pm 0.18\) \\
\(\log({\rm(N^{2+}+N^{3+})/O^{2+}})\) – UV only (V+04) & \(-0.13\pm 0.11\) \\
\(\log({\rm(N^{2+}+N^{3+})/O^{2+}})\) – UV only (H+02) & \(-0.16\pm 0.17\) \\
\(\log({\rm(N^{2+}+N^{3+})/O^{2+}})\) – UV+opt & \(-0.18\pm 0.28\) \\
\(\log({\rm(C^{2+}+C^{3+})/O^{2+}})\) – UV only (V+04) & \(-0.75\pm 0.11\) \\
\(\log({\rm(C^{2+}+C^{3+})/O})\) – UV+opt (V+04) & \(-0.79\pm 0.22\) \\
\(\log({\rm(C^{2+}+C^{3+})/O^{2+}})\) – UV only (PM17) & \(-0.76\pm 0.11\) \\
\(\log({\rm C^{2+}/O^{2+}})\) – UV only (I+23) & \(-0.92\pm 0.12\) \\
\(\log({\rm C^{2+}/O^{2+}})\) – UV+opt (I+23) & \(-0.84\pm 0.22\) \\
ICF(\({\rm C^{2+}/O^{2+}}\)) & 1.1 \\
ICF(\({\rm Ne^{2+}/O^{2+}}\)) & 1.04 \\
\(\log({\rm Ne/O})\) & \(-0.63\pm 0.07\) \\
\hline
\end{tabular}
\end{table}
Table 2: ISM properties, ionic and total heavy element abundances
doublet (\(n_{e}\approx 10^{4}-10^{5}\) cm\({}^{-3}\)) is compatible with their finding. Most importantly for the abundance determinations, all available density estimates indicate that the main emission lines should not be affected by density effects.
From the 3-\(\sigma\) detection of [O iii]\(\lambda 4363\) Larson et al. (2023) inferred \(T_{e}=18630\pm 3682\) K, in excellent agreement with our determination. Based on the \(T_{e}\) determination they infer \(12+\log(\mathrm{O/H})=7.664\pm 0.508\) from an average relation between \(T_{e}\) and O/H determined empirically by Perez-Montero & Amorin (2017). Tang et al. (2023) determined \(12+\log(\mathrm{O/H})=7.72^{+0.17}_{-0.14}\) using the direct method. Within the quoted uncertainties, our results agree with both of these determinations. A slightly higher O/H abundance (\(12+\log(\mathrm{O/H})=7.97\pm 0.16\)), but still compatible with the uncertainties, has been derived by Nakajima et al. (2023) using a less accurate R23 strong-line calibration. Finally, assuming AGN models, Isobe et al. (2023) have obtained a higher metallicity for CEERS-1019, but similar N/O, C/O, and Ne/O ratios as derived here.
Note also that the abundance ratios determined here assume a homogeneous medium both in abundance and density. If pockets of high-density, enriched gas coexist with lower-density gas of, say, normal abundance ratios, only a relatively small fraction of enriched gas - i.e. relatively low amounts of Nitrogen - might suffice to explain the observed emission line ratios, since the emissivity of the forbidden lines depends on the density (see e.g. Izotov et al. 2006). However, in this case the inferred N/O abundance would also be a lower limit to the true N/O ratio in the enriched pockets.
### 4.6 Other physical properties
#### 4.6.1 Morphology
As shown in the left panel of Figure 4, CEERS-1019 shows a complex morphology in the NIRCam bands, consistent with three different clumps/structures separated by \(\simeq 0.24\arcsec\), or \(\simeq 1.12\) kpc at \(z=8.678\) (\(4.68\) kpc arcsec\({}^{-1}\)). These clumps, labeled as A, B, and C as indicated in Figure 4, are very compact and only resolved in the NIRCam bands at short wavelengths.
To investigate the morphology of CEERS-1019 in more detail, we model the three galaxy substructures closely following the methodology applied to the study of stellar clumps in Messa et al. (2022) and Claeyssens et al. (2023). Assuming that clumps have Gaussian profiles, we consider a 15\(\times\)15 pixel region centered on the galaxy and we fit a model consisting of three 2D Gaussian functions, convolved with the NIRCam instrumental PSF in this field from the grizli library. The best fit to their observed profiles (given by least-squares minimization) returns their fluxes and sizes. We assume that the shape of each substructure is the same in all bands. For this reason, the fit is initially performed in F200W, chosen as the reference filter, and then the shape (size, axis ratio, and position angle) of each clump is kept fixed in the other filters, where only the source flux is fitted. Uncertainties are obtained from Monte Carlo sampling.
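A schematic version of this fit is sketched below; it is not the actual pipeline of Messa et al. (2022), and the PSF, data, and parameter values are mock quantities introduced only so that the example runs end to end.

```python
# Sketch: least-squares fit of three PSF-convolved circular 2D Gaussian clumps
# to a 15x15 pixel cutout. Mock PSF and mock data are generated for illustration;
# in practice `data` would be the NIRCam cutout and `psf` the empirical PSF model.
import numpy as np
from scipy.signal import fftconvolve
from scipy.optimize import least_squares

ny = nx = 15
y, x = np.mgrid[0:ny, 0:nx].astype(float)

def gaussian2d(flux, x0, y0, sigma):
    g = np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))
    return flux * g / g.sum()                   # normalised to the total flux

def model_image(params, psf):
    img = np.zeros((ny, nx))
    for flux, x0, y0, sigma in params.reshape(3, 4):
        img += gaussian2d(flux, x0, y0, sigma)
    return fftconvolve(img, psf, mode='same')   # convolve with the instrumental PSF

psf = gaussian2d(1.0, 7.0, 7.0, 1.0)            # mock PSF (illustrative)
truth = np.array([1.0, 5.0, 7.0, 0.6,           # flux, x0, y0, sigma of clump A
                  0.5, 7.5, 9.0, 0.8,           # clump B
                  0.8, 9.0, 7.0, 0.5])          # clump C
rng = np.random.default_rng(1)
data = model_image(truth, psf) + rng.normal(0.0, 1e-3, (ny, nx))

fit = least_squares(lambda p: (model_image(p, psf) - data).ravel(), x0=truth * 1.1)
fwhm_pix = 2.355 * fit.x.reshape(3, 4)[:, 3]    # FWHM = 2.355 sigma, in pixels
print(fwhm_pix)                                  # x 40 mas/pixel for these NIRCam images
```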
The results of the model analysis are presented in Table 3. Our findings indicate that the morphologies of the three clumps in CEERS-1019 are compact, with measured FWHMs of \(48\pm 5\) mas, \(62\pm 15\) mas, and \(43\pm 4\) mas for clumps A, B, and C, respectively. Following Peng et al. (2010) (see also Vanzella et al. 2017, Messa et al. 2022, and Claeyssens et al. 2023), the inferred FWHMs suggest that these clumps are resolved, albeit only slightly, as their sizes are larger than the pixel size of the NIRCam images (40 mas). Translating these measurements into half-light radii, we find \(r_{\mathrm{e}}=112\pm 12\) pc, \(145\pm 35\) pc, and \(101\pm 9\) pc for clumps A, B, and C, respectively.
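The conversion from FWHM to half-light radius can be verified directly: for a Gaussian profile the half-light radius equals the half width at half maximum, so \(r_{\rm e}\simeq({\rm FWHM}/2)\times 4.68\) kpc arcsec\({}^{-1}\) at \(z=8.678\).

```python
# Quick check of the half-light radii in Table 3 from the measured FWHMs.
scale = 4.68e3                                   # pc per arcsec at z = 8.678
for clump, fwhm_mas in {'A': 48, 'B': 62, 'C': 43}.items():
    r_e = 0.5 * (fwhm_mas / 1e3) * scale         # HWHM in arcsec times the scale
    print(f"clump {clump}: r_e ~ {r_e:.0f} pc")  # ~112, 145, 101 pc
```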
#### 4.6.2 Spectral Energy Distribution
We now analyze the spectral energy distributions (SEDs) of CEERS-1019 as a whole (named Total) as well as its subcomponents (A, B, and C). We use the SED-fitting code CIGALE (Boquien et al., 2019, version 2022.1) using the available NIRCam photometry from F115W to F444W, covering the rest-frame wavelength \(\sim 1200-4600\)A. Stellar population models from Bruzual & Charlot (2003) are used along with the Chabrier (2003) Initial Mass Function (IMF) and the Small Magellanic Cloud extinction curve (\(R_{\mathrm{v}}=2.93\), Pei 1992). The metallicity is fixed to \(Z=0.004\), the closest available value inferred
Figure 3: Best fit (blue) of density-sensitive emission lines, N iv] \(\lambda\lambda 1483,1486\) (left) and [O ii] \(\lambda\lambda 3727,3729\) (right), using G140M and G395M medium-resolution spectra (black and \(1\sigma\) uncertainty in grey), respectively. The fit uses two Gaussian profiles with similar widths and the expected position and separation between the two transitions (vertical lines).
for CEERS-1019, and is assumed to be the same for nebular emission and starlight. The dust attenuation (\(E(B-V)\)) and ionization parameter (\(\log(U)\)) are treated as free parameters, ranging from \(0.0-0.5\) mag and \(-3.5\) to \(-1.0\), respectively. Finally, we explore two different star-formation histories: a constant star-formation model applied to the integrated light of CEERS-1019 (Total) and instantaneous burst episodes for the three sub-components (A, B, and C). For the former, we include the flux measurements of the H\(\beta\) + [O iii] \(\lambda\lambda 4960,5008\) emission lines in the fitting process.
Starting with the integrated emission of CEERS-1019 (Total), the best-fit model, shown in black in the right panel of Figure 4, finds a continuous star-formation rate SFR = \(161\pm 23\) M\({}_{\odot}\) yr\({}^{-1}\) over \(14\pm 7\) Myr. The stellar mass is \(M_{\star}^{\rm total}/M_{\odot}=(2.0\pm 0.6)\times 10^{9}\), attenuated by \(E(B-V)=0.17\pm 0.02\), in agreement with the values reported in Larson et al. (2023). For the three individual components A, B, and C we find burst masses of \(M_{\star}^{\rm A}/M_{\odot}=(5.7\pm 0.5)\times 10^{8}\), \(M_{\star}^{\rm B}/M_{\odot}=(4.6\pm 0.1)\times 10^{8}\), and \(M_{\star}^{\rm C}/M_{\odot}=(8.6\pm 0.2)\times 10^{8}\), respectively. Clumps A and B are well fitted with very young burst models, having ages of \(4.0\pm 0.26\) Myr and \(5.6\pm 0.7\) Myr, respectively. On the other hand, clump C is older than the other components, with a burst age of \(15.0\pm 2.9\) Myr. Indeed, the color obtained for clump C, F356W - F444W \(=0.32\pm 0.29\), is significantly lower than those measured in clumps A and B, F356W - F444W \(\simeq 0.75-1.16\), suggesting a weak contribution of nebular emission in F444W (e.g., H\(\beta\) and [O iii]), and thus negligible star formation over the last \(\lesssim 10\) Myr.
#### 4.6.3 Stellar mass and SFR surface densities
Based on the stellar masses and half-light radii obtained for the individual clumps (Table 3), we obtain high stellar mass surface densities of \(\log(\Sigma_{M})=3.86\pm 0.11\), \(3.55\pm 0.53\), and \(4.14\pm 0.14\) M\({}_{\odot}\)pc\({}^{-2}\) for clumps A, B, and C, respectively (defined as \(\Sigma_{M}=M_{\star}/(2\pi r_{\rm eff}^{2})\)). It is worth noting that the inferred values of \(\Sigma_{M}\) may be even higher if each substructure comprises multiple unresolved stellar systems. Nevertheless, these values are already comparable to the densest systems identified at high redshift by Claeyssens et al. (2023) or Mestric et al. (2022), and significantly higher than the average \(\log(\Sigma_{M})\simeq 2\) M\({}_{\odot}\)pc\({}^{-2}\) observed in nearby young clusters (Brown & Gnedin 2021). Similarly, the compactness index, defined as \(C_{5}=(M_{\star}/10^{5}\,M_{\odot})\,(r_{\rm eff}/{\rm pc})^{-1}\), is also high in the case of CEERS-1019. It ranges from \(C_{5}\simeq 30\)-90 depending on the clump, exceeding the values of old globular clusters and young massive clusters by at least one order of magnitude (Krause et al. 2016), suggesting high cluster formation efficiencies (Krause et al. 2016; Kruijssen 2012). The SFR surface density is also found to be very high for clumps A and B, with \(\log(\Sigma_{\rm SFR})=3.27\pm 0.11\) and \(2.81\pm 0.21\) M\({}_{\odot}\)
Figure 4: Left panel shows cutout images around CEERS-1019 in the NIRCam filters. In the F150W sub-panel, we show the positions of the three compact clumps resolved only at short wavelengths, labeled as A, B, and C (blue, green, and red, respectively). The right panel shows the SED best-fit models using CIGALE (Boquien et al. 2019) of the integrated light of CEERS-1019 (“Total” in black), as well as the individual clumps (A, B, and C in blue, green, and red, respectively). Observed fluxes are marked with circles, while the predicted fluxes from the best fit are marked with crosses.
\begin{table}
\begin{tabular}{l c c c c c c c c}
\hline\hline
ID & R.A. & Dec. & \(r_{\rm eff}\) & Age & SFR\({}_{\rm 10\,Myr}\) & \(\log(M_{\star})\) & \(\log(\Sigma_{M})\) & \(\log(\Sigma_{\rm SFR})\) \\
 & [J2000] & [J2000] & [pc] & [Myr] & [\(M_{\odot}\) yr\({}^{-1}\)] & [\(M_{\odot}\)] & [\(M_{\odot}\) pc\({}^{-2}\)] & [\(M_{\odot}\) yr\({}^{-1}\) kpc\({}^{-2}\)] \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\
\hline
A & 14:20:08.50 & +52:53:26.37 & 112 \(\pm\) 12 & 4.0 \(\pm\) 0.3 & 148 \(\pm\) 25 & 8.76 \(\pm\) 0.04 & 3.86 \(\pm\) 0.11 & 3.27 \(\pm\) 0.11 \\
B & 14:20:08.51 & +52:53:26.51 & 145 \(\pm\) 35 & 5.7 \(\pm\) 0.7 & 83 \(\pm\) 18 & 8.66 \(\pm\) 0.15 & 3.55 \(\pm\) 0.53 & 2.81 \(\pm\) 0.21 \\
C & 14:20:08.48 & +52:53:26.36 & 101 \(\pm\) 9 & 15.0 \(\pm\) 3.0 & 2 \(\pm\) 10 & 8.94 \(\pm\) 0.12 & 4.14 \(\pm\) 0.14 & \(<\) 2.27 \\
Total & — & — & — & 14.4 \(\pm\) 7.2 & 161 \(\pm\) 23 & 9.31 \(\pm\) 0.15 & — & — \\
\hline
\end{tabular}
\end{table}
Table 3: SED and morphological properties of the different substructures of CEERS-1019.
\(\rm yr^{-1}kpc^{-2}\), respectively. In contrast, clump C does not show significant star formation over the last 10 Myr, yielding an upper limit of log(\(\Sigma_{\rm SFR}\)) \(<2.27\) M\({}_{\odot}\) yr\({}^{-1}\)kpc\({}^{-2}\).
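These surface densities and the compactness index follow directly from the values listed in Table 3; a short sketch of the arithmetic, using the definitions quoted above, is given below.

```python
# Sketch: Sigma_M = M*/(2 pi r_e^2), Sigma_SFR = SFR/(2 pi r_e^2), and
# C_5 = (M*/1e5 Msun)/(r_e/pc), using the clump properties of Table 3.
import numpy as np

clumps = {        # r_e [pc], log10(M*/Msun), SFR over the last 10 Myr [Msun/yr]
    'A': (112, 8.76, 148),
    'B': (145, 8.66, 83),
    'C': (101, 8.94, None),       # no significant recent star formation
}
for name, (r_pc, logM, sfr) in clumps.items():
    mstar = 10**logM
    area_pc2 = 2.0 * np.pi * r_pc**2
    sigma_m = np.log10(mstar / area_pc2)                 # [Msun pc^-2]
    c5 = (mstar / 1e5) / r_pc
    msg = f"clump {name}: log Sigma_M ~ {sigma_m:.2f}, C_5 ~ {c5:.0f}"
    if sfr is not None:
        sigma_sfr = np.log10(sfr / (area_pc2 * 1e-6))    # [Msun yr^-1 kpc^-2]
        msg += f", log Sigma_SFR ~ {sigma_sfr:.2f}"
    print(msg)
# Expected: log Sigma_M ~ 3.86, 3.55, 4.14; log Sigma_SFR ~ 3.27, 2.81; C_5 ~ 30-90.
```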
Finally, the derived mass and SFR surface densities in CEERS-1019 are comparable with those of other prominent N-emitters discussed below, such as GN-z11 (log(\(\Sigma_{M}\)) \(\sim 4.6\) M\({}_{\odot}\)pc\({}^{-2}\), Tacchella et al. 2023), SMACS2031 (log(\(\Sigma_{M}\)) \(\sim 4.0\) M\({}_{\odot}\)pc\({}^{-2}\), log(\(\Sigma_{\rm SFR}\)) \(\sim 1.4\) M\({}_{\odot}\) yr\({}^{-1}\)kpc\({}^{-2}\), Patricio et al. 2016), the Sunburst cluster (log(\(\Sigma_{M}\)) \(\sim 4.1\) M\({}_{\odot}\)pc\({}^{-2}\), log(\(\Sigma_{\rm SFR}\)) \(\sim 3.7\) M\({}_{\odot}\) yr\({}^{-1}\)kpc\({}^{-2}\), Vanzella et al. 2022), or Mrk 996 (log(\(\Sigma_{M}\)) \(\sim 2.8\) M\({}_{\odot}\)pc\({}^{-2}\), Thuan et al. 1996). This suggests a potential connection between compactness and a high production efficiency of nitrogen.
### 4.7 Mass of the enriched material
The total mass of enriched, ionized gas, which is directly observable, can easily be estimated assuming ionization equilibrium and a constant ISM density (see, e.g., Dopita & Sutherland 2003):
\[M_{\rm ionized}=\frac{m_{p}Q_{H}}{\alpha_{B}n_{e}}=2.5\times 10^{6}\left( \frac{10^{3}}{n_{e}}\right)\left(\frac{Q_{H}}{10^{54}}\right)M_{\odot}, \tag{1}\]
where \(Q_{H}\) is the ionizing photon production rate which can be determined from H recombination lines, \(n_{e}\) the electron density, \(m_{p}\) the proton mass, and \(\alpha_{B}\) the recombination rate coefficient.
For CEERS-1019 we thus find \(M_{\rm ionized}\sim 1.2\times 10^{5}\) M\({}_{\odot}\), from the observed H\(\beta\) luminosity and adopting \(n_{e}=10^{5}\) cm\({}^{-3}\), very similar to \(M_{\rm ionized}\sim 2\times 10^{5}\) M\({}_{\odot}\) inferred for GN-z11 by Charbonnel et al. (2023). Maiolino et al. (2023) argue that the amount of enriched gas in GN-z11 could be even smaller if the N-emitting gas is found at higher densities, as they suggest.
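The order of magnitude of this estimate can be retraced with a few lines of Python; the case-B conversion between \(L({\rm H}\beta)\) and \(Q_{H}\) (\(Q_{H}\approx 2.1\times 10^{12}\,L_{{\rm H}\beta}\) for \(T_{e}\sim 10^{4}\) K), the adopted cosmology, and the neglect of dust attenuation are assumptions of this sketch, so the result is only indicative.

```python
# Sketch of the estimate behind Eq. (1): Q_H from the observed (uncorrected)
# Hbeta flux and the ionized-gas mass for n_e = 1e5 cm^-3.
import numpy as np
from astropy.cosmology import Planck18
from astropy import units as u

z = 8.678
f_hbeta = 2.14e-18 * u.erg / u.s / u.cm**2              # Table 1
d_l = Planck18.luminosity_distance(z).to(u.cm)
L_hbeta = (4.0 * np.pi * d_l**2 * f_hbeta).value         # erg/s
Q_H = 2.1e12 * L_hbeta                                    # ionizing photons / s (case B)

n_e = 1e5                                                 # cm^-3
M_ionized = 2.5e6 * (1e3 / n_e) * (Q_H / 1e54)            # Eq. (1), in Msun
print(f"Q_H ~ {Q_H:.1e} s^-1, M_ionized ~ {M_ionized:.1e} Msun")   # ~1e5 Msun
```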
## 5 Discussion
### 5.1 Observed heavy element abundances in CEERS-1019 and comparison to "normal" objects
The main elemental abundance ratios derived for CEERS-1019 are shown in Fig. 5, and compared to measurements in other galaxies and Hii regions. To do so we use in particular the recent CNO abundances determined and compiled by Izotov et al. (2023), who primarily included data from low-redshift star-forming galaxies observed with HST/COS, and data on individual Hii regions from the works of Esteban et al. (2002, 2009, 2014), Garcia-Rojas & Esteban (2007), and Lopez-Sanchez et al. (2007).
As is well known, the majority of galaxies and Hii regions follow a fairly well-defined sequence of N/O versus O/H and C/O versus O/H (e.g. Garnett et al. 1999; Berg et al. 2019), which can be understood with chemical evolution models (Henry et al. 2000; Chiappini et al. 2006; Prantzos et al. 2018). In N/O, for example, only a few strong outliers with a large nitrogen excess are known at low redshift (see e.g. Thuan et al. 1996; Pustilnik et al. 2004; Stephenson et al. 2023). In comparison, CEERS-1019 clearly stands out by having an extremely high Nitrogen abundance, log(N/O) \(=-0.13\pm 0.11\), which is approximately 5.6 times the solar ratio (Asplund et al. 2021) and more than a factor of 10 higher than the N/O values generally observed at similar metallicities (O/H). This exceptionally high N abundance reflects the very peculiar UV spectrum of CEERS-1019, showing unusually strong Nitrogen lines.
In contrast to N/O, with log(C/O) \(=-0.75\pm 0.11\), the C/O abundance is fairly normal for the observed metallicity. The Ne/O abundance, log(Ne/O) \(=-0.63\pm 0.07\) is somewhat higher (by \(\sim 0.15\) dex) than the average value for normal star-forming galaxies derived by Guseva et al. (2011) at the same metallicity.
Interestingly, these observed abundance ratios of CEERS-1019 resemble those of globular cluster stars, similarly to what was pointed out by Senchyna et al. (2023) and Charbonnel et al. (2023) for GN-z11. The origin of these peculiar abundances ratios will be discussed below.
### 5.2 Abundances in other N-emitters
Interestingly, the abundance ratios found in CEERS-1019 resemble those found by Cameron et al. (2023) for the \(z=10.6\) galaxy GN-z11 observed recently with JWST by Bunker et al. (2023), which are shown by boxes in Fig. 5. As shown, the abundances in GN-z11 suffer from large uncertainties, which are in particular due to the fact that the [O iii]\(\lambda 5007\) line is shifted beyond the
Figure 5: Observed chemical abundances of the six N-emitters and comparison samples from the literature. _Top:_ N/O versus O/H; _bottom:_ C/O versus O/H. CEERS-1019 is shown by a red star, GN-z11 by a blue circle. The \(z\sim 2.6-3\) lensed galaxies (Lynx arc, SMACS2031, and the Sunburst cluster) are shown by red, black, and orange crosses, the low-\(z\) galaxy Mrk 996 with a green cross (two N/O values from James et al. (2009) are shown: for the central region and for the total galaxy). The magenta shaded region and outlined box indicate the range of abundances allowed for GN-z11, according to Cameron et al. (2023). Low-\(z\) star-forming galaxies and Hii regions from the compilation of Izotov et al. (2023) are shown by small black symbols. The dash-dotted lines show the average trends observed in low-\(z\) star-forming galaxies, as parametrized by Vila-Costas & Edmunds (1993) for N/O and by Dopita et al. (2006) for C/O, respectively.
range accessible with NIRSpec and no direct O/H abundance determination is possible for this object from the present data. Using photoionization modeling, Senchyna et al. (2023) have further constrained the abundances in GN-z11, obtaining total gas abundances of \(12+\log(\rm O/H)=7.84\pm 0.06\) and \(\log(\rm N/O)=-0.38\pm 0.05\), which are quite similar to those obtained here for CEERS-1019. Clearly, both CEERS-1019 and GN-z11 are significantly enriched in Nitrogen, reaching exceptionally high N/O values. The carbon abundance cannot be well constrained in GN-z11, since the electron temperature remains undetermined in this object. The allowed range, derived by Cameron et al. (2023a), is indicated in Fig. 5.
Very few other galaxies or Hii regions with a high N/O abundance and/or clear detections of nebular lines of N in the UV can be found in the literature. Barchiesi et al. (2023) list known AGN and galaxies with O vi, N v, or N iv] \(\lambda 1486\) emission lines in the rest-UV. Among the non-AGN in their list one finds the peculiar galaxy named the Lynx arc (at \(z=3.36\)), which has been studied by Fosbury et al. (2003) and Villar-Martin et al. (2004), although Binette et al. (2003) have argued that this object may be an obscured QSO. According to the photoionization models of Villar-Martin et al. (2004), both the N/O and C/O abundance ratios of this object are elevated, as seen in Fig. 5. Although suspected, no direct signs of WR stars have been found in this object (Villar-Martin et al. 2004) and the inferred abundances are not explained.
Another object showing nebular N iv] \(\lambda 1486\) emission is the strongly lensed galaxy SMACSJ2031.8-4036 at \(z=3.5\), studied in detail by Christensen et al. (2012) and Patricio et al. (2016). The available VLT observations (with XShooter and MUSE) cover both the rest-UV and optical domains, allowing the detection of numerous emission lines, and thus electron temperature, density, and abundance determinations. Interestingly, this object shows indications for high density (\(n_{e}\ga 10^{5}\) cm\({}^{-3}\)) from the N iv] \(\lambda 1486\) doublet and lower densities from other diagnostics (Patricio et al. 2016). The metallicity \(12+\log(\rm O/H)=7.76\pm 0.03\) is very similar to CEERS-1019 and it shows a normal C/O abundance (\(\log(\rm C/O)=-0.80\pm 0.09\)), according to Christensen et al. (2012). Inspection of their spectra, kindly provided by the authors, shows a clear detection of both the N iv] \(\lambda 1486\) and N iii] \(\lambda 1750\) lines, which allows us to determine N/O from the UV lines and the reported \(T_{e}\) using the same methods described above (see Sect. 4.4). We find a relatively high N abundance of \(\log(\rm N/O)=-0.66\pm 0.1\), which we also report in Fig. 5. Finally, we also find a normal Neon abundance of \(\log(\rm Ne/O)=-0.82\) from the reported line fluxes.
In the list of Barchiesi et al. (2023) other non-AGN spectra showing UV lines of Nitrogen show only N v P-Cygni lines, which are most likely due to stellar emission, or are stacked spectra with weak detections, not suitable for our purpose.
Another high-redshift object where N iii] \(\lambda 1750\) emission has recently been detected is the strongly lensed and multiply imaged stellar cluster at \(z=2.368\) in the Sunburst arc (Mestric et al. 2023), an exceptional object studied in depth by various authors (e.g. Rivera-Thorsen et al. 2019; Vanzella et al. 2020). From a detailed analysis and photoionization modelling, Pascale et al. (2023) infer in particular a high N/O abundance ratio (\(\log\rm N/O=-0.21^{+0.10}_{-0.11}\)), and normal C/O and Ne/O ratios, for a metallicity (O/H) of approximately \(\sim 0.22\) solar. The N/O ratio of this object is thus among the highest values, comparable to CEERS-1019, and C/O is also similar, as also shown in Fig. 5.
To extend our comparison, we have also examined the low-redshift galaxy Mrk 996, which is a well-known Blue Compact Dwarf (BCD) galaxy with peculiar properties, such as a high electron density, broad emission line components in H\(\alpha\), [O iii] \(\lambda\lambda 4959,5007\) and other lines, the presence of Wolf-Rayet stars of WN and WC type, and a high N/O abundance (see e.g. Thuan et al. 1996; Pustilnik et al. 2004; James et al. 2009; Telles et al. 2014). This galaxy also shows N iii] and N iv] emission lines in the UV (Mingozzi et al. 2022; Senchyna et al. 2023). From integral-field observations James et al. (2009) have found a normal N abundance (\(\log\)(N/O) \(\approx-1.43\)) across the galaxy and a N-enhancement by a factor \(\sim 20\) (\(\log\)(N/O) \(\approx-0.13\)) in the broad line component, emitted in the central region. The two measurements are plotted in Fig. 5. The C/O abundance of Mrk 996 can be derived from the C iii] \(\lambda 1909\) and O iii] \(\lambda 1666\) line ratio, which is taken from the HST/COS observations from the CLASSY survey (Berg et al. 2022; Mingozzi et al. 2022), and adopting the electron temperature \(T_{e}=10^{4}\) K from James et al. (2009). We find a high Carbon abundance of \(\log(\rm C/O)=-0.22\), close to solar, for this galaxy. However, for its metallicity (\(\sim 0.5\times\) solar, according to James et al. 2009) the C/O abundance ratio is comparable to that of other galaxies and Hii regions, hence not unusual.
Taken together, we thus conclude that all of the six N-emitters show an elevated (supersolar) N/O abundance ratio, whereas the C/O abundance is normal in four of them, and only one of them (the Lynx arc) appears enhanced in C/O. The observed and other properties of these objects are also summarized in Table 4. We will now discuss possible scenarios to explain the observed abundance pattern.
### 5.3 Possible chemical enrichment scenarios
Galactic chemical evolution models are able to reproduce the observed average trends of the abundance ratios of CNO and H for "normal" galaxies (see e.g. Henry et al. 2000; Chiappini et al. 2006; Berg et al. 2019; Johnson et al. 2023), although the evolution of Nitrogen has notoriously been more complicated to explain, since the observations show a behaviour like a primary element at low (subsolar) metallicity (cf. discussion and references in Prantzos et al. 2018). To examine the conditions which may be more appropriate for low metallicity dwarf galaxies and Hii regions, which dominate the current samples of extra-galactic CNO measurements in galaxies (the samples shown here), various authors have studied the effects of variable or bursty star-formation histories, outflows and different star-formation efficiencies. Again, such models are able to reproduce the _average_ trends of C/O, N/O and C/N as a function of metallicity and they can also explain the observed scatter in the data, e.g. by the presence of burst phases (see Berg et al. 2019, for a recent study).
Since the observed abundance ratios of CEERS-1019 and possibly other N-emitters are, however, clearly more extreme than those of the bulk of galaxies studied so far, we need to examine the possible nucleosynthetic sources and the conditions capable of explaining them. To do so, we first consider two quantitative scenarios, the first involving enrichment from normal massive stars, and the second nucleosynthesis from supermassive stars. These scenarios were considered in recent studies (e.g. Charbonnel et al. 2023; Nagele & Umeda 2023; Watanabe et al. 2023).
#### 5.3.1 Enrichment from massive stars - "WR-scenario"
It is well-known that the stellar winds of massive stars can carry important amounts of newly-created elements such as He and N (from H-burning, the latter resulting at the expense of C and
O) or C and O (from He-burning); those elements appear at the stellar surfaces and are ejected by the winds during the so-called Wolf-Rayet (WR) phases, with N enhanced in the WN phase and C enhanced in the subsequent WC phase (Maeder, 1983). The stellar wind yields depend strongly on the initial mass and metallicity of the stars, and also on other properties such as stellar rotation and the efficiency of mixing in the stellar interiors, or their evolution in close binary systems (e.g. Georgy et al., 2012; Szecsi et al., 2015; Pauli et al., 2022).
Using the recent stellar yields from Limongi & Chieffi (2018) we have computed the cumulative stellar wind yields of a simple stellar population as a function of time, for a Kroupa (2002) IMF, three different metallicities ([Fe/H] \(=-2\), \(-1\), and 0, respectively) and three different initial rotational velocities (V\({}_{\rm Rot}=0\), 150, and 300 km/s, respectively). The latter value of V\({}_{\rm Rot}=300\) km/s was adopted in Charbonnel et al. (2023) to discuss the observations of GN-z11. Assuming that stars more massive than 20-25 M\({}_{\odot}\) do not explode but collapse and become black holes (see discussion in Prantzos et al. 2018), the stellar ejecta have exclusively a wind composition for several million years. In the first couple of Myr, that composition is the original one of the stellar envelope, then it is dominated by H-burning products and subsequently by He-burning products. To compare with observed abundance ratios, Charbonnel et al. (2023) assumed a dilution of the wind ejecta with an equal amount of ISM. Here we assume no such mixing, thus maximizing the effect of the stellar winds on the composition. Physically, this may correspond to the situation where the winds of the previous O-star phase, operating for a few Myr, have opened a cavity in the ISM where the winds of the subsequent WR phase are expanding. In practice, there is still some mixing with pristine material, since we include the winds released by all stars above 12 M\({}_{\odot}\), and in the considered period of 8 Myr the stars less massive than 20 M\({}_{\odot}\) do not reach the WR phase.
In Fig. 6 we display the evolution of various quantities of the "WR scenario" for stars of [Fe/H] \(=-1\), a value reasonably close to the metallicity of the extragalactic systems studied here. Results are shown up to 8 Myr after the formation of a stellar population of total mass 10\({}^{8}\) M\({}_{\odot}\) with a normal IMF (Kroupa 2002). During that period, stars below 25 M\({}_{\odot}\) have not yet ended their lives (by assumption), so that only the wind ejecta populate the cavity carved by the winds and the radiation of the stars. The mass of the wind ejecta increases steadily, from \(\sim\)10\({}^{4}\) M\({}_{\odot}\) after the first Myr to \(\sim\)10\({}^{6}\) M\({}_{\odot}\) at 4 Myr, and more slowly after that. In Sec. 4.7 we discussed the amounts of ionized gas estimated in CEERS-1019 and GN-z11, which are compatible with the model results for this earliest period after the starburst (horizontal dashed lines in the top panel).
The evolution of the wind composition differs between the non-rotating and the rotating stars. The former (solid red curves) have practically no mixing between their convective core and radiative envelope; in consequence, the signatures of H-burning (high N/O and N/C) appear abruptly in the wind, once the mass loss uncovers the former H-burning core. The latter (solid blue curves) undergo rotational mixing, bringing slowly the H-burning products to the surface; as a result, the N/O and N/C ratios increase slowly but steadily, up to the equilibrium value, which is similar to the case of non-rotating stars. The timescale for the appearance of high N abundance is \(\sim 3\) Myr, in good agreement with the time window inferred by Senchyna et al. (2023) for GN-z11. About a Myr later, some amounts of He and He-burning products - mainly C and insignificant O amounts - appear in the wind ejecta of the most massive rotating stars (from 120 to \(\sim 70\) M\({}_{\odot}\)) while the less massive ones never reach the WC phase; the combined effect is a strong increase of C/O, a strong decrease of N/C and a small variation of N/O. In contrast, none of the non-rotating stars reaches the WC phase at such low metallicity, and all the CNO ratios remain basically unchanged. After that, the situation is expected to change drastically, as the first SN from M\(<\)25 M\({}_{\odot}\) stars explode and eject their core material in the ISM.
As shown in Fig. 6 in the early evolution of a stellar population, there is a period of several Myr during which the N/O ratio in the stellar winds reaches the high N/O ratios observed in CEERS-1019 and in the other N-emitters analyzed here. However, rapidly after reaching the maximum N/O value, the carbon abundance also increases (very strongly in rotating star or less so without rotation), implying C/O and N/C ratios that are incompatible with the observations of CEERS-1019, SMACS2031, and the Sunburst cluster over most of the time (see also Fig. 8).
Figure 6: Evolution of IMF-weighted, time-integrated masses and abundances of the winds of a stellar population of total mass 10\({}^{8}\) M\({}_{\odot}\) created at t=0, according to the models of Limongi & Chieffi (2018) with metallicity [Fe/H] \(=-1\) and initial rotational velocity V\({}_{\rm rot}\)=300 km/s (solid blue curves) or 0 km/s (solid red curves) in all panels; practically no dilution with ambient ISM is assumed (99% of ejecta and 1% of ISM). Comparison is made to abundance data from CEERS-1019 (this work, orange shaded), the Lynx arc (green shaded), and GN-z11 (Senchyna et al. 2023, violet shaded with age determination). The two horizontal dashed lines in the top panel indicate the estimated mass of ionized gas observed in CEERS-1019 and GN-z11, respectively (see Sec. 4.7). The yellow shaded area indicates the short period (\(\sim\)0.5 Myr) where all three abundance ratios for CEERS-1019 are well reproduced by the rotating massive star winds. In the bottom panel, displaying the evolution of the He mass fraction, the corresponding lifetimes of the stars are indicated (filled circles, color-coded for V\({}_{\rm rot}\)= 0 and 300 km/s) for selected masses (associated numbers in M\({}_{\odot}\)).
In the results displayed here, there is thus only a fairly short period of \(\sim\) 0.5 Myr (yellow shaded area in Fig. 6) where all three ratios N/O, N/C, and C/O are compatible with the observations of CEERS-1019 for the case of rotating stars. In view of the timescales involved (several Myr), the probability of such an occurrence is small but certainly non-negligible. We note that this occurs rather early in the evolution of the starburst, but well within the time window found by the analysis of Senchyna et al. (2023) for GN-z11 (violet horizontal segments in the 2nd and 3rd panels). We also note that other stellar models than those used here could result in more extended periods of high N/O and N/C ratios. This could be the case, for instance, of stars rotating more rapidly than 300 km/s (e.g. the fast rotators at nearly break-up velocity of 800 km/s calculated by Hirschi 2007), binary stars, or stars calculated with higher mass loss rates, etc. (see e.g. Eldridge & Stanway 2022, for a recent review). On the other hand, for the central region of Mrk 996, which shows both N and C enrichment, we find that all the abundance ratios are well reproduced by the models. Furthermore, in this galaxy the WR-scenario is directly supported by the presence of WR stars both of WN and WC types (Telles et al. 2014). Similarly, the N and C enrichment found in the Lynx arc could also be explained by the WR scenario, and earlier studies have argued for the presence of WR stars, from emission line modelling of this peculiar object (see e.g. Villar-Martin et al. 2004).
Is there any direct evidence for WR stars in the N-emitters discussed here? In short, WR stars have been reported only in the low-\(z\) galaxy Mrk 996, as mentioned earlier. In the spectral range covered by the observations of CEERS-1019, the strongest WR features could be He ii \(\lambda\)1640 and C iv \(\lambda\)1550 in the rest-UV and the so-called blue WR bump centered around He ii \(\lambda\)4686. None of these features is detected in the current NIRSpec spectra, and the same holds for GN-z11 (see: Bunker et al. 2023; Maiolino et al. 2023). However, the JWST spectra of these very high-\(z\) objects, and in particular of CEERS-1019, are of insufficient spectral resolution and S/N to rule out, e.g., He ii \(\lambda\)1640 emission with equivalent widths \(\la 7-10\) Å (depending on the adopted FWHM of the WR line), and therefore stellar populations comparable to those of Mrk 996, which has EW(1640) \(\approx 3-4\) Å, cannot be ruled out from the present data. The rest-UV spectrum of SMACS2031 from Patricio et al. (2016) also shows no clear feature of WR stars. He ii \(\lambda\)1640 is present with an EW(1640) \(=0.99\pm 0.1\) Å, but it is only marginally broader than the nebular emission lines.
The very high-S/N spectrum of the Sunburst cluster, discussed by Mestric et al. (2023), also shows no signature of WR stars. Except for the nebular lines, the Sunburst spectrum resembles in fact strongly the spectrum of the well-known massive star cluster R136 in the LMC, which is known to be very young (\(\sim 1.5\) Myr) and to host very massive stars with masses up to \(\sim 200\) M\({}_{\odot}\) (Vanzella et al. 2020; Mestric et al. 2023). The Sunburst cluster also appears to be too young to host WR stars. Finally, Villar-Martin et al. (2004) have suggested the presence of WR in the Lynx arc, in particular to explain the hard observed ionizing spectrum, but no direct signatures are detected in the relatively low S/N spectra available for this object.
In conclusion, except for Mrk 996 where the presence of important populations of WR stars (both of WN and WC types) is established, no direct evidence for WR stars is found in the other N-emitters studied here. However, this does not necessarily exclude the WR-scenario, since WR stars may be present below the detection threshold.
#### 5.3.2 Enrichment from super-massive stars (\(M\ga 1000\) M\({}_{\odot}\)) - SMS scenario
An alternate scenario, already invoked by Charbonnel et al. (2023) to explain the high N-abundance in the compact galaxy GN-z11 at \(z=10.6\), is that of super-massive stars (SMS), which have previously been proposed to explain the abundance anomalies of the multiple stellar populations seen in old Galactic and extra-galactic globular clusters (GC) and in extra-galactic massive star clusters with ages down to \(\sim\) 1.7 Gyr (Gieles et al. 2018). In essence, this model proposes that gas accretion and collisions of proto-stars in the densest clusters lead to the runaway formation of one or several SMS, with masses \(M\ga 10^{3}M_{\odot}\) that increase with the cluster mass. During some time before two-body relaxation heats the cluster, this mostly convective SMS undergoes accretion (from proto-stars in the cluster and infalling gas) and it ejects processed matter, whose composition reflects the conditions in its hot H-burning core. Namely,
Figure 7: Observed chemical abundances (N/O versus O/H in the top panel, C/O versus O/H in the bottom) of the six N-emitters (using the same symbols as in Fig. 5) and comparison with predictions for enrichment from massive stars (dotted and dash-dotted lines showing the “WR scenario” for non-rotating and rotating stars, respectively; see text) and supermassive stars (solid and dashed). Different colors indicate different metallicities. The predictions for the WR scenario are shown for a very low dilution (1%) with ISM matter. The solid lines show the predicted composition using SMS models with \(10^{4}\) M\({}_{\odot}\) at different metallicities (12 + log(O/H) = 7.0, 7.8, 8.3) from Charbonnel et al. (2023) and for varying amounts of dilution with an ISM of standard composition. The dashed lines show the \(10^{5}\) M\({}_{\odot}\) SMS model from Nagele & Umeda (2023).
the ejected material is strongly enriched in N, Na, and Al, and strongly depleted in O and C as a result of CNO, NeNa, and MgAl nuclear reactions at high temperature. As initially shown by Denissenkov & Hartwick (2014), the whole range of abundance anomalies (C-N, O-N, Na-O, Mg-Al anticorrelations) in GC stars is very well accounted for after dilution of the SMS ejecta with proto-GC gas.
The constant supply of unprocessed material to the SMS "freezes" its evolution close to the zero-age main sequence, preventing strong He-enrichment of the SMS yields, in agreement with GC multiple band photometry (Milone 2015; Milone et al. 2018). This also solves the so-called "mass budget" problem encountered by all the other scenarios that try to explain the presence and properties of multiple stellar populations in globular clusters (Prantzos & Charbonnel 2006; Schaerer & Charbonnel 2011; Krause et al. 2012; Renzini et al. 2015; Krause et al. 2016; Bastian & Lardo 2018). For example, Gieles et al. (2018) find that a SMS forming into a dense cluster hosting \(10^{7}\) proto-stars can reach and process respectively \(\sim 5\)% and \(\sim 45\)% of the cluster mass. This is significantly higher than the \(\sim 2\)% of wind mass ejected in the massive star scenario (cf. Fig. 6). In particular, the super-linear scaling predicted between the amount of material nuclearly processed by the SMS and the cluster mass explains the observed increase of the fraction of second population stars with GC mass (Carretta et al. 2010; Milone et al. 2017). This picture is dubbed the "conveyor-belt" SMS model. The high amount of processed matter also implies that any additional matter ejected by the SMS during its final phase (once the conveyer-belt stops) will have very little impact on the final abundance ratios.
In Figs. 7 and 8 the solid lines show, for three different initial metallicities (0.34 Z\({}_{\odot}\), 0.12 Z\({}_{\odot}\), and 0.018 Z\({}_{\odot}\)), the predicted chemical abundance ratios resulting from the mixture of ejecta of \(10^{4}\) M\({}_{\odot}\) SMS in the conveyer-belt scenario with different amounts of ISM gas with a normal, initial abundance (stellar models from Charbonnel et al. 2023). The composition of the SMS ejecta reflects the yields from H-burning via the CNO-cycle. It is very strongly enriched in Nitrogen, with N/O \(>\) 10, i.e. nearly 100 times super-solar, and very strongly depleted in Oxygen and Carbon. With an increasing fraction of matter from the SMS mixed into the ISM, the predicted N/O and N/C ratios increase strongly. The resulting mixture also shows a decreasing O/H abundance (metallicity) while C/O remains relatively constant.
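The qualitative behaviour of these dilution curves can be reproduced with a simple two-component mixing calculation; the ejecta composition adopted below (O depleted by a factor of ten, N/O of order 10, C strongly depleted) is an illustrative placeholder, not a yield taken from the Charbonnel et al. (2023) models.

```python
# Sketch: abundance ratios of a mixture of H-burning-processed ejecta and ISM of
# initial composition, as a function of the ejecta fraction f. Ejecta values are
# illustrative only; linear mixing assumes similar hydrogen content in both parts.
import numpy as np

o_ism = 10**(7.7 - 12.0)                                  # O/H of the ISM (by number)
ism = {'O': o_ism, 'N': o_ism * 10**-1.5, 'C': o_ism * 10**-0.75}
ej  = {'O': 0.1 * o_ism, 'N': 10 * 0.1 * o_ism, 'C': 0.02 * ism['C']}

for f in (0.0, 0.25, 0.5, 0.75):
    mix = {el: f * ej[el] + (1.0 - f) * ism[el] for el in ism}
    print(f"f={f:.2f}  12+log(O/H)={12 + np.log10(mix['O']):.2f}  "
          f"log(N/O)={np.log10(mix['N'] / mix['O']):+.2f}  "
          f"log(C/O)={np.log10(mix['C'] / mix['O']):.2f}")
# N/O rises strongly and O/H decreases with f, while C/O stays roughly constant.
```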
The observed N/O ratio of GN-z11 and CEERS-1019 can be explained by mixing approximately equal amounts of SMS ejecta with ISM gas, as already shown by Charbonnel et al. (2023). The N/O abundance of all other N emitters considered here could also be explained with the SMS scenario. The same is also true for the C/O and N/C abundance ratios, except for the two objects which show a high C/O ratio, Mrk 996 and the Lynx arc. As already mentioned before, C/O in these galaxies reveals the presence of both H- and He-burning products, which, in the case of Mrk 996, is compatible with its observed WR star population. In short, the comparison of the observed N/O, C/O, and N/C ratios suggests that CEERS-1019, SMACS2031, and the Sunburst cluster might be explained by the SMS conveyor-belt scenario, implying that they should contain one or several proto-GC, and Mrk 996 and the Lynx arc by the WR scenario. From the available data and the lack of accurate C/O measurements, the case of GN-z11 remains inconclusive.
Nagele & Umeda (2023) have computed the composition of the material ejected through winds along the entire evolution of SMS with masses between \(10^{3}\) and \(10^{5}\) M\({}_{\odot}\) for 0.1 Z\({}_{\odot}\), neglecting the conveyor-belt rejuvenation of the star discussed above (they assume that SMS form through gravitational collapse during the merger of gas-rich galaxies at high-\(z\), see Mayer et al. 2015). In addition, they estimate if and when the SMS become GR unstable as they evolve, as well as the modifications of the composition of the material that can be ejected at the end of the life of the stars in case they eventually explode due to the CNO cycle and the rp (rapid proton capture) process (for details see Nagele et al. 2023a). Their \(10^{3}\) and \(10^{4}\) M\({}_{\odot}\) models - not shown here - predict strong N-enrichment on the main sequence, confirming the results of Denissenkov & Hartwick (2014) and Charbonnel et al. (2023). However, these two models do not become GR unstable and survive until carbon and oxygen burning. As a consequence, their winds reach super-solar C and O abundances because of the dredge-up of C and O from the core during central He-burning, and they are strongly enriched in He. This implies that, without undergoing the conveyor-belt episode that is required to solve the mass budget and the photometric constraints for the GC case, the total yields of such models cannot explain the GC abundance anomalies, nor can they explain the N/O and C/O ratios in CEERS-1019 and in GN-z11, as discussed by Nagele & Umeda (2023).
On the other hand, Nagele & Umeda (2023) find that their \(5\times 10^{4}\) and \(10^{5}\) M\({}_{\odot}\) models at 0.1 Z\({}_{\odot}\) become GR unstable close to or at the end of the main sequence, implying that their winds contain super-solar N and sub-solar C and O before the stars eventually collapse to a black hole or are disrupted by a thermonuclear explosion. The dashed lines in Figs. 7 and 8 show the range of abundances expected when the ejecta of their \(10^{5}\) M\({}_{\odot}\) model is diluted to various degrees with ISM of initial composition. In addition to the N-enrichment along the main sequence, this includes their estimate of the additional N that is produced during the expected CNO-powered explosion (Nagele et al. 2023a). This model accounts well for the observed abundance N/O ratios in CEERS-1019, GN-z11, and SMACS2031. And it also produces enough enriched material to be able to pollute sufficient ionized gas, i.e. masses in the observed range (see Sect. 4.7), as shown by Nagele & Umeda (2023).
From this, we conclude that SMS over a wide range of masses can simultaneously explain the GC abundance anomalies and the N/O and C/O ratios in CEERS-1019, GN-z11, and SMACS2031, if they eject large quantities of H-processed material early on the main sequence, as predicted by the conveyor-belt SMS scenario (Gieles et al. 2018), or if the SMS sheds large amounts of processed material due to instabilities and an explosion during the CNO cycle (cf. Nagele & Umeda 2023).
Figure 8: Observed and predicted abundance ratios of N/C as a function of O/H for the six N-emitters and comparison samples from the literature. Observed data are shown using the same symbols as in Fig. 5, model predictions with the linestyles of Fig. 7.
terial early on the main sequence, as predicted by the conveyor-belt SMS scenario (Gieles et al. 2018), or if the SMS sheds large amounts of processed material due to instabilities and an explosion during the CNO-cycle (cf. Nagele & Umeda 2023).
In Sect. 5.4 we will further argue whether the N-emitters are proto-GCs, and discuss possible implications of the SMS scenario, including the possible formation of an intermediate-mass black hole (IMBH).
#### 5.3.3 Other scenarios to explain strong N emission
Cameron et al. (2023a) have discussed different processes or sources which could explain the observed N-enhancement in GN-z11, including enrichment from AGB stars, pollution from Pop III star-formation, stellar encounters in dense star clusters, or tidal disruption of stars from encounters with black holes. The main conclusion of their qualitative discussion is that these explanations would need very fine-tuned conditions and that the origin of N-enrichment is currently largely unknown.
The predictions of classical chemical evolution models including AGB stars are shown e.g. in the studies of Johnson et al. (2023). Watanabe et al. (2023) also show predictions of such models in comparison with GN-z11. Indeed, as well known from earlier works, such models cannot produce high N/O abundance ratios at low metallicity (as observed in the N-emitters discussed here), since these models include also the yields of massive stars and core-collapse supernovae, which produce large amounts of oxygen, and hence no extreme N/O ratios. The pure WR-wind models of Watanabe et al. (2023) are essentially the same as our massive star models (WR-scenario).
Maiolino et al. (2023) have recently argued that GN-z11 shows signs of a type 1 AGN, with emission from very high density and a Broad Line Region (BLR). They further argue that the exceptionally high nitrogen abundance "becomes much less problematic" in the AGN scenario, for several reasons. First, they point out that several "nitrogen-loud" AGN have been found, making GN-z11 less peculiar. And second, they mention that only small amounts of enriched gas are needed if the observed gas is at very high densities. Finally, they mention supernovae from supermassive stellar progenitors, rapidly recycled secondary nitrogen production, or bloated atmospheres of giant/supergiant stars as possible sources of the observed enrichment, without providing quantitative estimates.
Clearly, the spectra of CEERS-1019 and the other N-emitters discussed here are very different from nitrogen-loud AGN, as discussed in Sect. 3. Furthermore, except for GN-z11 for which Maiolino et al. (2023) show indications of densities \(n_{H}\ga 10^{10}\) cm\({}^{-3}\) typical of BLR, the densities inferred here are much lower, typically \(n\sim 10^{4-5}\) cm\({}^{-3}\), and all observed emission line properties are compatible with photoionization from star-formation (Sect. 3). The qualitative scenarios sketched by Maiolino et al. (2023) for GN-z11 may therefore not be applicable to the other N-emitters discussed here. In any case, more quantitative studies on the detailed chemical abundances of nitrogen-loud AGN and their source of enrichment could be of interest to better understand the common points and differences with other N-emitters.
For the Sunburst cluster, Pascale et al. (2023) proposed a model where the super star cluster is surrounded by low- and high-density photoionized clouds and regions (channels) through which ionizing radiation can escape, and they argue that only the high-density clouds in the vicinity of the star cluster are N-enriched and confined by strong external pressure. They estimate that \(\sim\) 500 M\({}_{\odot}\) of nitrogen is needed - an amount which can be produced by the star cluster with a mass \(M_{\star}\sim\) few \(\times 10^{7}\) M\({}_{\odot}\)- and suggest that it originates from young massive stars, ejected, e.g., in dense LBV winds or non-conservative binary mass transfer. SN ejecta are not favored, since the Sunburst is not enriched in C, and the inferred age (\(\la 4\) Myr) is consistent with this explanation.
The model of Pascale et al. (2023) is essentially the same as our massive star scenario, although they do not use a specific model to predict the chemical yields of the cluster and its temporal evolution, and our massive star scenario does not include ejecta from mass transfer in binary systems. As already discussed above, such a scenario requires some specific "tuning", in particular the selection of a fairly specific age at which the composition of ejecta matches the observed abundances. For the Sunburst cluster this seems very plausible; however, it is not clear if this could be generalized to CEERS-1019 and the other N-emitters.
### Are CEERS-1019 and other N-emitters proto-GCs in formation or related to the formation of intermediate-mass black holes?
The unusually high N/O abundances derived for GN-z11 and the Sunburst arc and similarities with the abundance pattern of stars in globular clusters have led several authors to suggest a link between these peculiar objects and GC formation (Senchyna et al. 2023; Charbonnel et al. 2023; Nagele & Umeda 2023; Pascale et al. 2023). With the finding of a highly supersolar N/O ratio and normal C/O in CEERS-1019 and similar results for other objects from the literature (in total six N-emitters analyzed here), the question of the nature of the N-emitters must be rediscussed in light of new and additional evidence. We summarize basic observational evidence and our favourite scenarii/explanations in Table 4.
First, the observed abundance ratios of N/O and C/O, which are accurately measured for five objects, suggest that two of them (the low-z galaxy Mrk 996 and the Lynx arc) are probably explained by pollution from WR stars, as discussed above. If correct, it implies that the cluster(s) presumably dominating these objects cannot be progenitors of GCs. This is due to the fact that the massive star wind scenario suffers from the so-called mass budget problem of GCs (Prantzos & Charbonnel 2006; Decressin et al. 2007), which basically means that the massive stars cannot produce sufficient amounts of enriched material to explain the observed population of "polluted" (second population) stars in GCs without the first population being much more massive than the second one, in contradiction with observations. In Mrk 996 WR features are detected, and the presence of WR stars is suspected in the Lynx arc. We therefore suggest that they are somewhat peculiar star-forming galaxies (WR galaxies), although we note that Binette et al. (2003) have also considered a hidden AGN to explain the emission line properties of the Lynx arc.
For CEERS-1019, GN-z11, SMACS2031, and the Sunburst cluster, the N/O, C/O, and N/C ratios could be explained by the two scenarii discussed earlier, with the enriched matter originating from normal massive stars or from supermassive stars. We favour the SMS scenario for several reasons. First, the WR scenario requires a very special and shorter timing than the SMS scenario. Second, these galaxies contain at least one sufficiently massive and compact region (the Sunburst cluster is of course a cluster) with extreme conditions (very high SFR and mass surface density), and unusually high ISM densities. Such rare conditions may be necessary for the formation of supermassive stars through runaway collisions and for the conveyer-belt
model, as proposed by Gieles et al. (2018). This would also naturally explain why N-emitters are rare. We therefore propose that CEERS-1019, SMACS2031, and the Sunburst cluster have been enriched by SMS and that they host (or are) proto-GCs in star-forming galaxies. Finally, the finding of such objects at lookback times between 11.2-13.3 Gyr is also compatible with them hosting proto-GCs.
The case of GN-z11 may be somewhat different as it may host an AGN, as suggested by Maiolino et al. (2023). If the high density of the ionized gas (\(n_{e}\ga 10^{10}\) cm\({}^{-3}\)) inferred by these authors is confirmed, it would significantly reduce the amount of ionized gas which needs to be polluted, but it still leaves the source of chemical enrichment unexplained (cf. Maiolino et al. 2023). However, this does not exclude pollution from one or several SMS, which might even have seeded the "small" massive black hole (with \(\log(M_{\rm BH}/M_{\odot})\sim 6.2\pm 0.3\)) or contributed to its growth. Indeed, the final fate of SMS is difficult to predict since in addition to metallicity and mass, other input parameters of the stellar models (mass loss, convection, overshooting, rotation, etc.) may impact the occurrence of the GR instability, its timing, and whether the collapse would trigger an explosion through the CNO-cycle (Fuller et al. 1986; Montero et al. 2012; Haemmerle et al. 2019; Haemmerle 2021; Nagele et al. 2023a). In any case, the formation of IMBH with masses \(\sim 10^{4}\) to \(10^{5}\) M\({}_{\odot}\) from SMS seems possible at metallicities comparable to that of GN-z11, as shown e.g. by Nagele et al. (2023b). We therefore propose that N-emitters could also be an indication of black hole seed formation from SMS. And these objects could evolve to N-loud quasars, a rare sub-population of quasars showing strong N lines in the UV (Jiang et al. 2008), and which have been suggested to be objects with high N/O and sub-solar metallicities in a rapid growth phase (Araki et al. 2012; Matsuoka et al. 2017). We therefore mark GN-z11 as a possible AGN with BH-formation related to SMS in Table 4. Finally, we also consider that the formation of an IMBH with mass \(\ga 1000\) M\({}_{\odot}\) from an SMS is incompatible with the proto-GC scenario, as the presence of such a BH in old GCs seems to be ruled out observationally (Baumgardt et al. 2019, and references therein). This is also reflected in Table 4.
Finally, we wish to remind the reader that Larson et al. (2023) suggested that CEERS-1019 also hosts a black hole, although our analysis does not show significant evidence for this and suggests that the object is dominated by star-formation (see Sect. 3). If CEERS-1019 harbours an AGN, the situation could be similar to that of GN-z11, just discussed and point to a possible link between SMS and black hole formation. Also, we note that Binette et al. (2003) have considered a hidden AGN to explain the emission line properties of the Lynx arc, although the nature of this source remains unclear. To conclude, we also recall that none of the four other N-emitters discussed here show any AGN indication. We are therefore probably left with three good candidates for SMS and proto-GCs, CEERS-1019, SMACS2031, and the Sunburst cluster.
### Future steps and improvements
Clearly, better data and more N-emitters would be helpful to better understand the origin of the strong N emission lines, to further test the proposed enrichment scenarios and the possible presence of SMS, and thus to understand the nature of these rare objects.
An important test for the massive star scenario would be to detect direct spectral signatures of WR stars. Deeper, very high S/N spectra, in the rest-optical domain would be ideal for this. The massive star scenario also predicts important amounts of helium in the ejecta, which might be measurable from the analysis of nebular He and H emission lines in rest-optical spectra of sufficient quality. In the SMS scenario, a strong enrichment of aluminum, originating from H-burning from the MgAl chain (Prantzos et al. 2007), is predicted (Ramirez-Galeano, in prep.), as observed in GC stars (Carretta et al. 2009; Pancino et al. 2017; Masseron et al. 2019). In contrast, massive stars should produce less aluminum (Decressin et al. 2007; Gormaz-Matamala et al. 2023). Aluminum has spectral signatures in the rest-UV (Al ii \(\lambda\)1670, Al iii \(\lambda\lambda\)1855,1863), which are often seen in absorption in high-\(z\) galaxy spectra (Shapley et al. 2003; Le Fevre et al. 2019), and which are in emission in some AGNs (see e.g. Alexandroff et al. 2013). These features might be an interesting test of the relation between N-emitters and proto-GCs, and to distinguish between the WR and SMS scenarii.
To examine if the strong N lines could be related to large density variations and found preferentially in pockets of high density, it will be of interest to obtain multiple density measurements probing the widest possible range of density, regions of different ionization, and possibly also spatial variations. Both high S/N and high-resolution spectra are needed for this, and measurements of fine-structure lines of oxygen and nitrogen with ALMA could also provide insights into this question.
Future studies may reveal new N-emitters, improving the statistics and providing more test cases. If strongly enhanced N-emitters are found at significantly lower metallicities (say \(12+\log(\mathrm{O/H})\la 7\)) the SMS scenario might be favored, since WR stars should be less present at low O/H. Also, objects with even higher N/O abundances could exist, if the SMS scenario is correct.
\begin{table}
\begin{tabular}{l l l l l l l l l}
\hline \hline
Object & redshift & N/O & C/O & WR features & enrichment & proto-GC & BH formation & nature \\ \hline
CEERS-1019 & 8.678 & super-solar & normal & ? & SMS & yes & no & SF-galaxy \\
GN-z11 & 10.6 & super-solar & uncertain & ? & SMS or other & no & yes? & AGN? \\
SMACS2031 & 3.506 & super-solar & normal & ? & SMS & yes & no & SF-galaxy \\
Sunburst cluster & 2.368 & super-solar & normal & ? & SMS & yes & no & SF-galaxy \\
Lynx arc & 3.36 & super-solar & \(\sim\) solar & ? & WR? & no & no & WR galaxy? \\
Mrk 996 & 0.00544 & super-solar & \(\sim\) solar & WC+WN & WR & no & no & WR galaxy \\ \hline
\end{tabular}
\end{table}
Table 4: Properties, proposed scenarii, and nature of the N-emitters

## 6 Conclusion

In this work, we have presented the detailed analysis of CEERS-1019 at \(z=8.678\) using deep spectroscopy and imaging with NIRSpec and NIRCam obtained from the _JWST_ CEERS program. Low- and medium-resolution NIRSpec spectra covering \(1-5\mu\)m reveal a wealth of rest-frame UV and optical nebular emission lines of various transitions and ionizing states from H, He, C, N, O, and Ne. In particular, CEERS-1019 shows remarkably intense nitrogen emission of N iii and N iv, with N iv] \(\lambda\)1486 emerging as the strongest line within the rest-frame UV spectrum. These emission lines are very rarely seen in galaxy spectra, and CEERS-1019 - which shows some resemblance with the peculiar object GN-z11 revealed recently by JWST (Bunker et al., 2023) - is thus the second "N-emitter" found at \(z>8\). From the analysis of these data, we arrive at the following main results:
* Using the well-detected auroral [O iii] \(\lambda 4363\) line we determined the O/H abundance using the direct method, resulting in \(12+\log({\rm O/H})=7.70\pm 0.18\). We derived the electron temperature from both rest-frame UV and optical [O iii] lines, yielding consistent values of \(T_{e}\approx 18000\) K. The density-sensitive lines of N iv] \(1483/1487=0.50\pm 0.22\) and [O ii] \(3727/3729=0.98\pm 0.23\) suggest a relatively high electron density of \(n_{e}\approx 10^{3-5}\) cm\({}^{-3}\). These values are consistent with those reported by other studies for this object (Tang et al., 2023; Nakajima et al., 2023; Larson et al., 2023).
* Metal abundances were derived for different ions of C, N, O, and Ne. Notably, we found an exceptionally high N/O abundance of \(\log({\rm N/O})\)\(=-0.13\pm 0.11\), approximately 5.6 times higher than the solar ratio. Conversely, CEERS-1019 exhibits relatively normal C/O and Ne/O ratios for its metallicity (O/H), with \(\log({\rm C/O})\)\(=-0.75\pm 0.11\) and \(\log({\rm Ne/O})\)\(=-0.63\pm 0.07\), respectively. This translates to high N/O and N/C, and normal C/O ratios, typically found in globular cluster stars, and which reflect the abundance ratios from H-burning via the CNO-cycle at very high temperature (Prantzos et al., 2017; Gratton et al., 2019).
* We have discussed possible chemical enrichment scenarios to explain these peculiar C, N, and O abundance ratios observed in CEERS-1019. Enrichment from massive star winds through the WR phase can explain the observed ratios but requires a very short and specific time window (and the presence of WN stars only); it would also come with a very strong He enrichment. Furthermore, no signatures of WR stars are detected in CEERS-1019, although their presence cannot be ruled out from the available data. Alternatively, models of super-massive stars (\(>1000M_{\odot}\)) mixed with ISM of normal composition can explain the abundance ratios of CEERS-1019. In this scenario, the processed material ejected by the SMS will exhibit H-burning products only, strongly enriched in N, possibly with some depletion in O and C, and a normal He content.
* We have investigated the possibility of an AGN in CEERS-1019, a scenario recently suggested by Larson et al. (2023) due to the detection of a broad component in H\(\beta\). Our own reduction of the NIRSpec spectrum shows a tentative, broad component in H\(\beta\) (FWHM \(\simeq 1150\) km s\({}^{-1}\)) but detected with a fairly low significance (\(\simeq 2.2\sigma\)). Line ratios using rest-UV lines (N v, N iv], C iv, C iii], O iii], and He ii) suggest that the gas is primarily photoionized by star formation, and any contribution from an AGN would likely be residual. The non-detection of the high-ionization lines of N v \(\lambda 1240\) and He ii \(\lambda 1640\) further supports this scenario.
* CEERS-1019 shows a complex morphology with three resolved clumps. By analyzing the light distribution of these substructures, we found very compact morphologies with characteristic half-light radii of \(\simeq 100-150\) pc. Multi-wavelength SED fits for each individual clump predict stellar masses of \(\log(M_{\star}/M_{\odot})\simeq 8.66-8.94\), resulting in very high stellar mass surface densities \(\log(\Sigma_{M_{\star}}/(M_{\odot}\,{\rm pc}^{-2}))\simeq 3.55-4.14\). The star formation rate appears very intense in two clumps (SFR \(\simeq 80-150\)\(M_{\odot}\) yr\({}^{-1}\)), while the remaining clump displays a negligible level of ongoing star formation.
CEERS-1019 thus represents the second example of a rare population of strong N-emitting galaxies at \(z>8\) with highly super-solar N/O abundances, very compact regions, and a high-density ISM. To put this object into context and better understand these N-emitters, we have (re-)analyzed other known N-emitting star-forming galaxies from the literature. This includes three lensed objects at \(z\sim 2.3-3.5\), two galaxies (SMACS2031 and the Lynx arc) and one star cluster (the Sunburst cluster), plus a nearby blue compact dwarf galaxy (Mrk 996), all of them without any clear indications of AGN activity. Similar to CEERS-1019, these sources show peculiar abundance ratios with a supersolar N/O ratio along with very dense clustered mass and star formation (\(\log(\Sigma_{M_{\star}}/(M_{\odot}\,{\rm pc}^{-2}))\gtrsim 3.5\)) and high ISM densities (\(n_{e}\sim 10^{4}-10^{5}\) cm\({}^{-3}\)). Two galaxies, Mrk 996 and the Lynx arc, show an enhanced C/O ratio compared to normal galaxies at the same metallicity (O/H), indicative of enrichment from WR stars.
We have also presented quantitative predictions for the chemical enrichment in two different scenarios, including enrichment from winds of massive stars (called the WR-scenario) or from ejecta of supermassive stars (SMS) with masses \(10^{3}-10^{5}\) M\({}_{\odot}\), which have been invoked to explain the abundance anomalies observed in present-day globular clusters (Denissenkov & Hartwick, 2014; Gieles et al., 2018). The WR scenario explains well the two galaxies with enhanced C/O and is supported by direct evidence of WN and WC stars in Mrk 996. As already found by Charbonnel et al. (2023) for GN-z11, we found that the SMS scenario reproduced well the observed abundance ratios in CEERS-1019, SMACS2031, and the Sunburst cluster. These observations probably provide the best indirect evidence so far for the possible existence of SMS in galaxies.
Finally, considering the preferred enrichment scenarii and other physical properties, we have also examined which of the N-emitters could host proto-GCs and what their nature is. From our analysis we concluded that CEERS-1019, SMACS2031, and the Sunburst cluster most likely host proto-GCs. We also suggested that the peculiar abundances of GN-z11 could be due to SMS, even if this object was confirmed to host an AGN, as proposed by Maiolino et al. (2023). This could also point to the formation of intermediate-mass black holes from SMS and suggest a link between the N-emitters and N-loud quasars.
In short, the newly discovered N-emitter CEERS-1019 and other N-emitters show tantalizing similarities with stars in GCs and the conditions expected during the formation of GCs. They may also offer a unique window into the formation of SMS, their role during the formation of GCs, and also their possible importance as seeds for the formation of massive black holes. More detailed studies and further discoveries of these rare objects will shed further light on these exciting topics and questions.
###### Acknowledgements.
We thank Lise Christensen and Johan Richard for sharing spectra from their VLT observations of SMACS2031. We also thank Mark Gieles, Eros Vanzella, Laura Ramirez-Galeano, Anastasios Fragos, Holger Baumgardt, Montse Villar-Martin and other colleagues for stimulating discussions. CC acknowledges support from the Swiss National Science Foundation (SNSF; Project 200020-192039). M.M. acknowledges the support of the Swedish Research Council, Vetenskapsradet (international postdoc grant 2019-090502). Y.I. acknowledges support from the National Academy of Sciences of Ukraine (Project No. 0123IU02248) and from the Simons Foundation.
|
2306.13461 | Understanding quantum machine learning also requires rethinking
generalization | Quantum machine learning models have shown successful generalization
performance even when trained with few data. In this work, through systematic
randomization experiments, we show that traditional approaches to understanding
generalization fail to explain the behavior of such quantum models. Our
experiments reveal that state-of-the-art quantum neural networks accurately fit
random states and random labeling of training data. This ability to memorize
random data defies current notions of small generalization error,
problematizing approaches that build on complexity measures such as the VC
dimension, the Rademacher complexity, and all their uniform relatives. We
complement our empirical results with a theoretical construction showing that
quantum neural networks can fit arbitrary labels to quantum states, hinting at
their memorization ability. Our results do not preclude the possibility of good
generalization with few training data but rather rule out any possible
guarantees based only on the properties of the model family. These findings
expose a fundamental challenge in the conventional understanding of
generalization in quantum machine learning and highlight the need for a
paradigm shift in the study of quantum models for machine learning tasks. | Elies Gil-Fuster, Jens Eisert, Carlos Bravo-Prieto | 2023-06-23T12:04:13Z | http://arxiv.org/abs/2306.13461v2 | # Understanding quantum machine learning also requires rethinking generalization
###### Abstract
Quantum machine learning models have shown successful generalization performance even when trained with few data. In this work, through systematic randomization experiments, we show that traditional approaches to understanding generalization fail to explain the behavior of such quantum models. Our experiments reveal that state-of-the-art quantum neural networks accurately fit random states and random labeling of training data. This ability to memorize random data defies current notions of small generalization error, problematizing approaches that build on complexity measures such as the VC dimension, the Rademacher complexity, and all their uniform relatives. We complement our empirical results with a theoretical construction showing that quantum neural networks can fit arbitrary labels to quantum states, hinting at their memorization ability. Our results do not preclude the possibility of good generalization with few training data but rather rule out any possible guarantees based only on the properties of the model family. These findings expose a fundamental challenge in the conventional understanding of generalization in quantum machine learning and highlight the need for a paradigm shift in the design of quantum models for machine learning tasks.
## I Introduction
Quantum devices promise applications in solving computational problems beyond the capabilities of classical computers [1; 2; 3; 4; 5]. Given the paramount importance of machine learning in a wide variety of algorithmic applications that make predictions based on training data, it is a natural thought to investigate to what extent quantum computers may assist in tackling machine learning tasks. Indeed, such tasks are commonly listed among the most promising candidate applications for near-term quantum devices [6; 7; 8; 9]. To date, within this emergent field of _quantum machine learning_ (QML) a body of literature is available that heuristically explores the potential of improving learning algorithms by having access to quantum devices [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20]. Among the models considered, _parameterized quantum circuits_ (PQCs), also known as _quantum neural networks_ (QNNs), take center stage in those considerations [21; 22; 23]. For fine-tuned problems in quantum machine learning, quantum advantages in computational complexity have been proven over classical computers [24; 25; 26; 27], but to date, such advantages rely on the availability of full-scale quantum computers, not being within reach for near-term architectures. While for PQCs such an advantage has not been shown yet, a growing body of literature is available that investigates their expressivity [28; 29; 30; 31; 32; 33; 34], trainability [35; 36; 37; 38; 39; 40; 41; 42; 43], and generalization [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57] - basically aimed at understanding what to expect from such quantum models. Among those studies, the latter notions of _generalization_ are particularly important since they are aimed at providing guarantees on the performance of QML models with unseen data after the training process.
The importance of notions of generalization for PQCs is actually reflecting the development in classical machine learning: Vapnik's contributions [58] have laid the groundwork for the formal study of statistical learning systems. This methodology was considered standard in classical machine learning theory until roughly the last decade. However, the mindset put forth in this work has been disrupted by seminal work [59] demonstrating that the conventional understanding of generalization is unable to explain the great success of large-scale deep convolutional neural networks. These networks, which display orders of magnitude more trainable parameters than the dimensions of the images they process, defied conventional wisdom concerning generalization.
Employing clever randomization tests derived from non-parametric statistics [60], the authors of Ref. [59] exposed cracks in the foundations of Vapnik's theory and its success [61], at least when applied to specific, state-of-the-art, large networks. Established complexity measures, such as the well-known VC dimension or Rademacher complexity [62], among others, were inadequate in explaining the generalization behavior of large classical neural networks. Their findings, in the form of numerical experiments, directly challenge many of the well-established _uniform_ generalization bounds for learning models, such as those derived in, e.g., Refs. [63; 64; 65]. Uniform generalization bounds apply uniformly to all hypotheses across an entire function family. Consequently, they fail to distinguish between hypotheses with good out-of-sample performance and those which completely overfit the training data. Moreover, uniform generalization bounds are oblivious to the difference between real-world data and randomly corrupted patterns. This inherent uniformity is what grants long reach to the randomization tests: exposing a single instance of poor generalization is sufficient to reduce the statements of mathematical theorems to mere trivially loose bounds.
This state of affairs has important consequences for the emergent field of QML, as we explore here. Notably, current studies of generalization in quantum machine learning models have focused exclusively on uniform variants. Consequently, our present comprehension remains akin to the classical machine learning canon before the advent of Ref. [59]. This observation raises a natural question as to whether the same randomization tests would yield analogous outcomes when applied to quantum models. In classical machine learning, it is widely acknowledged that the scale of deep neural networks plays a crucial role in generalization. Analogously, it is widely accepted that current QML models are considerably distant from that size scale. In this context, one would not anticipate similarities between current QML models and high-achieving classical learning models.
In this article, we provide empirical, long-reaching evidence of unexpected behavior in the field of generalization, with quite arresting conclusions. In fact, we are in the position to challenge notions of generalization, building on similar randomization tests that have been used in Ref. [59]. As it turns out, they already yield surprising results when applied to near-term QML models employing quantum states as inputs. Our empirical findings, also in the form of numerical experiments, reveal that uniform generalization bounds may not be the right approach for current-scale QML. To corroborate this body of numerical work with a rigorous underpinning, we show how QML models can assign arbitrary labels to quantum states. Specifically, we show that PQCs are able to perfectly fit training sets of polynomial size in the number of qubits. By revealing this ability to memorize random data, our results rule out the good generalization guarantees with few training data from uniform bounds [53, 55]. To clarify, _our experiments do not study the generalization capacity of state-of-the-art QML_. Instead, _we expose the limitation of uniform generalization bounds when applied to these models_. While QML models have demonstrated good generalization performance in some settings [20, 46, 53, 55, 66, 67, 68], our contributions do not explain why or how they achieve it. We highlight that the reasons behind their successful generalization remain elusive.
## II Results
### Statistical learning theory background
Figure 1: **Visualization of our framework.** (a) In the _empirical experiments_, a distribution of labeled quantum data \(\mathcal{D}\) undergoes a randomization process, leading to a corrupted data distribution \(\tilde{\mathcal{D}}\). The training and a test set are drawn independently from each distribution. Then, the training sets are fed into an optimization algorithm, which is employed to identify the best fit for each data set individually from a family of parameterized quantum circuits \(\mathcal{F}_{Q}\). This process generates two hypotheses: one for the original data \(f_{\text{original}}\) and another for the corrupted data \(f_{\text{corrupted}}\). We empirically find that the labeling functions can perfectly fit the training data, leading to small training errors. In parallel, \(f_{\text{original}}\) achieves a small test error, indicating good learning performance, and quantified by a _small_ generalization gap \(\operatorname{gen}(f_{\text{original}})=\) small. On the contrary, the randomization process causes \(f_{\text{corrupted}}\) to achieve a large test error, which in turn results in a _large_ generalization gap \(\operatorname{gen}(f_{\text{corrupted}})=\) large. (b) Regarding _uniform generalization bounds_, it is worth noting that this corner of QML literature assigns the same upper bound \(g_{\text{unif}}\) to the entire function family without considering the specific characteristics of each individual function. Finally, we combine two significant findings: (1) We have identified a hypothesis with a large empirical generalization gap, and (2) the uniform generalization bounds impose identical upper bounds on all hypotheses. Consequently, we conclude that any uniform generalization bound derived from the literature must be regarded as “large”, indicating that all such bounds are _loose_ for that training data size. The notion of a loose generalization bound does not exclude the possibility of achieving good generalization; rather, it fails to explain or predict such successful behavior.

We begin by briefly introducing the necessary terminology for discussing our findings in the framework of _supervised learning_. We denote \(\mathcal{X}\) as the input domain and \(\mathcal{Y}\) as the set of possible labels. We assume there is an unknown but fixed distribution \(\mathcal{D}(\mathcal{X}\times\mathcal{Y})\) from which the data originate. Let \(\mathcal{F}\) represent the family of functions that map \(\mathcal{X}\) to \(\mathcal{Y}\). The _expected risk_ functional \(R\) then quantifies the predictive accuracy of a given function \(f\) for data sampled according to \(\mathcal{D}\). The _training set_, denoted as \(S\), comprises \(N\) samples drawn from \(\mathcal{D}\). The _empirical risk_ \(\hat{R}_{S}(f)\) then evaluates the performance of a function \(f\) on the restricted set \(S\). The difference between \(R(f)\) and \(\hat{R}_{S}(f)\) is referred to as the _generalization gap_, defined as
\[\operatorname{gen}(f)\coloneqq\left|R(f)-\hat{R}_{S}(f)\right|. \tag{1}\]
The dependence of \(\operatorname{gen}(f)\) on \(S\) is implied, as evident from the context. Similarly, the dependence of \(R(f)\), \(\hat{R}_{S}(f)\), and \(\operatorname{gen}(f)\) on \(\mathcal{D}\) is also implicit. We employ \(C(\mathcal{F})\) to represent any _complexity measure_ of a function family, such as the VC dimension, the Rademacher complexity, or others [62]. It is important to note that these measures are properties of the whole function family \(\mathcal{F}\), and _not_ of single functions \(f\in\mathcal{F}\).
### Numerical results
Our goal is to improve our understanding of PQCs as learning models. In particular, we tread in the domain of generalization and its interplay with the ability to memorize random data. The main idea of our work builds on the theory of randomization tests from non-parametric statistics [60]. Fig. 1 contains a visualization of our framework.
Initially, we train QNNs on quantum states whose labels have been randomized and compare the training accuracy achieved by the same learning model when trained on the true labels. Our results reveal that, in many cases, the models learn to classify the training data perfectly, regardless of whether the labels have been randomized. By altering the input data, we reach our first finding:
**Observation 1** (Fitting random labels).: _Existing QML models can accurately fit random labels to quantum states._
Next, we randomize only a fraction of the labels. We observe a steady increase in the generalization error as the label noise rises. This suggests that QNNs are capable of extracting the residual signal in the data while simultaneously fitting the noisy portion using brute-force memorization.
**Observation 2** (Fitting partially corrupted labels).: _Existing QML models can accurately fit partially corrupted labels to quantum states._
In addition to randomizing the labels, we also explore the effects of randomizing the input quantum states themselves and conclude:
**Observation 3** (Fitting random quantum states).: _Existing QML models can accurately fit labels to random quantum states._
These randomization experiments result in a remarkably large generalization gap after training without changing the circuit structure, the number of parameters, the number of training examples, or the learning algorithm. As highlighted in Ref. [59] for classical learning models, these straightforward experiments have far-reaching implications:
1. Quantum neural networks already show memorization capability for quantum data.
2. The trainability of a model remains largely unaffected by the absence of correlation between input states and labels.
3. Randomizing the labels does not change any properties of the learning task other than the data itself.
In the following, we present our experimental design and the formal interpretation of our results. Even though it would seem that our results contradict established theorems, we elucidate how and why we can prove that uniform generalization bounds are vacuous for currently tested models.
#### ii.2.1 Quantum phase recognition and randomization tests
Here, we show the numerical results of our randomization tests, focusing on a candidate architecture and a well-established classification problem: the _quantum convolutional neural network_ (QCNN) [66] and the classification of quantum phases of matter.
Classifying quantum phases of matter accurately is a relevant task for the study of condensed-matter physics [69, 70]. Moreover, due to its significance, it frequently appears as a benchmark problem in the literature [69, 71]. In our experiments, we consider the _generalized cluster Hamiltonian_
\[H=\sum_{j=1}^{n}\left(Z_{j}-j_{1}X_{j}X_{j+1}-j_{2}X_{j-1}Z_{j}X_{j+1}\right)\,, \tag{2}\]
where \(n\) is the number of qubits, \(X_{i}\) and \(Z_{i}\) are Pauli operators acting on the \(i^{\text{th}}\) qubit, and \(j_{1}\) and \(j_{2}\) are _coupling strengths_. Specifically, we classify states according to which one of four symmetry-protected topological phases they display. As demonstrated in Ref. [72], and depicted in Fig. 2, the ground-state phase diagram comprises the phases: (I) symmetry-protected topological, (II) ferromagnetic, (III) antiferromagnetic, and (IV) trivial.
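As an illustration of how the training states can be generated, the following minimal sketch (assuming periodic boundary conditions, which Eq. (2) leaves implicit) builds the Hamiltonian as a dense matrix and extracts its ground state by exact diagonalization; this is only practical for small \(n\).

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def operator_on(n, site_ops):
    # Tensor product placing the given single-qubit operators on the listed sites
    # of an n-qubit ring (indices taken modulo n, i.e. periodic boundaries assumed).
    ops = [I2] * n
    for site, op in site_ops:
        ops[site % n] = op
    return reduce(np.kron, ops)

def cluster_hamiltonian(n, j1, j2):
    # Generalized cluster Hamiltonian of Eq. (2); valid for n >= 3 so the three-site
    # term never wraps onto itself.
    H = np.zeros((2**n, 2**n))
    for j in range(n):
        H += operator_on(n, [(j, Z)])
        H -= j1 * operator_on(n, [(j, X), (j + 1, X)])
        H -= j2 * operator_on(n, [(j - 1, X), (j, Z), (j + 1, X)])
    return H

def ground_state(n, j1, j2):
    # Exact diagonalization; eigenvalues are returned in ascending order.
    _, eigvecs = np.linalg.eigh(cluster_hamiltonian(n, j1, j2))
    return eigvecs[:, 0]
```

For instance, `ground_state(8, 1.0, 0.5)` returns one \(2^{8}\)-dimensional ground-state vector for a single point \((j_{1},j_{2})\) of the phase diagram in Fig. 2.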
The learning task we undertake involves identifying the correct quantum phase given the ground state of the generalized cluster Hamiltonian for some choice of \((j_{1},j_{2})\). We generate a training set \(S=\{(|\psi_{i}\rangle,y_{i})\}_{i=1}^{N}\) by sampling coupling coefficients uniformly at random in the domain \(j_{1},j_{2}\in[-4,4]\), with \(N\) being the number of training data points, \(|\psi_{i}\rangle\) representing the ground state vectors of \(H\) corresponding to the sampled \((j_{1},j_{2})\), and \(y_{i}\) denoting the corresponding phase label among the aforementioned phases. In particular, labels are length-two bit strings \(y_{i}\in\{(0,0),(0,1),(1,0),(1,1)\}\).

Figure 2: The ground-state phase diagram of the Hamiltonian of Eq. (2).
We employ the QCNN architecture presented in Ref. [66] to address the classification problem. By adapting classical convolutional neural networks to a quantum setting, QCNNs are particularly well-suited for tasks involving spatial and temporal patterns, which makes this architecture a natural choice for phase classification problems. A unique feature of the QCNN architecture is the interleaving of convolutional and pooling layers. Convolutional layers consist of translation-invariant parameterized unitaries applied to neighboring qubits, functioning as filters between feature maps across different layers of the QCNN. Following the convolutional layer, pooling layers are introduced to reduce the dimensionality of the quantum state while retaining the relevant features of the data. This is achieved by measuring a subset of qubits and applying translationally invariant parameterized single-qubit unitaries based on the corresponding measurement outcomes. The operation of a QCNN can be interpreted as a quantum channel \(\mathcal{C}_{\vartheta}\) specified by parameters \(\vartheta\), mapping an input state \(\rho_{\text{in}}\) into an output state \(\rho_{\text{out}}\), represented as \(\rho_{\text{out}}=\mathcal{C}_{\vartheta}\left[\rho_{\text{in}}\right]\). Subsequently, the expectation value of a task-oriented Hermitian operator is measured, utilizing the resulting \(\rho_{\text{out}}\).
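For concreteness, a heavily simplified sketch of this interleaved convolution-pooling structure is given below. It assumes a recent PennyLane release with mid-circuit measurement support (`qml.measure`, `qml.cond`, `qml.StatePrep`), and the two-qubit filter is a minimal stand-in rather than the exact parameterized unitaries of Refs. [66; 53].

```python
import pennylane as qml
import numpy as np

n_qubits = 8
dev = qml.device("default.qubit", wires=n_qubits)

def conv_block(theta, w1, w2):
    # Simplified translation-invariant two-qubit "filter"; a more general
    # parameterized two-qubit unitary could be substituted here.
    qml.RY(theta[0], wires=w1)
    qml.RY(theta[1], wires=w2)
    qml.CNOT(wires=[w1, w2])

def pool_block(phi, measured, kept):
    # Pooling: measure one qubit and rotate the kept qubit conditioned on the outcome.
    outcome = qml.measure(measured)
    qml.cond(outcome, qml.RY)(phi, wires=kept)

@qml.qnode(dev)
def qcnn(params, input_state):
    # params: one (theta, phi) pair per layer; input_state: 2**n_qubits amplitudes.
    qml.StatePrep(input_state, wires=range(n_qubits))
    active = list(range(n_qubits))
    for theta, phi in params:
        for i in range(0, len(active) - 1, 2):
            conv_block(theta, active[i], active[i + 1])
        survivors = []
        for i in range(0, len(active) - 1, 2):
            pool_block(phi, active[i], active[i + 1])
            survivors.append(active[i + 1])
        active = survivors
        if len(active) <= 2:
            break
    return qml.probs(wires=active)  # (p_00, p_01, p_10, p_11) of Eq. (3)
```

With \(n=8\) qubits, two layers reduce the register as \(8\to 4\to 2\), so `params` would hold two \((\theta,\phi)\) pairs.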
Our implementation follows that presented in Ref. [53]. The QCNN maps an input state vector \(|\psi\rangle\), consisting of \(n\) qubits, into a \(2\)-qubit output state. For the labeling function given the output state, we use the probabilities \((p_{00},p_{01},p_{10},p_{11})\) of each bit-string outcome when the state is measured in the computational basis. In particular, we predict the label \(\hat{y}\) as the measurement outcome with the _lowest_ probability, according to
\[|\psi\rangle\mapsto(p_{b})_{b\in\{0,1\}^{2}}\mapsto\hat{y}\coloneqq\operatorname {arg\,min}_{b\in\{0,1\}^{2}}p_{b}\,. \tag{3}\]
For each experiment repetition, we generate data from the corresponding distribution \(\mathcal{D}\). For training, we use the loss function
\[\ell\left(\vartheta;(|\psi\rangle,y)\right)\coloneqq\langle y|\left(\mathcal{C}_{\vartheta}\left[|\psi\rangle\!\langle\psi|\right]\right)|y\rangle\,. \tag{4}\]
Thus, given a training set \(S\sim\mathcal{D}^{N}\), we minimize the empirical risk
\[\hat{R}_{S}(\vartheta)=\frac{1}{N}\sum_{i=1}^{N}\langle y_{i}|\left(\mathcal{ C}_{\vartheta}\left[|\psi_{i}\rangle\!\langle\psi_{i}|\right]\right)|y_{i}\rangle\,. \tag{5}\]
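In code, the prediction rule of Eq. (3) and the loss of Eqs. (4) and (5) act directly on the diagonal of the two-qubit output state. A minimal NumPy sketch, with the output density matrices assumed to be given, could read:

```python
import numpy as np

BITSTRINGS = ["00", "01", "10", "11"]

def outcome_probabilities(rho_out):
    # Diagonal of the 2-qubit output density matrix in the computational basis.
    return np.real(np.diag(rho_out))

def predict_label(rho_out):
    # Eq. (3): the predicted label is the bit string with the *lowest* probability.
    return BITSTRINGS[int(np.argmin(outcome_probabilities(rho_out)))]

def loss(rho_out, y):
    # Eq. (4): <y| C_theta[|psi><psi|] |y>, i.e. the outcome probability of the true
    # label, which training minimizes (consistent with the argmin prediction rule).
    return outcome_probabilities(rho_out)[BITSTRINGS.index(y)]

def empirical_risk(rhos_out, labels):
    # Eq. (5): average of the loss over the training set.
    return float(np.mean([loss(r, y) for r, y in zip(rhos_out, labels)]))
```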
We consider three ways of altering the original data distribution \(\mathcal{D}_{0}\) from where data is sampled, namely: (_a_) data wherein true labels are replaced by _random labels_\(\mathcal{D}_{1}\), (_b_) randomization of only a fraction \(r\in[0,1]\) of the data, mixing real and _corrupted labels_ in the same distribution \(\mathcal{D}_{r}\), and (_c_) replacing the input quantum states with _random states_\(\mathcal{D}_{\text{st}}\), instead of randomizing the labels. In each of these randomization experiments, the generalization gap and the risk functionals are defined according to the relevant distribution \(\hat{\mathcal{D}}\in\{\mathcal{D}_{1},\mathcal{D}_{r},\mathcal{D}_{\text{st}}\}\). In all cases, the correlations between states and labels are gradually lost, which means we can control how much signal there is to be learned. In experiments where data-label correlations have vanished entirely, learning is impossible. One could expect the impossibility of learning to manifest itself during the training process, e.g., through lack of convergence. We observe that training the QCNN model on random data results in almost perfect classification performance on the training set. At face value, this means the QCNN is able to _memorize_ noise.
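A training set with a tunable level of label corruption can be obtained by relabeling a fraction \(r\) of the samples uniformly at random; the helper below is a hypothetical sketch of this procedure, not the code used for the experiments.

```python
import numpy as np

LABELS = [(0, 0), (0, 1), (1, 0), (1, 1)]

def corrupt_labels(labels, r, rng=None):
    # Replace a fraction r of the labels by uniformly random ones, interpolating
    # between the original data (r = 0) and fully random labels (r = 1).
    rng = np.random.default_rng() if rng is None else rng
    corrupted = list(labels)
    n_corrupt = int(round(r * len(corrupted)))
    for i in rng.choice(len(corrupted), size=n_corrupt, replace=False):
        corrupted[i] = LABELS[rng.integers(len(LABELS))]
    return corrupted
```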
In the following experiments, we approximate the expected risk \(R\) with an empirical risk \(\hat{R}_{T}\) using a large _test set_\(T\). This test set is sampled independently from the same distribution as the training set \(S\). In particular, the test set contains \(1000\) points for all the experiments, \(T\sim\mathcal{D}^{1000}\).
Additionally, we report our results using the _probability of error_, which is further elucidated below. Consequently, we employ the term "error" instead of "risk". Henceforth, we refer to _test accuracy_ and _test error_ as accurate proxies for the _true accuracy_ and _expected risk_, respectively. All our experiments follow a three-step process:
1. Create a training set \(S\sim\mathcal{D}^{N}\) and a test set \(T\sim\mathcal{D}^{1000}\).
2. Find a function \(f\) that approximately minimizes the empirical risk of Eq. (5).
3. Compute the training error \(\hat{R}_{S}(f)\), test error \(\hat{R}_{T}(f)\), and the empirical generalization gap \(\operatorname{gen}_{T}(f)=|\hat{R}_{T}(f)-\hat{R}_{S}(f)|\).
For ease of notation, we shall employ \(\operatorname{gen}(f)\) instead of \(\operatorname{gen}_{T}(f)\) while discussing the generalization gap without reiterating its empirical nature.
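Schematically, the three steps amount to the following, with `fit` and `risk` standing in for the optimizer of Eq. (7) and the probability-of-error estimate on a labeled set, respectively:

```python
def run_experiment(train_set, test_set, fit, risk):
    # Step 2: approximate empirical risk minimization on the training set.
    f = fit(train_set)
    # Step 3: training error, test error (proxy for the expected risk), and gap.
    train_error = risk(f, train_set)
    test_error = risk(f, test_set)
    return train_error, test_error, abs(test_error - train_error)
```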
_Random labels._ We start our randomization tests by drawing data from \(\mathcal{D}_{1}\), wherein the true labels have been replaced by random labels sampled uniformly from \(\{(0,0),(0,1),(1,0),(1,1)\}\). In order to sample from \(\mathcal{D}_{1}\), a labeled pair can be obtained from the original data distribution \((|\psi\rangle,y)\sim\mathcal{D}_{0}\), after which the label \(y\) can be randomly replaced. In this experiment, we have employed QCNNs with varying numbers of qubits \(n\in\{8,16,32\}\). For each qubit number, we have generated training sets with different sizes \(N\in\{5,8,10,14,20\}\) for both random and real labels. The models were trained individually for each \((n,N)\) combination.
In Fig. 3 (a), we illustrate the results obtained when fitting random and real labels, as well as random states (discussed later). Each data point in the figure represents the average generalization gap achieved for a fixed training set size \(N\) for the different qubit numbers \(n\). We observe a large gap for the random labels, close to \(0.75\), which should be seen as effectively maximal: perfect training accuracy and the same test accuracy as random guessing would yield. This finding suggests that the QCNN can be adjusted to fit the random labels in the training set, despite the labels bearing no correlation to
the input states. As the training set sizes increase, since the capacity of the QCNN is fixed, achieving a perfect classification accuracy for the entire training set becomes increasingly challenging. Consequently, the generalization gap diminishes. It is worth noting that a decrease in training accuracy is also observed for the true labeling of data [53].
_Corrupted labels._ Next to the randomization of labels, we further investigate the QCNN fitting behavior when data come with varying levels of label corruption \(\mathcal{D}_{r}\), ranging from no labels being altered (\(r=0\)) to all of them being corrupted (\(r=1\)). The experiments consider different numbers of training points \(N\in\{4,6,8\}\) and varying numbers of qubits \(n\in\{8,10,12\}\). For each combination of \((n,N)\), we start the experiments with no randomized labels (\(r=0\)). Then, we gradually increase the ratio of randomized labels until all labels are altered, that is, \(r\in\{0,1/N,2/N,\ldots,1\}\). Fig. 3 (b) shows the test error after convergence. In all repetitions, this experiment reaches \(100\%\) training accuracy. We observe a steady increase in the test error as the noise level intensifies. This suggests that QCNNs are capable of extracting the remaining signal in the data while simultaneously fitting the noise by brute force. As the label corruption approaches \(1\), the test error converges to \(75\%\), corresponding to the performance of random guessing.
The inset in Fig. 3 (b) focuses on the experiments conducted with \(N=6\) training points. In particular, we examine the relationship between the learning speed and the ratio of random labels. The plot shows an average over five experiment repetitions. Remarkably, each individual run exhibits a consistent pattern: the training error initially remains high, but it converges quickly once the decrease starts. This behavior was also reported for classical neural networks [59]. The precise moment at which the training error begins to decrease seems to be heavily dependent on the random initialization of the parameters. However, it also relates to the signal-to-noise ratio \(r\) in the training data. Notably, we observe a long and stable plateau for the intermediate cases \(r=1/3\) and \(r=2/3\), roughly halfway between the starting training error and zero. This plateau represents an average between those runs where the rapid decrease has not yet started and those where the convergence has already been achieved, leading to significant variance. Interestingly, in the complete absence of correlation between states and labels (\(r=1\)), the QCNN, on average, perfectly fits the training data even slightly faster than for the real labels (\(r=0\)).

Figure 3: **Randomization tests.** (a) Generalization gap as a function of the training set size achieved by the _quantum convolutional neural network_ (QCNN) architecture. The QCNN is trained on real data, random label data, and random state data. The horizontal dashed line should be thought of as the largest generalization gap attainable, characterized by zero training error and test error equal to random guessing (\(0.75\) due to the task having four possible classes). The shaded area corresponds to the standard deviation across different experiment repetitions. For the real data and random labels, we employed \(8,16\), and \(32\) qubits, while for the random states, we employed \(8,10\), and \(12\) qubits. We observe that both random labels and random states exhibit a similar trend in the generalization gap, with a slight discrepancy in height due to the different relative frequencies of the four classes under the respective randomization protocols. In both cases, the test accuracy fails to surpass that of random guessing. Notably, the largest generalization gap occurs in the random labels experiments when using a training set of up to size \(N=10\), highlighting the memorization capacity of this particular QCNN. The training with uncorrupted data yields behavior in accordance with previous results [53]. (b) Test error as a function of the ratio of label corruption after training the QCNN on training sets of size \(N\in\{4,6,8\}\) and \(n=8\). The plot illustrates the interpolation between uncorrupted data (\(r=0\)) and random labels (\(r=1\)). As the label corruption approaches \(1\), the test accuracy drops to levels of random guessing. The dependence between the test error and label corruption reveals the ability of the QCNN to extract remaining signal despite the noise in the initial training set. The inset focuses on the case \(N=6\). It conveys the optimization speed for four different levels of corruption, namely, \(0,2,4\) and \(6\) out of \(6\) labels being corrupted, and provides insights into the average convergence time. The shaded area denotes the variance over five experiment repetitions with independently initialized QCNN parameters. Surprisingly, on average, fitting completely random noise takes less time than fitting unperturbed data. This phenomenon emphasizes that QCNNs can accurately memorize random data.
_Random states._ In this scenario, we introduce randomness to the input ground state vectors rather than to the labels. Our goal is to introduce a certain degree of randomization into the quantum states while preserving some inherent structure in the problem. To achieve this, we define the data distribution \(\mathcal{D}_{\text{st}}\) for the random quantum states in a specific manner instead of just drawing pure random states uniformly.
To sample data from \(\mathcal{D}_{\text{st}}\), we first draw a pair from the original distribution \((|\psi\rangle,y)\sim\mathcal{D}_{0}\), and then we apply the following transformation to the state vector \(|\psi\rangle\): We compute the mean \(\mu_{\psi}\) and variance \(\sigma_{\psi}\) of its amplitudes and then sample new amplitudes randomly from a Gaussian distribution \(\mathcal{N}(\mu_{\psi},\sigma_{\psi})\). After the new amplitudes are obtained, we normalize them. The random state experiments were performed with varying numbers of qubits \(n\in\{8,10,12\}\) and training set sizes \(N\in\{5,8,10,14,20\}\).
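Concretely, the randomization of a single state vector can be sketched as follows, assuming real amplitudes (which can be chosen for the ground states of the real Hamiltonian of Eq. (2)) and interpreting \(\sigma_{\psi}\) as the standard deviation of the amplitudes:

```python
import numpy as np

def randomize_state(psi, rng=None):
    # Keep only the mean and spread of the original amplitudes: draw new amplitudes
    # from N(mu_psi, sigma_psi) and renormalize (distribution D_st).
    rng = np.random.default_rng() if rng is None else rng
    mu, sigma = float(np.mean(psi)), float(np.std(psi))
    new_amplitudes = rng.normal(mu, sigma, size=np.shape(psi))
    return new_amplitudes / np.linalg.norm(new_amplitudes)
```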
In Fig. 3 (a), we show the results for fitting random input states, together with the random and real label experiment outcomes. The empirical generalization gaps achieved by the QCNN for random states exhibit a similar shape to those obtained for random labels. Indeed, a slight difference in the relative occurrences of each of the four classes leads to improved performance by _biased random guessing_. We observe that the QCNN can perfectly fit the training set for few data, and then the generalization gap decreases, analogously to the scenario with random labels.
The case of random states presents an intriguing aspect. The QCNN architecture was initially designed to unveil and exploit local correlations in input quantum states [66]. However, our randomization protocol in this experiment removes precisely all local information, leaving only global information from the original data, such as the mean and the variance of the amplitudes. This was not the case in the random labels experiment, where the input ground states remained unaltered while only the labels were modified. The ability of the QCNN to memorize random data seems to be unaffected despite its structure to exploit local information.
#### ii.2.2 Implications
Our findings indicate that novel approaches are required in studying the capabilities of quantum neural networks. Here, we elucidate how our experimental results fit the statistical learning theoretic framework. The main goal of machine learning is to find the _expected risk minimizer_\(f^{\text{opt}}\) associated with a given learning task,
\[f^{\text{opt}}\coloneqq\operatorname*{arg\,min}_{f\in\mathcal{F}}R(f)\,. \tag{6}\]
However, given the unknown nature of the complete data distribution \(\mathcal{D}\), the evaluation of \(R\) becomes infeasible. Consequently, we must resort to its unbiased estimator, the _empirical risk_\(\hat{R}_{S}\). We let an optimization algorithm obtain \(f^{*}\), an approximate empirical risk minimizer
\[f^{*}\approx\operatorname*{arg\,min}_{f\in\mathcal{F}}\hat{R}_{S}(f)\,. \tag{7}\]
Nonetheless, although \(\hat{R}_{S}(f)\) is an unbiased estimator for \(R(f)\), it remains uncertain whether the empirical risk minimizer \(f^{*}\) will yield a low expected risk \(R(f^{*})\). The generalization gap \(\operatorname{gen}(f)\) then comes in as the critical quantity of interest, quantifying the difference in performance on the training set \(\hat{R}_{S}(f)\) and the expected performance on the entire domain \(R(f)\).
In the literature, extensive efforts have been invested in providing robust guarantees on the magnitude of the generalization gap of QML models through so-called _generalization bounds_[44, 45, 46, 47, 48, 49, 50, 51, 53, 55, 62]. These theorems assert that under reasonable assumptions, the generalization gap of a given model can be upper bounded by a quantity that can depend on various parameters. These include properties of the function family, the optimization algorithm used, or the data distribution. The derivation of a generalization bound for a learning model typically involves rigorous mathematical calculations and often considers restricted scenarios. Many results in the literature fit the following template:
**Generic _uniform_ generalization bound.** Let \(\mathcal{F}\) be a hypothesis class, and let \(\mathcal{D}\) be any data-generating distribution. Let \(R\) be a risk functional associated to \(\mathcal{D}\), and \(\hat{R}_{S}\) its empirical version, for a given set of \(N\) labeled data: \(S\sim\mathcal{D}^{N}\). Let \(C(\mathcal{F})\) be a complexity measure of \(\mathcal{F}\). Then, for any function \(f\in\mathcal{F}\), the generalization gap \(\operatorname{gen}(f)\) can be upper bounded, with high probability, by
\[\operatorname{gen}(f)\leq g_{\text{unif}}(\mathcal{F})\,, \tag{8}\]
where usually \(g_{\text{unif}}(\mathcal{F})\in\mathcal{O}\left(\operatorname{poly}(C(\mathcal{ F}),1/N)\right)\) is given explicitly. We make the dependence of \(g_{\text{unif}}\) on \(N\) implicit for clarity. The _high probability_ is taken with respect to repeated sampling from \(\mathcal{D}\) of sets \(S\) of size \(N\).
We refer to these as _uniform_ generalization bounds by virtue of them being equal for all elements \(f\) in the class \(\mathcal{F}\). Also, these bounds apply irrespective of the probability distribution \(\mathcal{D}\). The usefulness of uniform generalization bounds lies in their ability to provide performance guarantees for a model before undertaking any computationally expensive training. Thus, it becomes of interest to identify ranges of values for \(C(\mathcal{F})\) and \(N\) that result in a diminishing or entirely vanishing generalization gap (such as the limit \(N\to\infty\)). These bounds usually deal with asymptotic regimes. Thus it is sometimes unclear how tight their statements are for practical scenarios.
In cases where the risk functional is itself bounded, we can further refine the bound. For example, if we take \(R^{e}\) to be the probability of error
\[R^{e}(f)=\mathbb{P}_{(x,y)\sim\mathcal{D}}\left[f(x)\neq y\right]\in\left[0,1 \right], \tag{9}\]
we can immediately say that, for any \(f\), there is a trivial upper bound on the generalization gap \(\operatorname{gen}(f)\leq 1\). Thus, the
generalization bound could be rewritten as
\[\mathrm{gen}(f)\leq\min\left\{1,g_{\text{unif}}(\mathcal{F})\right\}\,. \tag{10}\]
This additional threshold renders the actual value of \(g_{\text{unif}}(\mathcal{F})\) of considerable significance.
We now have the necessary tools to discuss the results of our experiments properly. Randomizing the data simply involves changing the data-generating distribution, e.g., from the original \(\mathcal{D}_{0}\) to a randomized \(\hat{\mathcal{D}}\in\{\mathcal{D}_{1},\mathcal{D}_{r},\mathcal{D}_{\text{ sl}}\}\). As we have just remarked, the r.h.s. of Eq. (8) does not change for different distributions, implying that the same upper bound on the generalization gap applies to both data coming from \(\mathcal{D}_{0}\), or corrupted data from \(\hat{\mathcal{D}}\). If data from \(\hat{\mathcal{D}}\) is such that inputs and labels are uncorrelated, then any hypothesis cannot be better than random guessing in expectation. This results in the expected risk value being close to its maximum. For instance, in the case of the probability of error and a classification task with \(M\) classes, if each input is assigned a class uniformly at random, then it must hold for any hypothesis \(f\),
\[R^{e}(f)\approx 1-\frac{1}{M}\,, \tag{11}\]
indicating that the expected risk must always be large.
A large expected risk does not, by itself, imply a large generalization gap, i.e., in general \(\mathrm{gen}(f)\not\approx R^{e}(f)\). For instance, if a learning model is unable to fit a corrupted training set \(S\), \(\hat{R}^{e}_{S}(f)\approx R^{e}(f)\), then one would have a small generalization gap \(\mathrm{gen}(f)\approx 0\). Conversely, for the generalization gap of \(f\) to be large \(\mathrm{gen}(f)\approx 1-1/M\), the learning algorithm must find a function that can actually fit \(S\), with \(\hat{R}^{e}_{S}(f)\approx 0\). Yet, even in this last scenario, _the uniform generalization bound still applies_.
Let us denote \(N^{\prime}\) the size of the largest training set \(S\) for which we found a function \(f_{r}\) able to fit the random data \(\hat{R}^{e}_{S}(f_{r})\approx 0\) (which leads to a large generalization gap \(\mathrm{gen}(f_{r})\approx 1-1/M\)). Since the uniform generalization bound applies to all functions in the class \(f\in\mathcal{F}\), we have found
\[g_{\text{unif}}(\mathcal{F})\gtrsim 1-\frac{1}{M} \tag{12}\]
as an empirical lower bound to the generalization bound. This reveals that the generalization bound is _vacuous_ for training sets of size up to \(N^{\prime}\). Noteworthy is also that, further than \(N^{\prime}\), there is a regime where the generalization bound remains impractically large.
The strength of our results resides in the fact that we did not need to specify a complexity measure \(C(\mathcal{F})\). Our empirical findings apply to _every_ uniform generalization bound, irrespective of its derivation. This gives strong evidence for the need for a perspective shift to the study of generalization in quantum machine learning.
### Analytical results
In the previous section, we provided evidence that QNNs can accurately fit random labels. Our empirical findings are restricted to the numbers of qubits and training samples we tested. While this may seem limiting, these are in fact the regimes of practical interest, considering the available empirical evidence. In this section, we study the memorization capability of QML models of arbitrary size in terms of _finite sample expressivity_.
Finite sample expressivity refers to the ability of a function family to memorize arbitrary data. In general, expressivity is the ability of a hypothesis class to approximate functions in the entire domain \(\mathcal{X}\). Conversely, finite sample expressivity studies the ability to approximate functions on fixed-size subsets of \(\mathcal{X}\). Although finite sample expressivity is a weaker notion of expressivity, it can be seen as a stronger alternative to the _pseudo-dimension_ of a hypothesis family [62, 44].
The importance of finite sample expressivity lies in the fact that machine learning tasks always deal with finite training sets. Suppose a given model is found to be able to realize any possible labeling of an available training set. Then, reasonably one would not expect the model to learn meaningful insights from the training data. It is plausible that some form of learning may still occur, albeit without a clear understanding of the underlying mechanisms. However, under such circumstances, uniform generalization bounds would inevitably become trivial.
**Theorem 1** (Finite sample expressivity of quantum circuits).: _Let \(\rho_{1},\ldots,\rho_{N}\) be unknown quantum states on \(n\in\mathbb{N}\) qubits, with \(N\in\mathcal{O}(\mathrm{poly}(n))\), and let \(W\) be the Gram matrix_
\[[W]_{i,j}=\mathrm{tr}(\rho_{i}\rho_{j})\,. \tag{13}\]
_If \(W\) is well-conditioned, then, for any \(y_{1},\ldots,y_{N}\in\mathbb{R}\) real numbers, we can construct a quantum circuit \(\mathcal{M}_{y}\) of \(\mathrm{poly}(n)\) depth such that_
\[\mathrm{tr}(\rho_{i}\mathcal{M}_{y})=y_{i}\,. \tag{14}\]
The proof is given in Appendix A. Theorem 1 gives us a constructive approach to, given a finite set of quantum states and real labels, find a quantum circuit that produces each of the labels as the expectation value for each of the input states. This should give an intuition for why QML models seem capable of learning random labels and random quantum states. Nevertheless, as stated, the theorem falls short in applying specifically to PQCs. The construction we propose requires query access to the set of input states every time the circuit is executed. We estimate the values \(\mathrm{tr}(\rho_{i}\rho_{j})\) employing the \(\mathrm{SWAP}\) test. The circuit that realizes the \(\mathrm{SWAP}\) test bears little relation to usual QML ansatze. Ideally, if possible, one should impose a familiar PQC structure and drop the need to use the input states.
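To build intuition for why such a circuit can exist, the sketch below illustrates one natural reading of the construction with plain NumPy (the actual proof and circuit implementation are those of Appendix A; the random states and labels here are placeholders): if the Gram matrix \(W\) is well-conditioned, the observable \(\sum_{j}\alpha_{j}\rho_{j}\) with \(W\alpha=y\) reproduces the labels of Eq. (14).
```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(dim):
    # Random full-rank mixed state from a Ginibre matrix (placeholder input states).
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

n, N = 3, 5                       # qubits and number of states, N in O(poly(n))
dim = 2 ** n
rhos = [random_density_matrix(dim) for _ in range(N)]
y = rng.normal(size=N)            # arbitrary real labels

W = np.array([[np.trace(a @ b).real for b in rhos] for a in rhos])  # Gram matrix, Eq. (13)
alpha = np.linalg.solve(W, y)     # requires W to be well-conditioned
M_y = sum(a * r for a, r in zip(alpha, rhos))                       # candidate observable

print([round(np.trace(r @ M_y).real, 6) for r in rhos])             # matches y, cf. Eq. (14)
```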
Next, we propose an alternative, more restricted version of the same statement, keeping QML in mind as the desired application. For it, we need a sense of distinguishability of quantum states.
**Definition 1** (Distinguishability condition).: _We say \(n\)-qubit quantum states \(\rho_{1},\ldots,\rho_{N}\) fulfill the distinguishability condition if we can find intermediate states \(\rho_{i}\mapsto\hat{\rho}_{i}\) based on some generic quantum state approximation protocol such that they fulfill the following:_
1. _For each_ \(i\in[N]\)_,_ \(\hat{\rho}_{i}\) _is efficiently preparable with a PQC._
2. _The matrix_ \(\hat{W}\) _can be efficiently constructed, with_ \[\hat{W}_{i,j}=\operatorname{tr}(\rho_{i}\hat{\rho}_{j})\,.\] (15)
3. _The matrix_ \(\hat{W}\) _is well-conditioned._
Notable examples of approximation protocols are those inspired by classical shadows [73] or tensor networks [74]. For instance, similarly to classical shadows, one could draw unitaries from an approximate \(\operatorname{poly}(n)\)-design using a brickwork ansatz with \(\operatorname{poly}(n)\)-many layers of i.i.d. Haar random \(2\)-local gates. For a given quantum state \(\rho\), one produces several pairs \((U,b)\) where \(U\) is the randomly drawn unitary and \(b\) is the bitstring outcome after performing a computational basis measurement of \(U\rho U^{\dagger}\), and one refers to each individual pair as a _snapshot_. Notice that this approach does not follow exactly the traditional classical shadows protocol. Our end goal is to prepare the approximation as a PQC, rather than utilizing it for classical simulation purposes. In particular, we do not employ the inverse measurement channel, since that would break complete positivity and thus the corresponding approximation would not be a quantum state. For each snapshot, one can efficiently prepare the corresponding quantum state \(U^{\dagger}|b\rangle\!\langle b|U\) by undoing the unitary that was drawn after preparing the corresponding computational basis state vector \(|b\rangle\). Given a collection of snapshots \(\{(U_{1},b_{1}),\dots,(U_{M},b_{M})\}\), an approximation protocol would consist of preparing the mixed state \(\frac{1}{M}\sum_{m=1}^{M}U_{m}^{\dagger}|b_{m}\rangle\!\langle b_{m}|U_{m}\). Since each \(b_{m}\) is prepared with at most \(n\) Pauli-\(X\) gates and each \(U_{m}\) is a brickwork PQC architecture, this approximation protocol fulfills the restriction of efficient preparation from Definition 1. Whether or not this or any other generic approximation protocol is accurate enough for a specific choice of quantum states we discuss in Section IV.2. There, we present Algorithm 1 together with its correctness statement as Theorem 3. Given the input states \(\rho_{1},\dots,\rho_{N}\) Algorithm 1 moreover allows to combine several quantum state approximation protocols in order to produce a well-conditioned matrix of inner products \(\hat{W}\).
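The following sketch makes the snapshot-based protocol above concrete for small systems, with a dense Haar-random unitary standing in for the brickwork approximate-design circuit (an assumption made for brevity); as in the text, no inverse measurement channel is applied, so the output is a valid quantum state rather than the classical-shadow estimator.
```python
import numpy as np

rng = np.random.default_rng(1)

def haar_random_unitary(dim):
    # QR of a Ginibre matrix with phase correction; stands in for the brickwork circuit.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def snapshot_state(rho, num_snapshots=500):
    """Prepare (1/M) sum_m U_m^dag |b_m><b_m| U_m from M measurement snapshots."""
    dim = rho.shape[0]
    approx = np.zeros_like(rho)
    for _ in range(num_snapshots):
        U = haar_random_unitary(dim)
        probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0.0, None)
        b = rng.choice(dim, p=probs / probs.sum())       # computational-basis outcome
        ket = np.zeros(dim, complex); ket[b] = 1.0
        approx += U.conj().T @ np.outer(ket, ket.conj()) @ U  # each term is efficiently preparable
    return approx / num_snapshots

rho = np.diag([0.7, 0.1, 0.1, 0.1]).astype(complex)      # toy 2-qubit input state
print(np.trace(rho @ snapshot_state(rho)).real)          # overlap with the approximation
```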
**Theorem 2** (Finite sample expressivity of PQCs).: _Let \(\rho_{1},\dots,\rho_{N}\) be unknown quantum states on \(n\in\mathbb{N}\) qubits, with \(N\in\mathcal{O}(\operatorname{poly}(n))\), and fulfilling the distinguishability condition of Definition 1. Then, we can construct a PQC \(\hat{\mathcal{M}}(\vartheta)\) of \(\operatorname{poly}(n)\) depth such that, for any \(y=(y_{1},\dots,y_{N})\in\mathbb{R}^{N}\) real numbers, we can efficiently find a specification of the parameters \(\vartheta_{y}\) such that_
\[\operatorname{tr}(\rho_{i}\hat{\mathcal{M}}(\vartheta_{y}))=y_{i}\,. \tag{16}\]
The proof is given in Appendix B. With Theorem 2, we understand that PQCs can produce any labeling of arbitrary sets of quantum states, provided they fulfill our distinguishability condition.
Notice that Definition 1 is needed for the correctness of Theorem 2. We require knowledge of an efficient classical description of the quantum states for two main reasons. On the one hand, PQCs are the object of our study. Hence, we need to prepare the approximation efficiently as a PQC. On the other hand, the distinguishability condition is also enough to prevent us from running into computational-complexity bottlenecks, like those arising from the distributed inner product estimation results in Ref. [75].
## III Discussion
We next discuss the implications of our results and suggest research avenues to explore in the future. We have shown that quantum neural networks (QNNs) can fit random data, including randomized labels or quantum states. We provided a detailed explanation of how to place our findings in a statistical learning theory context. We do not claim that uniform generalization bounds are wrong or that any prior results are false. Instead, we show that the statements of theorems that fit our generic uniform template must be vacuous for the regimes where the models are able to fit a large fraction of random data.
Our numerical results suggest that we must reach further than _uniform_ generalization bounds to fully understand _quantum machine learning_ (QML) models. In particular, experiments like ours immediately problematize approaches based on complexity measures like the _VC dimension_, the _Rademacher complexity_, and all their uniform relatives. To the best of our knowledge, all generalization bounds derived for QML so far are of the uniform kind. Therefore, our findings highlight the need for a perspective shift in generalization for QML. In the future, it will be interesting to conduct causation experiments on QNNs using non-uniform generalization measures. Promising candidates for good generalization measures in QML include the time to convergence of the training procedure, the geometric sharpness of the minimum the algorithm converged to, and the robustness against noise in the data [76].
We selected one of the most promising QML architectures for our experiments, known as the _quantum convolutional neural network_ (QCNN). We considered the task of classifying quantum phases of matter, which is a state-of-the-art application. The structure of the QCNN, with its equivariant and pooling layers, results in an ansatz with restricted expressivity. Its core features, including intermediate measurements, parameter-sharing, and logarithmic depth, contribute to higher bias and lower variance. This means the QCNN should display better generalization behavior than, for example, the usual hardware-efficient ansatze [77]. Most complexity measures are monotonous functions of the expressivity of the function family, and uniform generalization bounds are monotonous functions of a complexity measure. Therefore, our demonstration that uniform generalization bounds applied to the QCNN family are trivially loose immediately implies that the same bounds applied to less restricted models must also be vacuous. In this sense, our results for QCNNs carry over to the entirety of unrestricted QML ansatze. Overall, our study adds to the evidence supporting the need for a proper understanding of symmetries and equivariance in QML [78, 79, 80, 55].
In addition to our numerical experiments, we have analytically shown that polynomially-sized QNNs are able to fit arbitrary labeling of data sets. This seems to contradict claims that few training data are provably sufficient to guarantee good generalization in QML, raised e.g. in Ref. [53]. Our analytical and numerical results do not preclude the possibility of good generalization with few training data but rather indicate we cannot _guarantee_ it with arguments based on uniform generalization bounds. The reasons why successful generalization might occur have yet to be discovered.
We have brought the randomization tests of Ref. [59] to the quantum level, relating them to the task of quantum phase recognition as a representative example of state-of-the-art QML. Upon first glance, the training set sizes employed in our randomization experiments may be relatively small compared to the classical learning tasks investigated in Ref. [59]. However, it is essential to consider both studies within their respective contexts. In Ref. [59], the considered learning models were regarded as the best in terms of generalization for the common benchmark tasks. As previously mentioned, good generalization performance has been reported in QML, particularly for classifying quantum phases of matter using a QCNN architecture. At present, this combination of model and task is also among the best leading approaches concerning generalization within the QML literature. The range of sizes for which we demonstrated memorization behavior aligns with the size regime for which good generalization performance was achieved. It is important to note that while the actual size scales in the classical case are orders of magnitude larger than those presented here, both studies focus on the optimal approaches available at the time.
Despite the parallelism between our work and Ref. [59], it is essential to be aware of the underlying differences between both studies. The notion of _overparameterization_1 plays a critical role in classical machine learning. Only with the onset of models containing far more trainable parameters than input dimensions did the traditional understanding of generalization start to dwindle. In contrast, although the number of parameters in the considered architectures is larger than the size of the training sets, they exhibit a logarithmic scaling with the number of qubits. Meanwhile, the number of dimensions of the quantum states scales exponentially. Hence, it is inappropriate to categorize the models we have investigated as _large_ in the same way as the classical models in Ref. [59]. This observation reveals a promising research direction: not only must we rethink our approach to studying generalization in QML, but we must also recognize that the mechanisms leading to successful generalization in QML may differ entirely from those in classical machine learning. On a higher level, this work exemplifies the necessity of establishing connections between the literature on classical machine learning and the evolving field of quantum machine learning.
Footnote 1: It is important to distinguish the notion of overparameterization in classical ML from the recently introduced definition of overparameterization in QML [41], which under the same name, deals with different concepts.
## IV Methods
### Numerical methods
This section provides a comprehensive description of our numerical experiments, including the computation techniques employed for the random and real label implementations, as well as the random state and partially-corrupted label implementations.
_Random and real label implementations_. The test and training ground state vectors \(|\psi_{i}\rangle\) of the cluster Hamiltonian in Eq. (2) have been obtained variationally over _matrix product states_, in the spirit of the _density matrix renormalization group_ ansatz [81], using the software package Quimb [82]. We have utilized the matrix product state backend of TensorCircuit [83] to simulate the quantum circuits. In particular, a bond dimension of \(\chi=40\) was employed for the simulations of 16- and 32-qubit QCNNs. We find that further increasing the bond dimension does not lead to any noticeable changes in our results.
_Random state and partially-corrupted label implementations_. In this scenario, the test and training ground state vectors \(|\psi_{i}\rangle\) were obtained by directly diagonalizing the Hamiltonian. Note that our QCNN comprised a smaller number of qubits for these examples, namely, \(n\in\{8,10,12\}\). The simulation of quantum circuits was performed using Qibo [84], a software framework that allows for fast simulation of quantum circuits.
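For the small-qubit instances, direct diagonalization can be sketched as follows with NumPy. Note that we do not reproduce Eq. (2) here, so the ZXZ-stabilizer-plus-field Hamiltonian and the coupling values below are only illustrative assumptions, not the exact model used in the paper.
```python
import numpy as np
from functools import reduce

I2 = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0])

def op_chain(ops, sites, n):
    """Tensor product with `ops` placed on `sites` and identities elsewhere."""
    full = [I2] * n
    for o, s in zip(ops, sites):
        full[s] = o
    return reduce(np.kron, full)

def cluster_hamiltonian(n, h1, h2):
    # Assumed form: ZXZ stabilizer terms plus single- and two-site X fields.
    H = np.zeros((2 ** n, 2 ** n))
    for j in range(1, n - 1):
        H -= op_chain([Z, X, Z], [j - 1, j, j + 1], n)
    for j in range(n):
        H -= h1 * op_chain([X], [j], n)
    for j in range(n - 1):
        H -= h2 * op_chain([X, X], [j, j + 1], n)
    return H

evals, evecs = np.linalg.eigh(cluster_hamiltonian(n=8, h1=0.5, h2=0.2))
ground_state = evecs[:, 0]          # a |psi_i> used as a training/test input
```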
For all implementations, the training parameters were initialized randomly. The optimization method employed to update the parameters of the QCNN during training is the CMA-ES[85], a stochastic, derivative-free optimization strategy. The code generated under the current study is also available in Ref. [86].
### Analytical methods
Here, we shed light on the practicalities of Definition 1, a requirement for our central Theorem 2. Algorithm 1 allows for several approximation protocols to be combined to increase the chances of fulfilling the assumptions of Definition 1. Indeed, we can allow for the auxiliary states \(\hat{\rho}_{1},\dots,\hat{\rho}_{N}\) to be linear combinations of several approximation states while staying in the mindset of Definition 1. Then, we can cast the problem of finding an optimal weighting for the linear combination as a linear optimization problem with a positive semi-definite constraint.
With Theorem 3, we can assess the distinguishability condition of Definition 1 for specific states \(\rho_{1},\dots,\rho_{N}\) and specific approximation protocols. Theorem 3 also considers the case where different approximation protocols are combined, which does not contradict the requirements of Theorem 2.
**Theorem 3** (Conditioning as a convex program 1).: _Let \(\rho_{1},\dots,\rho_{N}\) be unknown, linearly-independent quantum states on \(n\) qubits, with \(N\in\mathcal{O}(\mathrm{poly}(n))\). For any \(i\in[N]\), let \(\sigma^{i}=(\sigma^{i}_{1},...,\sigma^{i}_{m})\) be approximations of \(\rho_{i}\), each of which can
be efficiently prepared using a PQC. Call \(\sigma=(\sigma^{1},\ldots,\sigma^{N})\). Then, the real numbers \(\alpha=(\alpha_{i,k})_{i\in[N],k\in[m]}\in\mathbb{R}^{Nm}\) define the auxiliary states \(\hat{\rho}_{1},\ldots,\hat{\rho}_{N}\) as_
\[\hat{\rho}_{i}(\alpha;\sigma^{i})=\sum_{k=1}^{m}\alpha_{i,k}\sigma^{k}_{i}\,, \tag{17}\]
_and the matrix of inner products \(\hat{W}(\alpha;\sigma)\) with entries_
\[\left[\hat{W}(\alpha;\sigma)_{i,j}\right]_{i,j\in[N]} :=\operatorname{tr}\left(\rho_{i}\hat{\rho}_{j}(\alpha;\sigma^{j })\right) \tag{18}\] \[=\sum_{k=1}^{m}\alpha_{j,k}\operatorname{tr}\left(\rho_{i}\sigma ^{j}_{k}\right)\,. \tag{19}\]
_Then, \(\|\hat{W}(\alpha;\sigma)\|\leq N\). Further, one can then decide in polynomial time whether, given \(\rho_{1},\ldots,\rho_{N}\), \(\sigma\), and \(\kappa\in\mathbb{R}\), there exists a specification of \(\alpha\in\mathbb{R}^{Nm}\) such that \(\hat{W}(\alpha;\sigma)\) is well-conditioned in the sense that \(\|\hat{W}(\alpha;\sigma)^{-1}\|^{-1}\geq\kappa\). And, if there exists such a specification, a convex semi-definite problem (SDP) outputs an instance of \(\alpha\leftarrow\mathsf{SDP}(\rho,\sigma,\kappa)\) for which \(\hat{W}\) is well-conditioned. If it exists, one can also find in polynomial time the \(\alpha\) with the smallest \(\|\cdot\|_{l_{1}}\) or \(\|\cdot\|_{l_{2}}\) norm._
Proof.: The inequality \(\|\hat{W}(\alpha;\sigma)\|\leq N\) follows from Gershgorin's circle theorem [87], given that all entries of \(\hat{W}\) are bounded between \([0,1]\). In particular, the largest singular value of the matrix \(\hat{W}\) reaches the value \(N\) when all entries are \(1\).
The expression
\[\hat{W}_{i,j}=\sum_{k=1}^{m}\alpha_{j,k}\operatorname{tr}\left(\rho_{i}\sigma ^{j}_{k}\right). \tag{20}\]
is a linear constraint on \(\alpha\) and \(\hat{W}\), for \(i,j\in[N]\), while
\[\kappa\mathbb{I}\leq\hat{W}\leq N\mathbb{I} \tag{21}\]
in matrix ordering is a positive semi-definite constraint. \(\hat{W}\leq N\mathbb{I}\) is equivalent with \(\|\hat{W}\|\leq N\), while \(\kappa\mathbb{I}\leq\hat{W}\) means that the smallest singular value of \(\hat{W}\) is lower bounded by \(\kappa\), being equivalent with
\[\|\hat{W}(\alpha;\sigma)^{-1}\|^{-1}\geq\kappa\,, \tag{22}\]
for an invertible \(\hat{W}(\alpha;\sigma)\). The test whether such a \(\hat{W}\) is well-conditioned hence takes the form of a semi-definite feasibility problem [88]. One can additionally minimize the objective functions
\[\alpha\mapsto\|\alpha\|_{l_{1}} \tag{23}\]
and
\[\alpha\mapsto\|\alpha\|_{l_{2}}\,, \tag{24}\]
both again as linear or convex quadratic and hence semi-definite problems. Overall, the problem takes the form of a semi-definite program, which can be solved with low-order polynomial run-time using interior point methods. Duality theory readily provides a rigorous certificate for the solution [88].
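As an illustration, the feasibility program from the proof can be written down directly with CVXPY; the overlap values below are random placeholders standing in for SWAP-test estimates, and the matrix ordering of Eq. (21) is imposed on the symmetric part of \(\hat{W}\). An infeasible status corresponds to Algorithm 1 (below) returning \(0\).
```python
import cvxpy as cp
import numpy as np

# overlaps[i, j, k] stands for tr(rho_i sigma^j_k); random placeholders here.
N, m, kappa = 4, 3, 0.1
rng = np.random.default_rng(0)
overlaps = rng.uniform(0.0, 1.0, size=(N, N, m))

alpha = cp.Variable((N, m))
W_hat = cp.bmat([[overlaps[i, j] @ alpha[j, :] for j in range(N)] for i in range(N)])  # Eq. (20)

# CVXPY's PSD constraints need a symmetric expression, so we impose the ordering of
# Eq. (21) on the symmetric part of W_hat via an auxiliary symmetric variable.
W_sym = cp.Variable((N, N), symmetric=True)
constraints = [
    W_sym == (W_hat + W_hat.T) / 2,
    W_sym >> kappa * np.eye(N),      # kappa * I <= W_hat
    W_sym << N * np.eye(N),          # W_hat <= N * I
]
problem = cp.Problem(cp.Minimize(cp.sum(cp.abs(alpha))), constraints)  # entrywise l1, cf. Eq. (23)
problem.solve()
print(problem.status, alpha.value)   # infeasible status maps to Algorithm 1 returning 0
```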
We propose using Algorithm 1 to construct the optimal auxiliary states \(\hat{\rho}_{1},\ldots,\hat{\rho}_{N}\), given the unknown input states \(\rho_{1},\ldots,\rho_{N}\) and a collection of available approximation protocols \(A_{1},\ldots,A_{m}\). The algorithm produces an output of either \(0\) in cases where no combination of the approximation states satisfies the distinguishability condition, or it provides the weights \(\alpha\) necessary to construct the auxiliary states as a sum of approximation states. In Theorem 3, we prove the correctness of the algorithm.
```
Input:  \(\rho=(\rho_{1},\ldots,\rho_{N})\)  \(\triangleright\) Quantum states
        \(A=(A_{1},\ldots,A_{m})\)  \(\triangleright\) State approximation algorithms
        \(\kappa\)  \(\triangleright\) Condition number
Output: \(\alpha\) such that \(\hat{W}\) is well-conditioned if possible, \(0\) otherwise.

1: for \(i\in[N],k\in[m]\) do
2:     \(\sigma^{i}_{k}\gets A_{k}(\rho_{i})\)
3: end for
4: \(\sigma\leftarrow(\sigma^{i}_{k})_{i\in[N],k\in[m]}\)
5: \(\alpha\leftarrow\mathsf{SDP}(\rho,\sigma,\kappa)\)  \(\triangleright\) From proof of Theorem 3
6: if SDP fails then
7:     return \(0\)  \(\triangleright\) No suitable \(\alpha\) found
8: else
9:     return \(\alpha\)  \(\triangleright\) \(\hat{W}\) is well-conditioned
10: end if
```
**Algorithm 1** Convex optimization state approximation
We refer to the proof of Theorem 2, in Appendix B, for an explanation of how to construct the intermediate states \(\hat{\rho}_{i}\) as a linear combination of auxiliary states \(\sigma^{i}\) without giving up the PQC framework.
## Code and data availability
The code and data generated during the current study are available in Ref. [86].
###### Acknowledgements.
The authors would like to thank Matthias C. Caro, Vedran Dunjko, Johannes Jakob Meyer, and Ryan Sweke for useful comments on an earlier version of this manuscript and Christian Bertoni, Jose Carrasco, and Sofine Jerbi for insightful discussions. The authors also acknowledge the BMBF (MUNIQC-Atoms, Hybrid), the BMWK (EniOmA, PlanQK), the QuantERA (HQCC), the Quantum Flagship (PasQuans2), the MATH+ Cluster of Excellence, the DFG (CRC 183, B01), and the Einstein Foundation (Einstein Research Unit on Quantum Devices) for financial support.
## Author Contributions
The project has been conceived by C. B.-P. Experimental design has been laid out by E. G.-F. Analytical results have been proven by E. G.-F. and J. E. Numerical experiments have been performed by C. B.-P. The project has been supervised by C. B.-P. All authors contributed to writing the manuscript.
|
2310.11590 | Towards Inferring Users' Impressions of Robot Performance in Navigation
Scenarios | Human impressions of robot performance are often measured through surveys. As
a more scalable and cost-effective alternative, we study the possibility of
predicting people's impressions of robot behavior using non-verbal behavioral
cues and machine learning techniques. To this end, we first contribute the SEAN
TOGETHER Dataset consisting of observations of an interaction between a person
and a mobile robot in a Virtual Reality simulation, together with impressions
of robot performance provided by users on a 5-point scale. Second, we
contribute analyses of how well humans and supervised learning techniques can
predict perceived robot performance based on different combinations of
observation types (e.g., facial, spatial, and map features). Our results show
that facial expressions alone provide useful information about human
impressions of robot performance; but in the navigation scenarios we tested,
spatial features are the most critical piece of information for this inference
task. Also, when evaluating results as binary classification (rather than
multiclass classification), the F1-Score of human predictions and machine
learning models more than doubles, showing that both are better at telling the
directionality of robot performance than predicting exact performance ratings.
Based on our findings, we provide guidelines for implementing these predictions
models in real-world navigation scenarios. | Qiping Zhang, Nathan Tsoi, Booyeon Choi, Jie Tan, Hao-Tien Lewis Chiang, Marynel Vázquez | 2023-10-17T21:12:32Z | http://arxiv.org/abs/2310.11590v1 | # Towards Inferring Users' Impressions of Robot Performance in Navigation Scenarios
###### Abstract
Human impressions of robot performance are often measured through surveys. As a more scalable and cost-effective alternative, we study the possibility of predicting people's impressions of robot behavior using non-verbal behavioral cues and machine learning techniques. To this end, we first contribute the SEAN TOGETHER Dataset consisting of observations of an interaction between a person and a mobile robot in a Virtual Reality simulation, together with impressions of robot performance provided by users on a 5-point scale. Second, we contribute analyses of how well humans and supervised learning techniques can predict perceived robot performance based on different combinations of observation types (e.g., facial, spatial, and map features). Our results show that facial expressions alone provide useful information about human impressions of robot performance; but in the navigation scenarios we tested, spatial features are the most critical piece of information for this inference task. Also, when evaluating results as binary classification (rather than multiclass classification), the \(F_{1}\)-Score of human predictions and machine learning models more than doubles, showing that both are better at telling the directionality of robot performance than predicting exact performance ratings. Based on our findings, we provide guidelines for implementing these predictions models in real-world navigation scenarios.
## I Introduction
As a scalable alternative to measuring subjective impressions of robot performance through surveys, recent work in Human-Robot Interaction (HRI) has explored using _implicit_ human feedback to predict these impressions [1, 2, 3, 4]. The feedback corresponds to communicative signals that are inevitably given off by people [5]. They can be reflected in human actions that change the world's physical state [6] or can be nonverbal cues, such as facial expressions [2, 3] and gaze [1, 7], displayed during social interactions. Implicit feedback serves as a burden-free information channel that sometimes persists even when people don't intend to communicate [8].
We expand the existing line of research on predicting impressions of robot performance from nonverbal human behavior to dynamic scenarios involving robot navigation. Prior work has often considered stationary tasks, like physical assembly at a desk [9] or robot photography [4], in laboratory environments. We instead explore the potential of using observations of body motion, gaze, and facial expressions to predict a human's impressions of robot performance while a robot guides them to a destination in a crowded environment. These impressions (which we also refer to as human perceptions) correspond to subjective opinions of how well a robot is performing the navigation task. Predicting them in crowded navigation scenarios is more challenging than in controlled laboratory settings because human nonverbal behavior can be a result of not only robot behavior, but also other interactants in the environment. Further, because of motion, nonverbal responses to the robot may change as a function of the interaction context. For example, imagine that the person that follows the robot looks downwards. This could reflect paying attention to the robot, or be a result of the person inspecting their nearby physical space, which changes during navigation.
Due to the complexity of reliably capturing observations of implicit feedback during navigation tasks, we performed a data collection effort using the Social Environment for Autonomous Navigation (SEAN) 2.0 [10] with Virtual Reality (VR) [11].3 Humans took part in the simulations through an avatar, which was controlled using a VR headset, as in Fig. 1. The headset enabled immersion and allowed us to capture implicit feedback features like gaze. Also, it facilitated querying the human about robot performance as navigation tasks took place. We considered robot performance as a multi-dimensional construct, similar to [4], because humans may care about many aspects of a robot's navigation behavior, as discussed in the social robot navigation literature [12, 13, 14].
Footnote 3: Dataset available at: [https://sean-together.interactive-machines.com/](https://sean-together.interactive-machines.com/)
Using the data collected with SEAN, we first investigate to what extent humans can predict the users' impression of a robot's performance (along the dimensions of perceived competence, surprise, and intention) from a visualization of the observations of interactions (or features) recorded in our navigation dataset. Second, we investigate how well various supervised learning models do this type of inference in comparison to humans. Finally, we study the generalization capabilities of supervised learning methods to unseen users.
Fig. 1: Data collection. Humans controlled an avatar in the simulation with VR (a) while they were guided by a Fetch robot (b). The screen on the desk shows what the user saw.
Our analyses bring understanding to the complexity of predicting impressions of robot performance in navigation tasks and the value of various combinations of features in this inference problem. Based on our findings, we conclude this paper with a set of suggested guidelines for implementing machine learning algorithms that infer robot performance using implicit feedback in real-world navigation scenarios.
## II Related Work
**Impressions of Robot Performance.** Understanding human impressions of robot performance is important. They can be used to evaluate robot policies [15, 16, 17] and to create better robot behavior [18, 19, 7, 20], increasing the likelihood of robot adoption. In this work, we focus on inferring three robot performance dimensions relevant to navigation [12]: robot competence, the surprisingness of robot behavior, and clear intent. Robot competence is a popular performance metric [21], especially in robot navigation [22, 23, 24]. Surprising behavior violates expectations. It is often considered undesired [14, 25] and may require explanations by the robot [26]. Meanwhile, showing clear intent means that the robot enables an observer to infer the goal of its motion [27]. If humans fail to anticipate the motion of a robot because it acts surprisingly or its intent is unclear, they will likely have trouble coordinating their own behavior with it [28, 29].
**Implicit Human Feedback.** We distinguish between explicit and implicit human feedback about robot performance. Explicit feedback corresponds to purposeful or deliberate information conveyed by humans to robots, e.g., through preferences [30, 31] or survey instruments [22, 32]. Meanwhile, implicit feedback are cues and signals that people exhibit without intending to communicate some specific information about robot performance, yet they can be used to infer such perceptions. Inferring performance from implicit feedback can reduce the chances of excessively querying users for explicit feedback in robot learning scenarios [33, 34], thereby minimizing the risk of feedback fatigue [35]. Learning from implicit feedback is not without challenges, however, as it can be difficult to interpret [2, 3]. For example, this can happen due to inter-person variability in facial expressions [36] or similar signals being produced for different reasons [37].
Our work considers a variety of nonverbal implicit signals, including gaze, body motion, and facial expressions, which have long been studied in social signal processing [38]. While in some cases these signals are explicit feedback (e.g., to interrupt an agent [39]), our work considers them implicit feedback because we do not prime humans to react in specific ways to a robot. As such, our work is closer to [2, 40, 41, 42, 37, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59], which used nonverbal signals to identify critical states during robot operation, detect robot errors, and adjust robot behavior. Other types of feedback signals, such as those from brain-computer interfaces, have been used in HRI [43, 44, 45]; however, they are impractical for navigation tasks.
**Simulation in HRI.** Simulation is a useful tool in HRI [46, 47, 48], and particularly popular in robot navigation research [49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59]. Robotics simulators model aspects of the real world in a virtual environment and render virtual representations of the real world, often using game engines such as Unity [52, 10, 53]. In this work, we take advantage of the SEAN 2.0 simulator [10], which integrates with the Robot Operating System, and supports VR [11]. Virtual Reality interfaces have gained popularity in HRI [54, 55, 56, 57, 58, 59]. Some VR systems, such as the Vive Pro Eye [60] that we utilize in our work (Fig. 1), also allow tracking of eye-gaze and facial features.
**Human Annotations.** We build on prior HRI research that utilizes user self-reports (or self-annotations) to create prediction models relevant to a task of interest [4, 61]. Self-reports consist of first-hand opinions from users about their experiences [62]. In HRI, these are opinions by direct users of robots - rather than opinions by third-parties that observe the experiences [63, 64, 65, 23, 66]. For instance, [4] asked study participants to evaluate robot performance using video logs immediately after they interacted with a robot. Similarly, we asked robot users to evaluate robot performance. However, instead of discretizing interactions based on high-level robot actions and collecting impressions of robot performance all throughout interactions, we opted for querying humans about their impressions of the robot at critical points in time during a navigation task. This was necessary because navigation actions are continuous, rather than discrete as in [4]. This makes it very time-consuming and expensive to both segment robot behavior and annotate performance across whole interactions.
## III Problem Statement & Research Questions
We study if a person's impression of a robot's performance can be predicted using observations of their interaction. Specifically, we aim to learn a mapping from a sequence of observations to an individual's reported impressions at the end of the sequence. We consider multiple robot performance dimensions on a 5-point scale, as detailed later in Sec. IV.
Consider a dataset of observations and performance labels, \(\mathcal{D}=\{(\mathbf{o}_{1:T}^{i},y^{i})\}\), where \(\mathbf{o}_{1:T}\) is an observation sequence of length \(T\), \(y\) is a performance rating given by a robot user at the end of the sequence, and \(i\) identifies a given data sample. We place emphasis on predicting a person's impression of a robot by considering observations of their implicit feedback. Thus, the observations \(\mathbf{o}_{t}^{i}\) include features that describe the person's non-verbal behavior, such as gaze and facial expressions. Also, the observations include features that describe the spatial behavior of all the agents in the environment, the navigation task, and the space occupied by static objects. Given this data, we investigate three main research questions:
1. _How well can human observers predict a user's impression of robot performance?_ By answering this question, we obtain a human baseline for learning a function \(f:\mathcal{O}_{1:T}\rightarrow\mathcal{Y}\), where \(\mathcal{O}\) is the observation space at a given time-step and
\(\mathcal{Y}\) is performance. Also, through this question, we study the impact of two types of observations in the prediction task: observations that describe fine-grained facial expressions for a robot user; and other observations about the user, the robot and their environment. As mentioned earlier, observations of fine-grained expressions have gained popularity in recent work to infer human perceptions of an agent's behavior [2, 4, 9, 37]. Other observations (e.g. body motion and nearby static obstacles) can be more easily computed in real-world navigation tasks, but their usefulness on a robot's ability to infer users' impression of their performance is less understood.
2. _Can machine learning methods predict impressions of robot performance as well as humans?_ Ultimately, we are interested in moving toward a future where machine learning models facilitate evaluating robot performance at scale, without necessarily having to ask users for explicit feedback all the time. Thus, we evaluate various machine learning models to approximate the function \(f\), as defined in the prior question.
3. _How well can machine learning models generalize to unseen users?_ In future robot deployments, a robot may interact with completely new users. Thus, we conduct a more detailed analysis of the performance of various machine learning models in predicting impressions of robot performance according to users for whom the model had no data at training time.
## IV Data Collection with SEAN and VR
We collected data using SEAN-VR [11]. As in Fig. 1(a), participants used a Vive Pro Eye VR device to control an avatar in a warehouse. They had to follow a Fetch robot that guided them to a destination that was unknown to them a priori. The VR headset captured implicit signals from the participants, like eye and lip movements. Also, participants provided ratings of robot performance through the simulation's VR interface.
Fig. 1(b) shows an example first-person view of the simulation during robot-guided navigation. The Fetch robot was controlled with the Robot Operating System (ROS) [67] in SEAN. The environment contained other algorithmically controlled pedestrians and obstacles typical of warehouses. Our data collection protocol, described below, was approved by our local Institutional Review Board and refined through pilots.
### _Participants_
We recruited 60 participants using flyers and by word of mouth. They were at least 18 years old, fluent in English, and had normal or corrected-to-normal vision. Overall, 19 participants identified as female, 40 as male, and 1 as non-binary or third gender. Most of them were university students, and ages ranged from 18 to 43 years old. Participants were somewhat familiar with robots, as indicated by a mean rating of M = 4.20 (with standard error SE = 0.18) on a 7-point Likert responding format (1 being lowest). Yet, they were somewhat unfamiliar with VR (M = 3.72, SE = 0.20). No participant had prior experience with SEAN or social robot navigation in VR.
### _Data Collection Procedure_
**Protocol:** A data collection session took place as follows. First, the participant provided demographics data. Second, the experimenter introduced the robot, explained the navigation task in which the participant was to follow the robot, and demonstrated how to use the VR device to control their avatar in SEAN and label robot performance. Third, the participant experienced four navigation tasks with the robot, each with a particular starting position and destination. In each task, the robot guided the participant to the destination and repeatedly changed its behavior (as further detailed below). Importantly, the interaction was paused before and after each behavior change took place, at which point the participant was asked to evaluate the robot's most recent navigation performance. A typical data collection session was completed in 45 min to 1 hour. Participants were compensated US$15 for their time.
**Robot Behaviors:** During a navigation task, the robot switched between one of these three behaviors:
_1. Nav-Stack._ The robot navigated efficiently to the destination based on the path planned by the ROS Navigation Stack with social costs [68]. This behavior lasted 40 seconds.
_2. Spinning._ The robot rotated at its current position, indicating confusion. This behavior lasted 20 seconds.
_3. Wrong-Way._ The robot moved in the wrong direction, away from the task's destination, effectively making a mistake during navigation. This behavior lasted 20 seconds.
Unbeknownst to the participants, the robot switched to _Nav-Stack_ behavior after _Spinning_ or _Wrong-Way_ during navigation. It randomly switched to _Spinning_ or _Wrong-Way_ after finishing _Nav-Stack._ The design was intended to maintain a consistent rate of sub-optimal behavior and avoid user boredom or significant confusion. We expected the behaviors to elicit both positive and negative views of the robot.
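A minimal sketch of the behavior schedule described above (names and structure are ours; the actual implementation ran within SEAN/ROS):
```python
import random

NAV_STACK, SPINNING, WRONG_WAY = "nav_stack", "spinning", "wrong_way"
DURATIONS_S = {NAV_STACK: 40.0, SPINNING: 20.0, WRONG_WAY: 20.0}

def next_behavior(current):
    # After a sub-optimal behavior the robot always returns to Nav-Stack;
    # after Nav-Stack it randomly switches to Spinning or Wrong-Way.
    if current in (SPINNING, WRONG_WAY):
        return NAV_STACK
    return random.choice([SPINNING, WRONG_WAY])

# Example: generate one task's behavior sequence (performance queries, issued
# 4 s before and 8 s after every switch, are not modeled here).
behavior, schedule = NAV_STACK, []
for _ in range(5):
    schedule.append((behavior, DURATIONS_S[behavior]))
    behavior = next_behavior(behavior)
print(schedule)
```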
**Impressions of Robot Performance:** During a navigation task, we paused the interaction at 4 seconds _before_, and at 8 seconds _after_ the robot switched between behaviors. The elapsed time for the latter pause was longer in order to give people enough time to experience the latest robot behavior.
As shown in the supplementary video, impressions of robot performance were provided through an interface embedded in the simulation. The interface asked the participants to indicate their impression about the robot's most recent performance in regard to: 1) _"how competent was the robot at navigating,"_ 2) _"how surprising was the robot's navigation behavior,"_ and 3) _"how clear were the robot's intentions during navigation."_ Participants provided ratings for these three dimensions of robot performance on a 5-point Likert responding format, e.g., with 1 being "incompetent", 2 being "somewhat incompetent", 3 being "neither competent nor incompetent", 4 being "somewhat competent", and 5 being "competent".
### _Observations_
We organized observations of human-robot interactions, as recorded in SEAN-VR [11], into the features described below.
**Participants' Facial Expression Features:** We captured the participants' eye and lip movements, as well as their gaze through the VR headset using the VIVE Eye and Facial Tracking (SRanipal) SDK. The eye and lip movements corresponded to 73 features that described the geometry of the face through blend shapes. The gaze was a 3D vector providing the direction of gaze of the person relative to their face.
**Spatial Behavior Features:** During navigation, we captured the poses of the robot, the participant, and the other automatically-controlled avatars on the ground plane of the scene. Then, we computed the poses of the avatars relative to the robot, considering only those that were up to 7.2m away from it, as this region is typically considered a robot's public space [69, 70, 71]. Each of the features were \((x,y,\theta)\) tuples with \(x\), \(y\) being the position and \(\theta\) being the body orientation (yaw angle) relative to a coordinate frame attached to the robot.
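A minimal sketch of this change of coordinates (function and variable names are ours); the same transform is applied to the goal pose described next:
```python
import numpy as np

PUBLIC_SPACE_RADIUS_M = 7.2

def to_robot_frame(agent_pose, robot_pose):
    """Express a global (x, y, theta) pose in a coordinate frame attached to the robot."""
    ax, ay, atheta = agent_pose
    rx, ry, rtheta = robot_pose
    dx, dy = ax - rx, ay - ry
    c, s = np.cos(rtheta), np.sin(rtheta)
    x_rel = c * dx + s * dy                                  # rotate the offset by -rtheta
    y_rel = -s * dx + c * dy
    theta_rel = (atheta - rtheta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
    return x_rel, y_rel, theta_rel

def nearby_pedestrians(pedestrian_poses, robot_pose):
    """Keep only avatars within the robot's public space (7.2 m)."""
    rel = [to_robot_frame(p, robot_pose) for p in pedestrian_poses]
    return [(x, y, t) for x, y, t in rel if np.hypot(x, y) <= PUBLIC_SPACE_RADIUS_M]
```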
**Goal Features:** A navigation task had an associated destination or goal that the robot had to reach. We converted the goal pose in a global frame in the warehouse to a pose in a coordinate frame attached to the robot. This pose described the robot's proximity and relative orientation to its destination.
**Occupancy Features:** During navigation, the robot localized [72] against a 2D map of the warehouse. We used a cropped section of the map around the robot (of \(7.2\)m \(\times\)\(7.2\)m) to describe the occupancy of nearby space by static objects.
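A corresponding sketch for cropping the local occupancy patch from the 2D map; the map resolution and origin are assumptions, and the crop is axis-aligned for simplicity (i.e., not rotated into the robot's heading):
```python
import numpy as np

def crop_local_occupancy(grid, robot_xy, origin_xy, resolution_m=0.05, size_m=7.2):
    """Return a size_m x size_m patch of the occupancy grid centered on the robot."""
    half = int(round(size_m / (2 * resolution_m)))
    col = int(round((robot_xy[0] - origin_xy[0]) / resolution_m))
    row = int(round((robot_xy[1] - origin_xy[1]) / resolution_m))
    padded = np.pad(grid, half, mode="constant", constant_values=0)  # pad borders as free space
    return padded[row:row + 2 * half, col:col + 2 * half]
```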
### _Perceived Robot Performance_
Impressions of robot performance were as expected: ratings for competence and clear intention were generally higher for _Nav-Stack_ than for _Spinning_ and _Wrong-Way_, while the latter two tended to be more surprising than the former. Pairs of performance dimensions were significantly correlated with absolute Pearson r-values greater than 0.6. An exploratory factor analysis suggested that the dimensions could be combined into one performance factor (which explained 77% of the variance).
Using the features described before and the impressions of robot performance provided by the participants, we created a dataset of paired observation sequences and target performance values. We further refer to this data as the SEAN virTual rObot GuidE with implicitT Human fEedback and peRformance Dataset (SEAN TOGETHER Dataset). As described below, we used this dataset to investigate the questions in Sec. III.
## V Analyses
### _How Well Can Human Observers Predict a User's Impression of Robot Performance?_
To better understand the complexity of inferring impressions of robot performance, we evaluated how well human annotators could solve the prediction problem. To this end, we administered an online survey through www.prolific.co, a platform for human data collection. In the survey, human annotators observed visualizations of observations in our SEAN TOGETHER Dataset. Then, they tried to predict performance ratings provided by the people who followed the robot.
**Method:** For the survey, we randomly selected 2 data samples from each of the 60 participants in our data collection, with one gathered before and the other gathered after the robot's behavior changed. The observations in each sample corresponded to an 8-second 5-hz window of features right before the corresponding performance label was provided.
As shown in Fig. 2, data samples were visualized in two ways:
_1. Facial Rendering._ We created a human face rendering in Unity by replaying the facial expression features on an SRanipal compatible avatar, as shown in Fig. 2 (right). This visualization was motivated by the use of facial expressions in prior work on implicit feedback (e.g., [2]).
_2. Navigation Rendering._ We created a plot of features that described the navigation behavior of the robot and the avatars in the simulation. The plot showed features that, using existing perception techniques, may be easier to estimate than facial features in real-world deployments. These features are the spatial behavior features, the robot's goal location, the occupied space near the robot, and the gaze direction of the participant - the last of which could be approximated using an estimate of the person's head orientation [73]. Because prior work suggests that it is easier to make sense of implicit human feedback in context [37], the plot was always centered on the robot, making its surroundings always visible as in Fig. 2 (left).
We used the visualizations to create three annotation conditions that helped understand the value of different features:
1) _Facial-Only_: for a given data sample, annotators only saw the facial rendering; 2) _Nav.-Only_: annotators only saw the navigation rendering; and 3) _Nav.+Facial_: annotators saw the navigation rendering first, then the facial rendering and, finally, saw a video with both visualizations together (Fig. 2).
Each of the data samples was annotated by 10 unique people in each condition. The annotators were instructed to predict how the participant who controlled the avatar to follow the robot perceived the robot's performance. Each annotator was paid US$7.5 for approximately \(30\) min of annotation time. To encourage high-quality annotations, we also gave them a bonus of US$0.125 for each correct prediction that they made.
Fig. 2: A data sample from the _Nav.+Facial_ condition. The **left** plot shows gaze, spatial behavior, goal, and occupancy features: \(\bullet\)\(\bullet\)\(\bullet\) is the robot’s pose; \(\bullet\)\(\bullet\)\(\bullet\) is the pose of the participant following the robot during the VR interaction; \(\rightarrow\) indicates the gaze of the participant; \(\bullet\)\(\bullet\)\(\bullet\) are the poses of algorithmically controlled avatars; \(\blacksquare\) is the destination position that the robot navigated towards; and occupancy in the environment is indicated by black pixels (occupied) and white pixels (unoccupied). The **right** visualization shows a rendering of the facial expression features of the participant.
**Annotators:** We recruited a total of 100 annotators. Thirty-two of them identified as female, 61 as male, and 7 as non-binary or third gender. Ages ranged from 18 to 76 years old. Annotators indicated similar familiarity with robots (M = 4.13, SE = 0.14) as the data collection participants, though the annotators were slightly more familiar with VR (M = 4.07, SE = 0.17).
**Results:** We used linear mixed models estimated with REstricted Maximum Likelihood (REML) [74, 75] to analyze errors in the predictions for each performance dimension. Our independent variables were Before/After Robot Behavior Change (_Before_, _After_) and Annotation Condition (_Facial-Only_, _Nav-Only_, _Nav.+Facial_). Also, we considered Annotator ID as a random effect because annotators provided predictions for multiple data samples. Our dependent variable was the absolute error between an annotator's prediction and the corresponding performance rating in our SEAN TOGETHER Dataset.
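A sketch of this analysis with statsmodels is shown below; the data-frame column names and file name are assumptions of ours, not artifacts released with the dataset.
```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per annotation: abs_error = |prediction - participant rating|,
# condition in {Facial-Only, Nav-Only, Nav.+Facial}, phase in {Before, After},
# annotator = annotator ID (random effect, since each annotator rated many samples).
df = pd.read_csv("annotations.csv")   # hypothetical file

model = smf.mixedlm("abs_error ~ C(condition) * C(phase)", data=df, groups=df["annotator"])
result = model.fit(reml=True)         # REML estimation, as in the analysis above
print(result.summary())
```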
We found that the Annotation Condition had a significant effect on the absolute error for Competence, Surprise, and Intention (p \(<\) 0.0001 in all cases). As in Fig. 3(a), Tukey HSD post-hoc tests showed that for Competence and Surprise, the errors for _Nav.+Facial_ and _Nav-Only_ were significantly lower than _Facial-Only_, yet the difference between the former two conditions was not significant. For Intention, all conditions led to significantly different errors. _Nav.+Facial_ resulted in the lowest error, followed by _Nav-Only_ and then _Facial-Only_. These results suggest that facial expressions provide information about impressions of robot performance though, more generally, the features used to create the Navigation Renderings seem to be the most critical for these predictions.
Before/After Robot Behavior Change had a significant effect on the prediction errors for Competence and Intention (p \(<\) 0.0001 in both cases). As in Fig. 3(b), the error was significantly lower for samples _Before_ a behavior change than for samples _After_ a change for these performance dimensions. We suspect this was because the robot sometimes demonstrated 2 behaviors in the samples collected _After_ a behavior change.
Table I shows the \(F_{1}\)-Scores for the annotator predictions (see HA rows). The low scores suggest that correctly predicting impressions of robot performance on a 5-point responding format was difficult for humans. To better understand annotators' predictions, we transformed the ground truth ratings from our data collection to binary values, one corresponding to low performance (e.g., 1-2 ratings for competence) and another to medium-to-high performance (3-5 ratings for competence). Also, we transformed the annotators' predictions similarly. This led to \(F_{1}\) scores of 0.69 for Competence, 0.64 for Surprise, and 0.69 for Intention, suggesting that human annotators were better at telling the directionality of robot performance ratings than at predicting their exact magnitude.
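The binarization and resulting score can be reproduced with a few lines of scikit-learn; the ratings below are illustrative only.
```python
import numpy as np
from sklearn.metrics import f1_score

def binarize(ratings, low_max=2):
    """Map 5-point ratings to 0 (low: 1-2) vs. 1 (medium-to-high: 3-5)."""
    return (np.asarray(ratings) > low_max).astype(int)

y_true = [5, 4, 2, 1, 3, 5, 2]   # participants' ratings (illustrative)
y_pred = [4, 5, 1, 3, 3, 4, 2]   # annotator or model predictions (illustrative)
print(f1_score(binarize(y_true), binarize(y_pred)))
```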
### _Can Machine Learning Methods Predict Impressions of Robot Performance as Well as Humans?_
We compared human prediction performance with a variety of classifiers, including a random forest and neural networks.
**Method:** Machine learning (ML) models were evaluated on the same samples shown to the human annotators (\(n=120\)). The rest of the data was used for training (\(n=2280\)) and validation (\(n=569\)). One model was trained for each combination of feature sets shown to the human annotators (_Facial-Only_, _Nav-Only_, and _Nav.+Facial_). The _Nav._ feature set included occupied space near the robot, which we encoded using a ResNet-18 representation [76]. The Random Forest (RF) used 100 trees, and trees were grown until leaves contained fewer than 2 samples. The neural networks had a number of parameters on the same order of magnitude: \(5.4\times 10^{6}\) for a Multi-Layer Perceptron (MLP), \(2.1\times 10^{6}\) for a message-passing Graph Neural Network (GNN) [77], and \(6.5\times 10^{6}\) for a Transformer (T) [78]. Networks were trained using minibatch gradient descent with the Adam optimizer and cross-entropy loss. Learning rate, batch size, and dropout were chosen using grid search with validation-based early stopping [79]. We also compared all these models with a random sampling baseline.
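As a simplified stand-in for this setup (the paper's MLP, GNN, and Transformer were trained with Adam and dropout, which scikit-learn's MLP only approximates; data shapes and values below are placeholders), the Random Forest and a grid-searched MLP could be set up as follows:
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Placeholder data: each row is an 8 s / 5 Hz observation window flattened into
# one vector (facial, spatial, goal, and occupancy-encoding features concatenated);
# labels are 5-point ratings for one performance dimension.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(1, 6, size=200)

rf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

mlp = GridSearchCV(
    MLPClassifier(max_iter=300, early_stopping=True),   # validation-based early stopping
    param_grid={"learning_rate_init": [1e-4, 1e-3], "batch_size": [32, 64]},
    cv=3,
).fit(X_train, y_train)
print(rf.score(X_train, y_train), mlp.best_params_)
```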
**Results:** As shown in Table I, ML models outperformed both human-level performance and the random baseline in all cases when measured via \(F_{1}\)-Score. When measured using Accuracy and Mean Absolute Error, ML models performed the best, except for Intention when using _Nav.+Facial_ features. These outcomes indicate that our implicit feedback data contain useful information that can be leveraged by ML models to predict users' impressions of robot performance. Further, ML models trained with _Nav.-Only_ and _Nav.+Facial_ features outperformed those trained with _Facial-Only_ features. This result aligns with our observation in Sec. V-A on the criticality of Navigation features in comparison to Facial Expressions.
### _Can Machine Learning Generalize to Unseen Users?_
We investigated how well learning models could predict performance by a user whose data was held out from training.
Fig. 3: Errors for annotators’ predictions by (a) Annotation Conditions and (b) Before/After Robot Behavior Change. (**) and (*) denote \(p<0.0001\) and \(p<0.05\), respectively.
**Method:** We used the models and training scheme from Sec. V-B with all features (_Nav.+Facial_), but split the data using leave-one-out cross-validation. For each fold, the data for one participant was used as the test set and the remaining examples were split between training (\(80\%\)) and validation (\(20\%\)). We searched for new hyperparameters and computed results both for 5-class and for binary classification. Binary targets and prediction labels were computed as in Sec. V-A.
**Results:** Fig. 4 reports \(F_{1}\)-Scores over all folds. The models generalized to unseen people with only a slight reduction in performance in comparison to Table I. Also, the average \(F_{1}\)-Score across all performance dimensions improves from 0.25 in the multiclass case to 0.62 in the binary case. This begins to make the ML predictions usable for real-world applications.
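A sketch of this evaluation protocol with scikit-learn's LeaveOneGroupOut (placeholder data; the within-fold 80/20 train/validation split and hyperparameter search are omitted for brevity):
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                 # flattened observation windows (placeholder)
y = rng.integers(1, 6, size=300)               # 5-point performance ratings (placeholder)
participants = rng.integers(0, 60, size=300)   # participant ID of each sample

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=participants):
    clf = RandomForestClassifier(n_estimators=100).fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))
print(np.mean(scores))
```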
## VI Discussion
**Guidelines for Real-World Applications:** We hope that future work leverages our findings to build effective models for mapping implicit human feedback to users' impressions of robot performance in real-world social navigation tasks. To this end, we first recommend prioritizing robust people tracking and pose estimation over computing fine-grained facial expressions, especially when computational resources may be limited. Reasoning about spatial behavior features in the context of the task can facilitate achieving reasonable prediction performance with lower sensor requirements. Second, we recommend building models that focus on identifying poor robot performance instead of predicting more specific impressions of robot performance (e.g., on a 5-point scale). Even for humans, the latter type of predictions are hard because of the subjectivity of performance ratings. Finally, if a robot is executing multiple behaviors, we recommend considering whether the robot switched behaviors recently when reasoning about performance predictions. As in our results, predicting performance recently after a behavior change can be more difficult than before, when the behavior was more consistent.
**Limitations and Future Work:** Our work has several limitations. First, we obtained human baselines for prediction performance, but used only a limited set of feature combinations. In the future, it would be interesting to consider a broader set of feature categories. Second, our work focused on navigation in a VR setup. An immediate next step is to extend our work to real-world interactions, verifying the generalizability of prediction models to different tasks and considering sensor noise in the detected features. Lastly, the inferred performance predictions, which could be considered instantaneous rewards, could be used in the future to adapt robot behavior in HRI.
**Conclusion:** This work contributes the SEAN TOGETHER Dataset, consisting of observations of human-robot interactions in VR, including implicit human feedback, and corresponding performance ratings in guided robot navigation tasks. Our analyses revealed that facial expressions can help predict impressions of the robot, but spatial behavior features in the context of the navigation task were more critical for these inferences. Our dataset and accompanying analyses pave a path forward to enabling mobile robots to leverage passive observations of their users to infer how well they complete navigation tasks. Potentially, they could also use this feedback to interactively improve their behavior in the future.
Fig. 4: ML models trained on _Nav._+_Facial_ features using leave-one-out cross-validation and evaluated on the held-out participant’s data. \(F_{1}\)-Scores are computed over 5 classes (Multiclass) and 2 classes (Binary). See the text for details.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \multicolumn{1}{c}{} & \multicolumn{3}{c}{\(F_{1}\)-Score \(\uparrow\)} & \multicolumn{3}{c}{Accuracy \(\uparrow\)} & \multicolumn{3}{c}{Mean Absolute Error \(\downarrow\)} \\ \cline{3-11} \multicolumn{1}{c}{} & \multicolumn{2}{c}{Facial} & Nav. & Nav.+Facial & Facial & Nav. & Nav.+Facial & Facial & Nav. & Nav.+Facial \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & HA & \(0.16\pm 0.0\) & \(0.28\pm 0.1\) & \(0.30\pm 0.1\) & \(0.19\pm 0.1\) & \(0.40\pm 0.1\) & \(0.43\pm 0.1\) & \(1.74\pm 0.2\) & \(1.03\pm 0.3\) & \(0.96\pm 0.3\) \\ & R & \(0.18\pm 0.0\) & \(0.19\pm 0.0\) & \(0.17\pm 0.0\) & \(0.21\pm 0.0\) & \(0.21\pm 0.0\) & \(0.20\pm 0.0\) & \(1.73\pm 0.1\) & \(1.75\pm 0.1\) & \(1.81\pm 0.1\) \\ & RF & \(0.19\pm 0.0\) & \(\mathbf{0.37\pm 0.0}\) & \(\mathbf{0.38\pm 0.0}\) & \(\mathbf{0.33\pm 0.0}\) & \(\mathbf{0.52\pm 0.0}\) & \(\mathbf{0.52\pm 0.0}\) & \(\mathbf{1.43\pm 0.0}\) & \(\mathbf{0.88\pm 0.0}\) & \(\mathbf{0.82\pm 0.0}\) \\ & MLP & \(\mathbf{0.23\pm 0.0}\) & \(0.29\pm 0.1\) & \(0.25\pm 0.1\) & \(0.28\pm 0.0\) & \(0.48\pm 0.0\) & \(0.44\pm 0.1\) & \(1.66\pm 0.1\) & \(1.07\pm 0.3\) & \(1.19\pm 0.4\) \\ & GNN & - & \(0.31\pm 0.1\) & \(0.33\pm 0.0\) & - & \(0.43\pm 0.1\) & \(0.39\pm 0.1\) & - & \(1.22\pm 0.3\) & \(1.04\pm 0.0\) \\ & T & \(0.21\pm 0.0\) & \(0.33\pm 0.0\) & \(0.33\pm 0.0\) & - & \(0.43\pm 0.0\) & \(0.41\pm 0.1\) & \(1.58\pm 0.1\) & \(0.97\pm 0.0\) & \(0.95\pm 0.0\) \\ \hline \multirow{4}{*}{\begin{tabular}{} \end{tabular} } & HA & \(0.18\pm 0.0\) & \(0.24\pm 0.1\) & \(0.25\pm 0.1\) & \(0.20\pm 0.1\) & \(0.30\pm 0.1\) & \(0.33\pm 0.1\) & \(1.53\pm 0.3\) & \(1.19\pm 0.2\) & \(1.09\pm 0.2\) \\ & R & \(0.19\pm 0.0\) & \(0.21\pm 0.0\) & \(0.17\pm 0.0\) & \(0.20\pm 0.0\) & \(0.21\pm 0.0\) & \(0.18\pm 0.0\) & \(1.64\pm 0.1\) & \(1.60\pm 0.1\) & \(1.68\pm 0.1\) \\ & RF & \(\mathbf{0.29\pm 0.0}\) & \(\mathbf{0.38\pm 0.0}\) & \(\mathbf{0.34\pm 0.0}\) & \(\mathbf{0.30\pm 0.0}\) & \(\mathbf{0.40\pm 0.0}\) & \(\mathbf{0.34\pm 0.0}\) & \(\mathbf{0.34\pm 0.0}\) & \(1.30\pm 0.0\) & \(\mathbf{0.93\pm 0.0}\) & \(\mathbf{0.98\pm 0.0}\) \\ & MLP & \(0.24\pm 0.0\) & \(0.26\pm 0.1\) & \(0.24\pm 0.1\) & \(0.25\pm 0.0\) & \(0.30\pm 0.0\) & \(0.29\pm 0.1\) & \(\mathbf{1.23\pm 0.1}\) & \(1.12\pm 0.2\) & \(1.08\pm 0.1\) \\ & GNN & - & \(0.29\pm 0.0\) & \(0.27\pm 0.0\) & - & \(0.30\pm 0.0\) & \(0.28\pm 0.0\) & - & \(1.13\pm 0.1\) & \(1.07\pm 0.1\) \\ & T & \(0.27\pm 0.0\) & \(0.29\pm 0.0\) & \(0.32\pm 0.1\) & \(0.28\pm 0.0\) & \(0.31\pm 0.0\) & \(0.33\pm 0.1\) & \(1.37\pm 0.1\) & \(1.07\pm 0.1\) & \(1.04\pm 0.1\) \\ \hline \multirow{4}{*}{
\begin{tabular}{} \end{tabular} } & HA & \(0.18\pm 0.0\) & \(0.25\pm 0.1\) & \(0.30\pm 0.1\) & \(0.21\pm 0.1\) & \(0.37\pm 0.2\) & \(\mathbf{0.42\pm 0.1}\) & \(1.64\pm 0.2\) & \(1.19\pm 0.4\) & \(\mathbf{1.04\pm 0.3}\) \\ & R & \(0.21\pm 0.1\) & \(0.19\pm 0.0\) & \(0.17\pm 0.0\) & \(0.23\pm 0.1\) & \(0.22\pm 0.0\) & \(0.19\pm 0.0\) & \(1.70\pm 0.1\) & \(1.73\pm 0.1\) & \(1.80\pm 0.1\) \\ & RF & \(\mathbf{0.28\pm 0.0}\) & \(0.28\pm 0.0\) & \(0.24\pm 0.0\) & \(\mathbf{0.37\pm 0.0}\) & \(\mathbf{0.43\pm 0.0}\) & \(\mathbf{0.43\pm 0.0}\) & \(\mathbf{0.41\pm 0.0}\) & \(\mathbf{1.45\pm 0.0}\) & \(\mathbf{1.13\pm 0.0}\) & \(1.14\pm 0.0\) \\ & MLP & \(0.27\pm 0.0\) & \(0.26\pm 0.1\) & \(0.22\pm 0.0\) & \(0.31\pm 0.0\) & \(0.41\pm 0.1\) & \(0.39\pm 0.1\) & \(1.86\pm 0.1\) & \(1.31\pm 0.3\) & \(1.51\pm 0.5\) \\ & GNN & - & |
2302.00854 | Learning PDE Solution Operator for Continuous Modeling of Time-Series | Learning underlying dynamics from data is important and challenging in many
real-world scenarios. Incorporating differential equations (DEs) to design
continuous networks has drawn much attention recently, however, most prior
works make specific assumptions on the type of DEs, making the model
specialized for particular problems. This work presents a partial differential
equation (PDE) based framework which improves the dynamics modeling capability.
Building upon the recent Fourier neural operator, we propose a neural operator
that can handle time continuously without requiring iterative operations or
specific grids of temporal discretization. A theoretical result demonstrating
its universality is provided. We also uncover an intrinsic property of neural
operators that improves data efficiency and model generalization by ensuring
stability. Our model achieves superior accuracy in dealing with time-dependent
PDEs compared to existing models. Furthermore, several numerical pieces of
evidence validate that our method better represents a wide range of dynamics
and outperforms state-of-the-art DE-based models in real-time-series
applications. Our framework opens up a new way for a continuous representation
of neural networks that can be readily adopted for real-world applications. | Yesom Park, Jaemoo Choi, Changyeon Yoon, Chang hoon Song, Myungjoo Kang | 2023-02-02T03:47:52Z | http://arxiv.org/abs/2302.00854v1 | # Learning PDE Solution Operator for Continuous Modeling of Time-Series
###### Abstract
Learning underlying dynamics from data is important and challenging in many real-world scenarios. Incorporating differential equations (DEs) to design continuous networks has drawn much attention recently, however, most prior works make specific assumptions on the type of DEs, making the model specialized for particular problems. This work presents a partial differential equation (PDE) based framework which improves the dynamics modeling capability. Building upon the recent Fourier neural operator, we propose a neural operator that can handle time continuously without requiring iterative operations or specific grids of temporal discretization. A theoretical result demonstrating its universality is provided. We also uncover an intrinsic property of neural operators that improves data efficiency and model generalization by ensuring stability. Our model achieves superior accuracy in dealing with time-dependent PDEs compared to existing models. Furthermore, several numerical pieces of evidence validate that our method better represents a wide range of dynamics and outperforms state-of-the-art DE-based models in real-time-series applications. Our framework opens up a new way for a continuous representation of neural networks that can be readily adopted for real-world applications.
Machine Learning, Time-Series, Neural ODEs
## 1 Introduction
The modeling of time-evolving data plays an important role in various applications in our everyday lives, including climate forecasting (Schneider, 2001; Mudelsee, 2019), medical sciences (Stoffer and Ombao, 2012; Jensen et al., 2014), and finance (Chatigny et al., 2020; Andersen et al., 2005). Numerous deep learning architectures (Connor et al., 1994; Hochreiter and Schmidhuber, 1997; Cho et al., 2014) have been developed to learn sequential patterns from time-series data. In recent years, leveraging differential equations (DEs) to design continuous networks has attracted increasing attention, first sparked by neural ordinary differential equations (Neural ODEs; Chen et al. 2018). DEs that characterize the rates of change and interaction of continuously varying quantities have become the indispensable mathematical language to describe time-evolving real-world phenomena (Cannon and Dostrovsky, 2012; Sunden and Fu, 2016; Black and Scholes, 2019). By virtue of their ability to represent and predict the world around us, incorporating DEs into neural networks has reinvigorated research in continuous deep learning, offering the ability to handle irregular time-series (Rubanova et al., 2019; De Brouwer et al., 2019; Schirmer et al., 2022).
Despite their success, Neural ODEs have yet to be successfully applied to complex and large-scale tasks due to the limited expressiveness of ODEs. To address this drawback, several works have enhanced the expressiveness of Neural ODEs (Gholami et al., 2019; Gu et al., 2021). Another line of work attempts to introduce more diverse differential equations, such as controlled DEs (Kidger et al., 2020), delay DEs (Zhu et al., 2020; Anumasa and PK, 2021), and integro DEs (Zappala et al., 2022). However, in real-world applications, we usually know very little about the underlying dynamics of time-evolving systems: how their states evolve, which differential equations they obey, how the variables depend on each other, and what order of derivatives the equations contain. Therefore, it is necessary to develop a model that can learn an extended class of differential equations and thus cover more diverse applications (Holt et al., 2022).
In this work, we propose a novel partial differential equation (PDE) based framework that can learn a broad range of time-evolving systems without prior knowledge of the governing equations. PDEs, which relate the various partial derivatives of multivariable states, represent much more general dynamics and include ODEs as a special case. As the underlying dynamics are unknown in real-world data, the model should not presuppose knowledge of the underlying PDE structure and must instead learn it from the data. To this end, we adopt the Fourier neural operator (FNO; Li et al., 2021), which automatically learns PDE solution operators in a completely data-driven way without prior information on the governing PDE. However, because FNO handles time in a discrete representation, it is difficult to transfer directly to the irregularly-sampled time-series commonly arising in real-world problems. To render it more suitable for continuous time-series, we propose a continuous-time FNO, termed _CTFNO_, that can treat time continuously without requiring a specific temporal grid. We also demonstrate the representational power of CTFNO via a rigorous proof of a universal approximation theorem. Moreover, we present a property of neural operators that guarantees stability. As it leads to well-posed learning problems, the stabilization makes the model generalize better. A wide array of numerical evidence validates that CTFNO can flexibly capture diverse time-dependent systems, outperforming baseline models not only on PDEs but also on various other dynamics. Furthermore, our model provides superior performance on a wide array of real-world time-series data.
## 2 Background
**Fourier Neural Operator.** Let \(\Omega\subset\mathbb{R}^{n}\) be a bounded domain. For a given input \(a:\Omega\rightarrow\mathbb{R}^{d_{a}}\), which could be a source term or an initial function, neural operators learn the corresponding solution \(u:\Omega\rightarrow\mathbb{R}^{d_{u}}\) to a governing PDE. The solution to fairly general PDEs is represented as a convolution operator with a kernel \(G:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{u}\times d_{a}}\), called a Green's function (Evans, 2010), as follows:
\[u\left(x\right)=G\ast a\left(x\right)=\int_{\Omega}G\left(x-y\right)a\left(y \right)dy,\;\forall x\in\Omega. \tag{1}\]
Due to the shift-invariant nature of the Green's function, this solution operator can be efficiently computed through the Fourier transform, known as the convolution theorem (Bracewell & Bracewell, 1986). This elucidates a way to design Fourier neural operator (FNO; Li et al. 2021). The overall computational flow of FNO for approximating the convolution operator (1) is given as
\[a\xrightarrow{\mathcal{P}}v_{0}\xrightarrow{\mathcal{L}_{1}}v_{1} \xrightarrow{\mathcal{L}_{2}}\ldots\xrightarrow{\mathcal{L}_{L}}v_{L} \xrightarrow{\mathcal{Q}}u\]
for a given depth \(L\). To increase expressiveness, the input function \(a\) is lifted to a higher dimensional representation by \(v_{0}=\mathcal{P}(a)(x)\coloneqq Pa(x)\) with a matrix \(P\in\mathbb{R}^{d_{v}\times d_{a}}\). \(\mathcal{Q}\) is a projection operator of the form \(\mathcal{Q}(v)(x)\coloneqq Qv(x)\) for \(Q\in\mathbb{R}^{d_{u}\times d_{v}}\). Fourier layers \(\mathcal{L}_{\ell}\) are defined as follows:
**Definition 2.1**.: (**Fourier layers** (Li et al., 2021)) For a convolution kernel \(\kappa_{\ell}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{v}\times d_{v}}\), a linear transform \(W_{\ell}:\mathbb{R}^{d_{v}}\rightarrow\mathbb{R}^{d_{v}}\), and an activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\), the \(\ell\)-th Fourier layer \(\mathcal{L}_{\ell}\) is defined as follows:
\[\mathcal{L}_{\ell}\left(v\right)(x) \coloneqq\sigma\left(W_{\ell}v\left(x\right)+k_{\ell}\ast v \left(x\right)\right) \tag{2}\] \[=\sigma\left(W_{\ell}v\left(x\right)+\mathcal{F}^{-1}\left(R_{ \ell}\cdot\left(\mathcal{F}v\right)\right)\left(x\right)\right),\;\forall v: \Omega\rightarrow\mathbb{R}^{d_{v}},\;x\in\Omega, \tag{3}\]
where \(R_{\ell}=\mathcal{F}\left(\kappa_{\ell}\right)\) is directly learned, \(\mathcal{F}\) is Fourier transform, of which the inverse operator is denoted by \(\mathcal{F}^{-1}\).
Both \(\mathcal{F}\) and \(\mathcal{F}^{-1}\) are implemented by fast Fourier transform (Nussbaumer, 1981) with truncated frequencies.
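As an illustration of Definition 2.1, a minimal PyTorch sketch of a single one-dimensional Fourier layer is given below; the tensor layout, the number of retained modes, and the GELU activation are illustrative choices rather than the reference implementation of Li et al. (2021).

```python
# Minimal sketch of a 1D Fourier layer, assuming real inputs of shape (batch, d_v, n_grid).
import torch
import torch.nn as nn

class FourierLayer1d(nn.Module):
    def __init__(self, d_v: int, modes: int):
        super().__init__()
        scale = 1.0 / (d_v * d_v)
        # R: learned Fourier multipliers for the lowest `modes` frequencies.
        self.R = nn.Parameter(scale * torch.randn(modes, d_v, d_v, dtype=torch.cfloat))
        # W: pointwise linear transform (the "local" term W_l v(x)).
        self.W = nn.Conv1d(d_v, d_v, kernel_size=1)
        self.modes = modes

    def forward(self, v: torch.Tensor) -> torch.Tensor:
        v_hat = torch.fft.rfft(v, dim=-1)                         # (B, d_v, n//2 + 1)
        out_hat = torch.zeros_like(v_hat)
        # Multiply the truncated modes by R (one d_v x d_v matrix per frequency).
        out_hat[..., :self.modes] = torch.einsum(
            "kio,bik->bok", self.R, v_hat[..., :self.modes])
        conv = torch.fft.irfft(out_hat, n=v.shape[-1], dim=-1)
        return torch.nn.functional.gelu(self.W(v) + conv)

layer = FourierLayer1d(d_v=32, modes=16)
print(layer(torch.randn(4, 32, 128)).shape)  # torch.Size([4, 32, 128])
```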
**Treatment of time-varying problems.** When applied to time-dependent PDEs, the original FNO can only learn an operator that maps the initial function to the solution at a single fixed time. To deal with time-varying problems, two methods are suggested in (Li et al., 2021): FNO-RNN poses the time-dependent problem as a sequence-to-sequence task, but such autoregressive models are often hard to train. Alternatively, FNO-2D treats it as an \((n+1)\)-dimensional problem by adding one more dimension and applying FNO layers to convolve in the space-time domain. In this case, the model can only predict solutions at times on a fixed equispaced temporal mesh. In addition, it requires considerably more parameters because, for example, a one-dimensional problem is treated as a two-dimensional problem.
## 3 Continuous-Time PDE Solution Operator
### Continuous-Time FNO
FNO has shown a promising ability to learn complex PDEs; however, as discussed in the previous section, it deals with time-evolving systems only through iterative rollouts or on specific temporal grids. This discrete treatment of time hinders its wider applicability to time-varying systems. To ameliorate this limitation, we introduce a _continuous-in-time Fourier neural operator (CTFNO)_. The design of CTFNO is inspired by the Green's function formula for time-dependent PDEs,
which says that there exists a Green's function \(G:\left[0,\infty\right)\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{u}\times d_{u}}\) such that the solution for an initial condition \(a\left(x\right)\) is represented as follows:
\[u\left(t,x\right)=\int_{\Omega}G\left(t,x-y\right)a\left(y\right)dy,\;\forall \left(t,x\right)\in\left[0,\infty\right)\times\Omega. \tag{4}\]
For example, Green's function of the heat equation \(u_{t}-\nu u_{xx}=0\), describing the temperature on a surface as a function of time, is \(G\left(t,x-y\right)=\frac{1}{\sqrt{4\pi\nu t}}\exp\left(-\frac{\left|x-y\right| ^{2}}{4\nu t}\right)\). This shows that the **weights of an FNO layer should be conditioned on time**, to learn an operator \(\left(t,a\left(x\right)\right)\mapsto u\left(t,x\right)\) for an arbitrary time \(t\) and initial condition \(a\left(x\right)\). To this end, we propose a time-aware Fourier layer that updates the solution based on (4) as follows:
**Definition 3.1**.: **(Continuous-time Fourier layers)** For \(t\in\left[0,\infty\right)\), a convolution kernel \(\kappa_{\ell}\left(t\right):\mathbb{R}^{n}\rightarrow\mathbb{R}^{d_{v}\times d_{v}}\), \(R_{\ell}\left(t\right)=\mathcal{F}\left(\kappa_{\ell}\left(t\right)\right)\), a linear transform \(W_{\ell}\left(t\right):\mathbb{R}^{d_{v}}\rightarrow\mathbb{R}^{d_{v}}\), \(\varphi_{\ell}\left(t\right):\mathbb{R}^{n}\rightarrow\mathbb{C}\), and \(\psi_{\ell}\left(t\right)\in\mathbb{R}^{d_{v}\times d_{v}}\), the \(\ell\)-th continuous-time Fourier layer is defined as follows: \(\forall v:\Omega\rightarrow\mathbb{R}^{d_{v}},\;x\in\Omega\),
\[\mathcal{L}_{\ell}\left(v\right)\left(t,x\right) =\sigma\left(W_{\ell}\left(t\right)v\left(x\right)+k_{\ell} \left(t\right)*v\left(x\right)\right) \tag{5}\] \[=\sigma\left(W_{\ell}\left(t\right)v\left(x\right)+\mathcal{F}^{- 1}\left(R_{\ell}\left(t\right)\cdot\left(\mathcal{F}v\right)\right)\left(x \right)\right)\] \[=\sigma\left(W_{\ell}\psi_{\ell}(t)v\left(x\right)+\mathcal{F}^{- 1}\left(\varphi_{\ell}\left(t\right)R_{\ell}\cdot\left(\mathcal{F}v\right) \right)\left(x\right)\right).\]
**Time Modulation.** To equip the FNO network with the ability to capture information on the time of observations, a time-dependent layer (5) is constructed as the following temporal modulation: For each time \(t\) and frequency \(\xi\),
\[\begin{cases}W_{\ell}\left(t\right)&=W_{\ell}\psi_{\ell}(t),\\ R_{\ell}\left(t,\xi\right)&=\varphi_{\ell}\left(t,\xi\right)R_{\ell}\left( \xi\right),\end{cases} \tag{6}\]
where we use notational shortcuts \(\varphi_{\ell}\left(t,\xi\right)=\varphi_{\ell}\left(t\right)\left(\xi\right)\) and similar for \(R(t,\xi)\). We design \(\varphi_{\ell}\) and \(\psi_{\ell}\) as follows:
1. Two shared networks, each parameterized by a two-layer fully connected network applied to a sinusoidal embedding (Vaswani et al., 2017), first convert the input time \(t\) into multi-dimensional representations \(\varphi\left(t\right),\;\psi\left(t\right)\in\mathbb{R}^{c}\) for a hidden dimension \(c\).
2. For each Fourier layer \(\mathcal{L}_{\ell}\) and frequency \(\xi\), learnable \(A_{\ell}:\mathbb{R}^{n}\rightarrow\mathbb{C}^{c}\) and \(B_{\ell}\in\mathbb{R}^{d_{v}\times c}\) produce time information \(\varphi_{\ell}\left(t,\xi\right)=\varphi\left(t\right)^{T}A_{\ell}\left(\xi\right)\in\mathbb{C}\) and \(\psi_{\ell}\left(t\right)=\text{diag}\left(B_{\ell}\psi\left(t\right)\right)\in\mathbb{R}^{d_{v}\times d_{v}}\) (the diagonal matrix with the elements of the vector \(B_{\ell}\psi\left(t\right)\) on the main diagonal).
See Figure 1 for a schematic diagram.
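The time modulation in (5)-(6) can be sketched as follows; the embedding dimension, the layer sizes, and the exact parameterization of \(A_{\ell}\) and \(B_{\ell}\) below are illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch of a time-modulated 1D Fourier layer (Eq. (5)-(6)).
import math
import torch
import torch.nn as nn

def sinusoidal_embedding(t: torch.Tensor, dim: int = 32) -> torch.Tensor:
    # t: (B,) scalar times -> (B, dim) sinusoidal features.
    freqs = torch.exp(torch.arange(dim // 2) * (-math.log(1e4) / (dim // 2)))
    ang = t[:, None] * freqs[None, :]
    return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1)

class TimeModulatedFourierLayer1d(nn.Module):
    def __init__(self, d_v=32, modes=16, c=16):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(32, c), nn.GELU(), nn.Linear(c, c))
        self.psi = nn.Sequential(nn.Linear(32, c), nn.GELU(), nn.Linear(c, c))
        self.A = nn.Parameter(torch.randn(modes, c, dtype=torch.cfloat) / c)  # A_l per mode
        self.B = nn.Parameter(torch.randn(d_v, c) / c)                         # B_l
        self.R = nn.Parameter(torch.randn(modes, d_v, d_v, dtype=torch.cfloat) / d_v**2)
        self.W = nn.Conv1d(d_v, d_v, kernel_size=1)
        self.modes = modes

    def forward(self, v, t):
        # phi_l(t, xi) = <phi(t), A_l(xi)>: one complex scalar per (batch, mode).
        phi_t = torch.einsum("bc,kc->bk", self.phi(sinusoidal_embedding(t)).to(torch.cfloat), self.A)
        # psi_l(t) = diag(B_l psi(t)): one real scale per (batch, channel).
        psi_t = self.psi(sinusoidal_embedding(t)) @ self.B.T
        v_hat = torch.fft.rfft(v, dim=-1)
        out_hat = torch.zeros_like(v_hat)
        out_hat[..., :self.modes] = phi_t[:, None, :] * torch.einsum(
            "kio,bik->bok", self.R, v_hat[..., :self.modes])
        conv = torch.fft.irfft(out_hat, n=v.shape[-1], dim=-1)
        return torch.nn.functional.gelu(psi_t[..., None] * self.W(v) + conv)

layer = TimeModulatedFourierLayer1d()
print(layer(torch.randn(4, 32, 128), torch.rand(4)).shape)  # torch.Size([4, 32, 128])
```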
### Universal Approximation
In this section, we prove the universality of the proposed CTFNO, which is condensed in the following informal statement. A formal statement and details of the proof are provided in Appendix A.
**Theorem 3.2**.: **(Informal)** _CTFNO can approximate any time-dependent continuous operator, of arbitrary accuracy._
Figure 1: Visualization of the CTFNO architecture. Two time embedding networks map the time \(t\) to hidden representations \(\varphi(t)\) and \(\psi(t)\). Then \(\varphi(t)\) and \(\psi(t)\) incorporate the temporal information into each Fourier layer through the time modulation operators \(A_{\ell}\) and \(B_{\ell}\). See Appendix C.2 for details.
### Stability
When we learn a model for the time evolution of dynamical systems, the learned dynamics should be guaranteed to be well-posed. This is important because the learned system can be unstable when using a generic neural network (Szegedy et al., 2013; Moosavi-Dezfooli et al., 2017). Such unstable networks are vulnerable to adversarial attacks, overfitting, and unauthorized exploitation, which may render them useless in practice. Therefore, stability is a necessary condition in real-world applications. Stability is measured by the sensitivity of the prediction with respect to small perturbations of the inputs (Hadamard, 1902). A formal definition is given as follows.
**Definition 3.3**.: (Stability) A time-dependent PDE is said to be _stable_ if for any solution \(u\left(t,x\right)\) with initial condition \(u_{0}\left(x\right)\) and \(\epsilon>0\), there exists \(\delta>0\) such that for all new initial function \(\tilde{u}_{0}\left(x\right)\) satisfying \(\left\|\tilde{u}_{0}-u_{0}\right\|<\delta\), the corresponding solution \(\tilde{u}\left(t,x\right)\) satisfies \(\left\|\tilde{u}-u\right\|<\epsilon\) for all \(t\geq 0\).
Stability is a hard constraint imposed upon the model. While some studies have addressed the stability of network architectures, it has typically been used as a soft constraint by adding an extra regularization loss (Moosavi-Dezfooli et al., 2019), or has required computing the eigenvalues of the Jacobian matrix (Ross and Doshi-Velez, 2018; Hoffman et al., 2019). Besides, the stability of Neural ODEs is obtained via stable discretization techniques for the ODEs (Haber and Ruthotto, 2017; Yan et al., 2020). The stability of CTFNO is connected to the well-posedness of the learned solution operator. The kernel formulation (4) sheds light on a way to ensure stability. The proof is deferred to Appendix B.
**Proposition 3.4**.: _(Stability of CTFNO) If \(\left\|R\left(t,\xi\right)\right\|_{2}\) and \(\left\|W\left(t\right)\right\|_{2}\) are bounded for every \(t>0\), then the corresponding CTFNO with a Lipschitz continuous activation function is stable._
**Gershgorin discs normalization.** As given in Proposition 3.4, the global stability of CTFNO is guaranteed if the Fourier kernel and weight have bounded \(L^{2}\) norms. Because of the expensive computational cost of the \(L^{2}\) norm, however, we suggest a practical method for enforcing stability conditions. Gershgorin's circle theorem (Varga, 2010) allows us to make fast deductions on the bounds of eigenvalues. It states that every eigenvalue \(\lambda\) of a square matrix \(A=\left(a_{ij}\right)\) satisfies \(\left|\lambda-a_{ii}\right|\leq\Sigma_{j\neq i}\left|a_{ij}\right|\) for some \(i\). Therefore, by regulating the \(L^{1}\) norm of each row \(\mathbf{r}_{i}\), we can impose the requisite stability. In our implementation, we normalize the rows so that \(\parallel\mathbf{r}_{i}\parallel_{L^{1}}\leq M\) for each \(i\), with a pre-defined \(M>0\).
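A small sketch of this row-wise normalization: after each update, any row whose \(L^{1}\) norm exceeds the budget \(M\) is rescaled back onto the constraint. Applying it to a generic weight matrix and the value of \(M\) below are placeholders for illustration.

```python
# Illustrative sketch of Gershgorin-style row normalization: clip each row's L1 norm to M.
import torch

@torch.no_grad()
def gershgorin_normalize_(weight: torch.Tensor, M: float = 1.0) -> None:
    # weight: (rows, cols); rescale only the rows whose L1 norm exceeds M.
    row_norms = weight.abs().sum(dim=1, keepdim=True)        # ||r_i||_1 per row
    scale = torch.clamp(M / (row_norms + 1e-12), max=1.0)    # shrink only if needed
    weight.mul_(scale)

W = torch.randn(8, 8)
gershgorin_normalize_(W, M=1.0)
print(W.abs().sum(dim=1))  # every row now has L1 norm <= 1.0
```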
## 4 Experiments
### Experiments for learning time-dependent PDEs
In this section, we empirically validate the performance of the proposed model as a continuous-time neural PDE surrogate. Given an initial function \(u_{0}\), we train models to learn \(\left(t,u_{0}\left(x\right)\right)\mapsto u\left(t,x\right)\) for \(t\in\left(0,T\right]\) with mean squared error (MSE) loss. Throughout all experiments, we run models three times with different random seeds and report the averaged value.
**Datasets.** We choose four PDEs for numerical experiments. We consider the **heat** (Baron Fourier, 1878) and **Burgers'** (Bateman, 1915) equations, which are canonical time-dependent linear and nonlinear PDEs, respectively. They take the form
\[\frac{\partial u}{\partial t}+\alpha u\frac{\partial u}{\partial x}=\nu\frac{ \partial^{2}u}{\partial x^{2}},\ \ x\in\left(0,1\right),\ t\in\left(0,T\right], \tag{7}\]
with the corner cases: the heat equation for \(\alpha=0\) and Burgers' equation for \(\alpha=1\). Here, \(\nu\) is a positive constant. We also apply our model to two examples provided by PDEBench (Takamoto et al., 2022): the **compressible Navier-Stokes equations**, which describe the motion of fluids, and the **diffusion-sorption** equation. The diffusion-sorption equation describes a diffusion process influenced by a retardation factor, a variable that represents the degree to which the diffusion process is hindered by sorption interactions. Detailed descriptions are provided in Appendix C.1.
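For the heat equation (\(\alpha=0\) in (7)), assuming periodic boundary conditions, the exact solution can be written per Fourier mode as \(\hat{u}_{k}(t)=\hat{u}_{k}(0)\,e^{-\nu(2\pi k)^{2}t}\), which gives a simple way to illustrate how \((u_{0},t,u(t,\cdot))\) training pairs can be generated; the grid size, \(\nu\), and the random initial conditions below are illustrative choices, not the dataset parameters used in the experiments.

```python
# Illustrative generator of (u0, t, u(t, .)) pairs for the periodic 1D heat equation.
import numpy as np

def heat_solution(u0: np.ndarray, t: float, nu: float = 0.01) -> np.ndarray:
    """Exact solution of u_t = nu * u_xx on [0, 1) with periodic BCs at time t."""
    n = u0.shape[-1]
    k = np.fft.rfftfreq(n, d=1.0 / n)                 # integer wavenumbers 0..n/2
    decay = np.exp(-nu * (2.0 * np.pi * k) ** 2 * t)
    return np.fft.irfft(np.fft.rfft(u0) * decay, n=n)

rng = np.random.default_rng(0)
n, n_samples = 128, 4
x = np.linspace(0.0, 1.0, n, endpoint=False)

# Smooth random initial conditions built from a few low-frequency sine modes.
u0 = np.zeros((n_samples, n))
for m in range(1, 6):
    amp = rng.normal(size=(n_samples, 1))
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, 1))
    u0 += amp * np.sin(2.0 * np.pi * m * x[None, :] + phase)

t = rng.uniform(0.0, 1.0, size=n_samples)             # query times in (0, T]
u_t = np.stack([heat_solution(u0[i], t[i]) for i in range(n_samples)])
print(u0.shape, t.shape, u_t.shape)                    # (4, 128) (4,) (4, 128)
```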
**Baselines.** We compare the performance of the proposed model with representative PDE surrogates. DeepONet (DON; Lu et al. 2019) is an alternative operator learning method that represents the solution operator by a basis expansion. POD-DeepONet (PDN; Lu et al. 2022), a model based on a proper orthogonal decomposition of function spaces, is also considered. FNO-2D (Li et al., 2021) is a Fourier neural operator with spatio-temporal inputs, and FNO-RNN is an autoregressively trained FNO.
**Results.** The test root MSE (RMSE) results are reported in Table 1. We can see that CTFNO significantly outperforms all baselines. Comparing the results of CTFNO with the original FNOs, the core strengths of the proposed model stand out more. The results show that the autoregressive learning-based FNO-RNN struggles to capture the dynamics of the PDEs accurately. Also, it requires several autoregressive rollouts to predict the solution after a long time, which is rather time-consuming. On the other hand, CTFNO can predict the solution with a single call. Besides, unlike FNO-2D, which can only predict solutions at times on a fixed uniform grid, CTFNO can predict a solution at any desired time, retaining the number of parameters regardless of the length of prediction time. Table 1 shows that our model uses roughly a third of the parameters of FNO-2D. The results demonstrate that the use of the proposed time-dependent structure significantly improves the capacity of the model to treat time. Furthermore, we obtain 2-8\(\times\) and 6-15\(\times\) speed-ups for training and inference time, respectively (see Table 2). Moreover, our model is superior to existing benchmark PDE models. The results confirm that our model describes the diffusion phenomenon quite well. Furthermore, CTFNO outperforms other models even in the dissipative nonlinear system with shock formation (Burgers) and in complex fluid dynamics (Navier-Stokes). We also include additional heatmaps of the learned solutions compared with the exact solutions over the entire time horizon in Appendix D.3. The overall results confirm the superiority of CTFNO over existing PDE surrogates for learning time-dependent PDEs.
**Ablation study on where to assign time.** The way temporal information is imposed on CTFNO is motivated by the Green's formula for time-dependent PDEs (4). Here, we examine how useful the weight-modulating structure of CTFNO is for learning time-dependent PDEs. Besides the **weights**, there are three more places in the FNO network where temporal information can be mounted. We study these three alternatives as follows:
* _Baseline 1_: concatenate \(t\) with an **input** function \(u_{0}\left(x\right)\).
* _Baseline 2_: concatenate encoded time \(\varphi_{0}(t)\) with a **lifted input** function \(v_{0}(x)\).
* _Baseline 3_: concatenate encoded times \(\varphi_{0}(t)\), \(\ldots\), \(\varphi_{L}(t)\) with intermediate **features**\(v_{0}(x),\ldots,v_{L}(x)\), respectively.
See Appendix C.2 for more details. We test the ability of these models to learn the heat and Burgers' equations. The results in Table 3 show that, in both examples, the ablation models perform significantly worse than CTFNO. Concatenating time with the input is a simple way to put time information into the network, but the results confirm that it is not effective at all. Despite being capable of handling arbitrary times continuously, the three ablation models perform similarly to or worse than FNO-2D, which can only evaluate solution values at times on a specific grid. The results validate that the time-modulating structure of CTFNO, designed based on the time-dependent Green's formula, is much more expressive in learning the PDE.
### Capability to represent diverse dynamics
In this section, we further harness the proposed CTFNO for modeling a variety of time-evolving dynamics, not confined to physical PDE problems. In all experiments, every model was trained with MSE loss.
**Why do we consider PDEs for time-series modeling?** Starting with Neural ODEs, leveraging DEs has been found to be effective in modeling time-series data. These models have shown promising results; however, their model architectures and
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & Heat & Burgers & Diffusion-Sorption & Navier-Stokes & \# Params \\ \hline FNO-RNN & 9.506 & 73.100 & 5.187 & 36.791 & 2.02M \\ FNO-2D & 0.033 & 3.136 & 0.053 & 4.486 & 7.00M \\ DON & 0.473 & 6.022 & 0.234 & 4.207 & 1.58M \\ PDN & 0.323 & 5.796 & 0.314 & 3.781 & 1.78M \\ CTFNO & **0.026** & **1.952** & **0.042** & **2.947** & 2.38M \\ \hline \hline \end{tabular}
\end{table}
Table 1: RMSE (\(\times 10^{-2}\)) results and the number of parameters of each model on PDE problems.
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & Training & Inference \\ \hline FNO-RNN & 6.34 & 0.74 \\ FNO-2D & 2.06 & 0.30 \\ CTFNO & **0.79** & **0.05** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Training and inference time (second/epoch) on heat equation.
\begin{table}
\begin{tabular}{c c c} \hline \hline Model & Heat & Burgers \\ \hline Baseline 1 & 1.538 & 5.654 \\ Baseline 2 & 0.370 & 5.305 \\ Baseline 3 & 0.794 & 4.426 \\ CTFNO & **0.026** & **1.952** \\ \hline \hline \end{tabular}
\end{table}
Table 3: RMSE (\(\times 10^{-2}\)) errors of ablation studies.
inference schemes are specialized to the specific DEs on which they are based. These bespoke model structures rule out generalization to other classes of DEs, a limitation that is further exacerbated in real-world applications. The necessity of a model capable of learning a wide range of dynamics has also been discussed by Holt et al. (2022). However, these existing studies consider the target state only as a function of a single time variable, not as a multivariate function of other variables as well as time. This makes it difficult for the models to capture which variables the dynamics depend on and how they relate to each other. The limitation is even more severe in real applications that approximate dynamics in a latent space, where how the states evolve and what kinds of differential equations they follow are unknown. On the other hand, time-dependent PDEs describe the evolution of a physical quantity not only with time but also according to other variables, such as spatial variables. Owing to their expressivity, PDEs are widely used to describe complex continuous processes (Temam, 2001; Kulov and Gordeev, 2014; Joshi, 2002). In what follows, we show that our PDE-based model can represent a more diverse class of dynamics than existing DE-based models.
**Datasets.** We use several illustrative examples to demonstrate the outstanding capacity of the proposed method in learning diverse classes of dynamics. A mathematical formulation of these dynamics can be found in Appendix C.1.
* **Square and Sawtooth** (Bilos et al., 2021) generate piecewise differentiable trajectories with cusps. We consider these to evaluate the capability of modeling waveform signals.
* **Stiff ODE** (Holt et al., 2022) is a second-order ODE which exhibits regions of high stiffness. This is a typical example on which Neural ODEs fail.
* **Spiral ODE** (Bilos et al., 2021) is a two-dimensional system of nonlinear ODEs, commonly arising in biological systems. The dynamics describe spiral-shaped trajectories.
* **Reaction ODE** is commonly used to model chemical reactions and is of the form \(\partial u/\partial t=6u\left(1-u\right)\). Unlike the aforementioned ODEs, a solution to the reaction ODE is regarded as a function of a spatial variable as well as time; its closed-form solution is recalled after this list.
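For reference, the reaction ODE above is the logistic equation and can be integrated in closed form by separation of variables:

\[\frac{du}{u\left(1-u\right)}=6\,dt\;\Longrightarrow\;u\left(t,x\right)=\frac{u_{0}\left(x\right)}{u_{0}\left(x\right)+\left(1-u_{0}\left(x\right)\right)e^{-6t}},\]

so each spatial point follows its own logistic trajectory determined by its initial value \(u_{0}(x)\).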
**Baselines.** We evaluate the performance of CTFNO in comparison with several DE-based continuous-time models: the standard Neural ODE (NODE; Chen et al. 2018), ANODE (Dupont et al., 2019), and Neural Flow (NF; Bilos et al. 2021), which directly parametrizes the solution operator of an ODE, are adopted as ODE-based models. We also consider Neural Laplace (NL; Holt et al. 2022), which can represent diverse classes of equations by modeling them in the Laplace domain.
**Results.** The overall RMSE results are reported in Table 4, and Figure 2 provides qualitative results of the learned solutions. Note that all models have a comparable number of parameters (see Table 7). We can see that the performance of NODE and ANODE varies a lot across datasets: they perform well on certain data and fail to learn the dynamics of others. Unlike these two, NF, NL, and our model directly learn the solution operator without using a numerical ODE solver. Neural Flow parameterizes the ODE solution operator in the time domain. On the other hand, NL can describe more diverse dynamics by modeling them in the Laplace domain instead of the time domain. The results, which show the superiority of NL over NF, confirm that the range of dynamics that the model can describe is crucial. For one-dimensional problems, CTFNO performs similarly to but slightly better than NL. Qualitative comparisons in Figure 2 show that CTFNO approximates the discontinuity more accurately than NL, without spurious oscillations. The advantage of CTFNO is evident in the spiral and reaction ODEs.
Figure 2: Visualization of learned solutions to synthetic datasets, which shows the superiority of our CTFNO.
The spiral is a system of ODEs, and the reaction ODE describes the time-dependent evolution of a function defined on a spatial domain. NL can cover a wide range of dynamics; however, it only considers univariate DEs that depend on a single time variable. The same goes for the other models. On the other hand, our model can represent time-varying dynamics while also accounting for these other dependencies. In Table 4, the RMSE of CTFNO on the reaction ODE is more than ten times better than the other models, and nearly two times better for the spiral. This confirms that the structure of CTFNO is helpful in approximating systems of equations or higher-dimensional dynamics. In addition, the results in Table 15 show that PDE surrogates learn PDE solution operators much better than NF and NL (both of which cannot represent spatial relations). Moreover, results showing the superiority of CTFNO in extrapolating the spiral trajectories are provided in Figure 9. The overall results demonstrate how important the range of expressible dynamics of the model is, validating the suitability of CTFNO for learning a wide array of dynamics.
### Real Time-Series Applications
This section is devoted to investigating the performance of our model on interpolation and prediction tasks on real-world time-series datasets, including partially observed, multi-variate sequences.
**Time-Series Modeling in Latent Space.** To represent sporadically observed real time-series data, we follow the encoder-decoder framework of Latent ODEs (Rubanova et al., 2019), which leverages a VAE (Kingma and Welling, 2014; Szegedy et al., 2013) architecture to represent incomplete time-series data with a continuous-time model. To focus on the model's representation of the inherent dynamics in the latent space, we employ a simple RNN encoder. A given input time-series is passed through the RNN encoder, and a latent vector \(z_{0}\) is sampled from the mean \(\mu\) and standard deviation \(\sigma\) computed from the last feature vector, as in a VAE. Now, assuming implicit dynamics in the latent space with \(z_{0}\) as the initial point, CTFNO returns an output vector at the desired time steps. Finally, after passing through a decoder with a fully connected layer, the difference between the output and the target becomes the loss function for training. Precise implementation details can be found in Appendix C.
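A schematic sketch of this latent-variable pipeline is given below (RNN encoder, reparameterized sampling of \(z_{0}\), a time-conditioned latent model, and a linear decoder); the dimensions (obs_dim=14 mirrors the MuJoCo attributes), the GRU encoder, and the placeholder MLP standing in for CTFNO are assumptions made for illustration only.

```python
# Illustrative sketch of the VAE-style latent pipeline with a time-conditioned decoder.
import torch
import torch.nn as nn

class LatentTimeSeriesModel(nn.Module):
    def __init__(self, obs_dim=14, latent_dim=16, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        # Placeholder for CTFNO: any module mapping (z0, t) to the latent state at time t.
        self.latent_dynamics = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.GELU(), nn.Linear(hidden, latent_dim))
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def forward(self, obs, query_times):
        # obs: (B, T_obs, obs_dim); query_times: (T_out,)
        _, h = self.encoder(obs)                                   # h: (1, B, hidden)
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z0 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        outs = []
        for t in query_times:
            t_feat = torch.full((z0.shape[0], 1), float(t))
            z_t = self.latent_dynamics(torch.cat([z0, t_feat], dim=-1))
            outs.append(self.decoder(z_t))
        return torch.stack(outs, dim=1), mu, logvar                # (B, T_out, obs_dim)

model = LatentTimeSeriesModel()
pred, mu, logvar = model(torch.randn(8, 50, 14), torch.linspace(0.0, 1.0, 10))
print(pred.shape)  # torch.Size([8, 10, 14])
```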
**Datasets.** We evaluate our model on three real time-series datasets. For more details, see Appendix C.1.
* **MuJoCo** (Tassa et al., 2018) hopper environment from the DeepMind Control Suite records 14-dimensional attributes, including state and action, with 100 timestamps. To deal with partial observations, we conduct interpolation and prediction tasks, in which we reveal either 10%, 20%, 30%, or 50% of the ground truth.
* **PhysioNet 2012** (Silva et al., 2012) is an irregularly sampled real-world clinical dataset, which we use to evaluate our model on sparsely observed time-series. The goal is to interpolate and predict 41 biomedical features, such as heart rate and glucose, of intensive care unit (ICU) patients.
* **Human Activity** (Kaluza et al., 2010) consists of sporadically observed sensor data collected from five individuals performing several activities (e.g. walking, standing, etc.). We use the pre-processing steps provided by Rubanova et al. (2019), resulting in 6554 sequences of 211 time points. We train the models to classify the type of human activity from the sequential data.
**Baselines.** DE-based models applied to real time-series data are chosen for the comparisons: RNN-VAE is a variational autoencoder (VAE; Kingma and Welling 2014; Rezende et al. 2014) model whose encoder and decoder are recurrent neural networks (RNNs). ODE-RNN (Rubanova et al., 2019) is an RNN model which uses Neural ODEs to model hidden state dynamics. Two Latent ODE (LODE) models, with RNN (Chen et al., 2018) and ODE-RNN (Rubanova et al., 2019) encoders, are also considered. Finally, Coupling Flow from Neural Flow (Bilos et al., 2021), an ODE solution operator which directly models the solution curves of an ODE, is compared.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Model & Square & Sawtooth & Stiff & Spiral & Reaction \\ \hline NODE & 97.50 & 28.09 & 44.23 & 3.26 & 5.109 \\ ANODE & 80.95 & 9.92 & 37.18 & 3.26 & 4.241 \\ Neural Flow & 20.40 & 7.93 & 26.39 & 3.26 & 10.300 \\ Neural Laplace & 17.06 & 4.84 & 20.83 & 4.25 & 2.804 \\ CTFNO & **11.69** & **3.90** & **16.53** & **1.87** & **0.239** \\ \hline \hline \end{tabular}
\end{table}
Table 4: RMSE (\(\times 10^{-2}\)) results on synthetic data.
**Results.** In all experiments, we build models with comparable network sizes to make a fair comparison. Results on MuJoCo reported in Table 5 show that CTFNO consistently outperforms the baseline models by a large margin on all interpolation and prediction tasks, across all fractions of observed time points. Another observation is that our model shows the smallest performance gap between the cases with many and few observation points. These results imply that our model approximates the time evolution of the latent variables well, even with a small number of observations. The results in Table 5 suggest that CTFNO can be leveraged as a relevant model for real applications on time-series with missing time steps. Table 6 summarizes interpolative and predictive performance on PhysioNet and per-time-point classification accuracy on Activity. CTFNO is superior to all of the benchmark models on both PhysioNet and Activity, which indicates that our model makes effective use of sparsely observed time-series data and provides a meaningful representation for classification. Moreover, the numerical integration used in ODE-based models is computationally expensive and sometimes shows numerical instability (see Table 14). On the other hand, Neural Flow and our method efficiently predict the latent trajectories without the need for costly numerical schemes. The overall results demonstrate that our proposed model consistently improves over the baseline models in real applications and is a novel approach that can handle a wide range of real-world problems.
### Stability and Generalization
**Data Efficiency.** One of the main drawbacks of neural operators is that they require a wealth of available data. A large corpus of input-output pairs of numerical PDE solutions is costly to generate. Even if the data is obtained from observations of physical phenomena, it may be scarce. Therefore, successfully training neural operators with little data is very important to make them useful in real applications. In Figure 3, we show that the stabilization scheme proposed in Section 3.3 improves data efficiency. Boxplots report test MSEs of CTFNOs with and without stabilization for varying percentages of training data. We can see that the error of the non-stabilized CTFNO increases considerably when the training data is of limited quantity. Meanwhile, the stabilized CTFNO retains a consistently smaller test error with lower variance in all scenarios. These results show that the stabilization scheme allows the model to make better use of the data, resulting in efficient learning without a lot of expensive data. Moreover, since the training cost is proportional to the size of the training dataset, stabilization provides an effective way to reduce the training cost. The stabilization that can achieve these benefits brings our PDE surrogates one step closer to practical applications.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Model & \multicolumn{2}{c}{PhysioNet} & Activity \\ \cline{2-4} & Interpolation & Prediction & Accuracy \\ \hline RNN-VAE & 5.93 & 3.05 & 34.3 \\ ODE-RNN & 2.36 & - & 82.9 \\ LODE (RNN enc) & 3.16 & 5.78 & 83.5 \\ LODE (ODE enc) & 2.23 & 2.95 & 84.6 \\ Neural Flow & 2.94 & - & 65.7 \\ CTFNO & **1.80** & **2.10** & **85.0** \\ \hline \hline \end{tabular}
\end{table}
Table 6: MSE (\(\times 10^{-3}\)) on PhysioNet and per-time-point classification accuracies (\(\%\)) on Human Activity.
Figure 3: Evaluation loss of CTFNO on heat equation. Stabilization improves the data efficiency.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Model & \multicolumn{4}{c}{Interpolation (\% Observed Points)} & \multicolumn{4}{c}{Prediction (\% Observed Points)} \\ \cline{2-9} & \(10\%\) & \(20\%\) & \(30\%\) & \(50\%\) & \(10\%\) & \(20\%\) & \(30\%\) & \(50\%\) \\ \hline RNN-VAE & 65.14 & 64.08 & 63.05 & 61.00 & 23.78 & 21.35 & 20.21 & 17.82 \\ ODE-RNN & 16.47 & 12.09 & 9.86 & 6.65 & 135.08 & 319.5 & 154.65 & 264.63 \\ LODE (RNN enc) & 24.77 & 5.78 & 27.68 & 4.47 & 16.63 & 16.53 & 14.85 & 13.77 \\ LODE (ODE enc) & 3.60 & 2.95 & 3.00 & 2.85 & 14.41 & 14.00 & 11.75 & 12.58 \\ Neural Flow & 7.15 & 5.58 & 4.96 & 4.60 & 17.99 & 16.10 & 15.48 & 15.29 \\ CTFNO & **1.53** & **1.18** & **1.12** & **1.15** & **9.26** & **8.93** & **8.42** & **8.70** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Interpolation and prediction MSE (\(\times 10^{-3}\)) on the MuJoCo dataset.
**Model Generalization.** Plane Vibration (PV; Noel & Schoukens 2017) is a multivariate dataset consisting of five features: force, voltage, and accelerations measured at three spots. We comply with the experimental setup provided in HBNODEs (Xia et al., 2021), whose task is to forecast the next eight time steps from the previous 64 consecutive time observations. The PV dataset is considered to elucidate how the stabilization scheme works on practical time-series data. The MSE (\(\times 10^{-2}\)) of second-order Neural ODEs, including SONODE (Norcliffe et al., 2020) and HBNODE (Xia et al., 2021), reported in (Xia et al., 2021) is \(\geq\) 2.5, and it is annotated by a green dashed line in Figure 4. Our CTFNO achieves a much lower MSE of 1.96. Figure 4 also presents a positive effect of stabilization: a tendency of CTFNO to be less prone to overfitting. We investigate the variability of the performance of CTFNO with respect to the Gershgorin disc stabilization parameter \(M\). The ability to generalize well outside the training dataset is essential for models to be practically useful. We observe that increasing \(M\) leads to overfitting, while too small an \(M\) produces a degenerate system, dropping the performance. Besides, the model with an appropriate level of stabilization does not suffer from overfitting or degradation of accuracy, which validates the effect of stabilization. Moreover, it is notable that Neural ODEs incur a tremendous computational burden due to their use of numerical ODE solvers. By obviating the need for the ODE solver, the computational time of CTFNO is merely about 10% of that of Neural ODEs.
_Remark 4.1_.: Our stabilization controls the amplification of output changes in response to input perturbations. In addition to the results discussed here, the stabilization has further benefits: it prevents overfitting and enhances robustness against noisy observations and adversarial attacks. We refer to Appendices D.1 and D.3 for further investigation of the effects of stabilization on model generalization.
## 5 Related Work
**Neural PDE Surrogates.** Pioneering works (Raissi et al., 2019; Sirignano & Spiliopoulos, 2018) have incorporated physical principles directly into the loss functional of neural networks. However, they have a challenging optimization landscape (Wang et al., 2021; Krishnapriyan et al., 2021) and require training a new network for each new PDE instance. An orthogonal class of methods is autoregressive approaches (Brandstetter et al., 2022; Horie & Mitsume, 2022), which solve PDEs iteratively. They are well-suited to irregular boundaries and benefit from being integrated with existing numerical PDE schemes. However, repetitively applying rollouts, even using numerical ODE solvers (Bar-Sinai et al., 2019; Lienen & Gunnemann, 2022), incurs high computational costs and often makes the model hard to train. Recently, a new line of work on operator learning (Kovachki et al., 2021) has learnt a mapping from initial/boundary conditions to solutions. Some works (Lu et al., 2019) learn a basis expansion of operators, while others (Li et al., 2020; Gin et al., 2021; You et al., 2022; Salvi & Lemercier, 2021) use a neural network as the ansatz of the solution integral operator. In this work, we focus on the Fourier neural operator (FNO; Li et al. 2021), which has delivered success in learning various PDEs and in vision representations (Guibas et al., 2021).
**Dynamics-based Time-Series Models.** The interpretation of a residual network (He et al., 2016) as a discretization of an ODE (Chen et al., 2018; Haber & Ruthotto, 2017; Lu et al., 2018) provided an interface between deep learning and ODEs. Subsequently, extensive work has been conducted on parametrizing the continuous dynamics of hidden states using an ODE (Greydanus et al., 2019; Lu et al., 2019; Liu et al., 2021). Owing to the continuous representation of neural networks, Neural ODEs are particularly attractive for irregularly-sampled time-series data (Rubanova et al., 2019; De Brouwer et al., 2019; Chang et al., 2019; Kidger et al., 2020; Chen et al., 2020). A recent work (Bilos et al., 2021) circumvents expensive numerical integration by directly parametrizing the solution trajectory of an ODE. However, these works employ overly simplistic ODEs, leading to constraints on the transformation of the data, which limits the expressivity of the models. To tackle this limitation, there have been several parallel attempts to introduce more diverse differential equations, including controlled DEs (Kidger et al., 2020), delay DEs (Zhu et al., 2020; Anumasa & PK, 2021), integro DEs (Zappala et al., 2022), and Laplace transform-based methods (Holt et al., 2022). Also, some studies have integrated PDEs into the design of neural networks
Figure 4: Effect of stabilization on the model generalization, tested on the Plane Vibration dataset.
(Eliasof et al., 2021; Ruthotto and Haber, 2020; Ben-Yair et al., 2021; Sun et al., 2020; Kim et al., 2020). However, all of these works assign specific, author-defined PDEs to the neural networks, and none of them are applied to real time-series applications.
## 6 Conclusion & Limitations
In this paper, we presented a novel approach for modeling time-series in terms of PDEs. As time is intrinsically continuous, we proposed a neural operator CTFNO that learns the underlying PDE by equipping FNO with the ability to represent time in a continuous manner. We also provided theoretical guarantees for the universal approximation and the stability of CTFNO. Our comprehensive experiments demonstrated that CTFNO outperforms existing differential equation-based models on synthetic and real-world datasets, and the proposed stabilization method effectively improves model generalization and robustness.
We note that it is hard for FNO to capture discontinuous features because the Fourier transform only captures global information. We expect that extending our approach to a model that can extract local features, such as (Gupta et al., 2021), provides an interesting avenue for future work. Although stabilization increases the generalization of the model, the optimal \(M\) for given data is unknown, and overly strong stabilization for some problems, such as ill-posed PDEs, may cause performance degradation.
|
2301.08780 | Multitwists in big mapping class groups | We show that the closure of the compactly supported mapping class group of an
infinite-type surface is not generated by the collection of multitwists (i.e.
products of powers of twists about disjoint non-accumulating curves). | George Domat, Federica Fanoni, Sebastian Hensel | 2023-01-20T19:30:20Z | http://arxiv.org/abs/2301.08780v2 | # Multitwists in big mapping class groups
###### Abstract.
We show that the closure of the compactly supported mapping class group of an infinite-type surface is not generated by the collection of multitwists (i.e. products of powers of twists about disjoint non-accumulating curves).
## 1. Introduction
The mapping class group of a surface of finite type has been thoroughly studied for decades. In particular, multiple _simple_ sets of generators are known. The Dehn-Lickorish theorem ([1], [11]), in combination with the Birman exact sequence ([1]), shows that the pure mapping class group of a finite-type surface can be generated by finitely many Dehn twists about nonseparating curves, and we need to add finitely many half-twists to generate the full mapping class group. Humphries [12] proved that, if the surface is closed and of genus \(g\geq 2\), \(2g+1\) Dehn twists about nonseparating curves suffice to generate the mapping class group, and moreover this number is optimal: fewer than \(2g+1\) Dehn twists cannot generate. Other results show that mapping class groups can be generated by two elements (see e.g. [13]), by finitely many involutions or by finitely many torsion elements (see e.g. [1]).
In the case of surfaces of infinite type, the (pure) mapping class group is uncountable, so in particular it is not finitely (nor countably) generated. For a special class of surfaces, Malestein and Tao [14] proved that mapping class groups are generated by involutions, and normally generated by a single involution, but to the best of our knowledge, no other generating set is known.
Note that the (pure) mapping class group of a surface of infinite type is endowed with an interesting topology, induced by the compact-open topology on the group of homeomorphisms of the surface. So it is interesting to talk about _topological_ generating sets (sets such that the _closure_ of the group they generate is the (pure) mapping class group). It follows from the finite-type results that Dehn twists topologically generate the closure of the compactly supported mapping class group. Moreover, Patel and Vlamis [10] proved that the pure mapping class group of a surface is topologically generated by Dehn twists if the surface has at most one nonplanar end, and by Dehn twists and maps called _handle shifts_ otherwise.
The goal of this note is to investigate a natural candidate for a set of generators of the closure of the compactly supported mapping class group of a surface: the collection of _multitwists_. A multitwist is a (possibly infinite) product of powers of Dehn twists about a collection of simple closed curves that do not accumulate anywhere in the surface. Our main result is a negative one:
**Theorem A**.: _Let \(S\) be an infinite-type surface. Then the collection of multitwists does not generate the closure of the compactly supported mapping class group._
The idea of the proof is to produce an explicit element that is not in the subgroup generated by multitwists. This element is built by taking an infinite product of increasing powers of partial pseudo-Anosov homeomorphisms supported on disjoint finite-type subsurfaces. We use work of Bestvina, Bromberg and Fujiwara [1] to certify that the mapping class we construct is not in the subgroup generated by multitwists.
Theorem A also raises the following question:
**Question B**.: _What is the subgroup generated by the collection of multitwists? Is there an alternative, more explicit description of its elements?_
Furthermore, our example shows that the subgroup generated by the collection of multitwists is not a closed subgroup of the mapping class group. Therefore, it does not immediately inherit a Polish topology from the topology on the mapping class group.
**Question C**.: _Is the subgroup generated by the collection of multitwists a Polish group?_
### Acknowledgements
The authors would like to thank Mladen Bestvina for his suggestion of how to remove an unnecessary assumption in the main theorem. They are also grateful to the organizers of the _Big Mapping Class Groups and Diffeomorphism Groups_ conference, during which most of the work was done.
The first author was supported in part by the Fields Institute for Research in Mathematical Sciences and NSF RTG-1745670.
## 2. Preliminaries
In this note, a surface is a connected, orientable, Hausdorff, second countable two-dimensional manifold, without boundary unless otherwise stated. One notable exception is subsurfaces, which will always have compact boundary. Boundary components of subsurfaces are assumed to be homotopically nontrivial, but are allowed to be homotopic to a puncture.
Surfaces are _of finite type_ if their fundamental group is finitely generated and _of infinite type_ otherwise. A surface \(S\) is _exceptional_ if it has genus zero and at most four punctures or genus one and at most one puncture, otherwise it is _nonexceptional_.
The _mapping class group_ of a surface \(S\) is the group \(\operatorname{MCG}(S)\) of orientation preserving homeomorphisms of \(S\) up to homotopy. The _pure mapping class group_\(\operatorname{PMCG}(S)\) is the subgroup of \(\operatorname{MCG}(S)\) fixing all ends and boundary components, and \(\operatorname{MCG}_{c}(S)\) denotes the closure of the subgroup generated by compactly supported mapping classes.
A _curve_ on a surface is the homotopy class of an essential (i.e. not homotopic to a point, a puncture or a boundary component) simple closed curve. Given a curve \(\alpha\), we denote by \(\tau_{\alpha}\) the Dehn twist about \(\alpha\).
An _integral weighted multicurve_\(\mu\) is a formal sum \(\sum_{i\in I}n_{i}\alpha_{i}\), where the \(\alpha_{i}\) are pairwise disjoint curves not accumulating anywhere and the \(n_{i}\) are integers. Given an integral weighted multicurve \(\mu\), we define \(\tau_{\mu}\) to be the mapping class
\[\tau_{\mu}=\prod_{i\in I}\tau_{\alpha_{i}}^{n_{i}}.\]
Such a mapping class is called a _multitwist_.
We say that an integral weighted multicurve is _finite_ if \(I\) is finite (i.e. it contains finitely many curves). An integral weighted multicurve \(\nu\) is a _submulticurve_ of an integral weighted multicurve \(\mu=\sum_{i\in I}n_{i}\alpha_{i}\) if \(\nu=\sum_{i\in J}n_{i}\alpha_{i}\), where \(J\subset I\).
Given a surface with boundary, an _arc_ is the homotopy class (relative to the boundary) of a simple arc that cannot be homotoped into the boundary. We denote by \(C(S)\) the _curve and arc graph_ of a surface \(S\), where vertices are curves and, if \(\partial S\neq\emptyset\), arcs, and two vertices are adjacent if they have disjoint representatives.
For any two subsurfaces \(A\) and \(B\) of \(S\) that have an essential intersection, the _subsurface projection_ of \(B\) to \(A\) is the subset \(\partial B\cap A\subset C(A)\). This projection is denoted by \(\pi_{A}(B)\). For any \(\beta\in C(B)\) we also define \(\pi_{A}(\beta):=\pi_{A}(B)\). These projections always have bounded diameter [13] and given any intersecting subsurfaces \(A,B,C\subset S\) we define the _projection distance_ as:
\[d_{A}(B,C):=\operatorname{diam}_{C(A)}(\pi_{A}(B)\cup\pi_{A}(C)).\]
## 3. Proof of Theorem A
Fix an infinite-type surface \(S\), different from the Loch Ness monster.
**Lemma 1**.: _If \(S\) is an infinite-type surface, different from the Loch Ness monster, it contains infinitely many finite-type \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable subsurfaces, which are pairwise disjoint, pairwise homeomorphic and non-accumulating. Moreover, the subsurfaces can be chosen to be nonexceptional._
Proof.: The proof is a case-by-case analysis. In each case we describe a finite-type \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable subsurface such that we can clearly find infinitely many copies with the required properties.
Case 1: \(S\) is the once-punctured Loch Ness monster. Then note that any separating curve \(\alpha\) which separates the two ends cannot be mapped disjointly from itself by any mapping class (because it bounds a nondisplaceable subsurface). As a consequence, for any \(g\geq 1\), any genus-\(g\) subsurface with two boundary components separating the two ends is nondisplaceable.
Case 2: \(S\) has at least two nonplanar ends. Note that by the argument in [14, Proposition 6.3], any separating curve such that both complementary components have infinite genus is \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable.
* If \(S\) has at least one nonplanar end -- denoted \(e\) -- which is isolated in \(\operatorname{Ends}(S)\), for any \(g\geq 1\), any genus-\(g\) subsurface with two separating boundary components, each of which cuts off a surface containing only the end \(e\), is \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable.
* If \(S\) has at least one nonplanar end -- denoted \(e\) -- which is isolated in \(\operatorname{Ends}_{g}(S)\) but not in \(\operatorname{Ends}(S)\), for any \(g\geq 1\), any genus-\(g\) subsurface with three separating boundary components, two of which cut off a subsurface whose only nonplanar end is \(e\) and the third one cuts off a planar surface, is \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable.
* If no nonplanar end is isolated in \(\operatorname{Ends}_{g}(S)\), \(\operatorname{Ends}_{g}(S)\) is a Cantor set. If it contains an end \(e\) that is not accumulated by planar ends, we choose a genus-\(g\) subsurface with two separating boundary components and no planar ends, so that each complementary component has infinite genus, for \(g\geq 1\). Otherwise, we choose a genus-\(g\) subsurface with three separating boundary components, so that two complementary components have infinite genus and one is a planar subsurface, for \(g\geq 1\).
Case 3: \(S\) has no nonplanar ends. We can then choose any \(n\)-holed sphere whose boundary curves are separating, so that there is at least one end in each complementary component, for \(n\geq 5\).
Fix a finite-type \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable subsurface \(\Sigma\subset S\) and let \(\mathcal{Y}\) be the \(\overline{\operatorname{MCG}_{c}(S)}\)-orbit of \(\Sigma\). As \(\Sigma\) is \(\overline{\operatorname{MCG}_{c}(S)}\)-nondisplaceable, any two surfaces in \(\mathcal{Y}\) have intersecting boundaries -- in particular, subsurface projections \(\pi_{A}\) between surfaces in \(\mathcal{Y}\) are always defined. Moreover, by [1] and [13], there is some constant \(\mu>0\) so that for every \(A,B,C\in\mathcal{Y}\):
* at most one of \(d_{A}(B,C)\), \(d_{B}(A,C)\) and \(d_{C}(A,B)\) is bigger than \(\mu\), and
* \(|\{D\in\mathcal{Y}\mid d_{D}(A,B)>\mu\}|\) is finite.
See [14, Lemma 3.8] for details on checking these in the infinite-type case. We can therefore run the projection complex machinery to deduce (see [1, Proposition 2.7]):
**Proposition 2**.: \(\overline{\operatorname{MCG}_{c}(S)}\) _acts by isometries on a hyperbolic graph \(\mathcal{C}(\mathcal{Y})\) so that for every \(A,B\in\mathcal{Y}\), \(A\neq B\):_
1. \(\mathcal{C}(A)\) _embeds isometrically as a convex set in_ \(\mathcal{C}(\mathcal{Y})\) _and the images of_ \(\mathcal{C}(A)\) _and_ \(\mathcal{C}(B)\) _are disjoint;_
2. _the inclusion_ \[\bigsqcup_{C\in\mathcal{Y}}\mathcal{C}(C)\hookrightarrow\mathcal{C}(\mathcal{Y})\] _is_ \(\overline{\operatorname{MCG}_{c}(S)}\)_-equivariant;_
Figure 1. The subsurfaces of Lemma 1
3. _the nearest point projection to_ \(\mathcal{C}(A)\) _sends_ \(\mathcal{C}(B)\) _to a bounded set, which is at uniformly bounded distance from_ \(\pi_{A}(B)\)_;_
4. _if_ \(g\in\overline{\mathrm{MCG}_{c}(S)}\) _is supported on_ \(A\) _and the restriction is pseudo-Anosov, and_ \(\Gamma\) _is the subgroup of_ \(\overline{\mathrm{MCG}_{c}(S)}\) _given by elements leaving_ \(A\) _invariant and preserving the stable and unstable foliations of_ \(g\)_, then_ \((\overline{\mathrm{MCG}_{c}(S)},\mathcal{C}(\mathcal{Y}),g,\Gamma)\) _satisfies WWPD._
Furthermore, the same proof as [1, Lemma 2.8] yields:
**Lemma 3**.: _Let \(\tau\) be a multitwist about a finite multicurve \(\mu\). Then for every \(A\in\mathcal{Y}\) there is a vertex \(v_{A}\) of \(\mathcal{C}(\mathcal{Y})\) such that the nearest point projection to \(\mathcal{C}(A)\) of the \(\tau\) orbit of \(v_{A}\) is uniformly bounded. In particular, if \(\tau\) is hyperbolic, its virtual quasi-axis can intersect \(\mathcal{C}(A)\) only in a bounded length segment._
Proof.: If \(\mu\cap A=\emptyset\), \(\tau\) fixes any element of \(\mathcal{C}(A)\), so it is elliptic. Otherwise, let \(v_{A}\) be an element of the nonempty set \(A\cap\mu\). Then the nearest point projection of \(\tau^{n}(v_{A})\) to \(\mathcal{C}(A)\) is a uniformly bounded distance from \(\pi_{A}(\tau^{n}(v_{A}))\), which is defined to be \(\pi_{A}(\partial(\tau^{n}(A)))\). But this is at bounded distance from \(\tau^{n}(\mu)\cap A=\mu\cap A\), so the projection of \(\tau^{n}(v_{A})\) is at uniformly bounded distance from \(A\cap\mu\) for every \(n\). This proves the first statement of the lemma. The second statement follows as in the proof of [1, Lemma 2.8].
As a consequence, we can apply [1, Proposition 3.1] to deduce:
**Proposition 4**.: _Let \(\Sigma\) be a \(\overline{\mathrm{MCG}_{c}(S)}\)-nondisplaceable subsurface of finite type and \(f\) a mapping class that restricts to a pure chiral pseudo-Anosov mapping class of \(\Sigma\) of sufficiently large translation length and to the identity on the complement. Then there is a homogeneous quasimorphism \(\varphi:\overline{\mathrm{MCG}_{c}(S)}\to\mathbb{R}\) of defect \(\Delta\) such that \(|\varphi(f^{n})|\to\infty\) and \(\varphi(\tau)\leq\Delta\) for every multitwist \(\tau\). Furthermore, the defect depends only on the topological type of \(\Sigma\)._
Proof.: The only thing we need to check is that \(\varphi(\tau)\leq\Delta\) for every multitwist \(\tau\). But a multitwist \(\tau\) associated to an integral weighted multicurve \(\mu=\sum_{i\in I}n_{i}\alpha_{i}\) can be written as a product of two multitwists, \(\tau_{1}\) and \(\tau_{2}\), where \(\tau_{1}\) is associated to the integral weighted multicurve
\[\mu_{1}=\sum_{i:\alpha_{i}\cap\Sigma\neq\emptyset}n_{i}\alpha_{i}\]
and \(\tau_{2}\) to the integral weighted multicurve
\[\mu_{2}=\sum_{i:\alpha_{i}\cap\Sigma=\emptyset}n_{i}\alpha_{i}.\]
Then \(\tau_{2}\) acts elliptically on \(\mathcal{C}(\mathcal{Y})\), for \(\mathcal{Y}=\overline{\mathrm{MCG}_{c}(S)}\cdot\Sigma\), so \(\varphi(\tau_{2})=0\). By Lemma 3, if \(\tau_{1}\) does not act elliptically on \(\mathcal{C}(\mathcal{Y})\), its virtual quasi-axis has small projections, so again \(\varphi(\tau_{1})=0\), provided the translation length of \(f\) is larger than this small projection bound. As a consequence \(\varphi(\tau)\leq\Delta\).
Proof of Theorem A.: Suppose first that \(S\) is not the Loch Ness monster. By Lemma 1, we can find pairwise disjoint and nonaccumulating finite-type subsurfaces \(\Sigma_{n}\), all
pairwise homeomorphic and \(\overline{\mathrm{MCG}_{c}(S)}\)-nondisplaceable. Fix \(\Sigma\) a surface homeomorphic to the \(\Sigma_{n}\) and \(\theta_{n}:\Sigma\to\Sigma_{n}\) a homeomorphism. Choose a pure chiral pseudo-Anosov mapping class \(f\) of \(\Sigma\), and let \(F_{n}\) be the mapping class of \(S\) with support on \(\Sigma_{n}\) and so that \(F_{n}|_{\Sigma_{n}}=\theta_{n}^{-1}\circ f\circ\theta_{n}\). Let \(F\) be the product of the \(F_{n}^{n}\). Then for any \(n\), by Proposition 4 (after potentially passing to a power in order to increase the translation length), we can find a homogeneous quasimorphism \(\varphi_{n}:\overline{\mathrm{MCG}_{c}(S)}\to\mathbb{R}\) with defect \(C\) (independent of \(n\)) that is unbounded on powers of \(F_{n}\) and zero on all finite multitwists or elements acting elliptically on \(\mathcal{C}(\overline{\mathrm{MCG}_{c}(S)}\cdot\Sigma_{n})\). Moreover, \(\varphi_{n}(F_{n})=\varphi_{m}(F_{n})\) for every \(n,m\).
Suppose that \(F\) is a product of \(k\) multitwists \(\tau_{1},\dots,\tau_{k}\). By the assumptions on the quasimorphisms, \(|\varphi_{n}(F)|\to\infty\). On the other hand, for any \(n\), \(\varphi_{n}(F)=\varphi_{n}(\tau_{k}\circ\dots\circ\tau_{1})\leq 2kC\), which gives a contradiction.
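For concreteness, the bound \(\varphi_{n}(F)\leq 2kC\) used in the last step follows from the defect property of homogeneous quasimorphisms together with Proposition 4; a short sketch of the count reads:

```latex
% Sketch of the estimate \varphi_n(\tau_k \circ \dots \circ \tau_1) \le 2kC.
% A quasimorphism of defect C satisfies |\varphi_n(ab)-\varphi_n(a)-\varphi_n(b)| \le C,
% so induction on the number of factors gives
\begin{align*}
  \varphi_n(\tau_k \circ \dots \circ \tau_1)
    &\le \sum_{i=1}^{k}\varphi_n(\tau_i) + (k-1)\,C\\
    &\le kC + (k-1)\,C \;\le\; 2kC,
\end{align*}
% where \varphi_n(\tau_i) \le C holds for every multitwist \tau_i by Proposition 4.
```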
Suppose now that \(S\) is the Loch Ness monster and fix a point \(x\in S\). By [11], \(\overline{\mathrm{MCG}_{c}(S)}=\mathrm{MCG}(S)\) and \(\overline{\mathrm{MCG}_{c}(S\smallsetminus\{x\})}=\mathrm{MCG}(S\smallsetminus\{x\})\). By the Birman exact sequence [14, Appendix], since the kernel of the surjection \(\mathrm{MCG}(S\smallsetminus\{x\})\to\mathrm{MCG}(S)\) is generated by twists, if \(\mathrm{MCG}(S)\) is generated by multitwists, so is the mapping class group of the once-punctured Loch Ness monster, a contradiction.
|
2307.13990 | Inter-orbital Cooper pairing at finite energies in Rashba surface states | Multi-band effects in hybrid structures provide a rich playground for
unconventional superconductivity. We combine two complementary approaches based
on density-functional theory (DFT) and effective low-energy model theory in
order to investigate the proximity effect in a Rashba surface state in contact
with an $s$-wave superconductor. We discuss these synergistic approaches and
combine the effective model and DFT analysis at the example of a Au/Al
heterostructure. This allows us to predict finite-energy superconducting
pairing due to the interplay of the Rashba surface state of Au, and
hybridization with the electronic structure of superconducting Al. We
investigate the nature of the induced superconducting pairing and quantify its
mixed singlet-triplet character. Our findings demonstrate general recipes to
explore real material systems that exhibit inter-orbital pairing away from the
Fermi energy. | Philipp Rüßmann, Masoud Bahari, Stefan Blügel, Björn Trauzettel | 2023-07-26T06:55:19Z | 2023-07-26T06:55:19Z | http://arxiv.org/abs/2307.13990v1 | # Inter-orbital Cooper pairing at finite energies in Rashba surface states
###### Abstract
Multi-band effects in hybrid structures provide a rich playground for unconventional superconductivity. We combine two complementary approaches based on density-functional theory (DFT) and effective low-energy model theory in order to investigate the proximity effect in a Rashba surface state in contact with an \(s\)-wave superconductor. We discuss these synergistic approaches and combine the effective model and DFT analysis using the example of a Au/Al heterostructure. This allows us to predict finite-energy superconducting pairing due to the interplay of the Rashba surface state of Au, and hybridization with the electronic structure of superconducting Al. We investigate the nature of the induced superconducting pairing and quantify its mixed singlet-triplet character. Our findings demonstrate general recipes to explore real material systems that exhibit inter-orbital pairing away from the Fermi energy.
## I Introduction
Materials that exhibit strong spin orbit coupling (SOC) build the foundation for a plethora of physical phenomena [1; 2] with applications ranging from non-collinear topological magnetic textures (e.g. skyrmions) [3] over spinorbitronics [1] or topological insulators [4] to quantum information processing [5; 6; 7; 8]. Combining different materials in heterostructures not only gives rise to breaking of symmetries, which is essential to Rashba SOC [9], but it also allows us to tailor proximity effects, where the emergent physics of the heterostructure as a whole is richer than the sum of its constituents. In the past, this has attracted a lot of interest in the context of increasing SOC in graphene [10; 11; 12]. Combining a strong-SOC material with a superconductor is, moreover, of particular use to realize topological superconductivity, that can host Majorana zero modes (MZMs). In turn, MZMs are building blocks of topological qubits [13].
In this work, we study the inter-orbital physics inherent to heterostructures consisting of superconductors and Rashba materials. In a novel way, we combine theoretical modelling of two complementary approaches that have their roots in rather disjoint communities focusing on either microscopic or mesoscopic physics. We combine the predictive power of material-specific DFT simulations with the physical insights of an analytically solvable low-energy model. The Bogoliubov-de Gennes (BdG) formalism [14; 15] is the basis for both models, in particular, the DFT-based description of the superconducting state, commonly referred to as Kohn-Sham Bogoliubov-de Gennes (KS-BdG) approach [16; 17; 18; 19; 20]. While DFT naturally accounts for multi-band effects, the effective low-energy model with a simpler treatment of only a few bands allows us to identify the symmetry of the superconducting pairing. Crystal symmetries have profound effects. For example, they may or may not cause wavefunctions to overlap, which is visible in DFT calculations. A group-theoretic analysis allows us to infer possible (unconventional) pairing channels from crystal symmetries [21]. However, group theory alone does not tell us which of the possible pairing channels really matters in a given material. Hence, only the combination of both approaches (DFT and group theory) is able to predict the emergence of experimentally relevant (unconventional) pairing channels in the laboratory.
Figure 1: (a) Localization of the electron density (arb. units) around the Fermi energy throughout the Al/Au heterostructure. The background shows a cut through \(x=0\). (b) Illustration of three kinds of Cooper pair tunneling and the formation of different singlet/triplet components due to the Rashba surface state. Cooper pairs formed by electrons originating from different orbitals are denoted by different colors.
Rashba SOC is intimately related to orbital mixing, often involving \(p\) electrons [2]. Evidence for strong Rashba-SOC is found in a variety of materials ranging from heavy metal surfaces like Au or Ir and surface alloys (e.g. \(\sqrt{3}\times\sqrt{3}\) Bi/Ag) [22; 23; 24], over semiconductors like InSb [25], to topological insulators (e.g. Bi\({}_{2}\)Se\({}_{3}\)) [26]. We investigate the combination of such metals in hybrid structures with common superconductors, where multi-band effects are essential. In general, multi-band effects have crucial implications. They are, for instance, relevant for transport across superconductor-semiconductor interfaces in presence of Fermi surface mismatch [27], and play a major role in the superconducting diode effect [28; 29; 30].
As a prime example of the multi-band physics of a proximitized Rashba state, we identify the interface between aluminium (Al) and gold (Au). This combination allows us to study the proximity effect with Rashba surface states. On the one hand, Al is a well-known and widely used \(s\)-wave superconductor whose valence band electrons are of \(s-p\) orbital character. On the other hand, Au is a simple heavy metal where effects of strong SOC are particularly pronounced. In fact, as a consequence of strong SOC, the (111) surface of Au hosts a set of two spin-momentum-locked Rashba surface states [31; 32; 33; 34; 35; 22]. Both Al and Au grow in the face-centered cubic (fcc) structure and their lattice constants vary only marginally [36; 37]. Hence, epitaxial growth of this heterostructure is feasible. It is ideally suited to gain insight into (i) hybridization of the electronic structure of Al- and Au-derived bands at the interface, (ii) proximity effect of the SOC from Au into the superconductor Al, (iii) interplay of the superconducting proximity effect and SOC in this multi-band system, and (iv) mixed singlet-triplet nature of induced superconducting pairing. The hybridized electronic structure in the Al/Au heterostructure and the emerging superconducting pairing channels due to multi-band effects are depicted in Fig. 1.
This article is structured as follows. In Sec. II, the normal state electronic structure of the Au/Al heterostructure is discussed with DFT and low-energy model approaches. In Sec. III, the DFT and model access to superconducting heterostructures are presented with emphasis on complementary insights. This modelling allows us to study the proximity effects of SOC and superconductivity in multi-band systems at the example of Al/Au interfaces. We conclude in Sec. IV, where we also comment on the feasibility of experimental detection of our predictions.
## II Normal state spectrum
The DFT and model-based approaches described in this article are complementary and distinct in their methodologies. The DFT-based numerical calculations provide an _ab-initio_ approach to the description of the electronic structure of the normal state; their scope encompasses _all_ electronic degrees of freedom, resulting in a precise and extensive representation applicable to a broad range of materials merely from knowledge of the crystal structure. Consequently, the band structure generated by this method can be complex, comprising several bands with diverse orbital and spin character.
The effective low-energy model aims to simplify the complexity of the electronic structure by describing only a few bands, particularly those close to the \(\Gamma\)-point and the Fermi level. The model-based approach has the distinct advantage of deriving analytical expressions that can be applied to a wide range of material classes. Additionally, the model enables the analysis and inclusion of certain symmetries. For instance, only odd terms in \(\mathbf{k}\) might appear in certain parts of the model Hamiltonian. To create a model that applies to a real material, it is, however, necessary to determine model parameters by fitting to experimental or DFT data.
### Density functional theory results
Our DFT calculations for heterostructures, consisting of thin Al and Au films, are summarized in Figs. 1 and 2(a,b). Both Al and Au have a face-centered cubic (fcc) crystal structure with lattice constants of 4.05 Å and 4.08 Å, respectively [36; 37]. We investigate an ideal interface in the close-packed (111) surface of the fcc lattice. To model the heterostructure, we use a unit cell that consists of 6 layers of Al and 6 layers of Au with the average experimental lattice constant of Al and Au, differing only by about 0.4% from their respective bulk lattice constants. For our DFT calculations, we employ the full-potential relativistic Korringa-Kohn-Rostoker Green function method, as implemented in the JuKKR code [38]. This allows us to include the effect of superconductivity on the footing of the Bogoliubov-de Gennes formalism [39]. Computational details are provided in App. A.
The electronic structure of Au below the Fermi level is dominated by the fully occupied shell of \(d\)-electrons around \(-2\) to \(-8\) eV (see App. B for the corresponding DOS). In thin-film heterostructures (called "slabs"), the electrons are confined inside the slab, leading to finite-size quantization and the appearance of two-dimensional quantum-well states manifested as a series of discrete bands in the region where the bulk electronic structure is projected into the surface Brillouin zone. The presence of surfaces and interfaces, and the possible appearance of broken bonds, often leads to additional surface states or surface resonances in the electronic structure. For the Au(111) surface, Rashba surface states appear in surface projection of the bulk \(L\)-gap around the \(\Gamma\) point of the
surface Brillouin zone. They are of \(s\)-\(p_{z}\) orbital character [22; 32].
The region around \(\Gamma\), highlighted by the blue box in Fig. 2(a), is the focus of our study. It is enlarged in Fig. 2(b). The in-plane component of the spin-polarization (\(s_{y}\)) perpendicular to the direction of the momentum (\(k_{x}\)) is shown by the color coding of the bands. Note that, due to crystal symmetries, \(s_{x}\) is exactly zero in the plane through \(k_{y}=0\), and \(s_{z}\) is negligibly small. From the full band structure information based on DFT, we select a regime of interest for the analytical effective low-energy model. We restrict our analysis to the four states labeled 1-4, which (at small \(|\mathbf{k}|\) close to \(\Gamma\)) are derived from the Rashba surface state of Au (states 1,2) or from Al (states 3,4), respectively. The Al states (3,4) have a quadratic dispersion and show much weaker spin-splitting. Importantly, our study reveals the existence of only a single pair of Au Rashba surface states localized at the interface of Au and vacuum. Notably, no second pair of states arises from the interface of Al and Au. This can be deduced from the localization of these states depicted in App. B. The real-space distribution of the charge density at the Fermi energy is shown in Fig. 1a. We conclude that the scattering potential at the interface is weak enough to prevent the formation of a second state at the Al/Au interface.
Aluminium is a light metal with negligible intrinsic SOC. The small SOC-induced spin-splitting seen for states 3,4 is merely a result of a proximity-induced SOC from Au to Al, hinting at sizable hybridization of the electronic structure of Al and Au. In Sec. II.2, we discuss in detail that, at higher momenta, the parabolas of the Al-derived states and the Rashba surface states intersect and hybridize, resulting in more delocalized states throughout the entire Al-Au heterostructure. This hybridization can be attributed to the compatible orbital character of the Al and Au bands, which both possess \(s\)-\(p_{z}\) like orbital character. Ultimately, this hybridization leads to the proximity effect of the spin-orbit coupling (SOC) observed in the Al quantum well states.
### Effective low-energy model
Complementary to our DFT results, we develop an effective four-band model Hamiltonian to evaluate the spectral properties of the heterostructure in an analytical manner. Guided by the insights from our DFT calculation, we construct a model for the proximitized Rashba surface state. We note a hybridization of spin-split Au surface bands and the doubly degenerate Al band near the Fermi energy. Thus, we propose the normal state model Hamiltonian to be
\[H_{N}\!=\!\!\sum_{\mathbf{k}}\!\left(c_{\mathbf{k},\mathrm{Al}}^{\dagger},c_{ \mathbf{k},\mathrm{Au}}^{\dagger}\right)\!\!\!\begin{pmatrix}\hat{h}_{ \mathrm{Al}}(\mathbf{k})&F_{0}\hat{\sigma}_{0}\\ F_{0}\hat{\sigma}_{0}&\hat{h}_{\mathrm{Au}}(\mathbf{k})\end{pmatrix}\!\! \left(\begin{array}{c}c_{\mathbf{k},\mathrm{Al}}\\ c_{\mathbf{k},\mathrm{Au}}\end{array}\right)\!\!, \tag{1}\]
where the electron annihilation operator is denoted as \(\alpha_{\mathbf{k},\nu}=(c_{\mathbf{k},\nu,s},c_{\mathbf{k},\nu,-s})^{T}\) labeled by the 2D momentum vector \(\mathbf{k}=(k_{x},k_{y})\) with orbital (\(\nu\in\{\mathrm{Al},\mathrm{Au}\}\)) and spin (\(s\in\{\uparrow,\downarrow\}\)) degrees of freedom. \(F_{0}\) signifies the hybridization strength between Al and Au bands. Furthermore, \(\hat{h}_{\mathrm{Al(Au)}}(\mathbf{k})\) denotes the \(2\times 2\) sector for the Al (Au) segment given by
\[\hat{h}_{\mathrm{Al}}(\mathbf{k}) =(\alpha_{\mathrm{Al}}k^{2}-\mu_{\mathrm{Al}})\hat{\sigma}_{0}, \tag{2}\] \[\hat{h}_{\mathrm{Au}}(\mathbf{k}) =(\alpha_{\mathrm{Au}}k^{2}-\mu_{\mathrm{Au}})\hat{\sigma}_{0}+ \lambda(\hat{\sigma}_{x}k_{y}-k_{x}\hat{\sigma}_{y})\] \[+g(\hat{\sigma}_{x}(k_{y}^{3}+k_{y}k_{x}^{2})-(k_{x}^{3}+k_{x}k_{y} ^{2})\hat{\sigma}_{y}), \tag{3}\]
where \(k\equiv|\mathbf{k}|=\sqrt{k_{x}^{2}+k_{y}^{2}}\); \(\alpha_{\mathrm{Al(Au)}}\) and \(\mu_{\mathrm{Al(Au)}}\) characterize mass term and chemical potential for Al (Au) bands, respectively. First (third) order spin-orbit coupling in the Au sector is parametrized by \(\lambda\) (\(g\)) leading to broken inversion symmetry, i.e., \(\hat{h}_{\mathrm{Au}}(-\mathbf{k})\neq\hat{h}_{\mathrm{Au}}(\mathbf{k})\). It is worth noting that even though the band spin-splitting of the Rashba surface state is isotropic in Au [22], it is
Figure 2: (a) DFT band structure for the Al/Au hybrid structure consisting of 6 layers of each element. Colorbar shows the localization of the states. (b) Enlarged view of spectrum close to the Fermi energy denoted by the blue rectangle in panel (a). Colorbar shows the spin polarization \(\langle s_{y}\rangle\) (arb. units). (c) [(d)] Excitation spectrum of the normal state obtained by low-energy model Hamiltonian, given in Eq. (1), close to the Fermi energy in absence [presence] of band hybridization and including third-order Rashba spin-orbit coupling, i.e., \(F_{0}=g=0\) [\(F_{0}=0.2\) and \(g=-8.45\)]. Other model parameters are given in Tab. 1.
necessary to consider higher order polynomials for the Rashba SOC in the heterostructure to match the dispersion calculated from first-principles. We attribute this observation to the reduced \(C_{3v}\) point group symmetry of the interface built into the DFT model via the chosen crystal structure. We obtain the third order polynomial presented in the last term of Eq. (3) by taking the direct product of the irreducible representations of \(C_{3v}\)[40]. Hence, this normal-state model is constructed by intuition employing the \(\mathbf{k}\cdot\mathbf{p}\) approach. This is evident in our formulation of the Hamiltonian, where we combine a Rashba model up to third order describing the Au layer with a quadratic dispersion for the Al layer and a band hybridization term \(F_{0}\).
For simplicity, we focus on the 1D Brillouin zone, i.e., \(\mathbf{k}=(k_{x},0)\), since our model is rotationally symmetric. Therefore, the excitation spectra of the hybrid structure become
\[E_{\mathbf{k},s}^{s^{\prime}}=\frac{1}{2}\left(\mathcal{E}_{\mathrm{Al}}+\mathcal{E}_{\mathrm{Au}}^{s}+s^{\prime}\sqrt{(\mathcal{E}_{\mathrm{Al}}-\mathcal{E}_{\mathrm{Au}}^{s})^{2}+4F_{0}^{2}}\right), \tag{4}\]
where \(s,s^{\prime}\in\{+,-\}\). The quadratic band in the Al segment is denoted as \(\mathcal{E}_{\mathrm{Al}}=\alpha_{\mathrm{Al}}k^{2}-\mu_{\mathrm{Al}}\), and the spin-split band in the Au segment as \(\mathcal{E}_{\mathrm{Au}}^{\pm}=\alpha_{\mathrm{Au}}k^{2}\pm(k\lambda+gk^{3})- \mu_{\mathrm{Au}}\) [Fig. 2(c)]. Due to hybridization, an effective spin-orbit coupling is induced in the doubly degenerate Al bands, ultimately leading to the lifting of their degeneracy. After fitting to the DFT data, the analytical spectra given by Eq. (4) are in excellent agreement with the DFT calculation, compare Figs. 2(b) and (d).
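To make Eq. (4) concrete, the following minimal Python sketch evaluates the four hybridized bands along the \(k_{x}\) cut. The numerical parameter values are illustrative placeholders rather than the fitted values of Table 1 (which are not reproduced in this section), so only the qualitative shape of the spectrum should be compared with Fig. 2(d).

```python
import numpy as np

# Illustrative model parameters (placeholders, NOT the fitted values of Table 1)
alpha_Al, mu_Al = 10.0, 0.40   # quadratic Al band: E_Al = alpha_Al*k^2 - mu_Al
alpha_Au, mu_Au = 9.0, 0.45    # Au band
lam, g = 1.5, -8.45            # first- and third-order Rashba SOC
F0 = 0.2                       # Al-Au hybridization strength

def bands(kx):
    """Four hybridized bands of Eq. (4) along the cut k = (kx, 0)."""
    k = abs(kx)
    E_Al = alpha_Al * k**2 - mu_Al
    out = []
    for s in (+1, -1):                      # Rashba (pseudo-)spin index of the Au band
        E_Au = alpha_Au * k**2 + s * (lam * k + g * k**3) - mu_Au
        root = np.sqrt((E_Al - E_Au)**2 + 4 * F0**2)
        for sp in (+1, -1):                 # upper/lower hybridized branch
            out.append(0.5 * (E_Al + E_Au + sp * root))
    return np.array(out)

kx = np.linspace(-0.4, 0.4, 801)            # momentum grid (arbitrary units)
spectrum = np.array([bands(k) for k in kx]) # shape (801, 4), cf. Fig. 2(d)
print(spectrum.shape, float(spectrum.min()), float(spectrum.max()))
```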
## III Superconducting excitation spectrum
In general, a microscopic theoretical description of the superconducting excitations can be achieved on the basis of the Bogoliubov-de Gennes (BdG) formalism [14; 15], a generalization of the BCS theory of superconductivity [41]. The BdG formalism is based on the Hamiltonian
\[\hat{\mathcal{H}}_{\mathrm{BdG}}=\left(\begin{array}{cc}\hat{H}_{0}&\hat{ \Delta}\\ [\hat{\Delta}]^{\dagger}&-\hat{H}_{0}^{*}\end{array}\right), \tag{5}\]
where \(\hat{H}_{0}\) denotes the normal state Hamiltonian and \(\hat{\Delta}\) the superconducting pairing between particle and hole blocks. The BdG method is also key to the extension of DFT for superconductors [16; 17; 18; 19; 20], commonly referred to as Kohn-Sham Bogoliubov-de Gennes (KS-BdG) formalism. One major difference between the DFT and model formulations is that Eq. (5) is formulated in real space (DFT) or in momentum space (model), the latter provided that translational invariance is given.
### Kohn-Sham Bogoliubov-de Gennes formalism
The central task in the superconducting DFT approach (sketched in Fig. 3) is to solve the Kohn-Sham BdG (KS-BdG) equation [16; 18; 42]
\[H_{\mathrm{BdG}}^{\mathrm{KS}}(\mathbf{x})\Psi_{\nu}^{\mathrm{KS}}(\mathbf{x })=\varepsilon_{\nu}\Psi_{\nu}^{\mathrm{KS}}(\mathbf{x}), \tag{6}\]
which is a reformulation of the Schrodinger equation (or Dirac equation if relativistic effects are taken into account) in terms of an effective single particle picture. The effective single-particle wavefunctions in Nambu space \(\Psi_{\nu}^{\mathrm{KS}}(\mathbf{x})=(u_{\nu}(\mathbf{x}),v_{\nu}(\mathbf{x}) )^{T}\) describe, respectively, the particle and hole components at excitation energy \(\varepsilon_{\nu}\) (\(\nu\) is a band index labelling the electronic degrees of freedom). The KS-BdG Hamiltonian can be written in matrix form as [18; 39]
\[H_{\mathrm{BdG}}^{\mathrm{KS}}(\mathbf{x})=\left(\begin{array}{cc}H_{0}^{\mathrm{KS}}(\mathbf{x})-E_{\mathrm{F}}&\Delta_{\mathrm{eff}}(\mathbf{x})\\ \Delta_{\mathrm{eff}}^{*}(\mathbf{x})&E_{\mathrm{F}}-\left(H_{0}^{\mathrm{KS}}(\mathbf{x})\right)^{*}\end{array}\right), \tag{7}\]
where \(E_{\mathrm{F}}\) is the Fermi energy. The normal state Hamiltonian
\[H_{0}^{\mathrm{KS}}(\mathbf{x})=-\nabla^{2}+V_{\mathrm{eff}}(\mathbf{x}), \tag{8}\]
and the effective superconducting pairing potential \(\Delta_{\mathrm{eff}}\) appear in the Kohn-Sham formulation (Rydberg atomic units are used where \(\hbar=1\)). For \(\Delta_{\mathrm{eff}}=0\), the KS-BdG equation reduces to solving the conventional Kohn-Sham equation of DFT that describes the electronic structure of the normal state.
The effective single-particle potentials in Eq. (7) are functionals of the charge density \(\rho(\mathbf{x})\) and the anomalous density \(\chi(\mathbf{x})\) (the superconducting order parameter)[42; 16],
\[V_{\mathrm{eff}}(\mathbf{x}) = V_{\mathrm{ext}}(\mathbf{x})+2\int\frac{\rho(\mathbf{x}^{\prime} )}{|\mathbf{x}-\mathbf{x}^{\prime}|}\mathrm{d}\mathbf{x}^{\prime}+\frac{ \delta E_{\mathrm{xc}}[\rho,\chi]}{\delta\rho(\mathbf{x})}, \tag{9}\] \[\Delta_{\mathrm{eff}}(\mathbf{x}) = \frac{\delta E_{\mathrm{xc}}[\rho,\chi]}{\delta\chi(\mathbf{x})}, \tag{10}\]
where functional derivatives of the exchange correlation functional \(E_{\mathrm{xc}}\) appear requiring a self-consistent solution of the non-linear KS-BdG equations. The exchange correlation functional can be expressed as [42]
\[E_{\mathrm{xc}}[\rho,\chi]=E_{\mathrm{xc}}^{0}[\rho]-\int\chi^{*}(\mathbf{x}) \,\lambda\,\chi(\mathbf{x})\,\mathrm{d}\mathbf{x}, \tag{11}\]
where the conventional exchange-correlation functional \(E_{\rm xc}^{0}\) is the standard DFT term (in the normal state).
It is important to note that the above formulation of the KS-BdG equations assumes a simplified form of the superconducting pairing kernel [42] (i.e. the second term in Eq. (11)), which reduces \(\lambda\) to simple constants within the cells surrounding the atoms; these constants are, however, allowed to take different values throughout the computational unit cell. This assumes that the pairing interaction is local in space. This approximation was successfully used to study conventional \(s\)-wave superconductors [43; 44; 18; 45], heterostructures of \(s\)-wave superconductors and non-superconductors [46; 47; 45], or impurities embedded into superconductors [48; 49]. Hence, the effective pairing interaction takes the simple form [42]
\[\Delta_{\rm eff}({\bf x})=\lambda_{i}\chi({\bf x}) \tag{12}\]
where \(\lambda_{i}\) is a set of effective coupling constants describing the intrinsic superconducting coupling that is allowed to depend on the position \(i\) in the unit cell.
Finally, the charge density \(\rho\) and the anomalous density \(\chi\) are calculated from the particle (\(u_{\nu}\)) and hole components (\(v_{\nu}\)) of the wavefunction
\[\rho({\bf x}) = 2\sum_{\nu}f(\varepsilon_{\nu})|u_{\nu}({\bf x})|^{2}+[1-f( \varepsilon_{\nu})]|v_{\nu}({\bf x})|^{2}, \tag{13}\] \[\chi({\bf x}) = \sum_{\nu}[1-2f(\varepsilon_{\nu})]u_{\nu}({\bf x})v_{\nu}^{*}( {\bf x}), \tag{14}\]
where \(f(\varepsilon)\) is the Fermi-Dirac distribution function and the summation over \(\nu\) includes the full spectrum of the KS-BdG Hamiltonian.
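To illustrate the structure of the self-consistency cycle of Eqs. (12)-(14), the following toy sketch iterates the gap equation for a simple one-dimensional tight-binding chain with a site-resolved coupling \(\lambda_{i}\). All parameters (hopping, chain length, coupling profile) are hypothetical, and the sketch is in no way the KS-BdG implementation of the JuKKR code; it only mirrors the logical steps sketched in Fig. 3.

```python
import numpy as np

# Toy 1D chain illustrating the self-consistency of Eqs. (12)-(14):
# Delta_i = lambda_i * chi_i, with chi_i built from the BdG eigenstates.
# All parameters are hypothetical and unrelated to the Al/Au calculation.
n_sites, t, mu = 40, 1.0, 0.5
lam = np.zeros(n_sites)
lam[:20] = 1.2                               # intrinsic coupling on half of the chain only

H0 = -t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1)) - mu * np.eye(n_sites)

delta = 0.1 * np.ones(n_sites)               # initial guess for the local pairing potential
for _ in range(200):
    # KS-BdG matrix, Eqs. (5)/(7): [[H0, diag(Delta)], [diag(Delta)*, -H0*]]
    H_bdg = np.block([[H0, np.diag(delta)],
                      [np.diag(delta).conj(), -H0.conj()]])
    eps, psi = np.linalg.eigh(H_bdg)
    u, v = psi[:n_sites, :], psi[n_sites:, :]
    occ = (eps < 0).astype(float)            # Fermi-Dirac function at T = 0
    # Anomalous density, Eq. (14), summed over the full BdG spectrum
    chi = np.einsum('n,in,in->i', 1.0 - 2.0 * occ, u, v.conj()).real
    new_delta = lam * chi                    # effective pairing potential, Eq. (12)
    if np.max(np.abs(new_delta - delta)) < 1e-8:
        break
    delta = 0.5 * delta + 0.5 * new_delta    # simple linear mixing

print("converged local gap profile:", np.round(delta, 4))
```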
### DFT results for superconducting Al/Au
For the superconducting state, we assume that only Al has an intrinsic superconducting coupling and set the layer-dependent coupling constant in the KS-BdG calculation to
\[\lambda_{i}=\left\{\begin{array}{ll}\lambda_{\rm Al},&\mbox{if $i\in{\rm Al}$,}\\ 0,&\mbox{else},\end{array}\right. \tag{15}\]
where \(\lambda_{\rm Al}\) is a positive real-valued constant and \(i\) is an index counting the atomic layers in the Al/Au heterostructure. While the value of \(\lambda_{\rm Al}\) can be regarded as a fitting parameter in this approach, we stress that only an integral quantity, leading to the overall superconducting gap size in Al, is fitted. Other spectral properties like avoided crossings and proximity effects are in fact predictions of this theory. The results of our KS-BdG simulations and analytical model for the Al/Au heterostructure are summarized in Fig. 4. For better visibility, we show results for scaled-up values of the superconducting pairing. The general trends we discuss here are, however, transferable from large to small pairing strengths with only quantitative changes. We find superconducting gaps and avoided crossings at low and finite excitation energies, labelled with \(\delta\) in Fig. 4(c). These avoided crossings are rooted in the \(s\)-wave superconductivity induced from the Al segment included in the DFT-based simulations by \(\lambda_{i}\) (the _only_ adjustable parameter in our description of the superconducting state). The hybridization between Al and Au bands enables Cooper pair tunneling from the superconductor into the metal (see Fig. 1b). This results in a superconducting proximity effect in the Rashba surface state of Au. The large spin-splitting of the Rashba surface state allows for the pairing to have triplet character because the superconducting hybridization happens between quasiparticle bands with identical pseudo-spin degree of freedom. This will be further explained in the effective model analysis of Sec. III.3.
The DFT calculations disclose the anisotropy of the pairing gap (see Fig. 4), which is stronger for the Rashba state at smaller momentum with \(\delta_{\rm Au}^{-}\approx 0.51\delta_{\rm Al}^{\pm}\) and decreases to \(\delta_{\rm Au}^{+}\approx 0.38\delta_{\rm Al}^{\pm}\) for the state at larger momentum. Furthermore, we also observe that inter-orbital pairings appear away from the Fermi energy, as indicated
Figure 3: Schematic overview of a KS-BdG simulation that starts from the crystal structure which (in a standard DFT calculation) gives the ground state density \(\rho_{0}\). For the superconducting state, the KS-BdG equations are then solved self-consistently to obtain charge and anomalous densities \(\rho,\chi\) in the superconducting state which determines the superconducting band structure.
by \(\delta^{\pm}_{\rm IOP}\), where the states with dominant Au orbital character and pseudo-spin-up intersect with the hole states with dominant Al orbital character having pseudo-spin-down degrees of freedom. This phenomenon has been referred to as inter-band pairing [50; 51; 52; 53; 54], mirage gap [55], and finite-energy Cooper pairing [56; 57; 58]. However, conclusive experimental evidence supporting it is still elusive. The Al/Au hybrid structure presented here provides a simple system in which such finite-energy pairing can be observed.
Similar to the two pairing gaps \(\delta^{\pm}_{\rm Au}\) in the Rashba surface state, the DFT calculation shows that the inter-orbital pairings \(\delta_{\rm IOP}\) also decrease at larger momentum, i.e., from \(\delta^{-}_{\rm IOP}/\delta_{\rm Al}=0.60\) to \(\delta^{+}_{\rm IOP}/\delta_{\rm Al}=0.44\). Based on these observations, we pose four questions:
1. Is inter-orbital pairing exclusively the result of superconducting order, or do other mechanisms contribute?
2. What determines the magnitude of the finite-energy pairing?
3. What is the magnitude of the induced spin-singlet and triplet components of the effective pairing?
4. What specific symmetries are responsible for protecting certain electron-hole band crossings that occur away from the Fermi energy?
These questions will be answered in the following sections.
### Effective low-energy model for the superconducting heterostructure
Based on an effective low-energy model, we can achieve a deeper understanding of the KS-BdG results. The results of our low-energy model are illustrated in Figs. 4(b) and (d). They are obtained by the model introduced in Sec. II.2. In order to obtain an analytical characterization of the superconducting pairing in the heterostructure, it is necessary to construct a BdG formalism for our minimal model, _cf._ Eq. (1). Assuming that the superconducting pairing arises from the Al layer, we model the single-particle pairing operator as
\[H_{\Delta}=\sum_{\bf k}\left(c^{\dagger}_{{\bf k},{\rm Al}},c^{\dagger}_{{\bf k},{\rm Au}}\right)\left(\begin{array}{cc}\Delta i\hat{\sigma}_{y}&0\\ 0&0\end{array}\right)\left(\begin{array}{c}c^{\dagger}_{-{\bf k},{\rm Al}}\\ c^{\dagger}_{-{\bf k},{\rm Au}}\end{array}\right), \tag{16}\]
where \(\Delta\) denotes the superconducting pairing strength, and the nonvanishing diagonal entry corresponds to \(s\)-wave spin singlet pairing in the Al layer. Since pure Au does not become a superconductor at experimentally relevant temperatures, the pairing strength in the Au layer is set to zero.
It is illuminating to represent the BdG Hamiltonian in the eigenbasis of the normal state, given in Eq. (1), as defined by the \(8\times 8\) matrix in Nambu space
\[\hat{H}_{\rm BdG}=\left(\begin{array}{cccc}\hat{N}^{++}_{\bf k}&0&\hat{\Delta}^{++}_{\bf k}&\hat{\Delta}^{+-}_{\bf k}\\ 0&\hat{N}^{--}_{\bf k}&\hat{\Delta}^{-+}_{\bf k}&\hat{\Delta}^{--}_{\bf k}\\ [\hat{\Delta}^{++}_{\bf k}]^{\dagger}&[\hat{\Delta}^{-+}_{\bf k}]^{\dagger}&-\hat{N}^{++}_{-{\bf k}}&0\\ [\hat{\Delta}^{+-}_{\bf k}]^{\dagger}&[\hat{\Delta}^{--}_{\bf k}]^{\dagger}&0&-\hat{N}^{--}_{-{\bf k}}\end{array}\right), \tag{17}\]
where the diagonal entries are the normal state dispersion relations \(\hat{N}^{\nu\nu}_{\bf k}={\rm diag}(E^{\nu}_{{\bf k},+},E^{\nu}_{{\bf k},-})\) with \(\nu=\pm\). Note that \(E^{+}_{{\bf k},\pm}\) (\(E^{-}_{{\bf k},\pm}\)) refer to the upper (lower) spin-split bands which predominantly exhibit Al (Au) orbital character for small momenta, as can be seen in Fig. 2. Furthermore, the off-diagonal block in \(\hat{H}_{\rm BdG}\) is the pairing matrix projected onto the band basis (_cf._ App. C) as obtained by
\[\hat{\mathcal{V}}^{\dagger}_{\bf k}{\rm diag}(\Delta i\hat{\sigma}_{y},0)\hat{ \mathcal{V}}^{\dagger T}_{-{\bf k}}=\left(\begin{array}{cc}\hat{\Delta}^{++} _{\bf k}&\hat{\Delta}^{+-}_{\bf k}\\ \hat{\Delta}^{-+}_{\bf k}&\hat{\Delta}^{--}_{\bf k}\end{array}\right), \tag{18}\]
where \(\hat{\mathcal{V}}_{\bf k}\) is the matrix of eigenvectors of the normal state Hamiltonian, whose eigenvalues form the diagonal blocks \(\hat{N}^{\nu\nu}_{\bf k}\). \(\hat{\Delta}^{++}_{\bf k}\) (\(\hat{\Delta}^{--}_{\bf k}\)) correspond to the intra-band pairing matrices,
Figure 4: Superconducting band structure of the Al/Au heterostructure obtained by (a) DFT and (b) low-energy model. The red/green and grey bands indicate the particle and hole character of the BdG bands, respectively. The red/green color of the particle bands indicate the localization of the states. Panels (c) and (d) show enlarged view of the region marked by the blue box in (a) where five different superconducting avoided crossings emerge (labeled \(\delta^{\pm}_{\rm Al}\), \(\delta^{\pm}_{\rm Au}\), and \(\delta^{\pm}_{\rm IOP}\)). The absence of avoided crossings marked by black circles in (c) and (d) is due to pseudo-spin-rotational symmetry. For illustration purposes, we show results for scaled-up values of the superconducting pairing. The model parameters for the analytical model are those given in Table 1 and \(\Delta=0.4F_{0}\).
specifically pairing between \(E_{\mathbf{k},+}^{+}\) and \(E_{\mathbf{k},-}^{+}\) (\(E_{\mathbf{k},+}^{-}\) and \(E_{\mathbf{k},-}^{-}\)) with their hole counterparts leading to the superconducting gap for Al, i.e., \(\delta_{\rm Al}\), and the proximity-induced pairing gaps labeled by \((\delta_{\rm Au}^{\pm})\) in Fig. 4. Such matrices are explicitly given by the relation
\[\hat{\varDelta}_{\mathbf{k}}^{\nu\nu}=\frac{i\Delta}{2}\left(\begin{array}{cc }0&1+\nu G_{\mathbf{k}}^{-}\\ -1-\nu G_{\mathbf{k}}^{+}&0\end{array}\right), \tag{19}\]
where \(\nu=+(-)\) and
\[G_{\mathbf{k}}^{\pm}=\frac{\mathcal{E}_{\rm Al}-\mathcal{E}_{\rm Au}^{\pm}}{ \sqrt{[\mathcal{E}_{\rm Al}-\mathcal{E}_{\rm Au}^{\pm}]^{2}+4F_{0}^{2}}}. \tag{20}\]
In Eq. (18), \(\hat{\varDelta}_{\mathbf{k}}^{+-}\) (\(\hat{\varDelta}_{\mathbf{k}}^{-+}\)) indicates the inter-orbital pairing, i.e., pairing between electron bands \(E_{\mathbf{k},+}^{+}\) and \(E_{\mathbf{k},-}^{+}\) with hole bands \(-E_{-\mathbf{k},+}^{-}\) and \(-E_{-\mathbf{k},-}^{-}\). This gives rise to the emergence of finite-energy Cooper pairing resulting in avoided crossings at finite excitation energy (\(\delta_{\rm IOP}^{\pm}\)) in Fig. 4 (c) and (d). The explicit form for the inter-band pairing matrix is given by
\[\hat{\varDelta}_{\mathbf{k}}^{+-}=\Delta F_{0}^{2}\left(\begin{array}{cc}0&\frac{-4i}{\Lambda_{\mathbf{k},1}^{-}\Lambda_{\mathbf{k},2}^{-}}\\ \frac{4i}{\Lambda_{\mathbf{k},1}^{+}\Lambda_{\mathbf{k},2}^{+}}&0\end{array}\right), \tag{21}\]
with
\[\Lambda_{\mathbf{k},l}^{\pm}\!=\!\sqrt{\left(\mathcal{E}_{\rm Al}-\mathcal{E}_ {\rm Au}^{\pm}\right)^{2}\left(1+(-1)^{l}/G_{\mathbf{k}}^{\pm}\right)^{2}+4F_ {0}^{2}}, \tag{22}\]
where \(l\in\{1,2\}\). Importantly, the interplay between band hybridization and superconductivity, manifested by \(\Delta F_{0}^{2}\) in Eq. (21), intrinsically allows for the emergence of finite-energy pairing. Therefore, the inter-orbital pairing is not induced solely by superconducting order but also by band hybridization in the normal state. This is the answer to question (Q.1).
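The band-basis projection of Eq. (18) is easy to verify numerically. The sketch below assembles the normal-state Hamiltonian of Eq. (1), diagonalizes it at \(\pm\mathbf{k}\), and rotates the bare pairing matrix \(\mathrm{diag}(\Delta i\hat{\sigma}_{y},0)\) into the band basis; the off-diagonal \(2\times 2\) blocks of the result correspond to the inter-orbital pairing of Eq. (21), up to the arbitrary phases and ordering of the numerical eigenvectors. Parameter values are again illustrative placeholders rather than the fitted values of Table 1.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

# Illustrative parameters (placeholders for the fitted values of Table 1)
alpha_Al, mu_Al, alpha_Au, mu_Au = 10.0, 0.40, 9.0, 0.45
lam, g, F0, Delta = 1.5, -8.45, 0.2, 0.08

def h_normal(kx, ky=0.0):
    """4x4 normal-state Hamiltonian of Eq. (1) in the (Al, Au) x (spin) basis."""
    k2 = kx**2 + ky**2
    h_al = (alpha_Al * k2 - mu_Al) * s0
    h_au = ((alpha_Au * k2 - mu_Au) * s0
            + lam * (sx * ky - sy * kx)
            + g * (sx * (ky**3 + ky * kx**2) - sy * (kx**3 + kx * ky**2)))
    return np.block([[h_al, F0 * s0], [F0 * s0, h_au]])

def band_basis_pairing(kx):
    """Pairing matrix of Eq. (18): V(k)^dagger diag(Delta i sigma_y, 0) V(-k)^*."""
    _, Vk = np.linalg.eigh(h_normal(kx))    # columns = eigenvectors (ascending energies)
    _, Vmk = np.linalg.eigh(h_normal(-kx))
    D_orb = np.zeros((4, 4), dtype=complex)
    D_orb[:2, :2] = Delta * 1j * sy         # s-wave singlet pairing in the Al block only
    return Vk.conj().T @ D_orb @ Vmk.conj()

D_band = band_basis_pairing(0.25)
print(np.round(D_band, 4))   # off-diagonal 2x2 blocks: inter-orbital pairing, cf. Eq. (21)
```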
### Pairing symmetry analysis
In order to determine the pairing symmetry in the hybrid structure, it is essential to establish an effective formalism that concentrates on either low _or_ finite excitation energies. To this end, it is necessary to derive a \(4\times 4\) matrix formalism from the full \(8\times 8\) BdG Hamiltonian \(\hat{H}_{\rm BdG}\). This can be done by utilizing the downfolding method specified in App. D. The downfolding method yields the effective model that enables us to investigate the superconducting properties within a given set of energy bands. As mentioned above, there are three distinct sets of spin-split bands where pairing occurs. These bands are characterized by \(\nu=\nu^{\prime}=+(-)\), indicating that the pairing takes place at the Fermi energy, where the energy bands possess predominant Al (Au) orbital character. Another set of bands corresponds to the inter-orbital bands, where Al-dominated states intersect with Au-dominated hole states (and vice versa). Thus, the general form for the \(4\times 4\) effective superconducting Hamiltonian becomes
\[\hat{H}_{\mathbf{k},\rm eff}^{+\iota}=\left(\begin{array}{cc}\hat{N}_{ \mathbf{k}}^{++}+\hat{\xi}_{1}&\hat{\varDelta}_{\mathbf{k},\rm eff}^{+\iota}\\ [\hat{\varDelta}_{\mathbf{k},\rm eff}^{+\iota}]^{\dagger}&-\hat{N}_{-\mathbf{k }}^{\iota\iota}+\hat{\xi}_{2}\end{array}\right), \tag{23}\]
where the diagonal entries \(\hat{\xi}_{1(2)}\) are the energy shifts arising from multiband effects given by
\[\hat{\xi}_{1}=\hat{\varDelta}_{\mathbf{k}}^{+\nu}\frac{1}{\omega+\hat{N}_{-\mathbf{k}}^{\nu\nu}}\left[\hat{\varDelta}_{\mathbf{k}}^{+\nu}\right]^{\dagger}, \tag{24}\] \[\hat{\xi}_{2}=\left[\hat{\varDelta}_{\mathbf{k}}^{-\iota}\right]^{\dagger}\frac{1}{\omega-\hat{N}_{\mathbf{k}}^{--}}\hat{\varDelta}_{\mathbf{k}}^{-\iota}, \tag{25}\]
and \(\omega\) is a constant. In addition, the effective pairing matrix in Eq. (23) becomes
\[\hat{\varDelta}_{\mathbf{k},\rm eff}^{+\iota}=\hat{\varDelta}_{\mathbf{k}}^{+\iota}+\hat{\varDelta}_{\mathbf{k}}^{+\nu}\frac{1}{\omega+\hat{N}_{-\mathbf{k}}^{\nu\nu}}\left[\hat{\varDelta}_{\mathbf{k}}^{-\nu}\right]^{\dagger}\frac{1}{\omega-\hat{N}_{\mathbf{k}}^{--}}\hat{\varDelta}_{\mathbf{k}}^{-\iota}. \tag{26}\]
The effective intra-(inter-)orbital superconducting Hamiltonian, i.e., \(\hat{H}_{\mathbf{k},\rm eff}^{++(+-)}\), can be obtained by setting \(\iota=+(-)\) and \(\nu=-(+)\). Note that \(\hat{H}_{\mathbf{k},\rm eff}^{--}\) can also be derived by substituting \((+)\leftrightarrow(-)\), and setting \(\iota=-\) and \(\nu=+\) in Eqs. (23-26). The spectra of the effective superconducting Hamiltonians \(\hat{H}_{\mathbf{k},\rm eff}^{++}\), \(\hat{H}_{\mathbf{k},\rm eff}^{--}\), and \(\hat{H}_{\mathbf{k},\rm eff}^{+-}\) are obtained numerically and depicted in Fig. 5(a-c). Additionally, the magnitudes of the pseudo-spin-singlet and triplet components corresponding to these spectra are illustrated in Fig. 5(d-f).
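The downfolding behind Eqs. (23)-(26) is a standard Löwdin (Schur-complement) elimination of the two unwanted Nambu blocks. A generic sketch of this step, not the explicit expressions of App. D, is:

```python
import numpy as np

def downfold(H, keep, omega=0.0):
    """Loewdin downfolding: eliminate all indices not contained in `keep`.

    Returns H_eff = H_AA + H_AB (omega - H_BB)^(-1) H_BA, which reproduces the
    structure of Eqs. (23)-(26) when `keep` selects one electron and one hole
    block of the 8x8 BdG matrix of Eq. (17).
    """
    keep = np.asarray(keep)
    drop = np.setdiff1d(np.arange(H.shape[0]), keep)
    H_AA = H[np.ix_(keep, keep)]
    H_AB = H[np.ix_(keep, drop)]
    H_BA = H[np.ix_(drop, keep)]
    H_BB = H[np.ix_(drop, drop)]
    G_B = np.linalg.inv(omega * np.eye(len(drop)) - H_BB)  # propagator of the eliminated block
    return H_AA + H_AB @ G_B @ H_BA

# Minimal usage example on a random Hermitian 8x8 "BdG-like" matrix,
# keeping the first electron block (indices 0,1) and the first hole block (4,5).
rng = np.random.default_rng(0)
M = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (M + M.conj().T) / 2
H_eff = downfold(H, keep=[0, 1, 4, 5], omega=0.0)
print(H_eff.shape)           # (4, 4) effective Hamiltonian, cf. Eq. (23)
```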
Importantly, the proximity-induced intra- and inter-orbital pairing states are mixtures of singlet and triplet states due to broken inversion symmetry in the Au layer. Based on our model, decomposing the pairing as \(\hat{\varDelta}_{\mathbf{k},\rm eff}^{+\iota}(i\hat{\sigma}_{y})^{-1}=\varphi_{\mathbf{k}}^{+\iota}\hat{\sigma}_{0}+\mathbf{d}_{\mathbf{k}}^{+\iota}\cdot\hat{\sigma}\), only the \(z\)-component of the \(\mathbf{d}\) vector is present, both at the Fermi energy and at finite excitation energies. According to Eqs. (19) and (21), the pairing matrices are off-diagonal. Therefore, \(\hat{\varDelta}_{\mathbf{k},\rm eff}^{+\iota}\) becomes an off-diagonal matrix reflecting an effective mixed-pairing state with nonvanishing pseudo-spin-singlet \(\varphi_{\mathbf{k}}^{\nu\nu}\) and pseudo-spin-triplet \(d_{\mathbf{k},z}^{\nu\nu}\) character, obtained as
\[\varphi_{\mathbf{k}}^{\nu\nu} =\frac{i\Delta}{4}\left[2+\nu\left(G_{\mathbf{k}}^{-}+G_{\mathbf{ k}}^{+}\right)\right], \tag{27}\] \[d_{\mathbf{k},z}^{\nu\nu} =\frac{i\Delta}{4}\nu\left[G_{\mathbf{k}}^{-}-G_{\mathbf{k}}^{+} \right]. \tag{28}\]
where \(\nu\in\{+,-\}\). Note that we have excluded terms of third order in \(\Delta\) in Eqs. (27) and (28) as they are negligibly small in the weak pairing limit. It is worth mentioning that the property \(G_{-\mathbf{k}}^{\pm}=G_{\mathbf{k}}^{\mp}\) leads to even (odd) parity for the pseudo-spin-singlet (triplet) state, i.e., \(\varphi_{-\mathbf{k}}^{\nu\nu}=\varphi_{\mathbf{k}}^{\nu\nu}\) (\(d_{-\mathbf{k},z}^{\nu\nu}=-d_{\mathbf{k},z}^{\nu\nu}\)). The inter-orbital
pairing components take the form
\[\varphi_{\mathbf{k}}^{+-} =\Delta F_{0}^{2}\left(\frac{-2i}{\Lambda_{\mathbf{k},1}^{-}\Lambda _{\mathbf{k},2}^{-}}-\frac{2i}{\Lambda_{\mathbf{k},1}^{+}\Lambda_{\mathbf{k},2 }^{+}}\right), \tag{29}\] \[d_{\mathbf{k},z}^{+-} =\Delta F_{0}^{2}\left(\frac{-2i}{\Lambda_{\mathbf{k},1}^{-} \Lambda_{\mathbf{k},2}^{-}}+\frac{2i}{\Lambda_{\mathbf{k},1}^{+}\Lambda_{ \mathbf{k},2}^{+}}\right). \tag{30}\]
Overall, we observe that the pseudo-spin-singlet component is consistently larger in magnitude than the triplet component, see Fig. 5(d-f). Note that the pairing state becomes purely pseudo-spin-singlet in the absence of either band hybridization or Rashba spin-orbit coupling, i.e., when \(F_{0}=0\) or \(\lambda=g=0\). Therefore, the pseudo-spin-triplet component originates from the interplay between Rashba surface states and band hybridization.
The size of the avoided crossing in the spectrum of the effective pairing Hamiltonian, as expressed in Eq. (23), is given by
\[\delta^{\nu\nu^{\prime}}_{\mathbf{k},\pm}=\sqrt{|\varphi_{\mathbf{k}}^{\nu\nu^{\prime}}|^{2}+|d_{\mathbf{k},z}^{\nu\nu^{\prime}}|^{2}\pm|(\varphi_{\mathbf{k}}^{\nu\nu^{\prime}})^{*}d_{\mathbf{k},z}^{\nu\nu^{\prime}}+\varphi_{\mathbf{k}}^{\nu\nu^{\prime}}(d_{\mathbf{k},z}^{\nu\nu^{\prime}})^{*}|}, \tag{31}\]
where the third term effectively accounts for the anisotropy observed in the magnitude of the avoided crossing, as initially demonstrated in the KS-BdG simulation in Figs. 4 and 5. This point addresses question (Q.2). Note that the Fermi surface of the hybrid structure consists of four circular rings. The inner rings are primarily composed of spin-split Al states, while they are surrounded by predominantly spin-split Au states. The superconducting hybridization happens at four Fermi momenta, i.e.,
\[|\mathbf{k}^{F}|\in\{k_{1}^{\text{Al}},k_{2}^{\text{Al}},k_{1}^{\text{Au}},k_{2}^{\text{Au}}\} \tag{32}\] \[=\{0.124,0.141,0.278,0.308\}\,\text{\AA}^{-1}. \tag{33}\]
At the above momenta, we have defined the following quantities
\[\delta_{\text{Al}}\!\equiv\!\delta_{k_{1}^{\text{Al}},+}^{++}\!\approx\! \delta_{k_{2}^{\text{Al}},-}^{++},\ \delta_{\text{Au}}^{-}\!\equiv\!\delta_{k_{1}^{\text{Au}},-}^{--},\ \delta_{\text{Au}}^{+}\!\equiv\!\delta_{k_{2}^{\text{Au}},+}^{--}. \tag{34}\]
Therefore, the full pairing gap for the hybrid structure at the Fermi energy is given by \(\min(\delta_{\text{Al}},\delta_{\text{Au}}^{-},\delta_{\text{Au}}^{+})=\delta_{\text{Au}}^{+}\). The inter-orbital Cooper pairing away from the Fermi energy happens at momenta \(k_{1}^{\text{IOP}}=0.221\,\text{\AA}^{-1}\) and \(k_{2}^{\text{IOP}}=0.26\,\text{\AA}^{-1}\). Accordingly, the magnitude of finite-energy Cooper pairing is defined by \(\delta_{\text{IOP}}^{-}\equiv\delta_{k_{1}^{\text{IOP}},-}^{+-}\) and \(\delta_{\text{IOP}}^{+}\equiv\delta_{k_{2}^{\text{IOP}},+}^{+-}\).
The magnitudes of both intra- and inter-orbital pairings are plotted in Fig. 6(a). Apparently, the intra-orbital bands labeled by \(\nu=\nu^{\prime}=+(-)\) exhibit the largest (smallest) pairing gap at low momenta, indicating a dominant Al (Au) orbital character. Interestingly, the inter-orbital pairing leads to larger avoided crossings compared to the intra-orbital pairing of predominantly Au electrons. The Fermi momenta for the intra-orbital energy bands are marked by blue and red crosses at \(k=0.124\,\text{\AA}^{-1}\), \(k=0.141\,\text{\AA}^{-1}\), \(k=0.278\,\text{\AA}^{-1}\), and \(k=0.308\,\text{\AA}^{-1}\), respectively. At these momenta, the pairing anisotropy for Al-dominated states is slightly larger than for the energy bands with dominant Au orbital character. Importantly, we observe that the pairing anisotropy disappears at critical momenta \(k_{c}=0.368\,\text{\AA}^{-1}\), resulting in identical sizes for the pairing potentials. This occurs because the induced intra- and inter-orbital pairing becomes a purely pseudo-spin-singlet state by eliminating the spin-split nature of the bands, specifically, \(d_{\mathbf{k}_{c},z}^{++}=d_{\mathbf{k}_{c},z}^{--}=d_{\mathbf{k}_{c},z}^{+-}=0\). The critical momenta can be determined by setting \(\mathcal{E}_{\text{Al}}-\mathcal{E}_{\text{Au}}^{\pm}=0\) according to Eqs. (20) and (22). In general, the proximity-induced pairing exhibits a stronger presence of the pseudo-spin-singlet component over the triplet component, i.e., \(d_{\mathbf{k},z}^{\nu\nu^{\prime}}/\varphi_{\mathbf{k}}^{\nu\nu^{\prime}}<1\), as illustrated in Fig. 6(b). Notably, among the various pairing potentials, the Au-dominated states, labeled by \(\nu=\nu^{\prime}=-\), display the largest contribution from the triplet component.
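Since Eqs. (20), (27), (28) and (31) are closed-form, the momentum dependence of the singlet and triplet components and of the resulting avoided crossings can be scanned directly. The sketch below does this for the two intra-orbital channels; as before, the parameters are illustrative placeholders and not the fitted values of Table 1.

```python
import numpy as np

# Illustrative parameters (placeholders, not the fitted values of Table 1)
alpha_Al, mu_Al, alpha_Au, mu_Au = 10.0, 0.40, 9.0, 0.45
lam, g, F0, Delta = 1.5, -8.45, 0.2, 0.08

k = np.linspace(1e-4, 0.45, 500)
E_Al = alpha_Al * k**2 - mu_Al
E_Au_p = alpha_Au * k**2 + (lam * k + g * k**3) - mu_Au
E_Au_m = alpha_Au * k**2 - (lam * k + g * k**3) - mu_Au

# Eq. (20): hybridization functions G_k^+/-
G_p = (E_Al - E_Au_p) / np.sqrt((E_Al - E_Au_p)**2 + 4 * F0**2)
G_m = (E_Al - E_Au_m) / np.sqrt((E_Al - E_Au_m)**2 + 4 * F0**2)

for nu, label in ((+1, "Al-dominated (++)"), (-1, "Au-dominated (--)")):
    phi = 1j * Delta / 4 * (2 + nu * (G_m + G_p))        # Eq. (27): pseudo-spin singlet
    d_z = 1j * Delta / 4 * nu * (G_m - G_p)              # Eq. (28): pseudo-spin triplet
    cross = np.abs(np.conj(phi) * d_z + phi * np.conj(d_z))
    gap_plus = np.sqrt(np.abs(phi)**2 + np.abs(d_z)**2 + cross)   # Eq. (31), delta_+
    gap_minus = np.sqrt(np.abs(phi)**2 + np.abs(d_z)**2 - cross)  # Eq. (31), delta_-
    print(label,
          "max |d_z|/|phi| =", round(float(np.max(np.abs(d_z) / np.abs(phi))), 3),
          "gap range:", round(float(gap_minus.min()), 4), "-", round(float(gap_plus.max()), 4))
```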
### Finite-energy inter-orbital avoided crossing with external magnetic fields
Note that we do not observe the occurrence of an inter-band pairing between the two dominant Rashba states displaying opposite spin-polarization marked by
Figure 5: Effective superconducting excitation spectra for (a) \(\hat{H}_{\mathbf{k},\text{eff}}^{+}\) with \(\Delta=0.4F_{0}\) (b) \(\hat{H}_{\mathbf{k},\text{eff}}^{-}\) with \(\Delta=0.8F_{0}\), and (c) \(\hat{H}_{\mathbf{k},\text{eff}}^{+-}\) with \(\Delta=0.4F_{0}\). (d-f) Real and imaginary part of the pseudo-spin-singlet and pseudo-spin-triplet components of the effective pairing matrix associated with the dispersion relation illustrated in the top panels. The model parameters are the same as those given in Table 1.
black and red circles in Figures 4 and 5, respectively. These crossings are protected by time-reversal and spin-rotational symmetries. They can, however, be lifted if an external Zeeman field is applied to the heterostructure. This point answers question (Q.4).
The effect of an external magnetic field on the electronic structure is shown in Fig. 7, both from the DFT and the low-energy model perspective. As the Zeeman field strength increases, the Rashba spin-split bands undergo further splitting. This shift of the bands leads to a decreasing superconducting energy gap in the predominantly Al-derived states because spin-up and spin-down states are shifted away from each other. For large external magnetic fields, the gap closes completely and superconductivity is destroyed at the critical field of the superconductor. Note that the inter-band pairing between particle-hole Rashba states at finite excitation energy is clearly visible before the superconducting gap for Al states closes.
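Within the effective model, the role of the field can be mimicked by adding a Zeeman term to both orbital blocks of the normal-state Hamiltonian before building the BdG matrix. The following sketch uses a hypothetical field strength and orientation along \(z\) together with illustrative band parameters (not the fitted values of Table 1); it only illustrates how the BdG spectrum is modified once \(B\neq 0\).

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0 + 0j, -1.0 + 0j])

# Illustrative parameters (placeholders); B is a hypothetical Zeeman energy along z
alpha_Al, mu_Al, alpha_Au, mu_Au = 10.0, 0.40, 9.0, 0.45
lam, g, F0, Delta, B = 1.5, -8.45, 0.2, 0.08, 0.03

def h_normal_B(kx, B=0.0):
    """Normal-state Hamiltonian of Eq. (1) along k=(kx,0), plus a Zeeman term B*sigma_z."""
    h_al = (alpha_Al * kx**2 - mu_Al) * s0 + B * sz
    h_au = (alpha_Au * kx**2 - mu_Au) * s0 - (lam * kx + g * kx**3) * sy + B * sz
    return np.block([[h_al, F0 * s0], [F0 * s0, h_au]])

def bdg_spectrum(kx, B=0.0):
    """BdG spectrum of Eq. (5) with s-wave pairing in the Al block only, Eq. (16)."""
    D = np.zeros((4, 4), dtype=complex)
    D[:2, :2] = Delta * 1j * sy
    H = np.block([[h_normal_B(kx, B), D],
                  [D.conj().T, -h_normal_B(-kx, B).conj()]])
    return np.linalg.eigvalsh(H)

kx = np.linspace(-0.4, 0.4, 401)
bands_B0 = np.array([bdg_spectrum(k, 0.0) for k in kx])  # protected crossings at B = 0
bands_B = np.array([bdg_spectrum(k, B) for k in kx])     # modified spectrum for B != 0
print(bands_B0.shape, bands_B.shape)
```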
## IV Discussion and Conclusion
Our results show the existence of finite-energy pairing due to the complex multi-band effects arising in the proximity effect of heterostructures between \(s\)-wave superconductors and heavy metals hosting Rashba surface states. The main ingredients are:
1. \(s\)-wave superconductivity,
2. surface states originating from the normal metal,
3. Rashba SOC in the normal metal,
4. significant hybridization between Rashba surface states and electronic structure of the \(s\)-wave superconductor.
If all these requirements are met, finite-energy pairing emerges between discrete states of the superconductor _and_ the Rashba surface states. This unconventional pairing leads to avoided crossings in the BdG band structures. In our case, discrete states in the superconductor are pronounced due to finite-size effects of the thin Al films. Their location relative to the position of the Au surface states can be fine-tuned by appropriate doping or film thickness. This allows us to control at which finite energy the inter-orbital pairing between Al and Au Rashba states occurs.
The size of the observable avoided crossings for the Al/Au heterostructure crucially depends on the superconducting gap of the superconductor (summarized in the tables of the Appendix). Aluminium has a critical temperature of \(T_{c}\approx 1\,\mathrm{K}\) and a critical magnetic field of \(H_{c}\approx 10\,\mathrm{mT}\)[59]. In the thin film limit, both \(T_{c}\) and \(H_{c}\) increase substantially [60, 61, 62], together with an increased size of the superconducting gap of \(\delta_{\mathrm{Al}}\approx 300\,\mathrm{\mu eV}\)[61]. The proximity-induced pairings at zero (within the Au Rashba bands) and finite excitation energy (due to Al-Au inter-orbital pairing) are of size \(\delta_{\mathrm{IOP}}^{\pm}\approx 100-200\,\mathrm{\mu eV}\). The Au-Au inter-orbital avoided crossing that only opens up under a finite magnetic field is of size \(\delta_{\mathrm{Intra}}^{\mathrm{Au}}\approx 30-50\,\mathrm{\mu eV}\) for values of the magnetic field well below the critical field of Al. These energy scales are rather small but within experimental reach. Note that energy resolutions below \(10\,\mathrm{\mu eV}\) can be achieved at low temperatures [63], see also App. F for further details.
Suitable materials engineering might further enhance the chances to detect and eventually exploit finite-energy pairings. A strong Rashba effect is typically seen in \(p\)-electron materials. Superconductors whose electronic structure close to the Fermi level is dominated by \(sp\)
Figure 6: (a) Magnitude of the effective superconducting avoided crossings for different pairing potentials. (b) Strength of pseudo-spin-triplet \(d_{\mathbf{k},z}^{\nu\nu^{\prime}}\) compared to pseudo-spin-singlet \(\varphi_{\mathbf{k}}^{\nu\nu^{\prime}}\) for the effective intra-orbital pairing potentials, namely \(\hat{\Delta}_{\mathbf{k},\mathrm{eff}}^{++}\) and \(\hat{\Delta}_{\mathbf{k},\mathrm{eff}}^{--}\), as well as the inter-orbital pairing potential \(\hat{\Delta}_{\mathbf{k},\mathrm{eff}}^{+-}\).
Figure 7: Superconducting band structure of Al/Au obtained by (a) DFT calculations and (b) analytical model in the presence of Zeeman magnetic field of size \(B=2\,\mathrm{mRy}\). Finite-energy Cooper pairings, highlighted by blue circles, emerge due to the interplay between superconductivity and magnetic field. The colorbar indicates particle (red/green) and hole (grey) components of the BdG spectra. The model parameters for the analytical model are those given in Table 1 and \(\Delta=0.4F_{0}\).
electrons, as is the case for Al, are therefore well suited to achieve strong hybridization with Rashba materials. Consequently, other superconductors with larger superconducting gaps (e.g. Pb with \(T_{c}\approx 7.2\,\)K), that nevertheless have dominating \(p\)-electron character responsible for superconductivity, are promising to increase the observable size of the finite-energy pairing. Furthermore, replacing Au by the Bi/Ag(111) surface alloy, which shows a gigantic Rashba effect [24], is another option for optimization. Apart from Rashba-type SOC, bulk-inversion-asymmetric crystals (e.g. BiTeI or IrBiSe [64; 65]), in which Dresselhaus-type SOC-induced spin-momentum locking can additionally be present, could also be explored in this context. Observing finite-energy pairing under broken pseudo-spin-rotational symmetry benefits from a material with a larger \(g\)-factor to increase the response to the magnetic field. InSb nanowires could be interesting systems for this purpose [66]. Moreover, van der Waals heterostructures are rich material combinations where proximity effects and inter-orbital pairing can be explored [67]. In these systems, the possibility of engineering the band structures via Moiré superlattices provides additional knobs to tune their physical properties [68].
Despite the abundance of heterostructures currently under investigation in the context of the search for MZMs or superconducting spintronics, multi-band physics in heterostructures remains largely unexplored. A variety of emergent phenomena can be explored in materials which show strong multi-band effects. For instance, multi-band superconductors can lead to exotic odd-frequency superconductivity [69]. Suitable materials engineering might further promote control over the mixed singlet-triplet character of the finite-energy pairing, which we demonstrate for Al/Au. This could be useful to control spin-triplet superconductivity, which in turn plays a pivotal role in superconducting spintronics [70; 71]. Moreover, spin-3/2 superconductors (e.g. YPtBi) or superconductors that show local inversion symmetry breaking in their crystal structures (e.g. CeRh\({}_{2}\)As\({}_{2}\)) are other examples where multi-band physics and broken symmetries inherently lead to unconventional pairing [72; 73]. Finally, novel topological superconducting pairing at finite energies [58] is another exciting direction for future research in real materials beyond model-based calculations.
In summary, in our combined DFT and low-energy model approach we study the proximity effect in a heterostructure of Au with strong Rashba SOC and the \(s\)-wave superconductor Al. We show the existence of finite-energy pairing in the superconducting state and analyze the mixed singlet-triplet character of the proximity-induced pairing. Combining the strengths of predictive DFT simulations with the insights from model calculations, our results pave the way towards a deeper understanding and experimental detection of multi-band effects in superconducting heterostructures.
## Acknowledgements
We acknowledge stimulating discussions with Juba Bouaziz, Julia Link, and Carsten Timm. We thank the Bavarian Ministry of Economic Affairs, Regional Development and Energy for financial support within the High-Tech Agenda Project "Bausteine für das Quantencomputing auf Basis topologischer Materialien mit experimentellen und theoretischen Ansätzen". The work was also supported by the SFB1170 ToCoTronics and the Würzburg-Dresden Cluster of Excellence ct.qmat, EXC2147, Project Id 390858490. Furthermore, this work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769. We are also grateful for computing time granted by the JARA Vergabegremium and provided on the JARA Partition part of the supercomputer CLAIX at RWTH Aachen University (project number jara0191).
|
2306.15010 | Efficient High-Resolution Template Matching with Vector Quantized
Nearest Neighbour Fields | Template matching is a fundamental problem in computer vision with
applications in fields including object detection, image registration, and
object tracking. Current methods rely on nearest-neighbour (NN) matching, where
the query feature space is converted to NN space by representing each query
pixel with its NN in the template. NN-based methods have been shown to perform
better in occlusions, appearance changes, and non-rigid transformations;
however, they scale poorly with high-resolution data and high feature
dimensions. We present an NN-based method which efficiently reduces the NN
computations and introduces filtering in the NN fields (NNFs). A vector
quantization step is introduced before the NN calculation to represent the
template with $k$ features, and the filter response over the NNFs is used to
compare the template and query distributions over the features. We show that
state-of-the-art performance is achieved in low-resolution data, and our method
outperforms previous methods at higher resolution. | Ankit Gupta, Ida-Maria Sintorn | 2023-06-26T18:49:09Z | http://arxiv.org/abs/2306.15010v3 | # Efficient High-Resolution Template Matching with Vector Quantized Nearest Neighbour Fields
###### Abstract
Template matching is a fundamental problem in computer vision and has applications in various fields, such as object detection, image registration, and object tracking. The current state-of-the-art methods rely on nearest-neighbour (NN) matching, in which the query feature space is converted to NN space by representing each query pixel with its NN among the template pixels. NN-based methods have been shown to perform better under occlusions, changes in appearance, illumination variations, and non-rigid transformations. However, NN matching scales poorly with high-resolution data and high feature dimensions. In this work, we present an NN-based template-matching method which efficiently reduces the NN computations and introduces filtering in the NN fields to account for deformations. A vector quantization step first represents the template with \(k\) features, and then filtering compares the template and query distributions over the \(k\) features. We show that state-of-the-art performance is achieved on low-resolution data and that our method outperforms previous methods at higher resolutions, demonstrating the robustness and scalability of the approach.
keywords: template matching, vector quantized nearest neighbour field (VQ-NNF), object detection, high-resolution template matching.
## 1 Introduction
Template matching refers to locating a small template image, \(T\) in a larger image \(I\). It is a fundamental problem in computer vision. It has applications in various fields such as object detection [33; 26; 31; 37], object tracking [40; 43], document information identification [34], counterfeit detection [28] and image registration [46; 25]. It also has a central role in deep learning due to the increasing demand for annotated data. Template matching has, for example, been used as a tool for human-in-the-loop data annotation frameworks [10; 37; 15; 24; 11] as it offers fast detection at a relatively low cost of labour and resources. Here, we refer to human-in-the-loop data annotation as object detection on an image set where one or multiple instances of different classes can be present. Template matching allows users to find similar objects quickly without expensive classifier training.
Traditional template matching approaches such as sum-of-squared distance (SSD), sum-of-absolute distance (SAD), and normalized cross-correlation (NCC) are very efficient. However, they consider every pixel pair in the sliding subwindow of query image \(I\) and \(T\), making them vulnerable to occlusion and transformations. More recently, nearest neighbour field (NNF) based matching approaches [29; 36; 35; 22] have been suggested and shown to overcome these shortcomings, making them state-of-the-art in the field. NNFs constitute a general and non-parametric framework for generating correspondences between the sub-regions of images and have been successfully used for motion-tracking in videos [51; 4], optical flow algorithms [7], and structural image editing [3]. NNF-based template matching approaches rely only on the subset of "good" matches between the template and the query subwindow. This makes them more robust against complex non-rigid transformations, occlusions, and background clutter. They achieve this by matching between two point sets, namely, the template point set and the query point set. A similarity measure is defined between the sets based on the matching statistics, for example, bi-directional matches [9] or unique matches [36].
Due to the powerful representation capability, features from pre-trained deep learning models are attractive for template matching. However, the approximate nearest neighbour search methods used in the algorithms mentioned above scale poorly with both the feature dimensions as seen in [2; 22; 23] and the template sizes (more feature points). Both speed and recall of the approximate nearest neighbour methods are affected negatively by increasing either the number of data points or the feature dimensions. Emphasizing
speed, the methods use PCA to reduce the feature dimensions, which reduces the features' representation capacity.
An inadvertent effect of using too many points in the NNF representation is the difficulty in considering the deformation implied in the NNF in the similarity score. To model the deformation in the NNF, previous methods rely on a deformation measure for each pixel in the query subwindow, penalising large relative distances between the template point location and its NN match location. This relies on the strong assumption of the similar orientation of the template and query subwindow, which makes these methods incapable of handling significant rotational or deformation changes. It also requires more computations as the relative distance has to be calculated for each point in the query sub-window.
NNF-based approaches represent the points in the query image with the
Figure 1: Overview of the proposed template matching method. First, a \(k\)-sized vector-quantized codebook is constructed using the template points. The codebook is used to generate the NNF label image for the template and the query image. Multiple template representations are generated using coarse filters and compared with the filtered query distribution. The matching scores from different filters are combined to get the final heatmap.
nearest neighbour of the points in the template image. This formulation reduces the pixel representation space in relation to the template. For example, in an 8-bit RGB image, the possible pixel values come from a set of size \(256^{3}\), but representing the image with the NNF of a template of size \(w\times h\) reduces the representation set to \(wh\), thus greatly lowering the number of possible values. It can be viewed as a form of vector quantization [17] where the codebook is defined by the template pixels, and the query image pixels are then quantized with respect to the codebook. However, as mentioned above, the codebook is still too large for practical implementations and contains redundant information, as the representations of nearby pixels would be very similar.
In this work, we introduce a fast and flexible template-matching approach that reduces the NN calculations in previous approaches and better utilizes the NNF. We reduce the template codebook significantly to \(k\) points obtained by vector quantization of the template points. The points in the codebook represent the \(k\) major patterns in the template. This changes the computational complexity in the matching step to depend on only \(k\) points instead of the number of template pixels, greatly reducing the NN computation complexity and NNF creation time. The difference in the pixel distribution in the template and query subwindow among these \(k\) patterns is then used as the similarity measure in the matching. Since the number of NNF labels is small, simple coarse filters can be used to model the deformation in the NNF instead of considering pixel-wise distances. Our major contributions are:
* We present a simple and fast NNF-based template matching method that greatly reduces computational costs in the NNF creation while maintaining performance.
* We introduce filtering in the NNF space to model the deformation using coarse filters and show that state-of-the-art performance can be achieved with simple gaussian and haar filters.
* We show that our method's quantitative performance and run-time scale better with the image resolution than previous approaches.
## 2 Related Work
Traditional methods in template matching, such as SSD, SAD, and NCC, work well in cases where only translational variance is present. They are very
sensitive to deformations and non-rigid transformations. Classical algorithms that use SSD and SAD as the dissimilarity measure are comprehensively reviewed in [30].
Different approaches have been proposed to model affine transformations between the template and query window [18; 38; 20; 48; 14]. More recently, [16] proposed an approach based on template feature co-occurrence matrix statistics for the matching algorithm reaching state-of-the-art performance on the BBS datasets. In [19], a method is proposed that provides a quadratic improvement in the search complexity while also being robust to partial occlusion by formulating the template matching as a consensus set maximization problem, i.e., finding the transformation where the maximum number of pixels between the template and query window are co-visible. In [45], an adaptive radial ring code histogram (ARRCH) image descriptor is used, which is robust against large-scale and rotation variations. The ARRCH descriptor is created by generating a histogram of the stable pixels at different concentric rings in the template. In [44], a superpixel region binary descriptor (SRBD) is suggested to construct a multilevel semantic fusion vector used in the matching. First, the template is divided into superpixels by the KD-SLIC [1] algorithm, whereafter, a region binary vector is constructed by describing the dominant superpixel orientation. The method is robust against large deformations, occlusions, noise and illumination variations; however, the computation complexity is \(O(360\cdot K\cdot|I|)\) where \(K\) is the number of superpixels. This makes it challenging for real-time use. In [8], a quality-aware template matching approach is proposed, which can also be implemented as a trainable layer in a deep learning model.
Methods relying on NNF-based similarity measures have shown great promise due to their robustness against occlusion and deformations. In [9], the Best-Buddies Similarity (BBS) measure was introduced based on the properties of the nearest-neighbour (NN) matches between the features of the template and the query image. This bidirectional matching focuses only on the relevant corresponding features, thus making it robust against deformations and occlusion, outperforming previous state-of-the-art. This approach is slow and hard to use in practice as the computational complexity for BBS calculation is \(O(|I|\cdot|T|\cdot|I|)\) (where \(|I|\) denotes the size of the query image and \(|T|\) denotes the size of the template). In [36], inspired by the idea of matching objects for texture synthesis using the patch diversity [13], diversity similarity (DIS) and deformable diversity similarity (DDIS) measures were proposed that only rely on the NN matching in a single direction. This
approach is faster than BBS; however, with the computational complexity of \(O(|I|\cdot|T|)\), it still becomes time-consuming for larger image and template sizes.
To reduce the computational complexity of the methods proposed above while retaining the performance, [35] proposed image-wise unpopularity (IWU) and deformable image-wise unpopularity (DIWU). They use the diversity within the NNF of the whole query image instead of within the subwindow NNF, reducing the complexity to \(O(I)\). The authors use "unpopularity" in essentially the same way as "diversity" is used in [36]. In [21], a majority neighbour similarity and annulus projection transformation were used to provide a fast template matching method which is also robust to different challenges. Recently, [22] proposed a global-aware diversity-based method, GAD, which combines the IWU and DIS scores to propose a parameter-free algorithm, also with \(O(I)\) complexity. An inverse NNF-based method was proposed in [23] where, instead of searching the NNs of query points, the diversity of the NNs of template points is used. However, deformability is not considered, which limits the method's usability. A scale-adaptive NNF-based method was proposed in [50]. It extends DIS through statistical analysis, which makes it robust to outliers. However, the method performs calculations over multiple scales, which is time-consuming. In [49], the authors use bi-directional NN calculations to make the diversity similarity measure robust against scaling, rotation, and illumination changes at the expense of added computational complexity.
## 3 Method
We define the input to our template matching method as a template \(\tau\) of size \(w\times h\) and a query image \(I\) of size \(W\times H\). The goal is to find the \(w\times h\) subwindow \(q\) in \(I\) most similar to \(\tau\). To do this, a similarity score \(S_{q_{i}}\) is calculated for each query subwindow \(q_{i}\) using the sliding window procedure, and the subwindow with the highest score is our desired output.
Our template-matching approach consists of three main steps. An overview of the pipeline is shown in Fig. 1, and each step is described below.
### Template Vector Quantization
Our method uses K-means clustering of the template features to get the \(k\) quantized vectors for the codebook. This reduces the number of points from \(w\times h\) to \(k\) (where \(k\ll wh\)) for the NN computations in the next step.
Furthermore, it reduces the redundant information present in the template representation. The \(k\) cluster centers, \(C=\{c_{i}\}_{i=1}^{k}\), are then used for the new template representation for the NNF calculations. We next represent each pixel in the template with the cluster it belongs to and form a template label map, \(L_{\tau}\), as shown in Fig. 1. The NNF for the query image \(I\) is calculated using the codebook vectors and represented similarly as \(L_{I}\).
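A minimal sketch of this step is given below, using scikit-learn's `KMeans` as a stand-in for the GPU clustering library mentioned in the implementation details; the array shapes and the helper name `build_label_maps` are illustrative assumptions rather than part of the original code.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_label_maps(template_feats, query_feats, k=128, seed=0):
    """Vector-quantize the template and label both images with the codebook.

    template_feats: (h, w, d) per-pixel features of the template.
    query_feats:    (H, W, d) per-pixel features of the query image.
    Returns the codebook C (k, d), the template label map L_tau (h, w),
    and the query label map L_I (H, W).
    """
    h, w, d = template_feats.shape
    H, W, _ = query_feats.shape

    # The k cluster centers of the w*h template points form the codebook C.
    km = KMeans(n_clusters=k, n_init=4, random_state=seed)
    L_tau = km.fit_predict(template_feats.reshape(-1, d)).reshape(h, w)
    C = km.cluster_centers_

    # NNF of the query: each pixel is labelled with its nearest codeword (L2).
    q = query_feats.reshape(-1, d)
    d2 = (q**2).sum(1, keepdims=True) - 2.0 * q @ C.T + (C**2).sum(1)
    L_I = d2.argmin(1).reshape(H, W)
    return C, L_tau, L_I
```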
### Feature Vector Construction
Since the template point set is now greatly reduced in size, the similarity measures used in previous state-of-the-art methods cannot be used. They rely on the diversity of the matches in the query subwindow [36] or the whole image [35] and depend on all the pixels in the template to provide good discrimination between a potential match and background in the query image. Instead, we represent the template and the query subwindow with a distribution histogram of the cluster labels and use the difference between the two as our similarity measure. As the number of clusters is small, this process can be done efficiently by first converting the label image into a one-hot encoded (binary) image with \(k\) channels and then using the integral image of each channel to reduce the operations per query subwindow. The integral image representation enables rapid summation over rectangular image subregions and can be evaluated in constant time for any rectangular size. The integral image \(\iota\iota\) at each location \((x,y)\) of the \(k\)-dimensional one-hot image \(\iota:\mathbb{Z}^{2}\rightarrow\{0,1\}^{k}\) is defined as:
\[\iota\iota(x,y)=\sum_{x^{\prime}\leq x,\,y^{\prime}\leq y}\iota(x^{\prime},y^{\prime}). \tag{1}\]
In an integral image \(\iota\iota\), the sum of the values in a rectangular region \(A\) defined by a top-left point \((x_{1},y_{1})\), top-right point \((x_{2},y_{1})\), bottom-left point \((x_{1},y_{2})\), and bottom-right point \((x_{2},y_{2})\) is calculated as:
\[\text{sum}(A)=\iota\iota(x_{2},y_{2})+\iota\iota(x_{1},y_{1})-\iota\iota(x_{1},y_{2})-\iota\iota(x_{2},y_{1}) \tag{2}\]
Thus, a \(k\)-dimensional histogram vector can be calculated efficiently to represent the cluster labels for a window of size \(w\times h\) in the query image.
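The following sketch illustrates Eqs. (1)-(2): the label map is one-hot encoded, turned into per-channel integral images, and any \(w\times h\) window histogram is then obtained from a handful of look-ups per channel. Shapes and helper names are illustrative assumptions.

```python
import numpy as np

def integral_one_hot(labels, k):
    """One-hot encode a (H, W) label map and return (H+1, W+1, k) integral images."""
    H, W = labels.shape
    one_hot = np.zeros((H, W, k), dtype=np.int64)
    one_hot[np.arange(H)[:, None], np.arange(W)[None, :], labels] = 1
    ii = np.zeros((H + 1, W + 1, k), dtype=np.int64)
    ii[1:, 1:] = one_hot.cumsum(axis=0).cumsum(axis=1)  # zero-padded integral image
    return ii

def window_histogram(ii, y, x, h, w):
    """k-bin label histogram of the h x w window with top-left corner (y, x)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
```

Each call to `window_histogram` then costs a constant number of additions per label channel, independent of the window size.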
As is evident from the limitations of histogram-based methods, this formulation removes the orientation information of the template. Furthermore, it does not explicitly incorporate spatial information, meaning it gives no spatial preference to labels at different locations in the template.
To address these limitations, we employed coarse filters and compared the filter response of the template and the query subwindow to calculate the similarity score. Coarse filters operate on larger regions of an image, such as blocks or segments, rather than operating on individual pixels, combining information from these regions. Fig. 1 shows a few examples of coarse filters used in this work.
#### 3.2.1 Coarse filters with dilated convolutions
As the weight in a coarse filter is the same for a set (region) of pixels, dilated convolution can be used to implement the filters and work on the integral images efficiently. Dilated convolutions are often used for multi-scale context aggregation in convolutional neural networks (CNNs) for semantic segmentation[47; 41; 5; 6]. Dilated convolutions work similarly to regular convolutions but allow for skipping pixels during convolution. The number of pixels to be skipped is referred to as the dilation rate and is used to increase the receptive field of the convolution while keeping the same computational complexity. Thus, coarse filters implemented using dilated convolutions and made to work on the integral images allow for fast computation of filter responses over a relatively large window size.
**Dilated Convolutions:** Let \(f:([-h_{f},h_{f}]\times[-w_{f},w_{f}])\cap\mathbb{Z}^{2}\rightarrow\mathbb{R}\) be a discrete filter of size \(r_{h}\times r_{w}\), where \(r_{w}=2w_{f}+1\), \(r_{h}=2h_{f}+1\). The filter response is then calculated using dilated convolution as:
\[(\iota\iota*_{(d_{x},d_{y})}f)(x,y)=\frac{1}{(r_{w}\cdot d_{x}\cdot r_{h}\cdot d_{y})}\sum_{m=-h_{f}}^{h_{f}}\sum_{k=-w_{f}}^{w_{f}}\iota\iota(x+d_{x}\cdot k,\,y+d_{y}\cdot m)\cdot f(k,m), \tag{3}\]
where \(d_{x}\) and \(d_{y}\) are the dilation rates of the kernel in the x and y direction, respectively. The receptive field of the filter is \((d_{x}\cdot(r_{w}-1)+1,d_{y}\cdot(r_{h}-1)+1)\) and hence can be modified using the dilation rate and the kernel size. For example, a receptive field of \((w,h)\) can be achieved by a convolutional kernel of size \((3,3)\) by setting the dilation rate to \(((w-1)/2,(h-1)/2)\). Thus, the filter response over a larger area can be achieved while requiring only relatively few (9) operations for each query sub-window. This, however, comes at the cost of a large bin size \((w/3,h/3)\) for the label aggregation and, thus, lower deformation granularity. This means that the filtering operation cannot capture label distribution shifts in the NNF within the bin size.
The granularity of the filter response for a fixed receptive field can be modulated in the following ways (as also illustrated in Fig. 2):
1. Increasing the filter kernel size. Since the bin size is inversely proportional to the kernel size for a fixed receptive field, increasing the kernel size increases the granularity of the filtering. However, the computations required for each window increase quadratically with the kernel size, which might be undesirable.
2. Successive filtering. Applying the filters multiple times increases the granularity exponentially with the dilation rate. At the same time, the number of computations grows linearly.
3. Sub-scale aggregation. Reducing the receptive field results in finer filter bins and, thus, higher granularity. This enables variable deformation penalties for selective template sub-regions. For example, a higher deformation penalty can be enforced in a template subregion by reducing the receptive field without increasing the number of computations.
**Filter Modification for Integral Image:** To get the filter response over the integral image, the coarse filter must first be converted into a dilation
Figure 2: Different ways of modulating the bin size for label distribution aggregation by a) increasing kernel size, b) successive filtering and c) sub-scale aggregation. The grey-shaded region in c) shows the region not considered in the filtering.
filter. Let \(f_{i}\) be a coarse filter of kernel size \((r_{w},r_{h})\) with each weight repeated over the rectangular bins of size \((d_{x},d_{y})\). The dilation filter \(f_{d}\) would then have the kernel size of \((r_{w}+1,r_{h}+1)\) and the filter weights of the dilation filter can be calculated using Algorithm 1. Thus, the number of calculations per window has been reduced from \(r_{w}\cdot d_{x}\cdot r_{h}\cdot d_{y}\) to \((r_{w}+1)\cdot(r_{h}+1)\) making the filtering process efficient. Fig. 3 shows the process of calculating the filter response with this approach.
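Algorithm 1 is not reproduced in this excerpt, so the sketch below only shows the core of the conversion for the simplest case: the sum over a bin of size \((d_{y},d_{x})\) at every location is obtained from the integral image with a \(2\times 2\) dilated \(\pm 1\) kernel, i.e. four look-ups per window. The paper's procedure combines such corner differences for every weight of a general coarse filter (e.g. gaussian or haar), which is not shown here; the tensor layout is an assumption, matching the earlier zero-padded integral image.

```python
import torch
import torch.nn.functional as F

def bin_sums_from_integral(ii, bin_h, bin_w):
    """Per-channel box sums over every (bin_h x bin_w) window via the integral image.

    ii: (1, k, H+1, W+1) float tensor, zero-padded integral image of the one-hot map.
    Returns a (1, k, H - bin_h + 1, W - bin_w + 1) tensor of bin sums.
    """
    k = ii.shape[1]
    corner = torch.tensor([[1.0, -1.0],
                           [-1.0, 1.0]]).view(1, 1, 2, 2).repeat(k, 1, 1, 1)
    # out(y, x) = ii(y, x) - ii(y, x + bin_w) - ii(y + bin_h, x) + ii(y + bin_h, x + bin_w)
    return F.conv2d(ii, corner, dilation=(bin_h, bin_w), groups=k)
```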
#### 3.2.2 Template Representation
Multiple different filters can be used in the aforementioned ways to model spatial and orientation information of the template label distribution without much computational overhead. Here, we used the combination of coarse gaussian and rectangular haar-like filters to construct multiple template representation vectors from the response. A gaussian filter is the simplest way to impart spatial information as it assigns more weight to central regions and gradually decreases the weight as the distance from the center increases. Haar-like filters were introduced in [39] for rapid object detection using a
Figure 3: Process to efficiently calculate the coarse filter response over the label map. The one-hot encoded image is converted to an integral image, and the coarse filter is converted into a dilation filter to reduce the computational complexity. A small \(k=8\) was chosen here for illustration purposes.
boosted cascade of a large set of simple filters. Haar filters can be used to encode the orientation efficiently. This technique of combining Gaussian and Haar-like filter responses was chosen due to its ability to encode both spatial and orientation information with minimal computational overhead. More complex filters can also be used to generate richer template representations; however, we show that state-of-the-art (SOTA) performance can be achieved using these simple filters at different subscales (the c-option in Fig. 2). Since SOTA was achieved using the subscale approach, the main experiment did not consider alternative approaches of increasing the kernel size or successive filtering.
The template representation vectors obtained from the filter responses, \(R(\tau)\), can be defined as a collection of filtered responses:
\[R(\tau)=\{L_{\tau,s}^{f}\},\quad\forall s\in S,\;f\in\{f_{i}\}_{i=1}^{n},\] \[\text{where }L_{\tau,s}^{f}=\iota\iota_{\tau}*_{\left((w-1)/(s\cdot(r_{w}-1)),\,(h-1)/(s\cdot(r_{h}-1))\right)}f,\]
\(S=\{1,2,...,s\}\) defines the set of all levels of sub-scale aggregation considered for the template, i.e., for a scale \(s\), the dilation rate was changed such that the receptive field of the filter was \((w/s,h/s)\); and \(f\) represents one of the \(n\) different filters used for feature vector construction. The number of template representation vectors would then be \(n\cdot S\). Since the number of operations for each filter is significantly less than the template window size \(w\times h\), multiple filters can efficiently define different template representations.
### Similarity Score
The NN of the \(k\) cluster centers is calculated for each pixel in the query image and converted into a label image \(L_{I}\). The distribution similarity score at each sub-window \(q_{i}\in Q\) in the query image is then defined as
\[\text{sim}(q_{i},\tau)=\sum_{s\in S,\sigma_{i}\in\{\sigma_{i}\}_{i=0}^{n}}-w_{s, \sigma_{i}}\cdot|L_{q,s}^{\sigma_{i}}-L_{\tau,s}^{\sigma_{i}}| \tag{4}\]
where \(w_{s,\sigma_{i}}\) is the weight assigned to the difference in the representations of filter \(\sigma_{i}\) at scale \(s\).
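A sketch of how the per-filter, per-scale responses could be combined into the final heatmap of Eq. (4) is shown below. It assumes the query response maps have already been aligned to a common spatial grid (e.g. by suitable padding); the dictionary-based interface is an illustrative choice, not the authors' API.

```python
import torch

def similarity_heatmap(query_resps, template_resps, weights):
    """Combine filter responses into one similarity map and locate the best window.

    query_resps:    dict key -> (k, Hs, Ws) response map of the query image.
    template_resps: dict key -> (k,) response vector of the template.
    weights:        dict key -> scalar weight w_{s, sigma}.
    Keys identify a (filter, scale) pair; all maps share the same (Hs, Ws).
    """
    score = None
    for key, q in query_resps.items():
        t = template_resps[key].view(-1, 1, 1)
        s = -weights[key] * (q - t).abs().sum(dim=0)  # one term of Eq. (4)
        score = s if score is None else score + s
    best = (score == score.max()).nonzero()[0]        # (row, col) of the best window
    return score, (int(best[0]), int(best[1]))
```

The returned location is the top-left corner of the highest-scoring subwindow, which is the desired output of the sliding-window procedure.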
## 4 Computational Complexity
**K-means computation:** The first step in the method is to calculate the \(k\) cluster centers of the \(w\times h\triangleq l\) template points. The k-means clustering algorithm complexity is \(O(klt)\) where \(t\) is the number of iterations. The complexity would be \(O(l)\) as \(k,t\ll l\).
**NN Search:** The NN calculation is done by calculating the euclidean distance of all the points in the image \(W\times H\triangleq L\) to the \(k\) cluster centers. The complexity of this step is \(O(kL)\).
**Filter Responses:** The indices of the NN are converted into a one-hot tensor of size \(W\times H\times k\) and an integral image is constructed in \(O(L)\). Different coarse filters at different scales can be applied to aggregate the distribution. Each filtering operation with the dilated filter size \((r_{h}+1)\times(r_{w}+1)\) has a time complexity of \(O((r_{h}+1)\cdot(r_{w}+1)\cdot k\cdot L)\). Multiple scale representations are then computed in \(O(s\cdot(r_{h}+1)\cdot(r_{w}+1)\cdot k\cdot L)\) where \(s\) is the maximum number of scales considered. Similarly, adding \(n\) different filters increases the complexity by a factor of \(n\). Here \(r_{h}\cdot r_{w}\) can be ignored as \(r_{h}\cdot r_{w}\ll k,L\). Hence, the overall complexity of this step would be \(O(nskL)\).
**Similarity Score Calculation:** The distribution similarity score is calculated by subtracting the \(n\cdot s\) template distribution vectors from the query distribution vector in \(O(nskL)\).
**Target Localization:** Maxima location requires another sweep through the image which takes another \(O(L)\) computations.
Overall, the complexity of our method would be \(O(l)+O(kL)+O(nskL)+O(nskL)+O(L)\approx O(nskL)\) where \(1<s\cdot n\cdot k<l\ll L\). Furthermore, the operations are easily parallelizable in a GPU to speed up the computations.
## 5 Experiments
**Datasets:** We evaluated our method on three datasets with different image sizes and challenges. The first dataset, BBS, is a subset of the Online Object Tracking benchmark [42] and consists of 90 video sequences with challenges such as complex deformations, occlusions, and scale differences. It was acquired by sampling frames from [42] at different intervals. Three random pairs of frames with constant frame differences, \(dF=\{25,50,100\}\), were extracted from each sequence. This resulted in three sub-datasets, namely, BBS25, BBS50, and BBS100, consisting of 270, 270, and 252 images, respectively. The sampling was repeated 5 times to extract more robust statistics. The image size in the dataset was relatively small, 320x480 or 320x240 pixels.
The TinyTLP dataset is based on a shortened version of the object-tracking dataset Track Long and Prosper (TLP) [27]. This dataset contains 50 video clips with 600 frames of size 1280x720 pixels. To reduce redundancy, we followed the protocol used in [35] and sampled 50 frames [1, 11,..., 491] from each video and then used the 100th next frame as the query pair. The dataset created consists of 2500 template-query pairs.
The TLPattr dataset [27] is a collection of 91 short clips of different durations focusing on six different challenge attributes in the TLP dataset, namely, illumination variation, fast motion, background clutter, out-of-view or full occlusions, scale variations, and partial occlusions. The sequences in TLPattr are selected such that only one of the abovementioned challenge attributes is present in a sequence. There are 15 sequences belonging to a particular attribute, except for scale variation, which has 16 sequences. We randomly selected 15 images from each sequence as templates and chose the 100th next frame as the query image. Overall, the dataset consists of 1351 template-query pairs.
**Implementation:** We implemented our algorithm in the Python programming language using the PyTorch [32] library to efficiently utilize GPU parallelization. All experiments were performed on an Intel(R) Core(TM) i7-5930K CPU and a GeForce GTX TITAN X GPU. Following a similar feature extraction strategy as in [36, 35], we investigated and compared two feature descriptors in our experiments: colour and deep features. The colour features were extracted by taking 3x3 overlapping patches of RGB values. The deep features were extracted from a pre-trained ResNet-18 [12]. This differs from [36, 35] where the VGG model is used. ResNet-18 is a smaller and more parameter-efficient alternative with slightly lower performance on the
ImageNet dataset (69.7%, 89.1% vs 72.4%, 90.8% top-1 and top-5 accuracy) but with a significantly lower number of parameters (11.7 million vs 143.7 million). Here, the features were extracted from the bottleneck and consecutive stages in the model, concatenated and resized to the original input. The highest feature dimension considered in our experiments was 512, for which the output from the conv1 (64), conv2_x (64), conv3_x (128), and conv4_x (256) layers were concatenated and resized to the original image size.
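One way to reproduce this feature extraction with torchvision (assuming the standard ResNet-18 node names `relu`, `layer1`, `layer2`, `layer3` for the conv1 to conv4_x stages, and torchvision >= 0.13) is sketched below; it is an illustration of the described strategy, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18
from torchvision.models.feature_extraction import create_feature_extractor

# conv1 (64) + conv2_x (64) + conv3_x (128) + conv4_x (256) = 512 channels.
_nodes = {"relu": "c1", "layer1": "c2", "layer2": "c3", "layer3": "c4"}
_extractor = create_feature_extractor(resnet18(weights="DEFAULT"), _nodes).eval()

@torch.no_grad()
def deep_features(image):
    """image: (1, 3, H, W) normalized tensor -> (1, 512, H, W) feature map."""
    H, W = image.shape[-2:]
    feats = _extractor(image)
    upsampled = [F.interpolate(f, size=(H, W), mode="bilinear", align_corners=False)
                 for f in feats.values()]
    return torch.cat(upsampled, dim=1)
```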
We used a fast GPU PyTorch-based clustering library\({}^{1}\) to perform K-means clustering to find \(k\) cluster centers to represent the template. The label aggregation was done using coarse filters implemented as dilated convolutional kernels. In our experiments, we used a \(3\times 3\) gaussian kernel with a sigma of 2 and dilation of \((w/3,h/3)\) to give equal weights to all the bins. This was intentional; we wanted to model the histogram comparison without any spatial location preferences and treat the performance as the baseline performance. For modelling the orientation, 2-rectangle and 3-rectangle haar filters were used by multiplying the gaussian kernel with the filter weights. The filter weights were modified according to Algorithm 1 to work on the integral images to compute the distribution aggregation in the query image efficiently. Multiple template vectors were generated and stored for comparison. We then found the exact NN for each feature vector in the query image among the \(k\) template cluster centers based on the L2 distance measure. The NNF was then converted to one-hot encoding to calculate the feature vector in each query sub-window.
Footnote 1: The k-means clustering library is available at:[https://github.com/DeMoriarty/fast_pytorch_kmeans](https://github.com/DeMoriarty/fast_pytorch_kmeans)
**Quantitative Evaluation:** We evaluated the performance of the different representation schemes in our method using Intersection over Union (IoU) between the estimated window, \(\tau_{x}\) from the algorithm and the ground truth, \(\tau_{GT}\). It is defined as follows:
\[IoU(\tau_{x},\tau_{GT})=\frac{|\tau_{x}\cap\tau_{GT}|}{|\tau_{x}\cup\tau_{GT}|}. \tag{5}\]
The performance for each dataset is shown as the mean IoU (MIoU), i.e. the mean of IoUs of all the samples, and Success Rate (SR), which is defined as the fraction of the samples in the dataset where \(IoU>0.5\).
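For reference, Eq. (5) and the success rate translate directly into a few lines of code; boxes are assumed here to be axis-aligned and given as (x, y, w, h), which is an assumption of this sketch.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def mean_iou_and_success_rate(pred_boxes, gt_boxes, thr=0.5):
    """Mean IoU and the fraction of samples with IoU > thr (success rate)."""
    ious = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    return sum(ious) / len(ious), sum(i > thr for i in ious) / len(ious)
```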
**Performance Comparison:** We compare our method with the official code
releases of DDIS [36] and DIWU [35] as they are the state-of-the-art as well as the closest approaches to ours. The reported time for our method includes the clustering, NN computation and similarity score calculation time. For DDIS and DIWU, the reported time includes the NN computation time and the score calculation time. The computation times of our method and of DDIS and DIWU are not directly comparable, as the computations for our method are performed on a GPU while the official implementations of these methods use a CPU for the NNF computation. However, the relative runtime difference between the methods while using RGB vs deep features highlights the challenges in using DDIS and DIWU effectively, especially at higher resolutions.
### Design Choices Evaluation
To quantify the effects of different design choices, we first evaluate the performance of our method on the BBS dataset for the codebook size (k), multiple distribution aggregation scales, and with/without the haar filters. We considered RGB and deep features with feature dimensions, \(d=\{27,512\}\), number of clusters, \(k=\{4,8,16,32,64,128\}\), total scales, \(S=\{1,2,3\}\), and the inclusion of haar 2-rectangle and 3-rectangle filters, namely, haar-2x, haar-2y, haar-3x, and haar-3y. The weight of the similarity score for the final map from different scales is empirically set to \(1/s\), \(1.0\) for the gaussian and \(0.25\) for each haar filter. Fig. 4 shows the performance on the BBS25, BBS50, and BBS100 datasets and Fig. 5 shows the analysis on the TinyTLP and TLPattr datasets. The following trends can be observed from Fig. 4 and 5.
**Increasing the codebook size boosts performance.** This is the general trend as shown within each plot (increasing number of clusters on the x-axis of MIoU plots) and between plots (increasing number of features from 27 to 512). With ResNet features of dimension 512 (Fig. 4, second row; Fig. 5, bottom row, first and third columns) and the cluster size set to 128 (right-most points within the MIoU plots), the base method (blue dots), meaning scale of one and no haar filters, reaches close to the performance of DDIS and DIWU for the BBS and TLP datasets. This shows the benefits of our method in greatly reducing the NN calculations while managing similar performance with just histogram comparison. Furthermore, even without including any shape information (i.e., the base method), our method shows the competitiveness of simple histogram matching in the NNFs with the similarity measures used in DDIS and DIWU. A similar trend of performance increase with the codebook size was observed with other intermediate feature dimensions (D=64,128,256) of ResNet features.
**Multiple scale aggregation and haar filters boost the performance**. As seen in Fig. 4 (top two rows) and Fig. 5 (first and third columns), the sub-scale aggregation and haar filters have clear gains in performance. This boost grows smaller, as expected, as cluster size and feature dimensions grow larger, as can be seen from the trend lines (blue vs green). This suggests that the plain histogram comparison alone is not a good approximation of the full method's performance, and that adding simple shape information using haar filters and sub-scale aggregation
Figure 4: Average MIoU (top two rows) and Total time (bottom two rows) performance of our approach with different hyper-parameters on the BBS25, BBS50, and BBS100 datasets. The different scales shown in different colours are shifted slightly for better visualization. DDIS and DIWU performances are shown with the dashed lines in the MIoU plots. The lines in the plots highlight the trends for the base configuration without spatial information (blue line) and the configuration with the most filters and scales (green line) for each cluster size. The x-axis for all the plots is shown in the log scale.
can increase the distribution matching specificity at lower feature dimensions and cluster sizes. It also enables the method to reach the performance of DDIS and DIWU at high dimensions with smaller cluster sizes. The gains diminish much faster with increasing codebook size for deep features than for colour features. This might be attributed to the higher feature-matching specificity of the deep features compared with the colour features.
**The execution time increases with cluster size and the number of filters used.** Fig. 4 (bottom two rows) and Fig. 5 (second and fourth column) show the total time taken by the different configurations of our method. The runtimes of colour and deep features differ by a constant \(\sim\)0.1 sec. for BBS datasets and \(\sim\)0.5 sec. for TLP datasets for the same configurations. The trendlines in the time plots (green and blue lines) show that the runtime scales worse when using multiple filters (green) than when just increasing the codebook size (blue).
The experiments show that multiple hyperparameter configurations can reach the performance of DDIS and DIWU. Here, for the final results, we chose the overall best configuration on BBS25, BBS50, and BBS100 for com
Figure 5: Average MIoU (top two rows) and Total time (bottom two rows) performance of our approach with different hyper-parameters on the TinyTLP and TLPattr datasets. The different scales shown in different colours are shifted slightly for better visualization. DDIS and DIWU performances are shown with the dashed lines in the MIoU plots. The lines in the plots highlight the trends for the base configuration without spatial information (blue line) and the configuration with the most filters and scales (green line) for each cluster size. The x-axis for all the plots is shown in the log scale.
parison. We found that in the BBS datasets, for RGB features, the best overall performance was achieved at \(k=128\), \(S=2\), and using both haar 2-rectangle and 3-rectangle filters. For deep features, the best configuration was found to be at \(k=128\), \(S=2\), and using only haar 2-rectangle filters. Our best configuration in the TLP datasets for RGB features was with \(k=128,S=3\), and the best configuration for the deep features was with \(k=128,S=2\), both with haar 2-rectangle filters. Hence, we fixed \(k=128\) for the rest of the experiments and showcased the best configuration results.
### Quantitative Results
The results for the BBS25, BBS50, and BBS100 datasets are shown in Tab. 1. The images in the datasets are relatively small (320x480 or 320x240). Our method lags slightly behind DDIS and DIWU for RGB features; however, for the deep features, the performance is marginally better than that of others. The runtime of our best method for the deep features is approximately the same as the RGB features, while the runtime increased around
| Method | BBS25 SR | BBS25 MIoU | BBS50 SR | BBS50 MIoU | BBS100 SR | BBS100 MIoU | Total SR | Total MIoU | Time (s) |
|---|---|---|---|---|---|---|---|---|---|
| DDIS (C) | **0.781** | **0.638** | **0.695** | **0.567** | **0.594** | **0.493** | **0.690** | **0.566** | 1.336 |
| DIWU (C) | 0.771 | 0.624 | 0.684 | 0.554 | 0.584 | 0.485 | 0.680 | 0.554 | 0.349 |
| Ours best (C) | 0.764 | 0.632 | 0.657 | 0.544 | 0.552 | 0.475 | 0.658 | 0.550 | **0.171** |
| DDIS (D) | 0.813 | 0.667 | 0.748 | **0.613** | 0.682 | 0.569 | 0.748 | 0.617 | 6.337 |
| DIWU (D) | **0.821** | 0.665 | **0.751** | 0.611 | **0.690** | 0.571 | **0.754** | 0.616 | 5.883 |
| Ours best (D) | 0.813 | **0.675** | 0.732 | 0.611 | **0.690** | **0.577** | 0.745 | **0.621** | **0.167** |

Table 1: Performance results for BBS datasets. RGB features are marked with (C), and ResNet features are denoted by (D). The best results in each column are written in bold.
| Method | TinyTLP SR | TinyTLP MIoU | TinyTLP Time (s) | TLPattr SR | TLPattr MIoU | TLPattr Time (s) |
|---|---|---|---|---|---|---|
| DDIS (C) | 0.607 | 0.528 | 22.565 | 0.557 | 0.479 | 25.486 |
| DIWU (C) | 0.613 | 0.522 | 2.508 | **0.600** | 0.501 | 1.948 |
| Ours best (C) | **0.638** | **0.554** | **0.935** | 0.597 | **0.505** | **0.772** |
| DDIS (D) | 0.631 | 0.549 | 49.686 | 0.617 | 0.522 | 41.362 |
| DIWU (D) | 0.619 | 0.535 | 33.899 | 0.628 | 0.524 | 34.063 |
| Ours best (D) | **0.694** | **0.598** | **1.040** | **0.655** | **0.549** | **0.911** |

Table 2: Performance results for TinyTLP and TLPattr datasets. RGB features are marked with (C), and ResNet features are denoted by (D). The best results in each column are written in bold.
4.7 times for DDIS (1.336 to 6.337 sec) and 16.8 times for DIWU (0.349 to 5.883 sec). This further shows that increasing the feature dimensions in our method provides better performance without compromising the speed.
The results for the TinyTLP and TLPattr datasets are presented in Tab. 2. The images in the datasets are of higher resolution (1280x720), which makes the datasets suitable for showcasing the advantages provided by our method compared with DDIS and DIWU. Our method outperforms both DDIS and DIWU for RGB and deep features. For RGB features, our best configuration outperforms DDIS by approx. 2.6% and 2.6%, and DIWU by approx. 3.2% and 0.4% in MIoU on the TinyTLP and TLPattr datasets, respectively. For deep features, our best configuration outperforms DDIS by approx. 4.9% and 2.7%, and DIWU by approx. 6.3% and 2.5% in MIoU on the TinyTLP and TLPattr datasets, respectively. Here, similar to the BBS performance, the RGB and deep feature runtimes of our method differ only marginally for both datasets. In contrast, for DDIS, the runtime increased by 2.2 (22.5 to 49.7 secs) and 1.6 (25.5 to 41.4 secs) times for TinyTLP and TLPattr, respectively, and for DIWU, the runtime increased by 13.6 (2.5 to 33.9 secs) and 17.5 times (1.9 to 34.1 secs) for TinyTLP and TLPattr, respectively.
### Qualitative Results
Following the quantitative results in the previous section, we also show the qualitative comparison of our method with DDIS and DIWU. Fig. 6 and 7 show the results of the methods on the BBS and TLPattr datasets, respectively. Although the heatmaps appear noisier than the compared methods, our method successfully finds the template in the query image. Furthermore, our method seems robust in cases where objects similar to the template are present, while DDIS and DIWU fail to detect the template successfully.
#### 5.4.1 Rotation Experiments
We constructed new datasets from BBS and TinyTLP by rotating the query images in the dataset by an angle \(\theta\) while keeping the template image and box the same. The query box is also rotated by the same angle and treated as the new ground truth. The prediction box dimensions are chosen to be the same as the rotated template box dimensions for better comparison. We considered \(\theta\in\{60^{\circ},120^{\circ},180^{\circ}\}\) and compared the results with the original dataset performance without rotation. We chose to show the performance of our method on RGB features and highlight the results of all the spatial filters on our best-performing aggregation scale. Deep features were not considered as the runtime of DDIS and DIWU with deep features is quite high.
Fig. 8 shows the performance of the rotation experiment on the datasets.
Figure 6: Qualitative results for the BBS dataset. Images were chosen to highlight the differences between the methods. The first column shows the template image with the template marked in green. The second column shows the query image with the results of DDIS (in red), DIWU (in orange), and our method (in pink). The third, fourth, and fifth columns show the heatmaps of the DDIS, DIWU, and our method, respectively, with the predicted bounding box marked in the same colours as the second column.
The performance of DDIS and DIWU drops sharply with increased rotation. As expected, DDIS and DIWU are not robust against large rotations as their deformation penalty relies on the pixel distance. However, the performances of the different configurations of our method do not degrade as sharply. For the BBS datasets, the performance of our method at \(\theta=60^{\circ}\) is close to that of DDIS and DIWU; however, our method outperforms them at higher rotational deformations, \(\theta=120^{\circ},180^{\circ}\). For the TinyTLP dataset, our method outperforms DDIS and DIWU for every rotation angle. DDIS and DIWU perform well on the original dataset; however, since they assign a higher penalty to larger deformations, they are not robust against large rotations. Our methods perform similarly to DDIS and DIWU on the original dataset, but the performance does not degrade as severely when rotations are present. The performance of our methods increases back up at \(\theta=180^{\circ}\), which can be attributed to the filters used. The similarity score from the gaussian filter
Figure 7: Qualitative results for the TLPattr dataset. Images were chosen to highlight the differences between the methods. The first column shows the template image with the template marked in green. The second column shows the query image with the results of DDIS (in red), DIWU (in orange), and our method (in pink). The third, fourth, and fifth columns show the heatmaps of the DDIS, DIWU, and our method, respectively, with the predicted bounding box marked in the same colours as the second column.
representations would be high for \(\theta=180^{\circ}\) as the filters are symmetric.
#### 5.4.2 Scale Experiments
We compare the effect on performance and runtime with respect to the image resolution on the TLPattr dataset. The original resolution is \(1280\times 720\); hence, we downsample the images with different scale factors in the \([1/4,1]\) range. The template, query images, and ground truths are downsampled with the scale factor. Scale factors above one are not chosen as the image resolution is high enough to showcase the runtime variations, and upsampling doesn't add any meaningful information. The experiment also reflects the robustness of the performance if downsampling of the images is preferred to speed up the process.
Figure 9 shows our method's MIoU and runtime performance along with those of DDIS and DIWU. As can be seen from the RGB feature accuracy plot, our methods' performance peaked at a lower resolution (scale factor between 0.5 and 0.75) and
Figure 8: MIoU performance on the rotation experiments on the BBS25, BBS50, BBS100, and TinyTLP datasets.
either remained the same or decreased. We hypothesize that RGB features provide limited benefits at higher resolutions. DIWU performs similarly to our method with haar filters, while DDIS performs similarly without haar filters. For deep features, all configurations of our method perform better than DDIS and DIWU. Furthermore, after the scale factor of 0.5, our methods show only fractional change in performance, which indicates that images can be processed at a lower resolution while maintaining performance.
The runtime analysis is shown in Fig. 9 b) and c). As can be seen from the figures, all configurations of our method scale better with the image resolution than DDIS and DIWU. For RGB features, DIWU scales similarly to our methods with resolution, but with deep features, both DDIS and DIWU scale equally poorly with resolution.
Figure 9: MIoU and Runtime performance at different scale factors on the TLPattr dataset. The x-axis shows the relative scale factor with the image of size \(1280\times 720\). The y-axis corresponds to a) The mean IoU of the algorithms with the RGB features (top) and deep features (below), b) the runtime of the algorithms and c) the zoomed-in runtime with the y-axis clipped to two seconds.
#### 5.4.3 Attribute Experiments
We evaluate the performance of our method on the various challenge attributes of the TLPattr dataset and again compare it with DDIS and DIWU. The attribute-wise performance for both RGB and deep features is shown in Fig. 10.
Our best method performs comparably to DDIS and DIWU for most challenge attributes. DDIS and DIWU rely on a smaller subset of good matches, making them robust to partial occlusions and background clutter. In contrast, our method relies on the distribution of all the matches. Our method outperforms them in both categories, which shows that distribution matching might be better at a higher resolution than NN diversity-based scoring. For the rest of the categories, our performance is similar to or slightly better than DDIS and DIWU.
#### 5.4.4 Dimensionality Reduction
Although our method makes the application of high-dimensional features relatively fast over the GPU, using a lower-dimensional representation for the NNF creation might still be desirable for lower memory and shorter runtime requirements. DDIS and DIWU use PCA dimensionality reduction
Figure 10: The MIoU performances of our best method with the DDIS and DIWU for the different challenge attributes present in the TLPattr dataset. DDIS, DIWU and our performance are shown in blue, yellow and green bars, respectively.
to reduce the computational time of the methods. The methods reduce the dimensionality of the features to \(D=9\) first to speed up the NN search. Here, similarly, we explore the effects of dimensionality reduction by reducing the feature dimensions of our deep model from \(D=512\) to \(D=9\) and \(18\) and measure the relative change in performance on all the datasets. Only the results of our best-performing configuration on the datasets are shown.
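The dimensionality reduction itself can be done with an off-the-shelf PCA; the sketch below fits the projection on the template features and applies it to both template and query, although the exact fitting protocol used by the authors is not specified in this excerpt.

```python
from sklearn.decomposition import PCA

def reduce_feature_dim(template_feats, query_feats, d=9):
    """Project (N, 512) template and (M, 512) query features to d dimensions."""
    pca = PCA(n_components=d).fit(template_feats)
    return pca.transform(template_feats), pca.transform(query_feats)
```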
Tab. 3 shows the MIoU and runtime of the dimensionality reduction on all the datasets. The overall performance of our method on the BBS datasets is reduced by approx. \(1\%\) of the original when using the same dimensions (\(D=9\)) as used in the DDIS and DIWU. The runtime of our method, however, was reduced by approx. \(25\%\). For \(D=18\), a similar performance to the original was achieved while reducing the runtime by a similar margin.
For TLP datasets, no significant performance reduction was noticed for \(D=9,18\). The runtime for \(D=9\) was reduced by approx. \(29\%\) and \(23\%\) for TinyTLP and TLPattr datasets, respectively. This further shows the robustness of the similarity measure on high-resolution datasets, even with dimensionality reduction. We also noticed that the runtime of \(D=18\) was lower than that of \(D=9\). This can be due to the K-means algorithm being optimized for faster runtime of higher dimensional features with more data points.
## 6 Conclusion
We present a fast and robust template-matching framework using vector quantized NNFs, outperforming the previous state-of-the-art methods on high-resolution images while greatly reducing the NN computation cost. Our method efficiently uses the NN matching paradigm, is easily parallelizable to
| Feature Dim. | BBS25 | BBS50 | BBS100 | BBS (Total) | TinyTLP | TLPattr |
|---|---|---|---|---|---|---|
| MIoU (D=512) | **0.677** | **0.614** | **0.574** | **0.621** | 0.596 | **0.546** |
| MIoU (D=18) | **0.677** | 0.613 | 0.570 | 0.620 | **0.597** | **0.546** |
| MIoU (D=9) | 0.670 | 0.602 | 0.567 | 0.613 | 0.595 | 0.545 |
| Time (D=512) | 0.161 | 0.159 | 0.164 | 0.161 | 0.957 | 0.888 |
| Time (D=18) | 0.120 | **0.119** | 0.123 | 0.121 | **0.660** | **0.677** |
| Time (D=9) | **0.118** | **0.119** | **0.120** | **0.119** | 0.678 | 0.683 |

Table 3: Dimensionality reduction results for the BBS and TLP datasets. Times are in seconds. The best results in each column are written in bold.
be implemented on a GPU, and scales well with the image size. Furthermore, the similarity measure is comparable across images, unlike the other fast NNF-based approaches like DDIS and GAD, where the scores are based on image statistics. This can be useful in speeding up the process in applications of template matching where multiple images are considered, e.g., annotation applications and object tracking. We showcase the robustness of the method through exhaustive ablation studies. We show that state-of-the-art performance can be reached with simple coarse filters. This work paves the way for more robust and flexible template matching with more complex and custom-designed filters in the NNFs. The aforementioned advantages make this approach suitable for human-in-the-loop systems where the parameters can be adjusted, and the system's performance can be validated quickly by humans and applied to a large set of images.
## 7 Acknowledgement
The work was supported in part by the Swedish Foundation for Strategic Research under Grants BD15-0008 and SB16-0046 and in part by the European Research Council under Grant ERC-2015-CoG 683810.
|
2307.15339 | The Radon Signed Cumulative Distribution Transform and its applications
in classification of Signed Images | Here we describe a new image representation technique based on the
mathematics of transport and optimal transport. The method relies on the
combination of the well-known Radon transform for images and a recent signal
representation method called the Signed Cumulative Distribution Transform. The
newly proposed method generalizes previous transport-related image
representation methods to arbitrary functions (images), and thus can be used in
more applications. We describe the new transform, and some of its mathematical
properties and demonstrate its ability to partition image classes with real and
simulated data. In comparison to existing transport transform methods, as well
as deep learning-based classification methods, the new transform more
accurately represents the information content of signed images, and thus can be
used to obtain higher classification accuracies. The implementation of the
proposed method in Python language is integrated as a part of the software
package PyTransKit, available on Github. | Le Gong, Shiying Li, Naqib Sad Pathan, Mohammad Shifat-E-Rabbi, Gustavo K. Rohde, Abu Hasnat Mohammad Rubaiyat, Sumati Thareja | 2023-07-28T06:32:33Z | http://arxiv.org/abs/2307.15339v1 | The Radon signed cumulative distribution transform and its applications in classification of signed images
###### Abstract
Here we describe a new image representation technique based on the mathematics of transport and optimal transport. The method relies on the combination of the well-known Radon transform for images and a recent signal representation method called the Signed Cumulative Distribution Transform. The newly proposed method generalizes previous transport-related image representation methods to arbitrary functions (images), and thus can be used in more applications. We describe the new transform and some of its mathematical properties, and demonstrate its ability to partition image classes with real and simulated data. In comparison to existing transport transform methods, as well as deep learning-based classification methods, the new transform more accurately represents the information content of signed images, and thus can be used to obtain higher classification accuracies. The implementation of the proposed method in the Python language is integrated as a part of the software package PyTransKit [12].
Key words and phrases: Signed Radon Cumulative Distribution Transform, Image Representation, Image Classification.
## 1 Introduction
Finding useful mathematical formulas for representing signal and image data can be critical for solving important engineering and scientific problems. Fourier representation methods, for example, can dramatically simplify the solution of shift invariant linear systems of equations (e.g. convolutions), and thus are extensively used to filter sound and other types of signals, in optics and image processing (e.g. deconvolution) and other important problems. Localized sparse representations (e.g. short time Fourier transforms, Wavelets) have been extensively utilized in signal compression, denoising, filtering and other applications given their ability to summarize important signal features using few parameters.
In the past few years, new signal and image representation methods (i.e. transforms) based on the mathematics of optimal mass transport have emerged. Unlike the aforementioned methods (Fourier and Wavelet representations), which represent
functions as linear combinations of basis functions or frame vectors, the emerging transport-based techniques are nonlinear representation methods. They have been extensively used to render problems that are nonlinear, non-convex, and difficult to solve in the signal domain into linear, convex problems that have closed-form solutions in the transport transform domain. Examples include reconstructing images from few measurements [16, 6], estimating signal parameters such as time delay and frequency [25], as well as classifying data, where they have been shown to be especially effective in allowing for the formulation of closed-form, efficient, and accurate solutions of a wide variety of signal and image classification problems [26, 27, 28, 34].
To date, several types of transforms based on optimal transport have appeared in the literature, nearly all of which can be interpreted as a variation of the linear optimal transport (LOT) technique [17, 33], which is naturally applicable to data that can be interpreted as probability distributions. These include 1D signal transforms for non-negative distributional data [21] and signed distributions [2], as well as transforms for representing non-negative and normalized images in two and higher dimensions [14]. However, transport-based representation techniques for signed images in two or higher dimensions are yet to be described, and thus the success of transport-based classification methods in applications that involve signed images [3, 9, 13] has yet to be replicated.
Signed signals in two or higher dimensions are frequently encountered in science and engineering applications. One example is magnetic resonance imaging (MRI), where voltage coils acquire a series of 1D signals which are then reconstructed into images using the Fourier transform method, yielding complex functions that have positive and negative values [5]. Other examples include classification problems that employ scalograms derived from various types of Wavelet transforms [13]. Background subtraction is also a commonly performed pre-processing operation for microscopy and other types of optical imaging, typically yielding signed data [9]. Filtering techniques are also often employed to pre-process data prior to processing and classification. When signed filters are used (e.g. edge-type detection filters, convolutional neural networks), the resulting pre-processed images become signed [3].
Our goal in this manuscript is to describe a transport-based method for representing 2D signed images and demonstrate its ability to solve classification problems involving this type of data. The new image representation is denoted the Radon Signed Cumulative Distribution Transform (RSCDT) and consists of combining the recently developed signed cumulative distribution transform (SCDT) [2] together with the well-known Radon transform [24]. We describe ensuing mathematical properties of the representation, as well as a new Wasserstein-type metric for signed images which can be useful in the solution of pattern recognition problems. We then adapt the transport-based classification problem statement appearing in recent publications involving unsigned images [27] to signed images and describe its solution. Computational results showing the performance of the classification method based on the newly developed RSCDT in comparison to existing methods, including deep learning, are then shown, followed by concluding remarks. For readability, proofs for the given theorems and statements are provided in the appendix.
## 2 Preliminaries
### Notation
Throughout the manuscript, we deal with signals \(s\) assuming these to be square integrable in their respective domains. That is, we assume that \(\int_{\Omega_{s}}|s(x)|^{2}dx<\infty\), where \(\Omega_{s}\subseteq\mathbb{R}\) is the domain over which \(s\) is defined. In addition, we at times make use of the common notation: \(\|s\|^{2}=<s,s>=\int_{\Omega_{s}}s(x)^{*}s(x)dx=\int_{\Omega_{s}}|s(x)|^{2}dx\), where \(<\cdot,\cdot>\) is the inner product. Signals are assumed to be real, so the complex conjugate \({}^{*}\) does not play a role. We will apply the same notation for functions whose input argument is two dimensional, i.e. images. Let \(\mathbf{x}\in\Omega_{s}\subseteq\mathbb{R}^{2}\). A 2D continuous function representing the continuous image is denoted \(s(\mathbf{x}),\mathbf{x}\in\Omega_{s}\). Signals or images are denoted \(s^{(k)}\) when the class information is available, where the superscript \((k)\) represents the class label.
Below we will also make use of one dimensional (1D) increasing diffeomorphisms (one to one mapping functions), which are denoted as \(g(x)\) for signals and \(g^{\theta}(t)\) when they need to be parameterized by an angle \(\theta\). The set of all possible increasing diffeomorphisms from \(\mathbb{R}\) to \(\mathbb{R}\) will be denoted as \(\mathcal{T}\). Finally, at times we also utilize the '\(\circ\)' operator to denote composition. A summary of the symbols and notation used can be found in Table 1.
| Symbols | Description |
| --- | --- |
| \(s(x)\) / \(s(\mathbf{x})\) | Signal / image |
| \(\Omega_{s}\) | Domain of \(s\) |
| \(\widetilde{s}(t,\theta)\) | Radon transform of \(s\) |
| \(\widehat{s}(x)\) / \(\widehat{s}(t,\theta)\) | CDT / R-CDT transform of \(s\) |
| \(\mathscr{R}(\cdot)\) / \(\mathscr{R}^{-1}(\cdot)\) | Forward / inverse Radon transform operation |
| \(g(x)\) | Strictly increasing and differentiable function |
| \(g^{\theta}(t)\) | Strictly increasing and differentiable function, indexed by an angle \(\theta\) |
| \(s\circ g\) | \(s(g(x))\): composition of \(s(x)\) with \(g(x)\) |
| \(\widetilde{s}\circ g^{\theta}\) | \(\widetilde{s}(g^{\theta}(t),\theta)\): composition of \(\widetilde{s}(t,\theta)\) with \(g^{\theta}(t)\) along the \(t\) dimension of \(\widetilde{s}(t,\theta)\) |
| \(\mathcal{G}\) | Set of increasing diffeomorphisms \(g(x)\) |
| \(\mathcal{G}_{R}\) | Set of increasing diffeomorphisms \(g^{\theta}(t)\) parameterized by \(\theta\), with \(\theta\in[0,\pi]\) |
| \(\mathcal{T}\) | Set of all possible increasing diffeomorphisms from \(\mathbb{R}\) to \(\mathbb{R}\) |

Table 1: Description of symbols

### The Cumulative Distribution Transform (CDT)

The CDT [21] is an invertible nonlinear 1D signal transform from the space of smooth probability densities to the space of diffeomorphisms. The CDT morphs a given input signal, defined as a probability density function (PDF), into another PDF in such a way that the Wasserstein distance between them is minimized. More formally, let \(s(x),x\in\Omega_{s}\) and \(r(x),x\in\Omega_{r}\) define a given signal and a reference signal, respectively, which we consider to be appropriately normalized such that \(s>0,r>0\), and \(\int_{\Omega_{s}}s(x)dx=\int_{\Omega_{r}}r(x)dx=1\). The forward CDT transform\({}^{1}\) of \(s(x)\) with respect to \(r(x)\) is given by the strictly increasing function \(\widehat{s}(x)\) that satisfies

\[\int_{-\infty}^{\widehat{s}(x)}s(u)du=\int_{-\infty}^{x}r(u)du \tag{2.1}\]

Footnote 1: We are using a slightly different definition of the CDT than in [21]. The properties of the CDT outlined here hold in both definitions.
As described in detail in [21], the CDT is a nonlinear and invertible operation, with the inverse being
\[s(x)=\frac{d\widehat{s}^{-1}(x)}{dx}r\left(\widehat{s}^{-1}(x)\right),\text{ and }\widehat{s}^{-1}(\widehat{s}(x))=x.\]
Therefore, \(\hat{s}\) can be seen as a representation of the original function \(s\). Moreover, like the Fourier transform [4] for example, the CDT has a number of properties which will help us render signal and image classification problems easier to solve. Finally, note that throughout the manuscript we use the uniform distribution for the reference function \(r\), i.e.
\[r(x)=\begin{cases}1,\quad x\in\Omega_{r}\\ 0,\quad\text{otherwise}\end{cases} \tag{2.2}\]
**Property II-B.1**_(Composition)_: Let \(s(x)\) denote a normalized signal and let \(\widehat{s}(x)\) be the CDT of \(s(x)\). The CDT of \(s_{g}=g^{\prime}s\circ g\) is given by
\[\widehat{s}_{g}=g^{-1}\circ\widehat{s} \tag{2.3}\]
Here, \(g\in\mathcal{T}\) is an invertible and differentiable function (diffeomorphism), \(g^{\prime}=dg(x)/dx\), and '\(\circ\)' denotes the composition operator with \(s\circ g=s(g(x))\). For a proof, see Appendix A in supplementary materials.
Figure 1: The cumulative distribution transform (CDT) of a signal (probability density function). Note that the CDT of an altered (transported) signal \(s_{g}(x)\) (see text for definition) is related to the transform of \(s\). In short, the CDT renders displacements into amplitude modulations in transform space.
The CDT composition property implies that variations in a signal caused by applying \(g(x)\) to the independent variable will change only the dependent variable in CDT space. This property is illustrated in Figure 1, where variations along both independent and dependent axis directions in the original signal space become changes solely along the dependent axis in CDT space.
**Property II-B.2**: _(Embedding)_: The CDT induces an isometric embedding between the space of 1D signals with the 2-Wasserstein metric and the space of their CDT transforms with a weighted-Euclidean metric [21], i.e.,
\[W_{2}^{2}(s_{1},s_{2})=\big{|}\big{|}(\widehat{s}_{1}-\widehat{s}_{2})\, \sqrt{r}\big{|}\big{|}_{L^{2}(\Omega_{r})}^{2}\,, \tag{2.4}\]
for all signals \(s_{1},s_{2}\). That is to say, if we wish to use the Wasserstein distance as a measure of similarity between \(s_{1},\ s_{2}\), we can compute it as simply a weighted Euclidean norm in CDT space.
The property above naturally links the CDT and Wasserstein distances for PDFs. Wasserstein [32] distances are linked to optimal transport and have been used in a variety of applications in signal and image processing and machine learning (see [15] for a recent review).
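As an illustrative aside, the CDT of eq. (2.1) reduces to a cumulative sum and a monotone inversion on a grid. The following is a minimal NumPy sketch, not the PyTransKit implementation; the function and grid names are ours, and a uniform grid and a strictly positive input density are assumed.

```python
import numpy as np

def cdt(s, x, r, y):
    """CDT of a density s (sampled on uniform grid x) w.r.t. a reference density r (on grid y).

    Returns s_hat evaluated on y, i.e. the increasing map satisfying F_s(s_hat) = F_r (eq. 2.1).
    """
    dx, dy = x[1] - x[0], y[1] - y[0]
    s = s / (s.sum() * dx)            # normalize to unit area
    r = r / (r.sum() * dy)
    Fs = np.cumsum(s) * dx            # discrete CDF of s
    Fr = np.cumsum(r) * dy            # discrete CDF of r
    return np.interp(Fr, Fs, x)       # invert F_s by interpolation

# Example: a Gaussian bump against the uniform reference of eq. (2.2)
x = np.linspace(0.0, 1.0, 256)
s = np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)
r = np.ones_like(x)
s_hat = cdt(s, x, r, x)
```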
### The Radon transform
The Radon transform of an image \(s(\mathbf{x}),\mathbf{x}\in\Omega_{s}\subset\mathbb{R}^{2}\), which we denote by \(\widetilde{s}=\mathcal{R}(s)\), is defined as
\[\widetilde{s}(t,\theta) = \int_{\Omega_{s}}s(\mathbf{x})\delta(t-\mathbf{x}\cdot\xi_{\theta })d\mathbf{x} \tag{2.5}\]
Here, \(t\) is the perpendicular distance of a line from the origin and \(\xi_{\theta}=[\cos(\theta),\sin(\theta)]^{T}\), where \(\theta\) is the angle over which the projection is taken.
Furthermore, using the Fourier Slice Theorem [20, 23], the inverse Radon transform \(s=\mathcal{R}^{-1}(\widetilde{s})\) is defined as
\[s(\mathbf{x}) = \int_{0}^{\pi}\int_{-\infty}^{\infty}\widetilde{s}(\mathbf{x} \cdot\xi_{\theta}-\tau,\theta)w(\tau)d\tau d\theta, \tag{2.6}\]
where \(w\) is the ramp filter (i.e.,\((\mathscr{F}w)(\xi)=|\xi|,\forall\xi\) ) and \(\mathscr{F}\) is the Fourier transform.
**Property II-C.1** (_Intensity equality_): Note that
\[\int_{\Omega_{s}}s(\mathbf{x})d\mathbf{x}=\int_{-\infty}^{\infty}\widetilde{s }(t,\theta)dt,\ \ \ \ \forall\theta\in[0,\pi] \tag{2.7}\]
which implies that \(\int_{-\infty}^{\infty}\widetilde{s}(t,\theta_{i})dt=\int_{-\infty}^{\infty} \widetilde{s}(t,\theta_{j})dt\) for any two choices \(\theta_{i},\theta_{j}\in[0,\pi]\).
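For a concrete illustration of eqs. (2.5)–(2.7), the sketch below uses scikit-image's `radon` and `iradon` routines on a made-up test image; it is an assumption of the sketch (not part of the paper) that scikit-image is available.

```python
import numpy as np
from skimage.transform import radon, iradon

# Synthetic test image: a bright square on a dark background (for illustration only).
image = np.zeros((64, 64))
image[20:40, 25:45] = 1.0

theta = np.linspace(0.0, 180.0, 90, endpoint=False)
sinogram = radon(image, theta=theta)       # columns are projections s~(., theta), eq. (2.5)

# Property II-C.1: every projection carries (approximately) the same total intensity.
print(sinogram.sum(axis=0).min(), sinogram.sum(axis=0).max(), image.sum())

# Filtered back-projection with the default ramp filter approximates eq. (2.6).
reconstruction = iradon(sinogram, theta=theta)
```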
### Radon Cumulative Distribution Transform (R-CDT)
The CDT framework was extended for 2D patterns (images as normalized density functions) through the sliced-Wasserstein distance in [14], and was denoted as R-CDT. The main idea behind the R-CDT is to first obtain a family of one dimensional representations of a two dimensional probability measure (e.g., an image) through the Radon transform and then apply the CDT over the \(t\) dimension in Radon transform space. More formally, let \(s(\mathbf{x})\) and \(r(\mathbf{x})\) define a given image and a reference image, respectively, which we
consider to be appropriately normalized. The forward R-CDT of \(s(\mathbf{x})\) with respect to \(r(\mathbf{x})\) is given by the measure preserving function \(\widehat{s}(t,\theta)\) that satisfies
\[\int_{-\infty}^{\widehat{s}(t,\theta)}\widetilde{s}(u,\theta)du=\int_{-\infty} ^{t}\widetilde{r}(u,\theta)du,\ \forall\theta\in[0,\pi] \tag{2.8}\]
As in the case of the CDT, a transformed signal in R-CDT space can be recovered via the following inverse formula [14],
\[s(\mathbf{x})=\mathscr{R}^{-1}\left(\frac{\partial\widehat{s}^{-1}(t,\theta)}{\partial t}\widetilde{r}\left(\widehat{s}^{-1}(t,\theta),\theta\right)\right) \tag{2.9}\]
The process of calculating the R-CDT transform is shown in Figure 2. As with the CDT, the R-CDT has a couple of properties outlined below which will be of interest when classifying images. Let us give some important definitions first. Let \(\mathcal{G}_{R}\) be the set defined as follows,
\[\mathcal{G}_{R}=\{g=(g^{\theta})_{\theta\in[0,\pi]}:g^{\theta}:\mathbb{R} \rightarrow\mathbb{R}\ \text{is a strictly increasing bijection}\ \forall\theta\in[0,\pi]\} \tag{2.10}\]
**Definition 2.1**.: Let \(\star:\mathcal{G}_{R}\times\mathcal{G}_{R}\rightarrow\mathcal{G}_{R}\) be an operator defined by
\[(g\star h)(\cdot,\theta):=(g^{\theta}\circ h^{\theta})(\cdot)\quad\forall \theta\in[0,\pi]\]
It is not hard to see that \((\mathcal{G}_{R},\star)\) is a group (see A.1).
**Definition 2.2**.: Let \(H_{R}\subseteq\mathcal{G}_{R}\). We call \(H_{R}\) a convex subgroup of \((\mathcal{G}_{R},\star)\) if \(H_{R}\) is a convex set and \((H_{R},\star)\) is a group.
**Property II-D.1**_(Composition)_: Let \(s(\mathbf{x})\) denote an appropriately normalized image and let \(\widetilde{s}(t,\theta)\) and \(\widehat{s}(t,\theta)\) be the Radon transform and the R-CDT transform of \(s(\mathbf{x})\), respectively. The R-CDT transform of \(s_{g^{\theta}}=\mathscr{R}^{-1}\left(\left(g^{\theta}\right)^{\prime}\widetilde{s}\circ g^{\theta}\right)\) is given by

\[\widehat{s}_{g^{\theta}}=(g^{\theta})^{-1}\circ\widehat{s}, \tag{2.11}\]

where \(\left(g^{\theta}\right)^{\prime}=dg^{\theta}(t)/dt\), \(\widetilde{s}\circ g^{\theta}:=\widetilde{s}(g^{\theta}(t),\theta)\), and \((g^{\theta})^{-1}\circ\widehat{s}=(g^{\theta})^{-1}(\widehat{s}(t,\theta))\). Here, for each fixed \(\theta\), \(g^{\theta}\) can be thought of as an increasing and differentiable function with respect to \(t\). The above equation hence follows from the composition property of the 1D CDT. For a proof, see [14].

Figure 2. The process of calculating the Radon cumulative distribution transform (R-CDT) of an image \(s(\mathbf{x})\) (defined as a 2-dimensional probability density function). The first step is to apply the Radon transform on \(s(\mathbf{x})\) to obtain \(\widetilde{s}(t,\theta)\). The R-CDT \(\widehat{s}(t,\theta)\) is then obtained by applying the CDT over the \(t\) dimension of \(\widetilde{s}(t,\theta),\ \forall\theta\).
The R-CDT composition property implies that variations along both independent and dependent axis directions in an image, caused by applying \(g^{\theta}(t)\) to the independent \(t\) variable of its Radon transform, become changes solely along the dependent variable in R-CDT space.
**Property II-D.2**: _(Embedding)_: R-CDT induces an isometric embedding between the space of images with sliced-Wasserstein metric and the space of their R-CDT transforms with a weighted-Euclidean metric, i.e.,
\[SW_{2}^{2}(s_{1},s_{2})=\Big{|}\big{|}(\widehat{s}_{1}-\widehat{s}_{2})\, \sqrt{\widetilde{r}}\Big{|}\Big{|}_{L^{2}(\Omega_{\widetilde{r}})}^{2} \tag{2.12}\]
for all images \(s_{1}\) and \(s_{2}\). For a proof, see [14].
As in the case of the 1D CDT shown above, the property above naturally links the R-CDT and sliced-Wasserstein distances for PDFs and affords us a simple means of computing similarity among images [14]. We remark that throughout this manuscript we use the notation \(\widehat{s}\) for both the CDT and the R-CDT transform of a signal or image \(s\) with respect to a fixed reference signal or image \(r\), if a reference is not specified.
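Numerically, the R-CDT of eq. (2.8) is just the 1D CDT applied to every projection of the sinogram. A minimal sketch follows, reusing the `cdt` helper sketched in Section 2.2 above and scikit-image's `radon`; it assumes strictly positive projections (a small constant is added to keep the discrete CDFs strictly increasing), and all names are ours rather than PyTransKit's.

```python
import numpy as np
from skimage.transform import radon

def rcdt(image, ref_image, theta=None):
    """R-CDT of `image` w.r.t. `ref_image` (eq. 2.8): Radon transform, then CDT per angle."""
    if theta is None:
        theta = np.linspace(0.0, 180.0, 90, endpoint=False)
    s_tilde = radon(image, theta=theta) + 1e-10      # avoid exactly-zero projections
    r_tilde = radon(ref_image, theta=theta) + 1e-10
    t = np.arange(s_tilde.shape[0], dtype=float)
    cols = [cdt(s_tilde[:, k], t, r_tilde[:, k], t) for k in range(len(theta))]
    return np.stack(cols, axis=1)                    # s_hat(t, theta), one column per angle
```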
### Signed Cumulative Distribution Transform
This section is a brief description of the SCDT (for details, see [2]). Fix a reference signal \(r\) that is a positive probability density. Let \(s\) be a non-negative signal with \(\|s\|_{1}=1.\) The cumulative distribution function \(F_{s}\) of \(s\) is defined as
\[F_{s}(x):=\int_{-\infty}^{x}s(t)dt. \tag{2.13}\]
The Cumulative Distribution Transform \(\mathcal{C}(s)\) of \(s\) with respect to \(r\), as defined in eq. (2.1) (which implies \(F_{s}(\widehat{s}(t))=F_{r}(t)\)) [21], can also be written as (see [2])
\[\widehat{s}:=F_{s}{}^{\dagger}\circ F_{r}, \tag{2.14}\]
where \(F_{s}{}^{\dagger}\) is the generalized inverse of \(F_{s}\) defined by
\[F^{\dagger}(y):=\inf\{x:F(x)>y\}.\]
Note that the generalized inverse comes in play here, because the CDF \(F_{s}\) of \(s\) need not be invertible. Now, for a signed signal \(s\), the transform is defined by utilizing the Jordan decomposition of \(s\), namely
\[s^{+}(t)=\max\{0,s(t)\},\ s^{-}(t)=\max\{0,-s(t)\} \tag{2.15}\]
Then, the SCDT of \(s\) with respect to \(r\) is defined as
\[\widehat{s}=\mathbb{T}_{r}(s):=\Big{(}(s^{+})^{\star},\|s^{+}\|_{1},(s^{-})^{ \star},\|s^{-}\|_{1}\Big{)}, \tag{2.16}\]
where
\[(s^{\pm})^{\star}:=\begin{cases}\widehat{\frac{s^{\pm}}{\|s^{\pm}\|_{1}}}& \text{if $s^{\pm}$ is non-trivial}\\ 0&\text{if $s^{\pm}=0$}\,.\end{cases} \tag{2.17}\]
Let \(\mathcal{N}:=\{f:\mathbb{R}\rightarrow\mathbb{R}\text{ non-decreasing }\}\times\mathbb{R}^{+}.\) Then, for a fixed reference \(r,\)
\[\mathbb{T}_{r}:L^{1}(\mathbb{R})\rightarrow\mathcal{N}\times\mathcal{N}\]
is a bijection, where for \((f,u,g,v)\in\mathcal{N}\times\mathcal{N}\) the inverse transform is given as,
\[(f,u,g,v)\longrightarrow\frac{u}{\|r\|_{1}}r(f^{\dagger}(\cdot))(f^{\dagger}) ^{\prime}(\cdot)-\frac{v}{\|r\|_{1}}r(g^{\dagger}(\cdot))(g^{\dagger})^{\prime }(\cdot) \tag{2.18}\]
provided \(f^{\dagger}\) and \(g^{\dagger}\) are differentiable.
Note that the SCDT expands on the concept of the CDT by accommodating functions with both positive and negative values. By splitting a signed signal into its positive and negative components, the SCDT allows for the application of the CDT independently to each part. This enables the transformation of signed signals while preserving important properties and characteristics. The introduction of the SCDT also introduces the concept of the generalized inverse. The generalized inverse behaves similarly to the regular inverse for invertible functions; however, it extends to situations where functions may not be invertible. This becomes necessary in the case of the SCDT, given that the Jordan decomposition used above (see (2.15)) yields two non-negative signals \(s^{+}\) and \(s^{-}\) which might contain zeroes, and hence their cumulation (an analogue of the CDF for functions that are not probability distributions, see [2]) stated in equation (2.13) might not be strictly increasing and thus might not admit an inverse. Therefore, the generalized inverse is valuable in situations where a direct inverse may not exist, still allowing for the recovery and analysis of signals even when invertibility is not guaranteed.
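A direct numerical reading of eqs. (2.15)–(2.17), again reusing the `cdt` helper sketched in Section 2.2, is given below; the naming is ours (this is an illustrative sketch, not the PyTransKit implementation), and a tiny offset stands in for the generalized inverse on flat regions of the CDF.

```python
import numpy as np

def scdt(s, x, r, y):
    """SCDT of a signed 1D signal s w.r.t. a positive reference density r (eq. 2.16)."""
    s_pos, s_neg = np.maximum(s, 0.0), np.maximum(-s, 0.0)   # Jordan decomposition, eq. (2.15)
    dx = x[1] - x[0]
    m_pos, m_neg = s_pos.sum() * dx, s_neg.sum() * dx        # ||s^+||_1 and ||s^-||_1

    def star(part, mass):
        # (s^{+/-})^* of eq. (2.17): CDT of the normalized part, or 0 if the part is trivial.
        # The small offset keeps the discrete CDF strictly increasing for np.interp.
        return cdt(part + 1e-12, x, r, y) if mass > 1e-12 else np.zeros_like(y)

    return star(s_pos, m_pos), m_pos, star(s_neg, m_neg), m_neg
```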
By incorporating the SCDT and the notion of the generalized inverse, the framework becomes more versatile and applicable to a broader range of signals. Let us now look at some of the important properties of the SCDT, which generalize the corresponding properties of the CDT.
**Lemma 2.3**.: _(Composition Property) [2] Let \(s\in L^{1}(\mathbb{R})\), and let \(g:\mathbb{R}\rightarrow\mathbb{R}\) be a strictly increasing surjection. Consider \(s_{g}\) given by \(s_{g}(t)=g^{\prime}(t)\,s(g(t))\). Then, the SCDT of \(s_{g}\) satisfies_
\[\widehat{s}_{g}=\Big{(}g^{-1}\circ(s^{+})^{\star},\|s^{+}\|_{1},g^{-1}\circ(s^ {-})^{\star},\|s^{-}\|_{1}\Big{)}.\]
In order to state the second lemma, we require a metric on the native space. This metric generalizes the Wasserstein distance [31], defined for probability densities, to non-normalized signed signals.
**Definition 2.4**.: (Signed Wasserstein Metric) Let \(r,s\in L^{1}(\mathbb{R})\) such that \(\int r(t)|t|^{2}dt<\infty\) and \(\int s(t)|t|^{2}dt<\infty\), then
\[D_{\mathcal{S}}^{2}(r,s):=D_{W^{2}}^{2}\left(r^{+},s^{+}\right)+D_{W^{2}}^{2} \left(r^{-},s^{-}\right) \tag{2.19}\]
where \(D_{W^{2}}^{2}\left(p,q\right)=d_{W^{2}}^{2}\left(\frac{p}{\|p\|_{1}},\frac{q} {\|q\|_{1}}\right)+\left(\|p\|_{1}-\|q\|_{1}\right)^{2}\) and \(d_{W^{2}}^{2}(\cdot,\cdot)\) is the usual Wasserstein distance [31].
Utilizing the SCDT and the metric \(D_{S}\), the following result is a generalization of a well-known isometry.
**Lemma 2.5**.: _(Isometry) [2] Let \(s_{1},s_{2}\in L^{1}(\mathbb{R})\) such that \(\int s_{1}(t)|t|^{2}dt<\infty\) and \(\int s_{2}(t)|t|^{2}dt<\infty\). Then,_
\[D_{S}^{2}(s_{1},s_{2}) =\|\widehat{s_{1}}-\widehat{s_{2}}\|_{(L^{2}(r)\times\mathbb{R}) ^{2}}^{2}\] \[:=\|(s_{1}^{+})^{\star}-(s_{2}^{+})^{\star}\|_{L^{2}(r)}^{2}+ \left(\|s_{1}^{+}\|_{1}-\|s_{2}^{+}\|_{1}\right)^{2}\] \[+\|(s_{1}^{-})^{\star}-(s_{2}^{-})^{\star}\|_{L^{2}(r)}^{2}+ \left(\|s_{1}^{-}\|_{1}-\|s_{2}^{-}\|_{1}\right)^{2},\]
_where \(\|\cdot\|_{L^{2}(r)}\) is the norm defined by_
\[\|f\|_{L^{2}(r)}^{2}:=\int|f(x)|^{2}\,r(x)\,dx.\]
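Lemma 2.5 says that the distance of Definition 2.4 becomes an ordinary weighted Euclidean computation once the SCDTs are available. A sketch using the `scdt` helper above (our own naming; it returns the squared distance \(D_S^2\)):

```python
import numpy as np

def d_signed_sq(s1, s2, x, r, y):
    """Squared signed-Wasserstein distance D_S^2(s1, s2), evaluated in SCDT space (Lemma 2.5)."""
    f1p, m1p, f1n, m1n = scdt(s1, x, r, y)
    f2p, m2p, f2n, m2n = scdt(s2, x, r, y)
    dy = y[1] - y[0]
    w = r / (r.sum() * dy)                               # reference density used as the L^2(r) weight
    l2r = lambda f, g: np.sum((f - g) ** 2 * w) * dy     # ||f - g||^2_{L^2(r)}
    return (l2r(f1p, f2p) + (m1p - m2p) ** 2 +
            l2r(f1n, f2n) + (m1n - m2n) ** 2)
```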
Now that we have all the preliminaries in place, we define the Radon Signed Cumulative Distribution Transform and see some of its interesting properties that will later help us in classification of signed signals in 2D.
## 3 Radon Signed Cumulative Distribution Transform
Many image pre-processing steps, such as background subtraction, leave images with both positive and negative values. An example of such an image is shown in Figure 3. Here we expand the definition of the R-CDT provided above (see Section 2.4) to allow us to deal with signed images. The idea can be simply summarized as the application of the SCDT to the Radon transform of a signed function. We describe below the mathematical properties of this new representation and show how they can be used to obtain classifiers which are at the same time accurate, closed form, and cheap to compute.
**Definition 3.1**.: (RSCDT) Let \(r:\mathbb{R}^{2}\to\mathbb{R}\) be the positive reference image and let \(s:\mathbb{R}^{2}\to\mathbb{R}\) be the signed image. The Radon Signed CDT of \(s\) with respect to \(r\) is a composition of the Radon transform and the SCDT for each projection angle.

Figure 3. The process of obtaining the Radon Signed Cumulative Distribution Transform (RSCDT) of a grayscale image with background is depicted. In the first step, a filtering technique is applied to suppress the background as well as to enhance the features of interest in the image. The filtered image contains negative pixel values as shown in the figure. To calculate the RSCDT (\(\widehat{s}(t,\theta)\)) of the filtered image, a two-step process is followed. First, the Radon transform is applied to the filtered image, which computes the line integrals at different angles. Next, the signed cumulative distribution transform (SCDT) is applied to each projection obtained from the Radon transform.

Formally, we first take the Radon transform of both the reference and the signal as,
\[\widetilde{r}(t,\theta)=\mathcal{R}(r(\mathbf{x}))\quad\text{ and }\quad \widetilde{s}(t,\theta)=\mathcal{R}(s(\mathbf{x})).\]
Finally, the RSCDT of \(s\) with respect to the reference \(r\), for each \(\theta\in[0,\pi]\), is given as

\[\widehat{s}(\cdot,\theta)=\mathbb{T}_{\tilde{r}(\cdot,\theta)}(\tilde{s}(\cdot,\theta))=\Big{(}(\widetilde{s}^{+})^{\star}(\cdot,\theta),\|\widetilde{s}^{+}(\cdot,\theta)\|_{1},(\widetilde{s}^{-})^{\star}(\cdot,\theta),\|\widetilde{s}^{-}(\cdot,\theta)\|_{1}\Big{)} \tag{3.1}\]

where \((\cdot)^{\star}\) and \(\mathbb{T}_{\tilde{r}(\cdot,\theta)}\) are as defined in (2.17) and (2.16), respectively.
As in the case of its family of transforms, a transformed image in the RSCDT space can be recovered via the following inverse formula

\[s=\mathcal{R}^{-1}([\mathbb{T}_{\widetilde{r}(\cdot,\theta)}^{-1}(\widehat{s}(\cdot,\theta))]_{\theta\in[0,\pi]})\]

where \(\mathcal{R}^{-1}\) is the inverse of the Radon transform (see (2.6)) and \(\mathbb{T}^{-1}\) is the inverse of the SCDT (see (2.18)).
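Putting the pieces together, the RSCDT of Definition 3.1 can be computed projection by projection. The sketch below reuses the `scdt` helper of Section 2.5 and scikit-image's `radon`; the layout of the returned feature array is our own choice and not prescribed by the paper.

```python
import numpy as np
from skimage.transform import radon

def rscdt(image, ref_image, theta=None):
    """RSCDT of a signed image w.r.t. a positive reference image (Definition 3.1)."""
    if theta is None:
        theta = np.linspace(0.0, 180.0, 90, endpoint=False)
    s_tilde = radon(image, theta=theta)          # projections of the signed image (may be signed)
    r_tilde = radon(ref_image, theta=theta)      # projections of the positive reference
    t = np.arange(s_tilde.shape[0], dtype=float)
    feats = []
    for k in range(len(theta)):
        sp, mp, sn, mn = scdt(s_tilde[:, k], t, r_tilde[:, k], t)
        feats.append(np.concatenate([sp, [mp], sn, [mn]]))   # ((s~+)*, ||s~+||_1, (s~-)*, ||s~-||_1)
    return np.stack(feats, axis=1)               # one column per projection angle
```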
### Properties of RSCDT
Like its family transforms, RSCDT has a few properties outlined below which will be of interest when classifying images.
**Proposition 3.2**.: (Composition Property) Let \(s(\mathbf{x})\) denote a signed image and let \(\tilde{s}(t,\theta)\) and \(\widehat{s}(t,\theta)\) be the Radon transform and RSCDT of \(s(\mathbf{x})\) respectively. For \(g\in\mathcal{G}_{R}\) and for each \(\theta\in[0,\pi]\), define
\[s_{g}=\mathcal{R}^{-1}((g^{\theta})^{\prime}\widetilde{s}\circ g^{\theta}) \tag{3.2}\]
where \((g^{\theta})^{\prime}(t)=dg^{\theta}(t)/dt,\ \tilde{s}\circ g^{\theta}(\cdot):= \tilde{s}(g^{\theta}(\cdot),\theta).\) Then the RSCDT of \(s_{g}\) is given by
\[\widehat{s}_{g}(t,\theta)=\Big{(}(g^{\theta})^{-1}\circ(\widetilde{s}^{+})^{\star}(t,\theta),\|\widetilde{s}^{+}(\cdot,\theta)\|_{1},(g^{\theta})^{-1}\circ(\widetilde{s}^{-})^{\star}(t,\theta),\|\widetilde{s}^{-}(\cdot,\theta)\|_{1}\Big{)} \tag{3.3}\]
As in the previous versions of the transform, the composition property of the RSCDT implies that variations along both the independent and dependent axis directions in an image, caused by applying \(g^{\theta}(t)\) to the independent variable in Radon space, become variations solely along the dependent variable in RSCDT space. The following corollary gives an explicit example of the composition property for the easy-to-understand case of a translation in the independent variables.
**Corollary 3.3**.: Let \(s(\mathbf{x})\) denote a signed image and let \(\tilde{s}(t,\theta)\) and \(\widehat{s}(t,\theta)\) be the Radon transform and RSCDT of \(s(\mathbf{x})\) respectively. Let
\[s_{1}(x_{1},x_{2})=s(x_{1}-x_{0},x_{2}-y_{0}),\]
Then the RSCDT \(\widehat{s_{1}}(\cdot,\theta)\) of \(s_{1}\) for every \(\theta\in[0,\pi]\) is given by
\[\widehat{s_{1}}(t,\theta)=\Big{(}(g^{\theta})^{-1}\circ(\widetilde{s}^{+})^{\star}(t,\theta),\|\widetilde{s}^{+}(\cdot,\theta)\|_{1},(g^{\theta})^{-1}\circ(\widetilde{s}^{-})^{\star}(t,\theta),\|\widetilde{s}^{-}(\cdot,\theta)\|_{1}\Big{)} \tag{3.4}\]
where, \(g^{\theta}(t)=t-x_{0}\cos\theta-y_{0}\sin\theta\).
In certain applications, data sets can be rendered convex by the RSCDT and its family of transforms in the respective transform domains. For example, as we saw in the 1D case in [21], sets generated by translations of a template signal can have a complex geometry in the signal domain; however, they have a very simple convex structure in the transform domain. This convexification property is useful in classification problems since two
disjoint convex data sets can be separated by a linear classifier. The following convexity property is a generalization of the convexity property that was proved in [2] for signed signals in 1D.
**Proposition 3.4**.: (Convexification Property) Let \(\phi(\mathbf{x})\) denote a signed image and let \(\tilde{\phi}(t,\theta)\) and \(\widehat{\phi}(t,\theta)\) be the Radon transform and RSCDT of \(\phi(\mathbf{x})\) respectively. Let \(H_{R}\subseteq\mathcal{G}_{R}\). Consider the generative model,
\[\mathbb{S}_{\phi,H_{R}}:=\{\phi_{g}:\phi_{g}=\mathcal{R}^{-1}((g^{\theta})^{ \prime}\widetilde{\phi}\circ g^{\theta}),\;g\in H_{R}\} \tag{3.5}\]
If \(H_{R}^{-1}\) is a convex set then \(\widehat{\mathbb{S}}_{\phi,H_{R}}:=\{\widehat{\phi}_{g}:\phi_{g}\in\mathbb{S }_{\phi,H_{R}}\}\) is convex.
Next is a corollary to the convexification property above that is based on the fact that if \(G\) is a group then \(G=G^{-1}\).
**Corollary 3.5**.: Let \(\phi(\mathbf{x})\) be as in the above proposition, and let \(H_{R}\subseteq\mathcal{G}_{R}\) be a convex subgroup of \((\mathcal{G}_{R},\star)\). Then \(\widehat{\mathbb{S}}_{\phi,H_{R}}:=\{\widehat{\phi}_{g}:\phi_{g}\in\mathbb{S}_{\phi,H_{R}}\}\) (see (3.5)) is convex.
### Metric Structure
The Wasserstein distance in the space \(\mathcal{P}(\mathbb{R})\) is intimately related to the \(L^{2}\) distance in the transport transform domain, as shown in [1, 30, 32]. We then saw a Wasserstein-like metric defined in [2] for signed signals and its relationship with the \(L^{2}\) distance in the transform domain for 1D signed signals. This relation has proven to be useful in some applications since it renders certain optimization problems involving the Wasserstein-like distances into standard least squares optimization problems. In this section, we generalize the Wasserstein-like distance defined in [2] for signed measures and the sliced-Wasserstein distance [14] to define a metric space structure on the space of signed signals (images) in 2D. We then see an analogue of the isometry [2] between the metric on signed images and an analogue of the \(L^{2}\) distance in the transform domain.
**Definition 3.6**.: (Signed Sliced-Wasserstein Distance) For \(s_{1},s_{2}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) the distance is defined as
\[D(s_{1},s_{2}): =\Big{(}\int_{0}^{\pi}D_{S}^{2}\left(\widetilde{s_{1}}(\cdot, \theta),\widetilde{s_{2}}(\cdot,\theta)\right)d\theta\Big{)}^{\frac{1}{2}}\] \[=\Bigg{(}\int_{0}^{\pi}D_{W^{2}}^{2}\left(\widetilde{s_{1}}^{+}( \cdot,\theta),\widetilde{s_{2}}^{+}(\cdot,\theta)\right)d\theta+\int_{0}^{ \pi}D_{W^{2}}^{2}\left(\widetilde{s_{1}}^{-}(\cdot,\theta),\widetilde{s_{2}} ^{-}(\cdot,\theta)\right)d\theta\Bigg{)}^{\frac{1}{2}},\]
where \(D_{S}\) and \(D_{W^{2}}\) are as defined in (2.19).
**Proposition 3.7**.: (Isometry/Embedding) RSCDT induces an isometric embedding between the space of images with Signed Sliced-Wasserstein metric defined above and the space of their RSCDT transforms with a Euclidean-type metric,
\[D(s_{1},s_{2})=\Bigg{(}\int_{\theta=0}^{\pi}\|\widehat{s_{1}}(\cdot,\theta)- \widehat{s_{2}}(\cdot,\theta)\|_{(L^{2}(\widetilde{r}(\cdot,\theta))\times \mathbb{R})^{2}}^{2}\,d\theta\Bigg{)}^{\frac{1}{2}}, \tag{3.6}\]
A compact notation for the above equation would be,
\[D(s_{1},s_{2})=\|\widehat{s_{1}}-\widehat{s_{2}}\|_{(L^{2}(\widetilde{r}) \times L^{2}[0,\pi])^{2}} \tag{3.7}\]
where
\[\left\|\widehat{s}\right\|_{(L^{2}(\widetilde{r})\times L^{2}[0, \pi])^{2}}^{2} :=\int_{0}^{\pi}\left\|\widehat{s}(\cdot,\theta)\right\|_{(L^{2}( \widetilde{r}(\cdot,\theta))\times\mathbb{R})^{2}}^{2}d\theta\] \[=\int_{0}^{\pi}\left(\,\int_{\mathbb{R}}\left|\widehat{s}^{+}(t, \theta)\right|^{2}\widetilde{r}(t,\theta)\,dt+\|\widetilde{s}^{+}(\cdot, \theta)\|_{1}^{2}\right.\] \[+\int_{\mathbb{R}}|\widehat{s}^{-}(t,\theta)|^{2}\,\widetilde{r} (t,\theta)\,dt+\|\widetilde{s}^{-}(\cdot,\theta)\|_{1}^{2}\right)d\theta\]
## 4 Image Classification using RSCDT
Here we present a mathematical, model-based statement of the type of classification problems considered in this paper. Furthermore, we illustrate how the composition and convexification properties of the RSCDT play a crucial role in facilitating the classification of images.
### Signal Class Model and Problem Statement
In numerous applications, we focus on classifying images that are instances of a certain prototype (or template) observed under some often unknown deformation patterns. Let's consider the problem of classifying handwritten digits, such as the MNIST dataset [18]. In such datasets, it is reasonable to assume that each observed digit image can be considered as an instance of a template (or templates) subjected to unknown deformations or variations. For instance, a suitable model for a particular class in the dataset, like the digit 1, would involve a fixed pattern for the digit 1 but with different translations, meaning the digit can appear randomly positioned within the image's field of view. Alternatively, the digit could also exhibit variations in size or slight nonrigid deformations. The following mathematical model for image (2D signals) classes formalize these concepts.
**Definition 4.1** (2D signal class model).: Let \(\mathcal{G}_{R}\subset\mathcal{T}\) be the set of confounds. The 2D mass (image intensity) preserving class model for the \(k^{\mbox{th}}\) class is defined to be the set
\[\mathbb{S}^{(k)}=\left\{s_{j}^{(k)}|s_{j}^{(k)}=\mathcal{R}^{-1}\left(\left(g_ {j}^{\theta}\right)^{\prime}\widetilde{\varphi}^{(k)}\circ g_{j}^{\theta} \right),\forall g_{j}\in\mathcal{G}_{R}\right\}, \tag{4.1}\]
where \(\varphi^{(k)}\) is the template pattern of class \(k\) and \(s_{j}^{(k)}\) is the \(j\)-th image from that class. Given the signal class model, the mathematical description of the classification problem is defined as:
**Definition 4.2** (Classification problem).: Let \(\mathcal{G}_{R}\subset\mathcal{T}\) denotes the set of confounds and \(\mathbb{S}^{(k)}\) be defined as in equation (4.1). Given training samples \(\{s_{1}^{(1)},s_{2}^{(1)},\cdots\}\) (class 1), \(\{s_{1}^{(2)},s_{2}^{(2)},\cdots\}\) (class 2), \(\cdots\) as training data, determine the class \((k)\) of an unknown image \(s\).
### Proposed Solution
The signal class model specified in equation (4.1) typically produces signal classes that are nonconvex. However, the RSCDT can simplify the data geometry and thereby simplify the classification problem. Hence, following the
methodology proposed in [27], we employ the composition property of the RSCDT on \(s_{j}^{(k)}\) in eq. (4.1), which yields the signal class model in the transform domain as:
\[\widehat{\mathbb{S}}^{(k)} = \{\widehat{s}_{j}^{(k)}|\widehat{s}_{j}^{(k)}=\left(g_{j}^{\theta} \right)^{-1}\circ\widehat{\varphi}^{(k)},\forall g_{j}\in\mathcal{G}_{R}\}. \tag{4.2}\]
By applying the convexification property of the RSCDT, as described in Proposition 3.4, it is evident that the class model given in eq. (4.2) forms a convex set if \(\mathcal{G}_{R}\) is a convex group [27]. Furthermore, since the RSCDT is a one-to-one mapping, it follows that if \(\mathbb{S}^{(k)}\cap\mathbb{S}^{(p)}=\varnothing\), then \(\widehat{\mathbb{S}}^{(k)}\cap\widehat{\mathbb{S}}^{(p)}=\varnothing\). Consequently, we define a subspace generated by the convex set \(\widehat{\mathbb{S}}^{(k)}\) as follows:
\[\widehat{\mathbb{V}}^{(k)}=\text{span}\left(\widehat{\mathbb{S}}^{(k)}\right) =\left\{\sum_{j\in\mathbf{J}}\alpha_{j}\widehat{s}_{j}^{(k)}|\alpha_{j}\in \mathbb{R},\mathbf{J}\text{ is finite}\right\}. \tag{4.3}\]
It can be demonstrated, under specific assumptions, that the convex space associated with a particular class in the transformed domain does not overlap with the subspace corresponding to a different class [27], i.e. \(\widehat{\mathbb{S}}^{(k)}\cap\widehat{\mathbb{V}}^{(p)}=\varnothing,k\neq p\). It follows from the analysis that for a test sample \(s\) generated according to the mathematical model for an unknown class \(k\), we have \(d^{2}(\widehat{s},\widehat{\mathbb{V}}^{(k)})\sim 0\), and \(d^{2}(\widehat{s},\widehat{\mathbb{V}}^{(p)})>0\) for \(p\neq k\). In this context, \(d^{2}(\cdot,\cdot)\) represents the Euclidean distance between \(\widehat{s}\) and the nearest point in \(\widehat{\mathbb{S}}^{(k)}\) or \(\widehat{\mathbb{V}}^{(k)}\). Therefore, the unknown class of the test image can be predicted by employing a nearest subspace search method in the RSCDT space:
\[\underset{k}{\text{argmin}}\;\;d^{2}(\widehat{s},\widehat{\mathbb{V}}^{(k)}). \tag{4.4}\]
#### 4.2.1 Training Algorithm
Based on the aforementioned analysis, we propose a non-iterative training algorithm following the approach outlined in [27]. The algorithm proceeds as follows: First, the transforms of the training samples are computed to obtain \(\widehat{\mathbb{S}}^{(k)}\) for all classes. Next, we approximate \(\widehat{\mathbb{V}}^{(k)}\) by taking the span of \(\widehat{\mathbb{S}}^{(k)}\) resulting in, \(\widehat{\mathbb{V}}^{(k)}=\text{span}\left\{\widehat{s}_{1}^{(k)},\widehat{s }_{2}^{(k)},...\right\}\). The set \(\left\{\widehat{s}_{1}^{(k)},\widehat{s}_{2}^{(k)},...\right\}\) is orthogonalized to obtain the set of basis vectors \(\{b_{1}^{(k)},b_{2}^{(k)},...\}\) that spans the space \(\widehat{\mathbb{V}}^{(k)}\) for class \(k\). Finally, the matrix \(B^{(k)}\) is formed with the computed basis vectors as its columns, i.e., \(B^{(k)}=\left[b_{1}^{(k)},b_{2}^{(k)},...\right]\). This matrix is later used for predicting the class label of unknown samples. It is important to note that while this algorithm is designed for classifying signal patterns that can be modeled as instances of a particular template (as defined in equation 4.1), the proposed RSCDT-NS method does not require the estimation of the template \(\varphi^{(k)}\) for each class. Instead, one can directly use the training samples for a specific class \(k\) to generate the matrix \(B^{(k)}\) using the aforementioned algorithm. In addition, it is important to note that the convexity results, and thus our ability to model signed image classes as vector spaces, is independent of the reference chosen to define the RSCDT.
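A compact sketch of this training step is given below: each training image is reduced to its flattened RSCDT feature vector (e.g. `rscdt(...).ravel()` from the sketch in Section 3), and for each class an orthonormal basis of the span is obtained via an SVD. The energy threshold used for truncation is our own addition for illustration, not part of the algorithm as stated.

```python
import numpy as np

def fit_subspaces(features_by_class, energy=0.99):
    """features_by_class: dict mapping a class label to an (n_samples, d) array of features."""
    bases = {}
    for label, feats in features_by_class.items():
        X = np.asarray(feats, dtype=float).T                 # columns = transformed training samples
        U, svals, _ = np.linalg.svd(X, full_matrices=False)  # orthogonalize the span of the samples
        frac = np.cumsum(svals ** 2) / np.sum(svals ** 2)
        k = int(np.searchsorted(frac, energy)) + 1           # keep enough energy, at least one direction
        bases[label] = U[:, :k]                              # B^{(k)} with orthonormal columns
    return bases
```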
#### 4.2.2 Testing Algorithm
Let's consider the problem of predicting the class of a test image \(s\) using the proposed RSCDT-NS method. Firstly, we obtain the RSCDT \(\widehat{s}\) of the test sample \(s\). Next, we estimate the distance between \(\widehat{s}\) and the subspace model for each class using \(d^{2}(\widehat{s},\widehat{\mathbb{V}}^{(k)})\sim\left\|\widehat{s}-B^{(k)}{ B^{(k)}}^{T}\widehat{s}\right\|_{L_{2}}^{2}\). The class of \(\widehat{s}\) is then estimated
by solving the optimization problem:
\[\underset{k}{\text{argmin}}\ \ \|\widehat{s}-A^{(k)}\widehat{s}\|_{L_{2}}^{2}, \tag{4.5}\]
where \(A^{(k)}=B^{(k)}{B^{(k)}}^{T}\) represents an orthogonal projection matrix onto the subspace spanned by the columns of \(B^{(k)}\).
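The corresponding test-time rule of eq. (4.5), again as an illustrative sketch under the same assumptions:

```python
import numpy as np

def predict(feature, bases):
    """Nearest-subspace rule: pick the class with the smallest projection residual (eq. 4.5)."""
    s_hat = np.asarray(feature, dtype=float).ravel()
    residuals = {label: np.sum((s_hat - B @ (B.T @ s_hat)) ** 2)
                 for label, B in bases.items()}
    return min(residuals, key=residuals.get)
```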
## 5 Computational Results
We evaluated our proposed image classification method on three datasets, showcasing its effectiveness. In the first dataset, a simulated dataset comprising two classes, we demonstrate a scenario where the existing transport-based classification method (e.g., RCDT-NS) fails to differentiate between the two classes. However, our proposed method (RSCDT-NS) achieves 100% accuracy in classifying the two classes. Next, we present another scenario in which negative pixel values can arise during image classification, highlighting the need for a signed transport-based method to handle such situations effectively. In the third case, we applied our proposed method to a real dataset and observed that the signed transport-based method, RSCDT-NS, outperforms both the unsigned methods and state-of-the-art convolutional neural network-based algorithms.
### Result on simulated data
This study employs simulated data consisting of two categories of image data. The first category comprises three circular shapes with randomized sizes and positions. These circular shapes are characterized by two positive pixel values and one negative pixel value. The second category of image data features three circular shapes of arbitrary sizes and random positions. However, in contrast to the first category, two of these circular shapes exhibit negative pixel values, while the remaining one displays positive pixel values (refer to Figure 4). The simulation results, presented in Table 2, compare the performance of two transport-based techniques. The first method utilizes the RSCDT transform on images followed by the nearest subspace (NS) classification technique. The second method applies the RCDT transform on the image after taking the absolute value, as RCDT can only be used on positive distributions. From the results in Table 2, it is evident that the RSCDT-NS method achieves 100% accuracy in distinguishing the two classes. However, the RCDT-NS method fails to correctly classify the two groups and only achieves chance accuracy.
| Method | Accuracy (%) |
| --- | --- |
| RSCDT-NS | 100 |
| RCDT-NS | 49.00 |

Table 2: Comparative analysis of RSCDT-NS and RCDT-NS techniques on a simulated dataset
### Result on 2D Geometric Shape Dataset
The proposed method has also been applied to a 2D geometric shape dataset [7]. This dataset consists of nine geometric shapes. Each shape within a particular class can be considered as a scaled, translated, and/or rotated version of a common template, while utilizing different background and foreground colors (refer to Figure 5). To recognize a shape, the images have been converted to grayscale and then the difference of Gaussians (DoG) [8] has been calculated in order to enhance the edge features. The calculation of the DoG introduces negative pixel values. As the RCDT-based transport method [27] cannot directly be implemented on images with negative pixel values, we have employed the proposed RSCDT-based nearest subspace method (RSCDT-NS) to classify the different shapes. The results achieved using RSCDT-NS have been compared with other methods, as presented in Table 3. It is observed that the RSCDT-based nearest subspace (RSCDT-NS) method outperforms all other techniques and provides an accuracy of 94.88%. The other transport-based method, RCDT-NS, has also been employed both directly on raw images and on the absolute values of the filtered images in order to compare with the proposed method; the performance of both variants is found to be low. Additionally, the performance of several state-of-the-art convolutional neural network (CNN) based frameworks, namely DenseNet121 [11], ResNet-18 [10], ShuffleNet [19] and VGG-16 [29], has also been evaluated. In terms of classification accuracy, the best performance among the CNN-based networks has been obtained with VGG-16 using 200 training images per class, which is 79.11%. On the other hand, the transport-based method proposed for signed distributions, RSCDT-NS, achieves a test accuracy of 83.88% using only 20 images per class in the training phase and 94.88% using 200 images per class.
Figure 4: (a) Simulated images from class 01, showcasing three circular shapes with randomized sizes and positions. These circular shapes are distinguished by two positive pixel values and one negative pixel value. (b) Simulated images from class 02, featuring three circular shapes with randomized sizes and positions. Unlike class 01, two of these circular shapes exhibit negative pixel values, while the remaining one displays positive pixel values.
| Method | 10 | 20 | 50 | 100 | 200 |
| --- | --- | --- | --- | --- | --- |
| RSCDT-NS | 65.33 | 83.88 | 93.11 | 92.0 | 94.88 |
| RCDT-NS (Filtered image) | 51.44 | 56.77 | 78.77 | 80.22 | 82.55 |
| RCDT-NS (Raw image) | 12.88 | 15.11 | 15.00 | 16.44 | 19.66 |
| DenseNet121 | 61.22 | 63.44 | 63.33 | 63.56 | 61.78 |
| ResNet-18 | 58.83 | 59.87 | 64.00 | 65.11 | 58.33 |
| ShuffleNet | 41.56 | 44.22 | 42.67 | 44.42 | 44.22 |
| VGG16 | 60.44 | 63.94 | 69.67 | 76.33 | 79.11 |

Table 3: Comparative analysis of different transport-based transformations and classification techniques on the 2D geometric shape dataset (accuracy in % versus number of training samples per class)
Figure 5. (a) Sample images from a 2D geometric pattern dataset. Each row in the figure represents images from a specific class. (b) Images are converted to grayscale and then Difference of Gaussian has been obtained to enhance the edge features in different shapes. This process results in the presence of negative pixel values.
### Result on sign language dataset
Finally, we meticulously evaluate the performance of our proposed RSCDT-NS method on a real-world sign language dataset [22], which encompasses images of 24 distinct classes of static signs, each representing a specific sign in sign language. Our primary objective in this study is to thoroughly assess the effectiveness of the RSCDT-NS method under various training scenarios.
To gain a deeper understanding of the dataset, we present a visualization of sample images from each class in Figure 6(a). Upon closer inspection, we observe that each image in the dataset contains background pixel information, which can potentially hinder the classification task when employing available transport-based methods such as RCDT-NS. In order to mitigate the influence of the background and enhance the discriminative power of the images, we first convert the images to grayscale and subsequently calculate the difference of Gaussians (DoG) in order to enhance the edge features [8]. This process introduces negative pixel values in the images, as illustrated in Figure 6(b). We then apply our proposed transport-based method, RSCDT-NS, specifically developed to handle images with negative pixel values. This preprocessing step significantly improves the suitability of the images for subsequent analysis.
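The background-suppression step described above is a standard difference-of-Gaussians band-pass filter. A sketch using scipy is shown below; the two scale parameters are illustrative defaults, not the values used in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(gray, sigma_low=1.0, sigma_high=2.0):
    """DoG filtering of a grayscale image; the output is a signed image."""
    gray = np.asarray(gray, dtype=float)
    return gaussian_filter(gray, sigma_low) - gaussian_filter(gray, sigma_high)
```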
Results in Table 4 show that the RSCDT-NS method outperforms the RCDT-NS method, regardless of whether the RCDT is applied directly to the raw images or to the absolute values of the filtered images. This robust performance demonstrates the superior capability of the RSCDT-NS method in effectively capturing the discriminative features inherent in the sign language dataset.
Furthermore, our comprehensive evaluation illustrates that the performance of the RSCDT-NS method surpasses that of state-of-the-art convolutional neural network (CNN) based methods. Despite CNN-based methods being a popular choice for image classification tasks, they exhibit subpar results when trained with a limited number of images per class. The best classification accuracy among the CNN-based techniques has been achieved by ResNet-18 (69.79%). In stark contrast, the RSCDT-NS method showcases its robustness and efficacy by achieving an impressive accuracy of 91.46% using the same number of training images, firmly establishing its superiority in the field of sign language recognition.
These results provide compelling evidence that the proposed RSCDT-NS method holds significant potential for advancing signed image recognition systems, surpassing both available transport-based methods and cutting-edge deep learning approaches. By leveraging the distinctive characteristics of the sign language dataset, the RSCDT-NS method demonstrates its prowess in extracting and utilizing discriminative features, ultimately leading to highly accurate and reliable sign language recognition.
## 6 Summary, Discussion, and Conclusion
Image representation methods are important components of modern automated image analysis and classification methods. Classification problems, for example, can be difficult to solve in some representations, while dramatically simpler to solve in others. While Fourier and Wavelet transforms have been instrumental in allowing for the simple and effective solutions of numerous filtering, compression, and estimation problems, when it comes to classification, methods based on ad hoc feature extraction and deep learning have taken center stage given their simplicity and effectiveness. The lack of
theoretical foundations for these methods, however, has slowed the progress of such methodology in recent years, particularly in terms of performance given limited training sets, interpretability, and assurance guarantees.
| Method | 4 | 8 | 12 | 16 | 20 |
| --- | --- | --- | --- | --- | --- |
| RSCDT-NS | 43.75 | 65.42 | 77.71 | 83.75 | 91.46 |
| RCDT-NS (Filtered image) | 40.41 | 55.66 | 66.67 | 76.88 | 83.96 |
| RCDT-NS (Raw image) | 22.91 | 30.83 | 30.62 | 48.33 | 45.83 |
| DenseNet121 | 48.75 | 51.25 | 49.58 | 52.92 | 54.79 |
| ResNet-18 | 43.33 | 62.29 | 65.62 | 69.38 | 69.79 |
| ShuffleNet | 12.29 | 30.00 | 27.92 | 47.52 | 56.71 |
| VGG16 | 45.83 | 53.12 | 57.08 | 61.04 | 60.83 |

Table 4: Performance evaluation of different transport-based transformations and classification techniques on a real sign language dataset (accuracy in % versus number of training samples per class)

Figure 6: (a) Images from a sign language dataset, comprising 24 types of static signs. (b) Corresponding images after undergoing the difference of Gaussians (DoG) process. This operation leads to the emergence of negative pixel values.

Recently, methods for representing images using optimal transport have emerged [1, 14, 17], and allowed for the simple non-iterative (closed-form) solution of a certain category of image classification problems [27, 28]. Up until now, however, the methodology was limited to images that can be interpreted as probability density functions: they must be positive functions and integrate to 1. Here we have extended the previously introduced R-CDT method [14] by combining the R-CDT [14] and SCDT [2] methods and introduced the Radon Signed Cumulative Distribution Transform (RSCDT). Unlike the previous RCDT method, the RSCDT now permits one to use the technique for general (signed) functions, as opposed to only positive functions that integrate to one. We described many properties of the RSCDT method (including composition and convexity of image classes, and a new mathematical distance), and described how to use these in designing simple (closed-form) and accurate image classification methods using the nearest subspace technique [27]. Comparisons with state-of-the-art deep learning methods for image classification revealed that the newly proposed RSCDT representation method, when combined with nearest subspace methods [27], can yield superior accuracy. We further note that additional accuracy gains can be obtained by enhancing the nearest subspace method with additional subspace dimensions to encode affine transformations [28] as well as utilizing multiple "local" subspaces to enhance the accuracy of each individual class [26]. The combination of these enhanced classification methods with the newly introduced RSCDT method will be the subject of future work.
In summary, we believe that the RSCDT method introduced here will facilitate the analysis of signed functions (images) in a number of applications. By allowing for invertible transformation of signed images, the RSCDT representation method does not involve information loss. When applied to signed images that can be modeled according to the class model proposed in equation (4.1), the transform can render image classes convex, and thus easy to partition.
## 7 Acknowledgements
The authors dedicate this paper to Prof. Akram Aldroubi, who has been a teacher, mentor and friend to each one of us. His expertise, wisdom, and willingness to share his knowledge have greatly contributed to our development in the field. We are truly fortunate to have had the opportunity to learn from him and to have him as a guiding force in our academic endeavors.
Authors also acknowledge funding from the NIH (GM130825) and ONR (N000142212505) for supporting this work.
## References
* [1]A. Aldroubi, S. Li, and G. K. Rohde, _Partitioning signal classes using transport transforms for data analysis and machine learning_, arXiv preprint arXiv:2008.03452, (2020).
* [2]A. Aldroubi, R. D. Martin, I. Medri, G. K. Rohde, and S. Thareja, _The signed cumulative distribution transform for 1-d signal analysis and classification_, Foundations of Data Science, 4 (2022), pp. 137-163.
* [3]M. Basu, _Gaussian-based edge-detection methods-a survey_, IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 32 (2002), pp. 252-260.
* [4]R. N. Bracewell, _The Fourier transform and its applications_, McGraw-Hill Series in Electrical Engineering. Circuits and Systems, McGraw-Hill Book Co., New York, third ed., 1986.
* [5]R. W. Brown, Y.-C. N. Cheng, E. M. Haacke, M. R. Thompson, and R. Venkatesan, _Magnetic resonance imaging: physical principles and sequence design_, John Wiley & Sons, 2014.
* [6]L. Cattell, C. H. Meyer, F. H. Epstein, and G. K. Rohde, _Reconstructing high-resolution cardiac mr movies from under-sampled frames_, in Asilomar conference on signals, systems, and computers, 2017.
* [7]A. El Korchi and Y. Ghanou, _2d geometric shapes dataset-for machine learning and pattern recognition_, Data in Brief, 32 (2020), p. 106090.
* [8]R. C. Gonzalez and R. E. Woods, _Digital image processing, vol. 21_, 2011.
* [9]N. Hamilton, _Quantification and its applications in fluorescent microscopy imaging_, Traffic, 10 (2009), pp. 951-961.
* [10]K. He, X. Zhang, S. Ren, and J. Sun, _Deep residual learning for image recognition_, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770-778.
* [11]G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, _Densely connected convolutional networks_, in Proceedings of the IEEE conference on computer vision and pattern recognition, 2017, pp. 4700-4708.
* [12]Imaging and data science lab, _PyTransKit_. [https://github.com/rohdelab/PyTransKit](https://github.com/rohdelab/PyTransKit).
* [13]U. Jung and B.-H. Koh, _Wavelet energy-based visualization and classification of high-dimensional signal for bearing fault detection_, Knowledge and Information Systems, 44 (2015), pp. 197-215.
* [14]S. Kolouri, S. R. Park, and G. K. Rohde, _The Radon cumulative distribution transform and its application to image classification_, IEEE transactions on image processing, 25 (2016), pp. 920-934.
* [15]S. Kolouri, S. R. Park, M. Thorpe, D. Slepcev, and G. K. Rohde, _Optimal mass transport: Signal processing and machine-learning applications_, IEEE Signal Processing Magazine, 34 (2017), pp. 43-59.
* [16]S. Kolouri and G. Rohde, _Transport-based single frame super resolution of very low resolution face images_, in Proc. IEEE CVPR, 2015, pp. 4876-4884.
* [17]S. Kolouri, A. B. Tosun, J. A. Ozolek, and G. K. Rohde, _A continuous linear optimal transport approach for pattern analysis in image datasets_, Pattern Recognition, 51 (2016), pp. 453-462.
* [18]Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, _Gradient-based learning applied to document recognition_, Proceedings of the IEEE, 86 (1998), pp. 2278-2324.
* [19]N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, _Shufflenet v2: Practical guidelines for efficient cnn architecture design_, in Proceedings of the European conference on computer vision (ECCV), 2018, pp. 116-131.
* [20]F. Natterer, _The mathematics of computerized tomography_, SIAM, 2001.
* [21]S. R. Park, S. Kolouri, S. Kundu, and G. K. Rohde, _The cumulative distribution transform and linear pattern classification_, Applied and Computational Harmonic Analysis, (2017).
* [22]R. F. Pinto, C. D. Borges, A. M. Almeida, and I. C. Paula, _Static hand gesture recognition based on convolutional neural networks_, Journal of Electrical and Computer Engineering, 2019 (2019), pp. 1-12.
* [23]E. T. Quinto, _An introduction to x-ray tomography and radon transforms_, in Proceedings of symposia in Applied Mathematics, vol. 63, 2006, p. 1.
* [24]J. Radon, _On the determination of functions from their integrals along certain manifolds_, Ber. Verh, Sachs Akad Wiss., 69 (1917), pp. 262-277.
* [25]A. H. M. Rubaiyat, K. Hallam, J. Nichols, M. Hutchinson, S. Li, and G. Rohde, _Parametric signal estimation using the cumulative distribution transform_, IEEE Transactions on Signal Processing, (2020).
* [26]A. H. M. Rubaiyat, S. Li, X. Yin, M. S. E. Rabbi, Y. Zhuang, and G. K. Rohde, _End-to-end signal classification in signed cumulative distribution transform space_, arXiv preprint arXiv:2205.00348, (2022).
* [27]M. Shifat-E-Rabbi, X. Yin, A. H. M. Rubaiyat, S. Li, S. Kolouri, A. Aldroubi, J. M. Nichols, and G. K. Rohde, _Radon cumulative distribution transform subspace modeling for image classification_, Journal of Mathematical Imaging and Vision, 63 (2021), pp. 1185-1203.
* [28]M. Shifat-E-Rabbi, Y. Zhuang, S. Li, A. H. M. Rubaiyat, X. Yin, and G. K. Rohde, _Invariance encoding in sliced-wasserstein space for image classification with limited training data_, Pattern Recognition, 137 (2023), p. 109268.
* [29]K. Simonyan and A. Zisserman, _Very deep convolutional networks for large-scale image recognition_, arXiv preprint arXiv:1409.1556, (2014).
* [30]M. Thorpe, _Introduction to optimal transport_. [https://www.math.cmu.edu/~mthorpe/OTNotes](https://www.math.cmu.edu/~mthorpe/OTNotes).
* [31]C. Villani, _Topics in optimal transportation_, no. 58, American Mathematical Soc., 2003.
* [32]C. Villani, _Optimal transport: old and new_, vol. 338, Springer Science & Business Media, 2008.
* [33]W. Wang, D. Slepcev, S. Basu, J. A. Ozolek, and G. K. Rohde, _A linear optimal transportation framework for quantifying and visualizing variations in sets of images_, International journal of computer vision, 101 (2013), pp. 254-269.
* [34]Y. Zhuang, S. Li, X. Yin, A. H. M. Rubaiyat, G. K. Rohde, et al., _Local sliced-wasserstein feature sets for illumination-invariant face recognition_, arXiv preprint arXiv:2202.10642, (2022).
## Appendix A Proof that \(\mathcal{G}_{R}\) is a group.
Proof.: Here we show that \((\mathcal{G}_{R},\star)\) is a group. Recall, from equation (10),
\[\mathcal{G}_{R}=\{g=(g^{\theta})_{\theta\in[0,\pi]}:g^{\theta}:\mathbb{R}\to \mathbb{R}\text{ is a strictly increasing bijection }\forall\theta\in[0,\pi]\}\]
and the operation \(\star:\mathcal{G}_{R}\times\mathcal{G}_{R}\to\mathcal{G}_{R}\) is defined as \((g\star h)(\cdot,\theta):=(g^{\theta}\circ h^{\theta})(\cdot)\) for all \(\theta\in[0,\pi]\) and \(g,h\in\mathcal{G}_{R}\). Now, let us check the individual group properties:
* Closure : for any \(g,h\in\mathcal{G}_{R}\), \(g\star h\) is the component-wise composition of strictly increasing bijections for each \(\theta\in[0,\pi]\); hence \(g\star h\) is also a strictly increasing bijection. Thus, \(g\star h\in\mathcal{G}_{R}\).
* Associativity : Composition of functions is always an associative operation.
* Identity : \(I=(I^{\theta})_{\theta\in[0,\pi]}\) such that \(I^{\theta}(t)=t\), for all \(t\in\mathbb{R}\) and \(\theta\in[0,\pi]\).
* Inverse : Note that if \(\alpha\) is a strictly increasing bijection then so is \(\alpha^{-1}.\) So, for any \(g=(g^{\theta})_{\theta\in[0,\pi]}\in\mathcal{G}_{R}\), define \(g^{-1}=((g^{\theta})^{-1})_{\theta\in[0,\pi]}\) to be the inverse of \(g\). Clearly, \(g\star g^{-1}=(g^{\theta}\circ(g^{\theta})^{-1})_{\theta\in[0,\pi]}=(I^{\theta })_{\theta\in[0,\pi]}=I\).
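For illustration, the group operation \(\star\) and the inverse can be realized numerically by representing each \(g^{\theta}\) as a strictly increasing map evaluated on a grid; the specific maps below are arbitrary examples, not objects from the paper.

```python
import numpy as np

# Each element of G_R is a family of strictly increasing bijections of R,
# one per angle theta; here we use a few sample angles and simple monotone maps.
thetas = np.linspace(0.0, np.pi, 4)

def g(t, theta):
    # strictly increasing bijection of R for every theta (slope > 0)
    return (1.0 + 0.5 * np.sin(theta)) * t + np.cos(theta)

def h(t, theta):
    return 2.0 * t + theta

def star(f1, f2):
    # group operation: component-wise composition (f1 * f2)^theta = f1^theta o f2^theta
    return lambda t, theta: f1(f2(t, theta), theta)

def inverse_numeric(f, theta, grid=np.linspace(-50, 50, 20001)):
    # numerical inverse of a monotone map via interpolation on a grid
    values = f(grid, theta)
    return lambda t: np.interp(t, values, grid)

gh = star(g, h)                       # closure: still strictly increasing per angle
t = np.linspace(-3, 3, 7)
for theta in thetas:
    g_inv = inverse_numeric(g, theta)
    # g * g^{-1} acts as the identity I^theta(t) = t (up to interpolation error)
    assert np.allclose(g(g_inv(t), theta), t, atol=1e-3)
    # composition preserves strict monotonicity
    assert np.all(np.diff(gh(t, theta)) > 0)
print("closure, identity and inverse verified numerically")
```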
### Proof of Composition Property (see Proposition 3.2).
Proof.: Let \(s_{g}=\mathcal{R}^{-1}((g^{\theta})^{\prime}\widetilde{s}\circ g^{\theta})\) where \(g\in\mathcal{G}_{R}\) and the composition operation is in terms of the first variable of \(\widetilde{s}(\cdot,\theta)\). Then by the definition of \(\mathcal{R}\), we have \(\widetilde{s_{g}}=(g^{\theta})^{\prime}\widetilde{s}\circ g^{\theta}\), which implies the following in terms of the corresponding CDFs: \(F_{\widetilde{s}_{g}(\cdot,\theta)}=F_{\widetilde{s}(\cdot,\theta)}\circ g^{ \theta},\forall\theta\). Now the goal is to evaluate SCDT for each \(\widetilde{s_{g}}(\cdot,\theta)\), which we denote \(\widetilde{s}_{g^{\theta}}(\cdot)\) for simplicity. Similarly we denote \((\widetilde{s_{g}})^{\pm}(\cdot,\theta)\) as \(\widetilde{s}_{g^{\theta}}^{\pm}(\cdot)\). First note that
\[\|\widetilde{s}_{g^{\theta}}^{\pm}\|_{1}=\|\widetilde{s}^{\pm}(\cdot,\theta)\|_{1},\quad\forall\theta. \tag{A.1}\]
Indeed, since \(g^{\theta}\) is strictly increasing, we have
\[\widetilde{s}_{g^{\theta}}^{\pm}=(g^{\theta})^{\prime}\,\widetilde{s}^{\pm}\circ g^{\theta}, \tag{A.2}\]
and by the change of variables formula
\[\|\widetilde{s}_{g^{\theta}}^{\pm}\|_{1}=\int_{\mathbb{R}}\widetilde{s}_{g^{\theta}}^{\pm}(t)\,dt=\int_{\mathbb{R}}(g^{\theta})^{\prime}(t)\,\widetilde{s}^{\pm}(g^{\theta}(t),\theta)\,dt=\int_{\mathbb{R}}\widetilde{s}^{\pm}(u,\theta)\,du=\|\widetilde{s}^{\pm}(\cdot,\theta)\|_{1}.\]
Similarly one can show that
\[F_{\widetilde{s}_{g^{\theta}}^{\pm}}=F_{\widetilde{s}^{\pm}(\cdot,\theta)}\circ g^{\theta},\quad\forall\theta. \tag{A.3}\]
Then, by the definition of the SCDT in (2.16), the composition property follows from equations (A.1) and (A.3) proved above. Indeed,
\[\widehat{s}_{g} =\Big((\widetilde{s}^{+}_{g^{\theta}})^{\star},\|\widetilde{s}^{+}_{g^{\theta}}\|_{1},(\widetilde{s}^{-}_{g^{\theta}})^{\star},\|\widetilde{s}^{-}_{g^{\theta}}\|_{1}\Big)\] \[=\Big(F^{\dagger}_{\widetilde{s}^{+}_{g^{\theta}}/\|\widetilde{s}^{+}_{g^{\theta}}\|_{1}}\circ F_{\widetilde{r}/\|\widetilde{r}\|_{1}},\|\widetilde{s}^{+}_{g^{\theta}}\|_{1},F^{\dagger}_{\widetilde{s}^{-}_{g^{\theta}}/\|\widetilde{s}^{-}_{g^{\theta}}\|_{1}}\circ F_{\widetilde{r}/\|\widetilde{r}\|_{1}},\|\widetilde{s}^{-}_{g^{\theta}}\|_{1}\Big)\] \[=\Big(\big(F_{\widetilde{s}^{+}/\|\widetilde{s}^{+}\|_{1}}\circ g^{\theta}\big)^{\dagger}\circ F_{\widetilde{r}/\|\widetilde{r}\|_{1}},\|\widetilde{s}^{+}\|_{1},\big(F_{\widetilde{s}^{-}/\|\widetilde{s}^{-}\|_{1}}\circ g^{\theta}\big)^{\dagger}\circ F_{\widetilde{r}/\|\widetilde{r}\|_{1}},\|\widetilde{s}^{-}\|_{1}\Big)\] \[=\Big((g^{\theta})^{-1}\circ F^{\dagger}_{\widetilde{s}^{+}/\|\widetilde{s}^{+}\|_{1}}\circ F_{\widetilde{r}/\|\widetilde{r}\|_{1}},\|\widetilde{s}^{+}\|_{1},(g^{\theta})^{-1}\circ F^{\dagger}_{\widetilde{s}^{-}/\|\widetilde{s}^{-}\|_{1}}\circ F_{\widetilde{r}/\|\widetilde{r}\|_{1}},\|\widetilde{s}^{-}\|_{1}\Big)\] \[=\Big((g^{\theta})^{-1}\circ(\widetilde{s}^{+})^{\star},\|\widetilde{s}^{+}\|_{1},(g^{\theta})^{-1}\circ(\widetilde{s}^{-})^{\star},\|\widetilde{s}^{-}\|_{1}\Big),\]
where the second to the last equality follows from the fact that \(g^{\theta}\) is strictly increasing.
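For a concrete check, relations (A.1) and (A.3) can be verified numerically for a single slice; the density and the increasing map used below are arbitrary illustrations.

```python
import numpy as np

# Numerical check of (A.1) and (A.3) for one slice: if s_g(t) = g'(t) * s(g(t))
# for a strictly increasing g, then F_{s_g} = F_s o g and the masses agree.
t = np.linspace(-10.0, 10.0, 200001)
dt = t[1] - t[0]

s = np.exp(-0.5 * (t - 1.0) ** 2)            # a nonnegative "slice" s~(., theta)
g = 0.7 * t + 0.3 * np.tanh(t)               # strictly increasing map of R
g_prime = 0.7 + 0.3 / np.cosh(t) ** 2

s_at_g = np.interp(g, t, s)                  # s evaluated at g(t)
s_g = g_prime * s_at_g                       # transformed slice

F = np.cumsum(s) * dt                        # cumulative function of s
F_g = np.cumsum(s_g) * dt                    # cumulative function of s_g
F_at_g = np.interp(g, t, F)                  # F_s evaluated at g(t)

assert np.max(np.abs(F_g - F_at_g)) < 1e-3   # F_{s_g}(t) ~ F_s(g(t)), cf. (A.3)
assert abs(F_g[-1] - F[-1]) < 1e-3           # equal total mass, cf. (A.1)
print("composition relations verified numerically")
```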
### Proof of the Convexification Property (see Proposition 3.4)
Proof.: Assume \(H_{R}^{-1}\) is a convex set. We show that \(\widehat{S}_{\phi,H_{R}}\) is convex. To that end, let \(\widehat{\phi}_{g},\widehat{\phi}_{h}\in\widehat{S}_{\phi,H_{R}}\).
**Remark A.1**.: For the set of equations below, note that \(g,h\in H_{R},\) i.e. \(g:=(g^{\theta})_{\theta\in[0,\pi]}\) and \(h:=(h^{\theta})_{\theta\in[0,\pi]}.\) The equality below holds true for all \(\theta\in[0,\pi]\) because the operations are done component wise. To simplify our calculations here, we avoid the cumbersome \(\theta\) notation.
For \(\lambda\in[0,1],\) consider the convex combination,
\[\lambda\widehat{\phi}_{g}+(1-\lambda)\widehat{\phi}_{h} =\lambda\Big((\widetilde{\phi}_{g}^{+})^{\star},\|\widetilde{\phi}_{g}^{+}\|_{1},(\widetilde{\phi}_{g}^{-})^{\star},\|\widetilde{\phi}_{g}^{-}\|_{1}\Big)+(1-\lambda)\Big((\widetilde{\phi}_{h}^{+})^{\star},\|\widetilde{\phi}_{h}^{+}\|_{1},(\widetilde{\phi}_{h}^{-})^{\star},\|\widetilde{\phi}_{h}^{-}\|_{1}\Big)\] \[=\Big(\lambda(\widetilde{\phi}_{g}^{+})^{\star}+(1-\lambda)(\widetilde{\phi}_{h}^{+})^{\star},\lambda\|\widetilde{\phi}_{g}^{+}\|_{1}+(1-\lambda)\|\widetilde{\phi}_{h}^{+}\|_{1},\lambda(\widetilde{\phi}_{g}^{-})^{\star}+(1-\lambda)(\widetilde{\phi}_{h}^{-})^{\star},\lambda\|\widetilde{\phi}_{g}^{-}\|_{1}+(1-\lambda)\|\widetilde{\phi}_{h}^{-}\|_{1}\Big)\] \[=\Big(\lambda(\widetilde{\phi}_{g}^{+})^{\star}+(1-\lambda)(\widetilde{\phi}_{h}^{+})^{\star},\|\widetilde{\phi}^{+}\|_{1},\lambda(\widetilde{\phi}_{g}^{-})^{\star}+(1-\lambda)(\widetilde{\phi}_{h}^{-})^{\star},\|\widetilde{\phi}^{-}\|_{1}\Big)\] \[=\Big(\lambda\big(g^{-1}\circ(\widetilde{\phi}^{+})^{\star}\big)+(1-\lambda)\big(h^{-1}\circ(\widetilde{\phi}^{+})^{\star}\big),\|\widetilde{\phi}^{+}\|_{1},\lambda\big(g^{-1}\circ(\widetilde{\phi}^{-})^{\star}\big)+(1-\lambda)\big(h^{-1}\circ(\widetilde{\phi}^{-})^{\star}\big),\|\widetilde{\phi}^{-}\|_{1}\Big)\] \[=\Big((\lambda g^{-1}+(1-\lambda)h^{-1})\circ(\widetilde{\phi}^{+})^{\star},\|\widetilde{\phi}^{+}\|_{1},(\lambda g^{-1}+(1-\lambda)h^{-1})\circ(\widetilde{\phi}^{-})^{\star},\|\widetilde{\phi}^{-}\|_{1}\Big)\] \[=\Big(p^{-1}\circ(\widetilde{\phi}^{+})^{\star},\|\widetilde{\phi}^{+}\|_{1},p^{-1}\circ(\widetilde{\phi}^{-})^{\star},\|\widetilde{\phi}^{-}\|_{1}\Big),\]
where \(p^{-1}=\lambda g^{-1}+(1-\lambda)h^{-1}\in H_{R}^{-1}\) (so that \(p\in H_{R}\)), and the norms satisfy \(\|\widetilde{\phi}_{g}^{\pm}\|_{1}=\|\widetilde{\phi}_{h}^{\pm}\|_{1}=\|\widetilde{\phi}^{\pm}\|_{1}\), as shown in the proof of the composition property above. Thus, \(\lambda\widehat{\phi}_{g}+(1-\lambda)\widehat{\phi}_{h}\in\widehat{S}_{\phi,H_{R}}\).
### Proof that \(D(\cdot,\cdot)\) is a metric (see Definition 3.6) and that \(D\) induces an isometry (see Proposition 3.7)
Proof.: Here we show that \(D\) is a metric. Recall by definition 3.6,
\[D(s_{1},s_{2}): =\Bigg{(}\int_{0}^{\pi}D_{S}^{2}\left(\widetilde{s_{1}}(\cdot, \theta),\widetilde{s_{2}}(\cdot,\theta)\right)d\theta\Bigg{)}^{\frac{1}{2}}\] \[=\Bigg{(}\int_{0}^{\pi}D_{W^{2}}^{2}\left(\widetilde{s_{1}}^{+}( \cdot,\theta),\widetilde{s_{2}}^{+}(\cdot,\theta)\right)d\theta+\int_{0}^{\pi }D_{W^{2}}^{2}\left(\widetilde{s_{1}}^{-}(\cdot,\theta),\widetilde{s_{2}}^{-}( \cdot,\theta)\right)d\theta\Bigg{)}^{\frac{1}{2}},\]
where \(D_{S}\) and \(D_{W^{2}}\) are as defined in (2.19).
* Non-negativity (\(D(\cdot,\cdot)\geq 0\)) : Note that \(D_{S}^{2}(\cdot,\cdot)\geq 0\); therefore, \(D(\cdot,\cdot)\geq 0\).
* \(D(s_{1},s_{2})=0\implies s_{1}=s_{2}:\) Assume, \(D(s_{1},s_{2})=0\). This implies, \(\int_{0}^{\pi}D_{S}^{2}\left(\widetilde{s_{1}}(\cdot,\theta),\widetilde{s_{2} }(\cdot,\theta)\right)d\theta=0\). Since, \(D_{S}^{2}(\cdot,\cdot)\geq 0\), therefore, \(D_{S}\left(\widetilde{s_{1}}(\cdot,\theta),\widetilde{s_{2}}(\cdot,\theta) \right)=0\quad\forall\theta\in[0,\pi].\) Hence, \(\widetilde{s_{1}}(\cdot,\theta)=\widetilde{s_{2}}(\cdot,\theta)\quad\forall \theta\in[0,\pi]\) which further implies, \(s_{1}=s_{2}\).
* Symmetry (\(D(s_{1},s_{2})=D(s_{2},s_{1})\)) : This follows from the fact that \(D_{S}(\cdot,\cdot)\) is a metric (see [2]) and hence symmetric. Therefore, \(D_{S}^{2}(\widetilde{s_{1}}(\cdot,\theta),\widetilde{s_{2}}(\cdot,\theta))=D_ {S}^{2}(\widetilde{s_{2}}(\cdot,\theta),\widetilde{s_{1}}(\cdot,\theta))\quad \forall\theta\in[0,\pi].\) Integrating both sides with respect to \(\theta\in[0,\pi]\), we have \(D(s_{1},s_{2})=D(s_{2},s_{1})\).
* Triangle inequality (\(D(s_{1},s_{2})\leq D(s_{1},r)+D(r,s_{2})\)) : By definition, \[D^{2}(s_{1},r) =\int_{0}^{\pi}D_{S}^{2}(\widetilde{s_{1}}(\cdot,\theta),\widetilde{r}(\cdot,\theta))\,d\theta\] \[=\int_{0}^{\pi}\|\widehat{s_{1}}(\cdot,\theta)-\widehat{r}(\cdot,\theta)\|_{L^{2}(\widetilde{r}(\cdot,\theta))}^{2}\,d\theta\] \[=\int_{0}^{\pi}\int_{\mathbb{R}}(\widehat{s_{1}}(t,\theta)-t)^{2}\widetilde{r}(t,\theta)\,dt\,d\theta\] where the second equality follows from the isometry of \(D_{S}(\cdot,\cdot)\) [2]. Similarly, \[D^{2}(s_{2},r)=\int_{0}^{\pi}\int_{\mathbb{R}}(\widehat{s_{2}}(t,\theta)-t)^{2}\widetilde{r}(t,\theta)\,dt\,d\theta\] (A.4) Let \(h(t,\theta)\) be the RSCDT of \(s_{2}\) with respect to \(s_{1}\), i.e. \[D^{2}(s_{2},s_{1})=\int_{0}^{\pi}\int_{\mathbb{R}}(h(t,\theta)-t)^{2}\widetilde{s_{1}}(t,\theta)\,dt\,d\theta\] (A.5) Following the composition property, we have \(h(\cdot,\theta)\circ\widehat{s_{1}}(\cdot,\theta)=\widehat{s_{2}}(\cdot,\theta)\) and \(\widetilde{r}(\cdot,\theta)=\widehat{s_{1}}^{\prime}(\cdot,\theta)\,\widetilde{s_{1}}(\widehat{s_{1}}(\cdot,\theta),\theta)\) for every \(\theta\in[0,\pi]\). Using the change of variables \(t=\widehat{s_{1}}(u,\theta)\) in (A.5), we obtain \[D^{2}(s_{2},s_{1}) =\int_{0}^{\pi}\int_{\mathbb{R}}(h(\widehat{s_{1}}(u,\theta),\theta)-\widehat{s_{1}}(u,\theta))^{2}\,\widetilde{s_{1}}(\widehat{s_{1}}(u,\theta),\theta)\,\frac{d\widehat{s_{1}}(u,\theta)}{du}\,du\,d\theta\] \[=\int_{0}^{\pi}\int_{\mathbb{R}}(\widehat{s_{2}}(u,\theta)-\widehat{s_{1}}(u,\theta))^{2}\,\widetilde{r}(u,\theta)\,du\,d\theta\] \[=\|\widehat{s_{2}}-\widehat{s_{1}}\|_{(L^{2}(\widetilde{r})\times L^{2}[0,\pi])^{2}}^{2}.\] This establishes the isometry, i.e., \(D(s_{2},s_{1})=\|\widehat{s_{2}}-\widehat{s_{1}}\|_{(L^{2}(\widetilde{r})\times L^{2}[0,\pi])^{2}}\). Using this relation, we finally have the required triangle inequality for the metric,
\[D(s_{2},s_{1}) =\|\widehat{s_{2}}-\widehat{s_{1}}\|_{(L^{2}(\widetilde{r}) \times L^{2}[0,\pi])^{2}}\] \[\leq\|\widehat{s_{2}}-I\|_{(L^{2}(\widetilde{r})\times L^{2}[0, \pi])^{2}}+\|I-\widehat{s_{1}}\|_{(L^{2}(\widetilde{r})\times L^{2}[0,\pi])^{ 2}}\] \[=D(s_{2},r)+D(r,s_{1})\]
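As a rough numerical illustration, for nonnegative images the slice-wise distance can be approximated by integrating squared 1-D 2-Wasserstein distances between normalized Radon projections over the angles. The sketch below ignores the signed decomposition and any mass terms entering \(D_{W^{2}}\), so it is only an approximation under the assumption of nonnegative slices; the test images are synthetic placeholders.

```python
import numpy as np
from skimage.transform import radon

def w2_1d(p, q, x, n_quantiles=512):
    """2-Wasserstein distance between two nonnegative 1-D slices on grid x."""
    dx = x[1] - x[0]
    Fp = np.cumsum(p) * dx
    Fq = np.cumsum(q) * dx
    Fp, Fq = Fp / Fp[-1], Fq / Fq[-1]                  # normalize to CDFs
    u = (np.arange(n_quantiles) + 0.5) / n_quantiles
    qp = np.interp(u, Fp, x)                            # quantile functions
    qq = np.interp(u, Fq, x)
    return np.sqrt(np.mean((qp - qq) ** 2))

img1 = np.zeros((128, 128))
img1[54:74, 54:74] = 1.0                                # a small bright square
img2 = np.roll(img1, shift=10, axis=1)                  # shifted copy

theta = np.linspace(0.0, 180.0, 64, endpoint=False)
sino1 = radon(img1, theta=theta)                        # columns: angular slices
sino2 = radon(img2, theta=theta)

x = np.arange(sino1.shape[0], dtype=float)
d_theta = np.pi / len(theta)
D_sq = sum(w2_1d(sino1[:, i] + 1e-12, sino2[:, i] + 1e-12, x) ** 2
           for i in range(len(theta))) * d_theta
print("approximate slice-wise distance:", np.sqrt(D_sq))
```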
|
2306.05203 | A cognitive process approach to modeling gap acceptance in overtaking | Driving automation holds significant potential for enhancing traffic safety.
However, effectively handling interactions with human drivers in mixed traffic
remains a challenging task. Several models exist that attempt to capture human
behavior in traffic interactions, often focusing on gap acceptance. However, it
is not clear how models of an individual driver's gap acceptance can be
translated to dynamic human-AV interactions in the context of high-speed
scenarios like overtaking. In this study, we address this issue by employing a
cognitive process approach to describe the dynamic interactions by the oncoming
vehicle during overtaking maneuvers. Our findings reveal that by incorporating
an initial decision-making bias dependent on the initial velocity into existing
drift-diffusion models, we can accurately describe the qualitative patterns of
overtaking gap acceptance observed previously. Our results demonstrate the
potential of the cognitive process approach in modeling human overtaking
behavior when the oncoming vehicle is an AV. To this end, this study
contributes to the development of effective strategies for ensuring safe and
efficient overtaking interactions between human drivers and AVs. | Samir H. A. Mohammad, Haneen Farah, Arkady Zgonnikov | 2023-06-08T13:59:09Z | http://arxiv.org/abs/2306.05203v1 | # A cognitive process approach to modeling gap acceptance in overtaking
###### Abstract
Driving automation holds significant potential for enhancing traffic safety. However, effectively handling interactions with human drivers in mixed traffic remains a challenging task. Several models exist that attempt to capture human behavior in traffic interactions, often focusing on gap acceptance. However, it is not clear how models of an individual driver's gap acceptance can be translated to dynamic human-AV interactions in the context of high-speed scenarios like overtaking. In this study, we address this issue by employing a cognitive process approach to describe the dynamic interactions by the oncoming vehicle during overtaking maneuvers. Our findings reveal that by incorporating an initial decision-making bias dependent on the initial velocity into existing drift-diffusion models, we can accurately describe the qualitative patterns of overtaking gap acceptance observed previously. Our results demonstrate the potential of the cognitive process approach in modeling human overtaking behavior when the oncoming vehicle is an AV. To this end, this study contributes to the development of effective strategies for ensuring safe and efficient overtaking interactions between human drivers and AVs.
## I Introduction
Driving automation has the potential to enhance traffic safety [1]. However, as the road will continue to consist of mixed traffic in the foreseeable future, effectively handling interactions between automated vehicles (AVs) and human drivers remains a significant challenge. Understanding how humans behave in these interactions is crucial to address this problem, particularly in high-stakes scenarios such as overtaking maneuvers [2].
Many models of human behavior in traffic interactions have been proposed, with a particular focus on gap acceptance as a key aspect of the interaction (e.g., [3, 4]). These models have provided valuable insights into human driver behavior. However, the translation of individual driver gap acceptance models to human-AV interactions remains unclear. While recent research has started addressing this issue (e.g., [5, 6]), it has primarily focused on low-speed interactions, limiting its applicability to high-speed scenarios like overtaking.
Therefore, there is a need to gain a better understanding of the interactions between oncoming AVs and human drivers during overtaking maneuvers. In our study, through a conceptual analysis of interactions during overtaking we investigate _what_ aspects of the overtaking process are critical to the dynamic interactions (section II) and _how_ to model these by assessing existing gap acceptance models (section III). Finally, we provide a proof of concept of dynamic modeling of overtaking using the most suitable identified approach -- cognitive process modeling (section IV).
## II Conceptual analysis of the overtaking interaction
To provide a road map for modeling human-AV interactions during overtaking, in this section we formulate requirements for the models based on the conceptual analysis of the overtaking maneuver.
When considering overtaking a single vehicle, three strategies can be applied: 'piggy-backing' (closely following another vehicle that overtakes the vehicle ahead), 'flying' (constant-speed overtaking) and 'accelerating' (slowing down behind the lead vehicle and then accelerate) [7]. The scope of this study focuses on the latter as it is the commonly observed overtaking strategy [8].
The accelerating overtaking maneuver has been extensively analyzed by Hegeman et al. ([7]). In our study, we use the twenty sub-tasks that follow from their analysis. To investigate these sub-tasks' interactive nature (explicit, implicit, or neither type of communication), we used the framework by Markkula et al. ([5]) by mapping the sub-tasks to interactive behaviors. Their proposed taxonomy describes seven non-mutually exclusive types of interactive behavior, of which three are related to moving in the traffic situation, another three to perceiving the traffic situation, and one to appreciating the traffic situation. In this framework, a distinction is made between _implicit_ and _explicit_ communication in interactions. Explicit communication (for example, the use of hand gestures or external human-machine interfaces) only affects the other traffic participants' movement and perception of the traffic situation, while implicit communication, such as making eye contact or accelerating to insist on the right of way, affects the movement and perception of both the ego vehicle and the other traffic participants. The nature of the interactions during overtaking provides a basis for the requirements for modeling human-AV interactions.
Out of the 20 overtaking sub-tasks identified by Hegeman et al. [7], we classified 16 as interactive (Figure 1). Half of them relate to implicit communication, while the other half equally relates to either explicit or neither type of communication. Our finding that implicit communication is more present in the overtaking maneuver is in accordance with previous studies that show that explicit communication is rarely used by human road users [7, 9].
The effect of implicit communication such as motion dynamics on overtaking behavior plays a key role in gap
acceptance. As an example, the speed of the oncoming vehicle significantly affects the critical gap (i.e. the gap where the probability to overtake or to stay is equal) [3]. Furthermore, to go from constant-speed scenarios to dynamic scenarios, dynamic interactions such as acceleration or deceleration by the oncoming vehicle should be considered as well. Here, we assume that these dynamics also affect the gap acceptance if perceptual thresholds are exceeded [10, 11]. Human drivers exhibit adaptive behavior in response to changing road conditions and the behavior of other vehicles [12]. Given that the overtaking decision process typically spans a duration of approximately 1 to 3 seconds on average [13], there might be dynamic changes in the environment within this time frame. As a result, incorporating models that capture response times can offer valuable insights into human behavior. Other key factors such as road geometry, driving style and driver's demographics should be considered as well as they affect human overtaking behavior [3, 14, 15].
To summarize, the requirements for modeling gap acceptance in human-AV overtaking interaction are:
1. Describing _dynamic_ interactions (i.e. taking into account AV's motion dynamics)
2. Describing human's response time (the moment of accepting gap)
3. Possibility of incorporating other key factors affecting gap acceptance (i.e. driver's demographic characteristics, driving style, and road geometry)
## III Assessment of gap acceptance models
As we argued in the previous section, gap acceptance is a key element of the overtaking process, and therefore human-AV interaction models should incorporate it. Existing models of gap acceptance (not only in overtaking but also several other traffic scenarios such as entering intersections [16], lane-changing [17], and pedestrian crossing [18]) can be based on: logistic regression [3, 19], machine learning [20, 21], algorithmic modeling [4, 22], agent-based modeling [23, 24] and cognitive modeling [25, 26].
We assessed different classes of gap acceptance models according to the three criteria mentioned in the previous section (Table I). Following the approaches listed in Table I, we found that two approaches are promising in modeling human decision making during overtaking interactions with AVs. These are agent-based/game-theoretic models and cognitive models.
Agent-based models describe interactions between traffic participants as agents in a game-theoretic setting (e.g. [23, 24]). They are able to handle dynamic changes of vehicle dynamics during the interactions. However, implementing response time in game-theoretic models has so far only been done indirectly by using receding horizon control [27]. Then to also include additional factors these need to be combined with extended agent-based models [24]. Furthermore, game-theoretic assumptions of agents having perfect knowledge of each other and not needing to communicate may not hold in real-world scenarios [28]. Game-theoretic models also rely on assumptions about payoff functions that may not reflect human decision making.
Cognitive models of gap acceptance describe cognitive processes that underlie human decision making in traffic, building up on fundamental research in cognitive psychology and neuroscience [29, 30]. One class of cognitive models that is becoming increasingly popular in modeling traffic interactions is drift-diffusion models (DDMs, e.g. [25, 26]). DDMs naturally capture response times [29] and can incorporate changes in vehicle dynamics during the interaction [25, 26]. Furthermore, in comparison to agent-based models, the DDM framework provides a simpler approach to incorporating the factors affecting gap acceptance (age and gender, and road geometry) by adjusting model parameters [31]. Challenges include representing these factors simultaneously and incorporating them in advanced DDM models that are needed to model dynamic interactions. Despite these challenges, we conclude that cognitive models and the DDM in particular hold promise for more realistic models of overtaking in human-AV interactions.
\begin{table}
\begin{tabular}{l|l|l|l}
**Model type** & _Dynamic_ & _Response time_ & _Influencing factors_ \\ \hline Algorithmic & no & no & no \\ Agent-based & yes & indirect & indirect \\ Cognitive & yes & yes & indirect \\ Logistic & no & no & yes \\ Machine learning & indirect & no & yes \\ \end{tabular}
\end{table} TABLE I: Assessment of gap acceptance models
Fig. 1: Mapping of the overtaking sub-tasks’ [7] interactive nature [5]. Sub-tasks with recurring instances are indicated within parentheses.
Earlier work showed that the DDM holds promise in accurately predicting gap acceptance in left-turn decisions [6, 26]. However, in contrast to left-turn decisions that are made at low speeds, in overtaking the human driver is already at a relatively high speed when initiating the decision-making process. Furthermore, in overtaking multiple sources of dynamic evidence may affect gap acceptance, considering both the presence of the lead vehicle and the oncoming AV. DDMs used in other traffic scenarios such as unprotected left-turns [6, 26] and pedestrian crossing [25] therefore may need to be adapted to accommodate the ego vehicle's initial speed and the lead vehicle's presence.
## IV Modeling human decision making in overtaking: A proof-of-concept
To investigate the feasibility of the cognitive modeling approach for overtaking, here we test several versions of the drift-diffusion model using the data on human overtaking decisions previously collected in a driving simulator [13]. The model fitting and simulation code used in this case study is available online.
### _Dataset_
A prerequisite of cognitive process modeling using the drift-diffusion models is measuring the response time. Sevenster et al. [13] offered a simple way of measuring accepted and rejected response times in overtaking, and explored the effect of two situation-specific factors (distance gap and ego-vehicle velocity) on the response times measured in a driving simulator experiment. The measures of Sevenster et al. ([13]) included 2097 overtaking decisions collected from 25 participants, with varying initial gap to the oncoming vehicle (160 or 220 meters) and the initial ego-vehicle velocity as a free variable. It included the decision outcome and the corresponding response time as the dependent variables. To be able to model this dataset, we filtered it by removing any measures with unrealistic response times, missing values, and null values. The remaining data (N=1758) was used for further analysis.
The continuous nature of the free initial ego-vehicle velocity variable impedes model fitting using existing fitting tools such as _pyddm_ [32]. Therefore, in this study, this variable has been clustered into three initial velocities, thereby transforming the problem into a 2x3 factorial design (2 initial distance conditions, 3 initial velocity conditions). We have opted to exclude measures relating to the lead vehicle such as following distance, since clustering these as well would significantly reduce the amount of data for each set of conditions.
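A minimal preprocessing sketch of this binning step is given below; the column names, the synthetic data, and the filtering thresholds are hypothetical placeholders rather than properties of the actual dataset.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the overtaking-decision data (placeholder values only).
rng = np.random.default_rng(0)
n = 2097
df = pd.DataFrame({
    "initial_distance": rng.choice([160, 220], size=n),    # meters
    "initial_velocity": rng.uniform(15.0, 35.0, size=n),   # m/s (free variable)
    "decision": rng.integers(0, 2, size=n),                 # 1 = gap accepted, 0 = rejected (assumed coding)
    "rt": rng.gamma(4.0, 0.4, size=n),                      # response time in seconds
})

# basic filtering: drop missing values and implausible response times (thresholds assumed)
df = df.dropna()
df = df[(df["rt"] > 0.3) & (df["rt"] < 5.0)]

# cluster the continuous initial velocity into three bins (terciles), giving the
# 2 (distance) x 3 (velocity) factorial design used for model fitting
df["v0_bin"] = pd.qcut(df["initial_velocity"], q=3, labels=["low", "medium", "high"])

summary = (df.groupby(["initial_distance", "v0_bin"], observed=True)
             .agg(p_accept=("decision", "mean"), mean_rt=("rt", "mean"), n=("rt", "size")))
print(summary)
```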
Based on their data, Sevenster et al. [13] highlighted the following relationships between the initial setup of the overtaking scenario and the resulting human behavior and response times:
* Probability of accepting the gap increases with initial distance to the oncoming vehicle.
* Probability of accepting the gap increases with initial velocity of the ego vehicle.
* Response times in rejected gaps are on average higher than in accepted gaps.
* Response times in both accepted and rejected gaps increase with initial distance.
* Response times in accepted gaps decrease with initial velocity.
* Response times of rejected gaps remain constant regardless of the initial velocity.
In what follows, we evaluate how well different candidate cognitive models can capture human behavior according to these findings.
### _Cognitive modeling_
#### Iv-B1 Basic drift-diffusion model and its applications to traffic
We employed the drift-diffusion modeling framework [29] to explain participants' behavior and response times in our experiment. This framework is based on evidence accumulation, where humans integrate relevant perceptual information over time (Figure 2). Accumulation is a noisy process that continues until the evidence in favor of one alternative reaches a predetermined boundary. Despite its simplicity, DDMs have been successful in explaining various behavioral effects of decision context on outcomes and response times [30].
Mathematically, the drift-diffusion model represents the choice between two options as a random process, where evidence \(x\) accumulates based on a drift rate \(s(t)\) (momentary evidence favoring one option over the other) and diffusion (random noise \(\varepsilon(t)\)):
\[\frac{dx}{dt}=s(t)+\varepsilon(t). \tag{1}\]
Accumulation stops when the accumulated evidence crosses an upper \(x=b(t)\) or lower decision boundary \(x=-b(t)\).
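A minimal Euler-Maruyama simulation of Eq. (1) with a constant drift and constant boundaries illustrates how choices and response times arise from this process; the parameter values below are arbitrary.

```python
import numpy as np

# Simulate the evidence-accumulation process in Eq. (1) with constant drift
# and constant symmetric boundaries; returns simulated choices and response times.
def simulate_ddm(drift, bound, noise_sd=1.0, dt=0.001, t_max=5.0, n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    choices = np.full(n_trials, np.nan)   # +1 upper boundary, -1 lower boundary
    rts = np.full(n_trials, np.nan)
    for trial in range(n_trials):
        x = 0.0
        for step in range(n_steps):
            # Euler-Maruyama step: dx = drift*dt + noise_sd*sqrt(dt)*N(0,1)
            x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
            if x >= bound or x <= -bound:
                choices[trial] = 1.0 if x >= bound else -1.0
                rts[trial] = (step + 1) * dt
                break
    return choices, rts

choices, rts = simulate_ddm(drift=1.0, bound=1.0)
done = ~np.isnan(choices)
print("P(upper boundary) =", np.mean(choices[done] == 1.0))
print("mean RT =", np.mean(rts[done]))
```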
Fig. 2: Visualization of gap acceptance decision making in overtaking. Depending on the gap to the oncoming vehicle (blue), the human driver of the ego vehicle (yellow) can decide either to reject the gap and stay in the lane (red trajectory) or to accept the gap (green trajectory) and overtake the slow lead vehicle. According to the drift-diffusion model, this decision can be represented as bounded accumulation of noisy evidence over time.
Recent applications of DDM to gap acceptance [25, 26] consider the drift rate \(s(t)\) to capture dynamically changing gap sizes and time-varying decision boundaries \(b(t)\) to reflect choice urgency. Such models were able to capture decision outcomes and response times of human decision makers. However, they cannot be directly used for our overtaking scenario because they do not incorporate the initial velocity that the human driver has at the start of the decision. As previous studies have shown, this velocity affects the decision and therefore it needs to be incorporated in one of the components of the DDM.
#### Iv-B2 Drift-diffusion model of overtaking
Here, we build upon the previously proposed left-turn gap acceptance model [26] by incorporating the initial velocity of the ego vehicle in the different components of the model (drift rate, decision boundary, initial decision bias). We then investigate which of the resulting 8 versions of the model better describes the data of Sevenster et al. [13].
Each of the tested models includes four main components. First, the drift rate \(s(t)\) is a function of time-to-arrival (\(TTA\)) and distance \(d\) between the ego vehicle and the oncoming vehicle and possibly the initial velocity of the ego vehicle \(v_{0}\)
\[s(t)=\alpha(TTA(t)+\beta d(t)-\theta_{s}) \tag{2}\]
\[s(t)=\alpha(TTA(t)+\beta d(t)+\gamma v_{0}-\theta_{s}), \tag{3}\]
where \(\alpha>0\), \(\beta>0\), \(\gamma>0\) and \(\theta_{s}>0\) are free parameters. We define \(x\) as a measure of _relative_ evidence, with positive values indicating support for the "Overtake" decision and negative values favoring the "Stay" decision at a given moment \(t\). Intuitively, as the gap between the decision maker and the oncoming vehicle (a combination of \(d\) and \(TTA\)) increases (e.g., when the opposing vehicle decelerates) relative to a critical value \(\theta_{s}\), the drift rate becomes more positive. This implies a higher likelihood of the decision maker leaning towards the Overtake decision. Conversely, they are more likely to arrive at the Stay decision when the drift rate becomes more negative. As the initial speed of the ego vehicle positively affects the probability of accepting the gap [13], these effects are amplified when including the initial velocity in the drift rate.
Second, the decision boundary collapses with either \(TTA(t)\), or with all the kinematic variables affecting the drift rate \(s(t)\).
\[b(t)=\pm\frac{b_{0}}{1+e^{-k(TTA(t)-\tau)}} \tag{4}\]
\[b(t)=\pm\frac{b_{0}}{1+e^{-k(TTA(t)+\beta d(t)-\theta_{s})}} \tag{5}\]
\[b(t)=\pm\frac{b_{0}}{1+e^{-k(TTA(t)+\beta d(t)+\gamma v_{0}-\theta_{s})}}. \tag{6}\]
Intuitively, with lower values of \(TTA\) and \(d\) the decision maker experiences stronger urgency to make the decision, which is reflected by boundary \(b(t)\) decreasing with the gap size (similar to [26]).
Third, the initial bias \(Z\) defines the starting position of the evidence accumulation process (i.e. \(x(t_{0})=Z\))
\[Z=C_{z} \tag{7}\]
\[Z=\frac{2b(t_{0})}{1+e^{-b_{z}(v_{0}-\theta_{z})}}-b(t_{0}), \tag{8}\]
where a value of \(Z<0\) indicates an initial bias towards the Stay decision, while \(Z>0\) indicates a bias towards the Overtake decision. This bias can be represented by a constant value \(C_{z}\) (Eq. (7)) or can vary based on the initial velocity \(v_{0}\) (Eq. (8)). In the latter case, relatively higher and lower initial speeds correspond to a bias towards the Overtake and Stay decision, respectively.
Fourth, for all models the non-decision time (the duration of the cognitive processes unrelated to decision-making, such as perceptual and motor delays) is assumed to follow the normal distribution
\[t^{ND}\in\mathcal{N}(\mu_{ND},\sigma_{ND}),\quad\mu_{ND}>0,\,\sigma_{ND}>0. \tag{9}\]
The eight model variants resulting from different combinations of the model components are shown in Table II. The odd-numbered models use a constant bias, while the even-numbered models use a bias depending on the initial speed. Models 1, 2, 5 and 6 have their drift rate depending on the \(TTA(t)\) and \(d(t)\), whereas Models 3, 4, 7 and 8 also include the initial speed. The decision boundaries of Models 1 to 4 decrease with the \(TTA\), while Models 5 to 8 use decision boundaries depending on all kinematic variables affecting their respective drift rate function. The simplest model (M5) contains 8 free parameters (\(\alpha\), \(\beta\), \(\theta_{s}\), \(b_{0}\), \(k\), \(Z\), \(\mu_{ND}\), \(\sigma_{ND}\)) and the most extensive model (M4) contains 11 free parameters (\(\alpha\), \(\beta\), \(\gamma\), \(\theta_{s}\), \(b_{0}\), \(k\), \(\tau\), \(\theta_{z}\), \(b_{z}\), \(\mu_{ND}\), \(\sigma_{ND}\)).
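To make these components concrete, the sketch below implements the drift rate of Eq. (2), the collapsing boundary of Eq. (5), and the velocity-dependent bias of Eq. (8); the parameter values and the kinematic inputs are placeholders, not fitted estimates.

```python
import numpy as np

def drift_rate(tta, d, alpha, beta, theta_s):
    """Eq. (2): drift driven by time-to-arrival and distance to the oncoming vehicle."""
    return alpha * (tta + beta * d - theta_s)

def boundary(tta, d, b0, k, beta, theta_s):
    """Eq. (5): boundary collapsing with the same kinematic combination as the drift."""
    return b0 / (1.0 + np.exp(-k * (tta + beta * d - theta_s)))

def initial_bias(v0, b_t0, b_z, theta_z):
    """Eq. (8): starting point scaled by the initial ego-vehicle velocity."""
    return 2.0 * b_t0 / (1.0 + np.exp(-b_z * (v0 - theta_z))) - b_t0

# example evaluation for one hypothetical trial (placeholder parameters and inputs)
alpha, beta, theta_s = 0.1, 0.1, 40.0
b0, k, b_z, theta_z = 3.0, 0.05, 0.15, 20.0
tta0, d0, v0 = 8.0, 180.0, 25.0          # seconds, meters, m/s (illustrative)

b_t0 = boundary(tta0, d0, b0, k, beta, theta_s)
print("drift  :", drift_rate(tta0, d0, alpha, beta, theta_s))
print("bound  :", b_t0)
print("bias Z :", initial_bias(v0, b_t0, b_z, theta_z))
```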
#### Iv-B3 Model fitting and evaluation
Our goal was to examine whether extended models could depict the behavior of the "average" participant in the dataset. Although it is possible to fit the model to each participant's data individually, providing insights into individual differences (see e.g. [26]), it requires a separate investigation beyond the scope of this study. Instead, we evaluated the models' qualitative match to the data reported in [13] according to the observations listed in the end of Section IV-A.
The models were fitted using the differential evolution optimization technique and the Bayesian information criterion, as implemented in the _pyddm_ framework, a Python package specifically designed for DDM fitting [32].
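The same fitting idea can be sketched generically, independently of the _pyddm_ API, as a global search with differential evolution followed by a BIC computation; the gamma likelihood below is only a stand-in for the DDM response-time likelihood, and the data are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.stats import gamma

rng = np.random.default_rng(1)
observed_rts = rng.gamma(shape=4.0, scale=0.4, size=500)   # placeholder "response times"

def neg_log_likelihood(params, data):
    # stand-in likelihood; in practice this would be the DDM's predicted RT distribution
    shape, scale = params
    return -np.sum(gamma.logpdf(data, a=shape, scale=scale))

bounds = [(0.1, 20.0), (0.01, 5.0)]        # parameter search ranges
result = differential_evolution(neg_log_likelihood, bounds, args=(observed_rts,), seed=0)

k = len(bounds)                             # number of free parameters
n = len(observed_rts)
bic = k * np.log(n) + 2.0 * result.fun      # BIC = k*ln(n) - 2*ln(L_max)
print("fitted parameters:", result.x, "BIC:", bic)
```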
#### Iv-B4 Comparing models and data
We found that the eight tested models differed substantially in regards to their qualitative match with the observed human behavior (Figure 3, Table III).
The models that did not include the ego vehicle's initial speed \(v_{0}\) in any of the components (M1 and M5) predictably could not capture the increase of probability of accepting the gap with \(v_{0}\). The other six models could all account for probability of accepting the gap, making it essential to consider response time as the measure that can help distinguish between candidate models further.
For response times, the results differ considerably between odd- and even-numbered models (Table III). The odd-numbered models, i.e. models with a constant initial bias, struggle to consistently describe the effect of initial velocity on response times (in both accepted and rejected gaps). On
the other hand, among the models that do include velocity-dependent initial bias, M8 captures 5 out of 6 qualitative patterns, and M2, M4 and M6 even describe them all.
The most successful models, M2, M4 and M6, contain respectively 10, 11 and 9 free parameters. The differences between these three models can be found in the decision boundary: decision boundaries of M2 and M4 collapse only with \(TTA(t)\), while M6's boundary collapses with \(TTA(t)\) and \(d(t)\). Furthermore, in contrast to the drift rate used in M4, M2 and M6 do not include the initial velocity in their drift rates. Lastly, M6 reuses the parameters of the drift rate in the boundary function, thereby reducing the total number of free parameters. Therefore, we conclude that M6 is the simplest model that can describe all qualitative patterns previously observed in human behavior. This model hypothesizes a drift rate and a decision boundary that both depend on the same linear combination of TTA and distance, and a decision bias that scales with the initial velocity of the ego vehicle. The resulting fitted model parameters for M6 were \(\alpha=0.07\), \(\beta=0.11,\ \theta_{s}=47,\ b_{0}=2.8,\ k=0.02,\ b_{z}=0.14,\ \theta_{z}=5.8\), \(\mu_{ND}=1.0,\ \sigma_{ND}=0.27\).
## V Discussion
Human decision-making in traffic involves high stakes, especially during overtaking where there is an increased risk of a head-on collision between two vehicles at high speed. Understanding and predicting human overtaking behavior can lead to safer interactions on the road. This paper makes a step towards such understanding by conceptually analyzing the overtaking process and applying the cognitive modeling approach to describe the dynamic decision-making process of human drivers in overtaking.
Our study highlights that the drift-diffusion model can be transferred to complex traffic scenarios, such as the overtaking maneuver. This represents a major step forward compared to simpler traffic scenarios that have been modelled with drift-diffusion models before [6, 25, 26]. Potentially, decision making in other dynamic interactive maneuvers, such as merging from on-ramps and lane changing on highways, can be described as well by these models.
An important limitation of this study however is that our conceptual analysis only mapped the interactive behaviors of the ego vehicle. To fully conceptualize interactions in any traffic scenario, all traffic participants should be taken into account [5]. Furthermore, the role of the lead vehicle has not been explored thoroughly when modeling overtaking behavior even though empirical studies show that the lead vehicle's dynamics affect gap acceptance [3]. Lastly, the effect of driving characteristics, such as age and gender, on gap acceptance is not investigated in our proof-of-concept study due to the lack of existing datasets that measure response times in large samples of participants. Previous studies highlighted that such characteristics affect gap acceptance in overtaking [14], so addressing them in DDMs could be useful when modeling participant-specific overtaking behavior.
This paper goes beyond existing constant-speed gap acceptance studies in overtaking by providing DDM components that can potentially describe overtaking behavior when interacting with an oncoming vehicle that changes its dynamics during the overtaking maneuver. Given the characteristic response times (1 to 3 seconds on average [13]), such dynamic changes can affect the ongoing gap acceptance decision. This represents an important potential point of influence for AVs to manage the interaction with human-driven vehicles [6, 33, 34]. Future research should therefore examine how such dynamic interactions with an oncoming AV can be studied empirically and modelled.
Our work has potential practical applications for safer human-AV interactions in overtaking. Cognitive models like the DDM can be used to enhance training and validation of existing interactive-aware controllers [35] in cases where only limited training and validation data are available [36]. Furthermore, models like the DDM could be used for better predictions in human-AV interactions, which can benefit traffic safety. Firstly, the risk of head-on collisions can be reduced by AVs' anticipation of the overtaking behavior of other road users [37]. Secondly, traffic flow can also become more efficient through trajectory planning of AVs [12]. Further research is needed, however, on utilizing the potential of DDMs for behavior prediction in gap acceptance [38].
## VI Conclusion
This study shows the promise of using drift-diffusion models, a subset of cognitive process models, to predict human gap acceptance in overtaking. Our results can be used in future research to predict human overtaking behavior when dynamically interacting with an oncoming AV. We believe that this will help to understand how AVs could control their interaction strategy to contribute to safer and more efficient traffic. More generally, this study exemplifies how simple cognitive process models can help us to understand and possibly improve human-AV interactions in complex traffic scenarios.
|
2307.12456 | Information-theoretic Analysis of Test Data Sensitivity in Uncertainty | Bayesian inference is often utilized for uncertainty quantification tasks. A
recent analysis by Xu and Raginsky 2022 rigorously decomposed the predictive
uncertainty in Bayesian inference into two uncertainties, called aleatoric and
epistemic uncertainties, which represent the inherent randomness in the
data-generating process and the variability due to insufficient data,
respectively. They analyzed those uncertainties in an information-theoretic
way, assuming that the model is well-specified and treating the model's
parameters as latent variables. However, the existing information-theoretic
analysis of uncertainty cannot explain the widely believed property of
uncertainty, known as the sensitivity between the test and training data. It
implies that when test data are similar to training data in some sense, the
epistemic uncertainty should become small. In this work, we study such
uncertainty sensitivity using our novel decomposition method for the predictive
uncertainty. Our analysis successfully defines such sensitivity using
information-theoretic quantities. Furthermore, we extend the existing analysis
of Bayesian meta-learning and show the novel sensitivities among tasks for the
first time. | Futoshi Futami, Tomoharu Iwata | 2023-07-23T23:42:06Z | http://arxiv.org/abs/2307.12456v1 | # Information-theoretic Analysis of Test Data Sensitivity in Uncertainty
###### Abstract
Bayesian inference is often utilized for uncertainty quantification tasks. A recent analysis by [34] rigorously decomposed the predictive uncertainty in Bayesian inference into two uncertainties, called aleatoric and epistemic uncertainties, which represent the inherent randomness in the data-generating process and the variability due to insufficient data, respectively. They analyzed those uncertainties in an information-theoretic way, assuming that the model is well-specified and treating the model's parameters as latent variables. However, the existing information-theoretic analysis of uncertainty cannot explain the widely believed property of uncertainty, known as the sensitivity between the test and training data. It implies that when test data are similar to training data in some sense, the epistemic uncertainty should become small. In this work, we study such uncertainty sensitivity using our novel decomposition method for the predictive uncertainty. Our analysis successfully defines such sensitivity using information-theoretic quantities. Furthermore, we extend the existing analysis of Bayesian meta-learning and show the novel sensitivities among tasks for the first time.
## 1 Introduction
Uncertainty quantification for predictions of machine learning algorithms has become increasingly important. Such information is used in the detection of domain shifts [22], adversarial attacks [35], Bayesian optimization [14], and reinforcement learning [15]. Bayesian inference is widely used in such applications since it represents uncertainties through a posterior distribution updated from the prior distribution using training data [5].
Existing studies [16, 4] often categorized uncertainties into two types; one is aleatoric uncertainty, which is caused by the noise inherent in the data-generating process. The other is called epistemic uncertainty, caused by the lack of data. Recently, [34] have rigorously decomposed the uncertainty of the prediction into
aleatoric and epistemic uncertainties by focusing on loss functions in supervised learning. The key idea of their analysis is that assuming the model is well-specified, model parameters are treated as latent variables and marginalized, similarly to Bayesian inference. Thus, they called the setting **Bayesian learning**. Under this setting, the optimal decision rule is obtained on the basis of the Bayesian posterior distribution.
Under the Bayesian learning setting, they proposed treating the aleatoric uncertainty as the Bayes risk since the noise inherent in the data-generating process is closely related to the fundamental difficulty of learning. As we introduce in Sec. 2, they defined the epistemic uncertainty as the excess risk obtained by subtracting the Bayes risk from the optimal risk under the optimal decision rule since such excess risk corresponds to the "loss due to lack of data". Then, they analyzed the epistemic uncertainty by studying the excess risk. For example, when the risk function is the log loss, the optimal risk becomes the entropy of the posterior predictive distribution, and the Bayes risk corresponds to the entropy of the data-generating distribution. Then, the excess risk is equivalent to **conditional mutual information (CMI)**, see Eq. (7) for details. Thus, the CMI corresponds to epistemic uncertainty. They showed that such CMI monotonically decreases with the training dataset size, a desirable property of epistemic uncertainty. As for other common loss functions, the excess risk can also be upper-bounded by the CMI. These settings have recently been extended to Bayesian meta-learning settings, where we assume a hyperprior distribution on prior distributions [27].
The limitation of these existing Bayesian learning analyses is that they cannot explain the widely believed geometric property of epistemic uncertainty: If the given test data point is similar to the training data in some sense, the uncertainty at such test data point should be small since there is sufficient information for the prediction. On the other hand, if the test data is less similar to the training data, the uncertainty should be large. This property is called the **sensitivity** between the test and training data points. Linear models and Gaussian processes exhibit this property [5] since the variance of their posterior predictive distribution explicitly depends on the distance between the test and training data under the given feature map. This sensitivity property has recently received attention in deep learning methods [19, 30]. However, existing analysis under Bayesian learning cannot explain such sensitivity for the test data. Since [34] analyzed the CMI by upper-bounding it using the mutual information (MI) between the training data and model parameters, and such MI does not contain the information of the test data point, see Eq. (9) for details.
In this paper, we continue the uncertainty analysis under Bayesian learning and aim to analyze the sensitivity between the test and training data points. To achieve this, we first present the novel decomposition of the CMI in Theorem 1. Using this, we formally define the sensitivity between the test and training data using an information-theoretic quantity. Then we provide theoretical and numerical analyses of this quantity. We also apply our analysis to the meta-learning setting similarly to [27] and determine the sensitivity between the meta-training and meta-test tasks in Theorem 6. To the best of our knowledge, the sensitivity among tasks is presented for the first time.
Another contribution of this work is a new information-theoretic upper bound of the CMI, which includes the interaction between training data points in Corollary 1. Our new bound is tighter than the existing bound proposed by
[34]. Finally, we present a new exact characterization of the generalization error using our novel decomposition in Theorem 4 and show a new connection to the existing information-theoretic generalization error bounds under the frequentist setting [2].
## 2 Preliminaries
Here, we review the setting of **Bayesian learning** used by [34] and its extension to the meta-learning setting proposed by [27]. Capital letters such as \(X\) represent random variables, whereas lowercase letters such as \(x\) represent deterministic values.
### Bayesian Learning
We consider a supervised setting and denote input-output pairs by \(Z=(X,Y)\in\mathcal{Z}:=\mathcal{X}\times\mathcal{Y}\). Learners can access \(N\) training data, \(Z^{N}:=(Z_{1},\ldots,Z_{N})\) with \(Z_{n}:=(X_{n},Y_{n})\), which are independent and identically distributed (i.i.d.) samples from some underlying distribution. The goal of supervised learning is to use \(Z^{N}\) to predict the test target variable \(Y\) given the test input \(X\), independently drawn from the same distribution as the training data. For this purpose, we consider a parametric generative model. We assume that the underlying distribution belongs to a model class \(\{p(z|w):w\in\mathcal{W}\}\) with model parameter \(w\) in the set \(\mathcal{W}\). As pointed out by [27], [12], and [17], this setting implies that the model class is well-specified. In this paper, we assume that \(p(Z|W)=p(Y|X,W)p(X)\) for simplicity. This means that the input data \(X\) is independent of the model parameter.
In **Bayesian learning**, model parameters are treated as latent random variables following a prior distribution \(p(w)\). Conditioned on \(W=w\), the data are generated by \(p(Z|W=w)\). Thus, the joint distribution of the training data \(Z^{N}\), the test data \(Z\), and the model parameter \(W\) is given by
\[p(W,Z^{N},Z)\coloneqq p(W)p(Z|W)^{N}p(Z|W). \tag{1}\]
Since the training data points are i.i.d. samples, we can express \(p(Z|W)^{N}=p(Z^{N}|W)\). We express the Bayesian posterior as \(p(W|Z^{N})\) and the posterior predictive distribution as \(p(Y|X,Z^{N}):=\mathbb{E}_{p(W|Z^{N})}p(Y|X,W)\).
Next, we introduce action \(a\) and loss function \(l\) to measure the performance of supervised learning. We define \(\mathcal{A}\) as an action space and the loss function as \(l:\mathcal{Y}\times\mathcal{A}\rightarrow\mathbb{R}\). The loss of action \(a\in\mathcal{A}\) and the target variable \(y\) are written as \(l(y,a)\); for example, the log loss is given as \(l(y,q)=-\ln q(y)\), where \(q\) is the probability density of \(Y\) and \(\mathcal{A}\) is the set of all probability densities on \(Y\). The squared loss is given as \(l(y,a)=|y-a|^{2}\), where \(\mathcal{Y}=\mathcal{A}=\mathbb{R}\). Our goal is to infer the decision rule \(\psi:\mathcal{X}\times\mathcal{Z}^{N}\rightarrow\mathcal{A}\) that minimizes the expected loss \(\mathbb{E}_{p(Y,X,Z^{N})}[l(Y,\psi(X,Z^{N}))]\) among all decision rules. Following the previous work by [34], we define the infimum of the expected loss as the **Bayesian risk**:
\[R_{l}(Y|X,Z^{N})\coloneqq\inf_{\psi:\mathcal{X}\times\mathcal{Z}^{N} \rightarrow\mathcal{A}}\mathbb{E}_{p(Y,X,Z^{N})}[l(Y,\psi(X,Z^{N}))]. \tag{2}\]
For example, when the log loss is used, \(R_{\log}(Y|X,Z^{N})=H[Y|X,Z^{N}]\), where \(H[Y|X,Z^{N}]\) is the entropy of the posterior predictive distribution defined as
\[H[Y|X,Z^{N}]=\mathbb{E}_{p(Z^{N})p(X)}\mathbb{E}_{p(Y|X,Z^{N})}[-\log p(Y|X,Z^{N })]. \tag{3}\]
Thus, the Bayesian risk equals the test error under a posterior predictive distribution.
Next, we define a fundamental limit of learning as \(\phi:\mathcal{X}\times\mathcal{W}\rightarrow\mathcal{A}\), which takes the true parameter \(W\) instead of the training dataset \(Z^{N}\). Then, the corresponding risk is given as
\[R_{l}(Y|X,W):=\inf_{\phi:\mathcal{X}\times\mathcal{W}\rightarrow\mathcal{A}} \mathbb{E}_{p(Y,X,W)}[l(Y,\phi(X,W))]. \tag{4}\]
We cannot improve this risk by increasing the number of training data. Thus, \(R_{l}(Y|X,W)\) can be regarded as the **aleatoric uncertainty** since it expresses the fundamental difficulty of learning. In other words, this risk implies the inherent presence of randomness in the data-generating mechanism. When the log loss is used, \(R_{l}(Y|X,W)\) corresponds to the conditional entropy
\[R_{\log}(Y|X,W)=\mathbb{E}_{p(X)p(W)}H[Y|X,W]=\mathbb{E}_{p(Y,X,W)}[-\log p(Y |X,W)]. \tag{5}\]
Finally, we define the difference between the Bayesian risk and the fundamental limit of learning as the **minimum excess risk (MER)**:
\[\text{MER}_{l}(Y|X,Z^{N}):=R_{l}(Y|X,Z^{N})-R_{l}(Y|X,W). \tag{6}\]
This corresponds to the **epistemic uncertainty** since it is defined as the difference between the Bayesian risk and the fundamental limit of learning. Thus, MER implies the loss due to insufficient training data under the well-specified model assumption [34, 12]. When the log loss is used, MER is given as
\[\text{MER}_{\log}(Y|X,Z^{N})=I(W;Y|X,Z^{N}), \tag{7}\]
where \(I(W;Y|X,Z^{N})\) is the conditional mutual information (CMI). Other than the log loss, if the loss function satisfies the \(\sigma^{2}\) sub-Gaussian property conditioned on \((X,Z^{N})=(x,z^{N})\), [34] showed that
\[\text{MER}_{l}(Y|X,Z^{N})\leq\sqrt{2\sigma^{2}I(W;Y|X,Z^{N})}. \tag{8}\]
Thus, MER is upper-bounded by the square root of the CMI, so understanding the CMI is crucial for understanding MER\({}_{l}\). For this reason, we focus on the log loss in this paper.
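As a simple illustration of Eq. (7), for a Bernoulli likelihood with a Beta prior (ignoring the input \(X\) for simplicity), \(\text{MER}_{\log}=H[Y|Z^{N}]-H[Y|W]\) can be estimated by Monte Carlo and shrinks as \(N\) grows; the hyperparameters below are arbitrary.

```python
import numpy as np

# Monte Carlo estimate of MER_log = H[Y|Z^N] - H[Y|W] = I(W; Y | Z^N)
# for a Beta-Bernoulli model without inputs.
rng = np.random.default_rng(0)
a, b = 2.0, 2.0                     # Beta prior hyperparameters (illustrative)
n_mc = 200_000

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

w = rng.beta(a, b, size=n_mc)
aleatoric = np.mean(binary_entropy(w))            # H[Y|W]: fundamental limit

for N in [0, 1, 5, 20, 100]:
    k = rng.binomial(N, w)                        # number of successes in Z^N given W
    pred = (a + k) / (a + b + N)                  # posterior predictive mean
    bayes_risk = np.mean(binary_entropy(pred))    # H[Y|Z^N]
    print(f"N={N:4d}  MER_log ~= {bayes_risk - aleatoric:.4f}")
```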
MER has some desirable properties as the epistemic uncertainty. [34] proved that MER\({}_{l}\geq 0\) and it decreases as we increase \(N\). Moreover, they showed that MER\({}_{l}\) can be upper-bounded by the mutual information (MI) as follows:
**Lemma 1** ([34]).: _Under the joint distribution of Eq. (1), we obtain_
\[I(W;Y|X,Z^{N})\leq\frac{1}{N}I(W;Z^{N}). \tag{9}\]
In many practical settings, \(I(W;Z^{N})\) is upper-bounded by \(\mathcal{O}(\log N)\); thus, the CMI is bounded by \(\mathcal{O}(\log N/N)\). Therefore, it converges to \(0\) as \(\mathcal{O}(\log N/N)\) for the log loss and \(\mathcal{O}(\sqrt{\log N/N})\) for sub-Gaussian loss functions. It has been discussed that \(I(W;Z^{N})\) captures the sensitivity of the learned parameter and training dataset and is closely connected to the generalization error bound [33].
### Bayesian Meta-learning
In traditional Bayesian inference, the prior distribution is selected on the basis of prior knowledge about the task. When we specify the appropriate prior distribution for the given task, we expect that we can reduce the number of training data we need to meet accuracy requirements. In a Bayesian meta-learning setting, the prior distribution is automatically inferred by observing related tasks. We model the statistical relationship between different tasks using a hierarchical Bayesian model with a global latent variable \(U\) in the set \(\mathcal{U}\).
We observe \(M\) related tasks and aim to infer a suitable prior distribution for a new unknown task. Each meta-training dataset has \(N\) data points drawn i.i.d from \(p(Z|W=w_{m})\), where \(w_{m}\) is the task-specific parameter. We express the \(m\)-th meta-training dataset as \(Z^{N,(m)}=(Z_{1}^{(m)},\ldots,Z_{N}^{(m)})\). We assume that the parameter \(W_{m}\) is drawn i.i.d from the shared prior \(p(W|U)\) parametrized by the global latent variable \(U\). We assume the hyperprior distribution \(p(U)\) on \(U\). We express the meta-training dataset as \(Z^{NM}=(Z^{N,(1)},\ldots,Z^{N,(M)})\). We express the model parameters of the meta-training dataset as \(W^{M}=(W_{1},\cdots,W_{M})\). Finally, we have a new unknown task called the meta-test task generated using the meta-test task parameter \(W\). We assume that we can use the meta-test training data \(Z^{N}=(Z_{1},\cdots,Z_{N})\) and the meta-test input data \(X\).
With these settings, [27] analyzed the meta-learning in Bayesian learning introduced in Sec. 2.1. We consider the following joint distribution:
\[p(U,W^{M},Z^{NM},W,Z^{N},Z):=p(U)\underbrace{\left(p(W|U)p(Z^{N}|W)\right)^{M} }_{\text{meta-training}}\underbrace{p(W|U)p(Z^{N}|W)p(Z|W)}_{\text{meta- testing}}. \tag{10}\]
Here, we omit the index for the meta-training dataset for simplicity, see Supplementary Material for details. Under this setting, we consider the decision rules and excess risk in the same way as in Sec. 2.1. We define the Bayesian meta-risk as
\[R_{l}(Y|X,Z^{N},Z^{NM})\coloneqq\inf_{\psi_{\text{meta}}:\mathcal{X}\times \mathcal{Z}^{N}M\times\mathcal{Z}^{N}\rightarrow\mathcal{A}}\mathbb{E}_{p(Z^ {NM},Z^{N},Z)}[l(Y,\psi_{\text{meta}}(X,Z^{NM},Z^{N}))]. \tag{11}\]
We also define the fundamental limit of learning in meta-learning as
\[R_{l}(Y|X,W,U):=\inf_{\phi_{\text{meta}}:\mathcal{X}\times\mathcal{W}\times \mathcal{U}\rightarrow\mathcal{A}}\mathbb{E}_{p(U,W,Z)}[l(Y,\phi_{\text{meta} }(X,W,U))]. \tag{12}\]
We then define the minimum excess meta-risk (MEMR) as
\[\text{MEMR}_{l}(Y|X,Z^{N},Z^{NM}):=R_{l}(Y|X,Z^{N},Z^{NM})-R_{l}(Y|X,W,U). \tag{13}\]
[27] showed that the MEMR of the log loss equals the CMI:
\[\text{MEMR}_{\text{log}}(Y|X,Z^{N},Z^{NM})=I(Y;W|X,Z^{N},Z^{NM}), \tag{14}\]
and derived the upper-bound of \(\text{MEMR}_{\text{log}}\), which is similar to Eq. (9), as
\[I(Y;W|X,Z^{N},Z^{NM})\leq\frac{I(U;Z^{NM})}{NM}+\frac{I(W;Z^{N}|U)}{N}. \tag{15}\]
We can see that the CMI is also upper-bounded by the MI that captures the sensitivities of the learned meta-test task parameter, hyperparameter, and meta-training dataset.
## 3 Exact Characterization of CMI
Here, we show our novel CMI decomposition and present the information-theoretic quantity of the sensitivity. First, we consider the Bayesian learning setting introduced in Sec. 2.1. All the proofs are shown in Supplementary Material.
### Information-theoretic Decomposition of CMI
As we pointed out in Sec. 1, the analysis of MER\({}_{l}\) introduced in Sec. 2 cannot explain the sensitivity in uncertainty between the test and training data. The limitation of the information-theoretic analysis of Lemma 1 is that the decomposition of the CMI focuses only on the training dataset and not on the test data point.
The following theorem is our first main result, which decomposes the CMI as the sum of the MI and the sensitivities of the test data and each training data point.
**Theorem 1**.: _Under the joint distribution of Eq. (1), we have_
\[I(W;Y|X,Z^{N}) = \frac{1}{N}I(W;Z^{N})-\frac{1}{N}\sum_{n=1}^{N}I(Z,Z_{n}|Z^{N \setminus n})-\frac{1}{N}\sum_{n^{\prime}=1}^{N-1}\sum_{n=n^{\prime}}^{N-1}I(Z _{n+1},Z_{n}|Z^{n-1}), \tag{16}\]
_where \(Z^{N\setminus n}:=(Z_{1},\ldots,Z_{n-1},Z_{n+1},\ldots,Z_{N})\) and \(Z^{n-1}:=(Z_{1},\ldots,Z_{n-1})\)._
Different from the bound in Lemma 1, the CMI is decomposed into three terms connected with equality, not inequality. The first term on the right-hand side of Eq. (16) is the MI between the learned parameter and the training dataset. The second and third terms correspond to the binary relation about how much information each data point has to predict other data points. The second term \(I(Z,Z_{n}|Z^{N\setminus n})\) represents the information-theoretic quantity of the sensitivity of the test and training data points. This term indicates how useful the training data point \(Z_{n}\) is to predict the test data point \(Z\). If the training data point \(Z_{n}\) has more information about the test data, then the uncertainty at \(Z\) decreases. If \(Z_{n}\) is almost independent of \(Z\), then the mutual information becomes 0, which means that the uncertainty increases.
From this observation, we introduce the definition of test data sensitivity as follows.
**Definition 1**.: _The sensitivity of the test data and \(n\)-th training data point is defined as_
\[I_{n}\coloneqq I(Z,Z_{n}|Z^{N\setminus n}). \tag{17}\]
For simplicity, we also express \(I_{n+1,n}:=I(Z_{n+1},Z_{n}|Z^{n-1})\), which appears in Eq. (16).
We note that since \(X\) and \(W\) are independent of each other, \(I(Z,Z_{n}|Z^{N\setminus n})=I(Y,Y_{n}|X,X_{n},Z^{N\setminus n})\) holds. We can transform \(I_{n}\) into a more intuitive expression as
\[I_{n} =H(Z|Z^{N\setminus n})-H(Z|Z^{N}) \tag{18}\] \[=\mathbb{E}_{p(W,Z^{N},Z)}\ln\frac{\mathbb{E}_{p(W|Z^{N\setminus n })}p(Z,Z_{n}|W)}{p(Z|Z^{N\setminus n})p(Z_{n}|Z^{N\setminus n})}. \tag{19}\]
Eq. (18) is useful for explicitly calculating the sensitivity for some models, as shown in Sec. 3.2. Eq. (19) states that \(I_{n}\) measures how much the joint posterior predictive distribution \(\mathbb{E}_{p(W|Z^{N\setminus n})}p(Z,Z_{n}|W)\) differs from the product of the single-point posterior predictive distributions \(p(Z|Z^{N\setminus n})p(Z_{n}|Z^{N\setminus n})\). The joint predictive distribution has recently attracted attention in decision problems [25, 21, 32]. Thus, our theoretical results suggest new insights into the connection between decision problems, the joint predictive distribution, and uncertainty. However, this is outside the scope of this study, and we leave it to future work to explore this connection.
Finally, from Theorem 1, we obtain the new information-theoretic bound for the CMI as follows.
**Corollary 1**.: _Under the joint distribution of Eq. (1), we obtain_
\[I(W;Y|X,Z^{N})\leq\frac{1}{N}I(W;Z^{N})-\frac{1}{N}\sum_{n^{\prime}=1}^{N-1} \sum_{n=n^{\prime}}^{N-1}I(Z_{n+1},Z_{n}|Z^{n-1}). \tag{20}\]
This bound is tighter than that of Lemma 1 owing to the second term on the right-hand side. In Sec. 7, we numerically compare this bound with that of Lemma 1.
### Linear Regression Model
In this section, we use a linear regression model to explore the sensitivity between the test and training data. The likelihood of the model is given as the Gaussian distribution with the mean \(w^{\top}\phi(x)\) and the variance \(\beta^{-1}\in\mathbb{R}^{+}\). We express it as \(p(y|x,w)=\mathcal{N}(w^{\top}\phi(x),\beta^{-1})\), where \(\mathcal{Y}=\mathbb{R}\) and \(\phi(x):=(\phi_{1}(x),\ldots,\phi_{d}(x))^{\top}\in\mathbb{R}^{d}\) is a \(d\)-dimensional feature vector of the input \(x\) and each \(\phi_{i}:\mathcal{X}\rightarrow\mathbb{R}\). We assume a prior distribution \(p(w)=\mathcal{N}(0,\alpha^{-1}I_{d})\) with some positive constant \(\alpha>0\). We define a design matrix as \(\Phi=(\phi(x_{1}),\ldots,\phi(x_{N}))^{\top}\in\mathbb{R}^{N\times d}\). We also define a target vector as \(\mathbf{y}=(y_{1},\ldots,y_{N})^{\top}\). Then, a posterior distribution is given by \(p(w|z^{N})=\mathcal{N}(m_{N},S_{N})\), where \(m_{N}=\beta S_{N}\Phi^{\top}\mathbf{y}\) and \(S_{N}^{-1}:=\alpha I_{d}+\beta\Phi^{\top}\Phi\). We also have a posterior predictive distribution as \(p(y|x,z^{N}):=\mathcal{N}(m_{N}^{\top}\phi(x),\sigma_{N}^{2}(x))\), where \(\sigma_{N}^{2}(x):=\beta^{-1}+\phi(x)^{\top}S_{N}\phi(x)\).
Since the posterior predictive distribution is given as the Gaussian distribution, its entropy is calculated on the basis of its variance. Thus, \(I(W;Y|X,Z^{N})=\mathbb{E}_{p(X)}\log\sigma_{N}^{2}(X)/2+\text{Const}\), and the interplay between \(\phi(x)\) and \(S_{N}\) characterizes the sensitivity of the test and training data points. Similar arguments still hold for Gaussian process models, where the inner products of the feature maps are replaced with kernel functions.
We can explicitly calculate the sensitivity \(I_{n}\) using Eq. (18). Then, we obtain
\[I_{n}=\mathbb{E}_{p(X^{N+1})}\frac{1}{2}(\ln\sigma_{N\setminus n}^{2}(X)-\ln \sigma_{N}^{2}(X)). \tag{21}\]
We can simplify this as follows.
**Theorem 2**.: _For linear models, the sensitivity \(I_{n}\) satisfies the following relation:_
\[\mathbb{E}_{p(X^{N+1})}\frac{(\phi(X)^{\top}S_{N}\phi(X_{n}))^{2}}{2\omega(X_{n})(\alpha^{-1}\phi(X)^{\top}\phi(X)+\beta^{-1})}\leq I_{n}\leq\mathbb{E}_{p(X^{N+1})}\frac{(\phi(X)^{\top}S_{N}\phi(X_{n}))^{2}}{2\omega(X_{n})(\beta^{-1}+\phi(X)^{\top}S_{N}\phi(X))},\]
_where \(\omega(x):=\beta^{-1}-\phi(x)^{\top}S_{N}\phi(x)\)._
This bound implies that the posterior covariance matrix \(S_{N}:=(\alpha I_{d}+\beta\Phi^{\top}\Phi)^{-1}\) can be seen as a metric for measuring the similarity between the training data \(x_{n}\) and the test data \(x\). In Supplementary Material, we numerically evaluated this bound.
Combined with Theorem 1, we obtain
\[\text{MER}_{\log}(Y|X,Z^{N})\leq\frac{1}{N}I(W;Z^{N})-\mathbb{E}_{p(X^{N+1})} \frac{1}{N}\sum_{n=1}^{N}\frac{(\phi(X)^{\top}S_{N}\phi(X_{n}))^{2}}{2\omega(X _{n})(\alpha^{-1}\phi(X)^{\top}\phi(X)+\beta^{-1})}.\]
This suggests that the test error becomes small if the given test and training data points are similar under the feature map with the metric \(S_{N}\).
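As a quick numerical illustration of this subsection, the sketch below builds the posterior covariance \(S_{N}\) and predictive variance \(\sigma_{N}^{2}(x)\) for a Bayesian linear regression model and estimates the sensitivity \(I_{n}\) of Eq. (21) by Monte Carlo over the test input, holding one draw of the training inputs fixed. The Gaussian basis functions and all constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta, d, N = 1.0, 25.0, 5, 30     # illustrative constants

def phi(x):
    """Gaussian (RBF) basis functions with fixed centers; an illustrative choice."""
    centers = np.linspace(-1.0, 1.0, d)
    return np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centers) / 0.3) ** 2)

def posterior_cov(Phi):
    # S_N^{-1} = alpha * I_d + beta * Phi^T Phi   (Sec. 3.2)
    return np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)

def pred_var(x, S):
    # sigma_N^2(x) = beta^{-1} + phi(x)^T S phi(x)
    f = phi(x)
    return 1.0 / beta + np.einsum("id,de,ie->i", f, S, f)

X_train = rng.uniform(-1.0, 1.0, size=N)
S_full = posterior_cov(phi(X_train))

n = 0                                              # sensitivity of the n-th training point
S_loo = posterior_cov(phi(np.delete(X_train, n)))  # leave that point out

X_test = rng.uniform(-1.0, 1.0, size=100_000)      # Monte Carlo draws of the test input X
I_n = 0.5 * np.mean(np.log(pred_var(X_test, S_loo)) - np.log(pred_var(X_test, S_full)))
print(f"estimated sensitivity I_{n} ~= {I_n:.5f}")  # Eq. (21); non-negative by construction
```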
### Asymptotic Behavior
Here, we discuss the asymptotic behavior of sensitivity. Using the asymptotic expansion of Bayesian inference introduced in [31], we obtain the following relation:
**Theorem 3**.: _Assume that \(p(z|w)\) has a relatively finite variance, that is, for any pair of \(w_{0},w\in\mathcal{W}\), there exists a positive constant \(c_{0}\) such that_
\[c_{0}\mathbb{E}_{p(Z|w_{0})}(\ln p(Z|w_{0})-\ln p(Z|w))^{2}\leq\mathbb{E}_{p(Z |w_{0})}[\ln p(Z|w_{0})-\ln p(Z|w)]. \tag{22}\]
_Then, we obtain \(I_{n}=I(Z,Z_{n}|Z^{N\setminus n})=o\left(\frac{1}{N}\right)\), where \(o(\frac{1}{N})\) is little \(o\)._
The relatively finite variance assumption in Eq. (22) is satisfied in many widely used models. For example, generalized linear models, including the linear and logistic regression models, satisfy this condition. See Supplementary Material and [31] for other examples.
Combined with Theorem 1, since \(\frac{1}{N}I(W;Z^{N})=O\left(\frac{1}{N}\right)\), we obtain
\[I(W;Y|X,Z^{N})\leq\underbrace{\frac{1}{N}I(W;Z^{N})}_{=O\left(\frac{1}{N}\right)}-\underbrace{\frac{1}{N}\sum_{n=1}^{N}I(Z,Z_{n}|Z^{N\setminus n})}_{=o\left(\frac{1}{N}\right)}. \tag{23}\]
Thus, since the order of the sensitivity term is \(o(1/N)\), it is much smaller than the MI, which is \(O(1/N)\). Finally, using Theorem 3, we obtain the following relation:
\[\frac{1}{N}\sum_{n^{\prime}=1}^{N-1}\sum_{n=n^{\prime}}^{N-1}I(Z_{n+1},Z_{n}|Z^{n-1})=O\left(\frac{1}{N}\right). \tag{24}\]
## 4 Exact Characterization of Generalization Error
Here, we present the application of Theorem 1 to the generalization error analysis.
### Relation to Generalization Error
First, we show that Lemma 1 is closely related to the generalization error.
**Lemma 2**.: _When a log loss is used, \(I(W;Y|X,Z^{N})\leq\frac{1}{N}I(W;Z^{N})\) is equivalent to the following inequality;_
\[R_{\log}(Y|X,Z^{N}) \leq-\frac{1}{N}\mathbb{E}_{p(Z^{N})}\ln p(Z^{N})\] \[=-\mathbb{E}_{p(Z^{N})}\mathbb{E}_{p(W|Z^{N})}\frac{1}{N}\sum_{n=1}^{N}\ln p(Z_{n}|W)+\frac{1}{N}\mathrm{KL}(p(W|Z^{N})|p(W)). \tag{25}\]
The left-hand side of Eq. (25) is the test error, and the right-hand side is the training error plus the regularization term. Thus, Lemma 1 is closely related to the generalization error. With this observation, using Theorem 1, we can incorporate the sensitivity of the test and training data to the generalization error as follows.
**Theorem 4**.: _Under the joint distribution of Eq. (1) with a log loss, we obtain_
\[R_{\log}(Y|X,Z^{N}) =-\mathbb{E}_{p(Z^{N})}\mathbb{E}_{p(W|Z^{N})}\frac{1}{N}\sum_{n= 1}^{N}\ln p(Y_{n}|X_{n},W)+\frac{1}{N}\mathrm{KL}(p(W|Z^{N})|p(W))\] \[\quad-\frac{1}{N}\sum_{n=1}^{N}I_{n}-\frac{1}{N}\sum_{n^{\prime}= 1}^{N-1}\sum_{n=n^{\prime}}^{N-1}I_{n+1,n}. \tag{26}\]
From Eq. (26), if the training data \(x_{n}\) has sufficient information to predict \(x\), \(I_{n}\) becomes large, leading to a smaller test error. Thus, this relation formalizes our intuition that we can predict a test data point, which is similar to the training data in some sense, better than the test data, which are completely different from the training data.
Another interesting point is that, unlike Lemma 2, Eq. (26) is the identity, not the inequality. Thus, we can precisely characterize the relationship between the test and training errors. We will discuss the relation between our result and the recently proposed exact characterization of the generalization error [2] in Sec. 4.2.
### Relationship between the Sensitivity and the Gibbs Test Error
In many generalization error analyses, we often use the Gibbs test error defined as,
\[R_{\log}^{\mathrm{Gibbs}}(Y|X,Z^{N}):=\mathbb{E}_{p(W)p(Z^{N}|W)p(\tilde{W}|Z ^{N})}[-\mathbb{E}_{p(Z|W)}\log p(Y|X,\tilde{W})]. \tag{27}\]
Here, we express the learned parameter as \(\tilde{W}\), which follows the Bayesian posterior distribution \(p(\tilde{W}|Z^{N})\). By comparing with \(R_{\log}(Y|X,Z^{N})\), which uses the posterior predictive distribution, we obtain
\[R_{\log}(Y|X,Z^{N})\leq R_{\log}^{\mathrm{Gibbs}}(Y|X,Z^{N}), \tag{28}\]
where we used the Jensen inequality. This relation is general since we only use the convexity of the log loss.
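The gap in Eq. (28) is easy to see numerically: for any posterior that is not a point mass, the Gibbs log loss (the loss averaged over posterior draws) is at least the Bayes log loss (the loss of the posterior predictive). The toy posterior and constants in the sketch below are assumptions used only to demonstrate the inequality.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy setup: a Gaussian "posterior" over a mean parameter w, Gaussian likelihood for y.
post_mean, post_std, lik_std = 0.4, 0.8, 1.0
y = 1.3                                            # a test observation
w_draws = rng.normal(post_mean, post_std, size=200_000)

# Gibbs log loss: E_{w ~ posterior}[ -log p(y|w) ]
gibbs = np.mean(-norm.logpdf(y, loc=w_draws, scale=lik_std))

# Bayes log loss: -log E_{w ~ posterior}[ p(y|w) ]   (posterior predictive)
bayes = -np.log(np.mean(norm.pdf(y, loc=w_draws, scale=lik_std)))

print(f"Gibbs log loss = {gibbs:.4f}, Bayes log loss = {bayes:.4f}")
assert gibbs >= bayes  # Jensen's inequality, Eq. (28)
```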
We further explore the relationship between the Gibbs test error \(R_{\log}^{\text{Gibbs}}(Y|X,Z^{N})\) and the Bayesian risk \(R_{\log}(Y|X,Z^{N})\) using the Lautum information (LI), which was used by [2]. First, we present the exact characterization of the generalization error of the Gibbs test error.
**Theorem 5**.: _Under the joint distribution of Eq. (1) with a log loss, we obtain_
\[R_{\log}^{\text{Gibbs}}(Y|X,Z^{N}) =-\mathbb{E}_{p(W)p(Z^{N}|W)}\frac{1}{N}\sum_{n=1}^{N}\ln p(Y_{n} |X_{n},W)+\frac{1}{N}LI(\tilde{W};Z^{N}|W), \tag{29}\]
_where LI is the conditional Lautum information defined as_
\[LI(\tilde{W};Z^{N}|W) =\mathbb{E}_{p(W)p(\tilde{Z}^{N}|W)p(\tilde{W}|\tilde{Z}^{N})p(Z^ {N}|W)}\log\frac{p(Z^{N}|W)p(\tilde{W}|W)}{p(Z^{N},\tilde{W}|W)}\] \[=\operatorname{KL}(p(\tilde{W}|W)p(Z^{N}|W)|p(\tilde{W},Z^{N}|W)). \tag{30}\]
Note that the LI is closely related to the reverse KL divergence [2]. In conclusion, the generalization error of the Gibbs test error is characterized by the LI. This result is similar to the exact characterization of generalization error under the **frequentist setting** used by [2]. They assumed that the data is drawn i.i.d from an unknown distribution and the model's parameters are not treated as latent variables. Then, they proved that the generalization error between the Gibbs test and training errors is equivalent to the sum of the LI and the MI between learned parameters and training datasets.
Combining Theorems 4 and 5, we obtain the following exact characterization of the Jensen gap:
**Corollary 2**.: _Under the joint distribution of Eq. (1), we obtain_
\[R_{\log}^{\text{Gibbs}}(Y|X,Z^{N}) -R_{\log}(Y|X,Z^{N}) =\frac{LI(\tilde{W};Z^{N}|W)}{N}+\sum_{n=1}^{N}\frac{I_{n}}{N}+ \sum_{n^{\prime}=1}^{N-1}\sum_{n=n^{\prime}}^{N-1}\frac{I_{n+1,n}}{N}-\frac{ I(W;Z^{N})}{N}. \tag{31}\]
From the Jensen inequality of Eq. (28), if the posterior \(p(\tilde{W}|Z^{N})\) is a point mass, the Jensen gap vanishes. From this relation, as the sensitivity term \(I_{n}\) increases, the Jensen gap becomes large. We point out that the Jensen gap has been studied in relation to the model misspecification under the frequentist setting [10, 11]. Since our setting is Bayesian learning, it is difficult to directly compare our Eq. (31) with previously reported results of the frequentist setting. We leave it to future work to clarify how the existing analysis of the Jensen gap under model misspecification is translated into our setting.
## 5 Exact Characterization of CMI in Meta-learning
In this section, we extend our information-theoretic analysis of the sensitivity to a Bayesian meta-learning setting. The following is our main result:
**Theorem 6**.: _Under the joint distribution of Eq. (10), we obtain_
\[I(Y;W|X,Z^{N},Z^{NM}) =\frac{1}{N}I(W;Z^{N}|U)+\frac{1}{NM}I(U;Z^{NM}) \tag{32}\] \[-\frac{1}{NM}\sum_{m=1}^{M}I(Z^{N},Z^{N,(m)}|Z^{N(M\setminus m)})\] (33) \[-\frac{1}{NM}\sum_{m^{\prime}=1}^{M-1}\sum_{m=m^{\prime}}^{M-1}I (Z^{N,(m+1)},Z^{N,(m)}|Z^{N(m-1)})\] (34) \[-\frac{1}{N}\sum_{n=1}^{N}I(Z,Z_{n}|Z^{N\setminus n},Z^{NM})\] (35) \[-\frac{1}{N}\sum_{n^{\prime}=1}^{N-1}\sum_{n=n^{\prime}}^{N-1}I(Z _{n+1},Z_{n}|Z^{n-1},Z^{NM}), \tag{36}\]
_where \(Z^{Nm}:=(Z^{N,(1)},\cdots,Z^{N,(m)})\), \(Z^{N(m-1)}:=(Z^{N,(1)},\cdots,Z^{N,(m-1)})\), and \(Z^{N(M\setminus m)}:=(Z^{N,(1)},\ldots,Z^{N,(m-1)},Z^{N,(m+1)},\ldots,Z^{N,(M) })\)._
Compared with the existing bound in Eq. (15), the terms of Eqs. (33) to (36) newly appeared. Eq. (33) represents the sensitivity between the test and training tasks since it quantifies how useful the \(m\)-th training task is to predict the meta-test task. To the best of our knowledge, our study is the first to theoretically quantify task sensitivity. Eq. (35) quantifies the sensitivities of the meta-test training data and meta-test test data points similarly to Theorem 1.
We can evaluate information-theoretic quantities by considering the following posterior and predictive distributions:
\[p(Y|X,Z^{N},Z^{NM})=\mathbb{E}_{p(W|Z^{N},Z^{NM})}p(Y|X,W)=\mathbb{E}_{p(U|Z^{ NM})p(W|Z^{N},U)}p(Y|X,W)\]
The information from the relevant tasks is captured by the hyper-posterior distribution \(p(U|Z^{NM})\), and the information from meta-test training data is incorporated into the posterior distribution \(p(W|Z^{N},U)\). These correspond to Eq. (32), which also appears in the existing MEMR bound. The sensitivity between the meta-test and meta-training tasks of Eq. (33) is given as
\[I(Z^{N},Z^{N,(m)}|Z^{N(M\setminus m)})=H(Z^{N}|Z^{N(M\setminus m)})-H(Z^{N}|Z^{NM}). \tag{37}\]
We can evaluate the entropies on the right-hand side through the hyper-posterior distributions \(p(U|Z^{N(M\setminus m)})\) and \(p(U|Z^{NM})\). Note that we can also obtain an improved information-theoretic upper bound on the MEMR, which tightens Eq. (15), in the same way as in Corollary 1.
## 6 Related Work
The sensitivity between test and training data points has been an important property theoretically and practically [5, 20]. Linear models and Gaussian processes have extensively been studied to analyze the sensitivity since their posterior predictive distribution is expressed analytically [8, 18]. Our result extends this relationship to general probabilistic models using the information-theoretic quantity for the first time. Such a relationship provides an important
contribution in practice since some recent studies, such as [3, 19, 28], and [13], explicitly introduced the sensitivity property into deep neural networks to enhance the uncertainty quantification performance. Similarly to sensitivity, the CMI is widely used as the objective function in Bayesian experimental designs [9]. Thus, understanding the sensitivity of the CMI will lead to the analysis of such applications. In meta-learning tasks, information-theoretic quantities are widely used [29, 6] to quantify the similarity of tasks. In these applications, evaluating exact information-theoretic quantities is difficult for many practical models. Thus, various approximation methods have been proposed, including variational inference [5]. We leave it to future work to explore how the approximation quality affects the sensitivity in uncertainty.
The information-theoretic analysis has recently received attention in the generalization error analysis [33, 23]. In such generalization error analysis, including the PAC-Bayesian theory [26, 1], the data generating distribution may not be well-specified, and model parameters are not treated as latent variables. Compared with previous studies, we can specify the correct model families in the Bayesian learning settings. Finally, we note that the joint model in Eq. (1) often appears in the Bayesian decision theory [24] in statistics. This model evaluates the _average_ performance of the risk function over a prior distribution and leads to the minimax rate analysis of the parameter estimation. The lower bound for the decision rule in the minimum excess risk has recently been reported [12] using the rate-distortion theory [7]. Moreover, the joint model in Eq. (1) is used in Bayesian experimental design, in which the stochastic dependencies of data and parameters are introduced to incorporate uncertainty. We also remark that this model is closely related to Bayesian online learning, and we show the regret analysis in Supplementary Material.
## 7 Numerical Experiments
Here, we show the numerical evaluation of the sensitivities in Theorems 1 and 6. Detailed experimental settings and additional results are shown in Supplementary Material.
### Experiments in Bayesian Learning Setting
First, using linear regression models introduced in Sec. 3.2, we numerically evaluated information-theoretic quantities appearing in Theorem 1, changing the training data size \(N\). Note that we can calculate all the information-theoretic quantities analytically. In the main paper, we only show the results of Gaussian basis functions as feature map \(\phi\), whose dimension is set to 10.
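Two of these quantities have simple closed forms that follow from standard Gaussian identities and are easy to reproduce: conditional on the inputs, \(I(W;Z^{N})=\frac{1}{2}\ln\det(I_{d}+\frac{\beta}{\alpha}\Phi^{\top}\Phi)\), and \(I(Y;W|X,Z^{N})=\mathbb{E}_{p(X)}\frac{1}{2}\ln(\beta\sigma_{N}^{2}(X))\). The sketch below averages both over random designs; the basis functions and constants are illustrative assumptions rather than the exact configuration behind Fig. 1.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, d = 1.0, 25.0, 10

def phi(x):
    """Gaussian basis functions of dimension d (illustrative centers and width)."""
    centers = np.linspace(-1.0, 1.0, d)
    return np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centers) / 0.25) ** 2)

def mi_and_cmi(N, n_designs=200, n_test=2000):
    mi, cmi = [], []
    for _ in range(n_designs):
        Phi = phi(rng.uniform(-1.0, 1.0, size=N))
        A = np.eye(d) + (beta / alpha) * Phi.T @ Phi
        mi.append(0.5 * np.linalg.slogdet(A)[1])          # I(W; Z^N) for this design
        S_N = np.linalg.inv(alpha * np.eye(d) + beta * Phi.T @ Phi)
        f = phi(rng.uniform(-1.0, 1.0, size=n_test))
        var = 1.0 / beta + np.einsum("id,de,ie->i", f, S_N, f)
        cmi.append(np.mean(0.5 * np.log(beta * var)))      # I(Y; W | X, Z^N)
    return np.mean(mi), np.mean(cmi)

for N in [10, 50, 200, 1000]:
    mi, cmi = mi_and_cmi(N)
    print(f"N={N:5d}  I(W;Z^N)/N = {mi / N:.5f}   CMI = {cmi:.5f}")
```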
The results are shown in Fig. 1, where we plot the CMI (\(I(Y;W|X,Z^{N})\)), the MI (\(I(W;Z^{N})/N\)), the sum of the test data sensitivity (\(\sum_{n}I(Z;Z_{n}|Z^{(N\setminus n)})/N\)), and the sum of the training data sensitivity (\(\frac{1}{N}\sum_{n^{\prime}=1}^{N-1}\sum_{n=n^{\prime}}^{N-1}I(Z_{n+1};Z_{n}|Z^{(n-1)})\)). In the figure legend, we omit the summation with respect to \(n\) and \(n^{\prime}\) for clarity. In the left panel of this figure, we plot them on a log scale, and we can see that all the terms decrease linearly in the plot. This is consistent with Eq. (23), which describes the asymptotic order of each quantity. Note that the sensitivity term \(I_{n}\) converges faster than the other terms, as indicated by the asymptotic analysis in Theorem 3. In the right panel of this figure, in addition to the CMI,
MI, and sensitivities, we plot our proposed bound from Corollary 1. Our bound is tighter than the existing bound, which corresponds to \(I(W;Z^{N})/N\), owing to the sensitivity term \(I(Z_{n+1};Z_{n}|Z^{(n-1)})\). In Supplementary Material, we numerically evaluate the upper and lower bounds on \(I_{n}\) from Theorem 2.
### Experiments in Bayesian Meta-learning Setting
Next, we numerically evaluated the theoretical findings of the meta-learning setting in Theorem 6. For this purpose, we put a hyperprior on the parameters of the linear regression model. We consider that \(p(W|U)=\mathcal{N}(U,\alpha^{-1}I_{d})\) and \(p(U)=\mathcal{N}(0,\gamma^{-1}I_{d})\). Under these settings, we can analytically calculate the posterior distributions \(p(U|Z^{NM})\) and \(p(W|Z^{N},U)\), as well as the posterior predictive distribution \(p(Y|X,Z^{N},Z^{NM})\). Thus, we can analytically evaluate the information-theoretic quantities in Theorem 6; see Supplementary Material for details.
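A minimal sketch of how these posteriors can be computed is given below. Marginalizing each task parameter gives \(Z^{N,(m)}\,|\,U\) a Gaussian marginal with covariance \(C_{m}=\beta^{-1}I_{N}+\alpha^{-1}\Phi_{m}\Phi_{m}^{\top}\), so the hyper-posterior \(p(U|Z^{NM})\) is Gaussian with precision \(\gamma I_{d}+\sum_{m}\Phi_{m}^{\top}C_{m}^{-1}\Phi_{m}\). This is a standard Gaussian computation written out as an illustration (not taken from the Supplementary Material); the basis functions and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d, M, N = 5, 20, 50
alpha, beta, gamma = 2.0, 25.0, 1.0

def phi(x):
    """Gaussian basis functions (illustrative choice)."""
    centers = np.linspace(-1.0, 1.0, d)
    return np.exp(-0.5 * ((np.atleast_1d(x)[:, None] - centers) / 0.3) ** 2)

# Sample the hierarchy: U, then task parameters W_m, then task datasets.
U_true = rng.normal(0.0, gamma**-0.5, size=d)
tasks = []
for _ in range(M):
    W_m = U_true + rng.normal(0.0, alpha**-0.5, size=d)
    Phi_m = phi(rng.uniform(-1.0, 1.0, size=N))
    y = Phi_m @ W_m + rng.normal(0.0, beta**-0.5, size=N)
    tasks.append((Phi_m, y))

# Hyper-posterior p(U | Z^{NM}): Gaussian, obtained by marginalizing each W_m.
Lam = gamma * np.eye(d)
eta = np.zeros(d)
for Phi_m, y in tasks:
    C_m = np.eye(N) / beta + (Phi_m @ Phi_m.T) / alpha   # covariance of y_m given U
    C_inv = np.linalg.inv(C_m)
    Lam += Phi_m.T @ C_inv @ Phi_m
    eta += Phi_m.T @ C_inv @ y
S_U = np.linalg.inv(Lam)
m_U = S_U @ eta
print("||m_U - U_true|| =", np.linalg.norm(m_U - U_true))

# A fresh meta-test task drawn from the same hierarchy.
W_test = U_true + rng.normal(0.0, alpha**-0.5, size=d)
Phi_t = phi(rng.uniform(-1.0, 1.0, size=N))
y_t = Phi_t @ W_test + rng.normal(0.0, beta**-0.5, size=N)

# p(W | Z^N, U): precision alpha*I + beta*Phi^T Phi, mean S(alpha*U + beta*Phi^T y).
# For illustration we plug in the hyper-posterior mean m_U; the exact predictive
# would average over p(U | Z^{NM}).
S_W = np.linalg.inv(alpha * np.eye(d) + beta * Phi_t.T @ Phi_t)
m_W = S_W @ (alpha * m_U + beta * Phi_t.T @ y_t)
print("||m_W - W_test|| =", np.linalg.norm(m_W - W_test))
```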
The result is shown in Fig. 2. In the left panel of this figure, fixing \(N=50\), we plot the information-theoretic quantities with increasing \(M\). We can see that MEMR (black line) decreases as we increase the number of meta-training datasets. We can also see that the MI between the hyperparameter and meta-training datasets (red dot line) also decreases. Finally, we can see that the sensitivity between the meta-test and meta-training tasks decreases faster than
Figure 1: Information-theoretic quantities appearing in Theorem 1 and Corollary 1.
Figure 2: Information-theoretic quantities appearing in Theorem 6. The left panel shows the results under different \(M\) (the number of tasks) at fixed \(N\) (the number of training datasets), and the right panel shows the results under different \(N\) at fixed \(M\). In the legends, we omit the summation with respect to \(n,n^{\prime},m\), and \(m^{\prime}\) for clarity.
other information-theoretic quantities. In the right panel of this figure, fixing \(M=20\), we plot the information-theoretic quantities with increasing \(N\). As \(N\) increases, we find that all the quantities decrease, as expected.
## 8 Conclusion
In this work, we showed the novel decomposition of the CMI and then provided the information-theoretic quantity of the sensitivity between the test and training data points. Our analysis rigorously characterizes the uncertainty's widely believed sensitivity property for the first time. Our analysis is also extended to the meta-learning setting and showed the sensitivity between tasks for the first time. It will be interesting to analyze the sensitivity under model misspecification and approximation in future work.
|
2302.11959 | Influence of magnetic and electric fields on universal conductance
fluctuations in thin films of the Dirac semi-metal Cd3As2 | Time-reversal invariance and inversion symmetry are responsible for the
topological band structure in Dirac semimetals. These symmetries can be broken
by applying an external magnetic or electric field, resulting in fundamental
changes to the ground state Hamiltonian and a topological phase transition. We
probe these changes via the magnetic-field dependence and gate
voltage-dependence of universal conductance fluctuations in top-gated nanowires
of the prototypical Dirac semimetal Cd3As2. As the magnetic field is increased
beyond the phase-breaking field, we find a factor of sqrt(2) reduction in the
magnitude of the universal conductance fluctuations, in agreement with
numerical calculations that study the effect of broken time reversal symmetry
in a 3D Dirac semimetal. In contrast, the magnitude of the fluctuations
increases monotonically as the chemical potential is gated away from the charge
neutrality point. This effect cannot be attributed to broken inversion
symmetry, but can be explained by Fermi surface anisotropy. The concurrence
between experimental data and theory in our study provides unequivocal evidence
that universal conductance fluctuations are the dominant source of intrinsic
transport fluctuations in mesoscopic Cd3As2 devices and offers a promising
general methodology for probing the effects of broken symmetry in topological
quantum materials. | Run Xiao, Saurav Islam, Wilson Yanez, Yongxi Ou, Nitin Samarth, Haiwen Liu, X. C. Xie, Juan Chamorro, Tyrel M. McQueen | 2023-02-23T12:13:53Z | http://arxiv.org/abs/2302.11959v1 | Influence of magnetic and electric fields on universal conductance fluctuations in thin films of the Dirac semi-metal Cd\({}_{3}\)As\({}_{2}\)
###### Abstract
Time-reversal invariance and inversion symmetry are responsible for the topological band structure in Dirac semimetals. These symmetries can be broken by applying an external magnetic or electric field, resulting in fundamental changes to the ground state Hamiltonian and a topological phase transition. We probe these changes via the magnetic-field dependence and gate voltage-dependence of universal conductance fluctuations in top-gated nanowires of the prototypical Dirac semimetal Cd\({}_{3}\)As\({}_{2}\). As the magnetic field is increased beyond the phase-breaking field,we find a factor of \(\sqrt{2}\) reduction in the magnitude of the universal conductance fluctuations, in agreement with numerical calculations that study the effect of broken time reversal symmetry in a 3D Dirac semimetal. In contrast, the magnitude of the fluctuations increases monotonically as the chemical potential is gated away from the charge neutrality point. This effect cannot be attributed to broken inversion symmetry, but can be explained by Fermi surface anisotropy. The concurrence between experimental data and theory in our study provides unequivocal evidence that universal conductance fluctuations are the dominant source of intrinsic transport fluctuations in mesoscopic Cd\({}_{3}\)As\({}_{2}\) devices and offers a promising general methodology for probing the effects of broken symmetry in topological quantum materials.
## I Introduction
The past decade has seen enormous interest in the study of topological band structures created by the interplay between fundamental symmetries and strong spin-orbit coupling in a variety of quantum materials [1; 2; 3; 4; 5]. Dirac semimetals, a three-dimensional analog of graphene, are an important subset in this materials class, characterized by Dirac states in the bulk with degenerate Weyl nodes that are protected by the presence of both time-reversal symmetry (TRS) and inversion symmetry (IS) [3; 5]. The response of Dirac semimetals to applied electrical and magnetic fields has been a matter of active discourse, in bulk crystals[6; 7; 8; 9; 10; 11; 12], thin films [13; 14; 15; 16; 17; 18; 19; 20], and patterned micro/nanostructures [21; 22; 23; 24]. An important question in this context is whether one can experimentally observe the expected transformation of a Dirac semimetal into a Weyl semimetal in a given material when the degeneracy of the Weyl nodes is removed by breaking TRS in an external magnetic field. Although angle resolved photoemission spectroscopy (ARPES) could in principle be used to observe such a topological phase transition, it is technically impractical because of the need for a magnetic field. The observation of qualitative changes in the magnetoresistance in a Dirac semimetal at large external magnetic field has provided strongly suggestive evidence for the field-induced transition to a Weyl semimetal in ZrTe\({}_{5}\)[25], but this is still not definitive. We propose that the measurement of universal conductance fluctuations (UCF) potentially provides a more rigorous route to answering this question [26].
UCF, a consequence of quantum interference, are aperiodic, reproducible fluctuations of the conductance of magnitude \(\approx e^{2}/h\) in a system, observed when the sample length is comparable to the phase coherence length (\(l_{\phi}\)) [27; 28; 29; 30; 31]. The magnitude of UCF is strongly influenced by the underlying symmetries of the system and has been used to probe the ground state symmetries in many materials [32; 33; 34; 35; 36; 37; 38; 39; 40]. Theory predicts that the magnetic-field-induced topological phase transition from a Dirac semimetal to a Weyl semimetal will manifest as a reduction in UCF magnitude by \(\sqrt{2}\)[26] as one breaks TRS. However, prior experiments have shown an approximate reduction by a factor of \(2\sqrt{2}\)[23]. Applying an electric field to a Dirac semimetal can also break IS. The effect of this symmetry-breaking perturbation on UCF in a Dirac semimetal is of equal fundamental importance to that of
broken TRS but remains unexplored. Here, we address the effect of applying both magnetic and electric fields on UCF in epitaxially-grown thin films of the prototypical Dirac semimetal Cd\({}_{3}\)As\({}_{2}\), patterned into nanowires. We observe a factor of \(\sqrt{2}\) reduction in UCF amplitude as TRS is broken by the application of a magnetic field, consistent with theoretical predictions. We also observe a monotonic enhancement of the UCF magnitude as the chemical potential is increased using electrostatic gating. We argue that this most likely arises due to Fermi surface anisotropy.
Our experiments provide unambiguous proof of UCF to be the intrinsic source of fluctuations in mesoscopic Cd\({}_{3}\)As\({}_{2}\) devices and further establish its suitability for probing topological phase transitions.
## II Material growth and device fabrication
Our experiments used Cd\({}_{3}\)As\({}_{2}\) films (20 nm thickness) grown by molecular beam epitaxy (MBE) on semi-insulating (111) GaAs substrates after the deposition of a 100 nm thick buffer layer of GaSb. During the growth of Cd\({}_{3}\)As\({}_{2}\), we used a high purity compound source of Cd\({}_{3}\)As\({}_{2}\) in a standard effusion cell and a beam equivalent pressure of \(1.2\times 10^{-7}\) Torr; the substrate temperature was 110 \({}^{\circ}\)C (calibrated using band-edge infrared thermometry). We have established that these growth conditions in our MBE chamber yield Cd\({}_{3}\)As\({}_{2}\) films of good structural and electronic quality oriented in the [112] direction [20]. _In-situ_ reflection high-energy electron diffraction (RHEED) measurements showed streaky patterns in both [\(\overline{2}11\)](left) and [\(0\overline{1}1\)] directions, indicating reasonably ordered growth Fig. 1(a). The presence of a Dirac cone in MBE-grown Cd\({}_{3}\)As\({}_{2}\) films synthesized under nominally identical conditions is confirmed by _in vacuo_ ARPES measurements performed after using a vacuum suitcase to transfer films from the MBE chamber to a local measurement chamber. The ARPES measurements were carried out at \(T=300\) K using excitation by the 21 eV helium I\(\alpha\) spectral line from a helium plasma lamp isolated via a monochromator and detection of emitted photo-electrons using a Scienta-Omicron DA 30L analyzer with a spectral resolution of 6 meV. As shown in Fig. 1(b), the ARPES data show the expected linearly dispersing Dirac bands, consistent with previous ARPES measurements from the (112) surface in cleaved bulk samples [41, 42, 43] and thin films [19]. We note that calculations and quantum transport measurements[20] of similar [112]-oriented Cd\({}_{3}\)As\({}_{2}\) thin films indicate that a small quantum confinement-induced gap should be expected at the Dirac point for 20 nm thick films studied here, but our ARPES measurements do not have the resolution to measure this gap. High-resolution transmission electron microscope (TEM) images obtained in cross-section confirm the growth of crystalline Cd\({}_{3}\)As\({}_{2}\) films in the correct phase, as shown in Fig. 1(c).
To fabricate the mesoscopic nanowire devices investigated in this manuscript, we used electron beam lithography
Figure 1: Material growth and device fabrication. (a) RHEED images captured during growth of Cd\({}_{3}\)As\({}_{2}\) films. The electron beam is directed along [\(\overline{2}11\)](left) and [\(0\overline{1}1\)] (right).(b) ARPES spectra of a 10 nm thick Cd\({}_{3}\)As\({}_{2}\) film grown using similar conditions (substrate temperature, beam flux) as the thicker samples measured in transport. The measurements are taken at \(T=300\) K along the \(\bar{K}-\bar{\Gamma}-\bar{K}\) direction. The right panel shows a slightly zoomed in view of the data in the left panel. (c) Cross-sectional HAADF-STEM image of a 20 nm Cd\({}_{3}\)As\({}_{2}\) film. (d) A scanning electron microscope image of a typical mesoscopic device.
to first pattern and deposit 10/30 nm Cr/Au electrodes using e-beam evaporation. This was followed by another round of lithography and Argon plasma etching to pattern the nanowires. To control the chemical potential, we use a top gate with a 30 nm Al\({}_{2}\)O\({}_{3}\) dielectric layer deposited using atomic layer deposition. A scanning electron micrograph of a typical device is shown in Fig. 1(d). The transport measurements were performed using a standard four-probe ac technique with a lock-in amplifier in a pumped He-3 Oxford Heliox system. We used a constant current circuit with an excitation current of 10 nA to reduce Joule heating and a carrier frequency of 17.777 Hz.
## III Electrical transport measurements
The temperature dependence of the sheet resistivity (\(R_{s}\)) of these films shows insulating behaviour, with \(R_{s}\) increasing monotonically as temperature \(T\) is reduced [44] (Fig. 2(a)). \(R_{s}\) as a function of gate-voltage (\(V_{g}\)), measured in two channels of length \(L=4.5~{}\mu\)m (Dev I) and \(1.5~{}\mu\)m (Dev II), and width \(W=0.1~{}\mu\)m, shows maxima \(V_{g}=-2.8~{}\)V and \(-3.1~{}\)V respectively, referred to as the charge neutrality point (CNP), indicating the sample to be \(n\)-doped (Fig. 2(b)). For the nanowire with \(L=4.5~{}\mu\)m, the field-effect mobility (\(\mu\)) of carriers, calculated using \(\sigma=ne\mu\) is 8812 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) and 2475 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) for the electron and hole channels, respectively. For the nanowire with \(L=1.5~{}\mu\)m, \(\mu\) for electrons and holes are 6375 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\) and 1275 cm\({}^{2}\)V\({}^{-1}\)s\({}^{-1}\), respectively. The magneto-resistance of both the channels shows weak anti-localization, as expected for a spin-orbit coupled system (Fig. 2c) [45; 46; 47; 48; 49; 50]. In the case of strong spin-orbit coupled systems such as Cd\({}_{3}\)As\({}_{2}\), where (\(\tau_{\phi}>>\tau_{so},\tau_{e}\)), the quantum correction to conductivity (\(\Delta\sigma\)) can be fitted with the Hikami-Larkin-Nagaoka (HLN) equation [46], given as:
\[\Delta\sigma=\alpha\frac{e^{2}}{\pi h}\left[\psi\left(\frac{1}{2}+\frac{B_{\phi}}{B}\right)-\ln\left(\frac{B_{\phi}}{B}\right)\right] \tag{1}\]
Here, \(B_{\phi}\) is the phase coherence field, and \(\alpha\) is a fitting parameter. The phase coherence length \(l_{\phi}\) can be extracted using \(l_{\phi}=\sqrt{\hbar/4eB_{\phi}}\), where \(e\) and \(\hbar\) are the electronic charge and reduced Planck's constant respectively. Both channels show comparable \(l_{\phi}\approx 200\) nm at \(V_{g}=0\) V.
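As an illustration of how \(B_{\phi}\) and \(l_{\phi}\) are extracted in practice, the sketch below fits Eq. (1) to a magnetoconductance trace with SciPy and converts the fitted \(B_{\phi}\) to \(l_{\phi}\). The synthetic data, noise level, and initial guesses are assumptions for demonstration only.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

e, h, hbar = 1.602176634e-19, 6.62607015e-34, 1.054571817e-34

def hln(B, alpha, B_phi):
    """Hikami-Larkin-Nagaoka correction of Eq. (1), in siemens."""
    B = np.asarray(B, dtype=float)
    return alpha * e**2 / (np.pi * h) * (digamma(0.5 + B_phi / B) - np.log(B_phi / B))

# Synthetic magnetoconductance data (assumed parameters, for demonstration only).
rng = np.random.default_rng(0)
B = np.linspace(0.01, 1.0, 200)                       # tesla; avoid B = 0
dsigma = hln(B, -0.5, 0.02) + rng.normal(0, 2e-7, B.size)

(alpha_fit, Bphi_fit), _ = curve_fit(hln, B, dsigma, p0=(-0.5, 0.05))
l_phi = np.sqrt(hbar / (4 * e * Bphi_fit))            # l_phi = sqrt(hbar / 4 e B_phi)
print(f"alpha = {alpha_fit:.3f}, B_phi = {Bphi_fit*1e3:.2f} mT, l_phi = {l_phi*1e9:.0f} nm")
```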
Conductance fluctuations reported in this manuscript were investigated by capturing the four-probe resistance as a function of magnetic field and gate voltage. To probe the effect of breaking TRS, we recorded \(R\) by sweeping the magnetic field at different gate voltages, as shown in Fig. 3(a). We chose two \(V_{g}\) windows: (a) when \(V_{g}\) is close to the CNP (\(-5~{}\)V to \(0~{}\)V) and (b) \(V_{g}\) is away from the CNP (\(6~{}\)V to \(10~{}\)V). The UCF magnitude (\(\delta G\)) is defined as the rms-magnitude of the fluctuations, calculated after subtracting the background by fitting a polynomial. The fluctuations are aperiodic and reproducible, which are key features of UCF (see Fig. 5 in the Appendix for additional data). As a function of perpendicular magnetic field, we found that the magnitude of UCF is reduced by a factor of \(\sqrt{2}\), in both gate-voltage windows, as shown in Fig. 3(b). (See also Fig. 6 in the Appendix for additional data from a second device.)
Within the framework of random matrix theory (RMT) [32; 33; 35], the magnitude of UCF within a phase coherent box is proportional to
\[\langle\delta G_{\phi}^{2}\rangle\propto\left(\frac{e^{2}}{h}\right)^{2}\frac {ks^{2}}{\beta} \tag{2}\]
Here \(\beta\), \(s\), and \(k\) are the Wigner-Dyson parameter (dependent on the universality class of the system), the degeneracy of the system under investigation, and the number of independent eigenmodes of the Hamiltonian respectively. The value of \(\beta\) is 1, 2, or 4 for the orthogonal, unitary, and symplectic symmetry classes respectively. The application of a magnetic field removes TRS, splitting the degenerate Dirac points into a pair of Weyl points (in momentum space). This leads to a transition from the Gaussian symplectic class to the unitary symmetry class. In this scenario, \(s\) changes from 2 to 1 due
Figure 2: **Basic electrical characterization :** (a) Sheet resistivity (\(R_{s}\)) vs temperature (\(T\)) of Dev I and Dev II, both showing insulating behavior. (b) Resistance as a function of gate-voltage (\(V_{g}\)) of Dev I and Dev II at \(T=0.41\) K, exhibiting a charge neutrality point at \(V_{g}=-2.8~{}\)V and \(-3.1~{}\)V, indicating the devices are \(n\)-doped. (c) The quantum correction to conductivity (\(\Delta\sigma\)) as a function of the magnetic field (\(B\)) in both devices, exhibiting weak anti-localization at \(V_{g}=0\) V. The blue and red lines show a fit to the data using Eq 1.
to the removal of Kramers degeneracy, while \(\beta\) changes from 4 to 2. This results in a factor of \(\sqrt{2}\) reduction in UCF magnitude as the magnetic field is increased. Microscopically, the self-intersecting Cooperon modes are suppressed as the magnetic field is applied, leading to a reduction of the number of transport modes by a factor of two. In this scenario, the fluctuations arise due to the classical diffuson modes.
For further validation, we also performed numerical calculations of the UCF magnitude based on a \(\mathbf{k}\cdot\mathbf{p}\) model developed in Ref. [26]. We adopt the following minimal \(4\times 4\) Hamiltonian \(H_{0}\) to describe the 3-dimensional Dirac semimetal Cd\({}_{3}\)As\({}_{2}\)
\[H_{0}(\mathbf{k})=\left(\begin{array}{cccc}M(\mathbf{k})&A\mathbf{k}_{+}&D \mathbf{k}_{-}&0\\ A\mathbf{k}_{-}&-M(\mathbf{k})&0&0\\ D\mathbf{k}_{+}&0&M(\mathbf{k})&-A\mathbf{k}_{-}\\ 0&0&-A\mathbf{k}_{+}&-M(\mathbf{k})\end{array}\right), \tag{3}\]
with \(M(\mathbf{k})=M_{0}-M_{z}k_{z}^{2}-M_{x}k_{x}^{2}-M_{y}k_{y}^{2}\) and \(k_{\pm}=k_{x}\pm ik_{y}\). \(A\) and \(D\) are the strength of spin-orbital coupling between the inverted bands \(\pm M(\mathbf{k})\) and the two \(M(\mathbf{k})\) orbits, respectively. A real-space version of \(H_{0}(\mathbf{k})\) on a discretized lattice, \(H_{0}(\mathbf{r})\), is obtained through Fourier transformation. The externally applied magnetic field \(\vec{B}\) enters \(H_{0}\) through the Peierls substitution, e.g., for a magnetic field applied along the z direction, \(t_{x}\to t_{x}e^{i\phi}\), where \(\phi\) measures the magnetic flux through a unit lattice square. The disordered Cd\({}_{3}\)As\({}_{2}\) material is modeled using \(H=H_{0}+U(\mathbf{r})\), where \(U(\mathbf{r})\) is an onsite random potential uniformly distributed on \([-W,W]\).
We numerically compute the zero-temperature conductance \(G=\frac{e^{2}}{h}Tr[\Gamma_{L}G^{r}\Gamma_{R}G^{a}]\) using the Landauer-Buttiker formula [51], where \(G^{r}(E_{F})=[G^{a}(E_{F})]^{\dagger}=[E_{F}-H-\Sigma_{L}-\Sigma_{R}]^{-1}\) is the retarded Green's function, \(\Gamma_{L/R}=i[\Sigma_{L/R}^{r}-\Sigma_{L/R}^{a}]\) is the line width function, and \(\Sigma_{L/R}\) is the self-energy of the left/right lead. The conductance fluctuation \(\Delta G\) is calculated as the standard deviation of conductance \(G\) for an ensemble of disorder, \(\Delta G=\langle(G-\overline{G})^{2}\rangle^{\frac{1}{2}}\), averaged over at least 200 ensembles. In the calculation, we use \(M_{0}=-0.4,M_{z}=M_{x}=M_{y}=-0.5\), and \(A=D=1\), and use a quasi-1-dimensional system with sizes \(L_{x}=10,L_{y}=30,L_{z}=100\). The magnitude of UCF is determined as the average value over the plateau where conductance fluctuations saturate as disorder strength is varied. We further confirm the convergence of the UCF magnitude by testing its sensitivity to the system size \(L_{\alpha}\), \(\alpha=x,y,z\). This method ensures that the UCF obtained are universal values in the diffusive regime and also helps get rid of the finite size effect in numerical calculations.
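The same Green's-function machinery can be illustrated on a much smaller problem: the sketch below evaluates \(G=\frac{e^{2}}{h}Tr[\Gamma_{L}G^{r}\Gamma_{R}G^{a}]\) for a one-dimensional disordered tight-binding chain attached to semi-infinite leads and estimates \(\Delta G\) over a disorder ensemble. This is a pedagogical stand-in for the four-band Hamiltonian of Eq. (3) (a strictly 1D chain is not in the diffusive UCF regime), and all parameter values are assumptions.

```python
import numpy as np

def lead_self_energy(E, t=1.0):
    """Retarded self-energy of a semi-infinite 1D tight-binding lead (valid for |E| < 2t)."""
    g_surf = (E - 1j * np.sqrt(4 * t**2 - E**2)) / (2 * t**2)
    return t**2 * g_surf

def conductance(E, onsite, t=1.0):
    """T(E) = Tr[Gamma_L G^r Gamma_R G^a] in units of e^2/h for a 1D chain."""
    L = len(onsite)
    H = np.diag(onsite) - t * (np.eye(L, k=1) + np.eye(L, k=-1))
    sigma_L = np.zeros((L, L), complex)
    sigma_R = np.zeros((L, L), complex)
    sigma_L[0, 0] = lead_self_energy(E, t)
    sigma_R[-1, -1] = lead_self_energy(E, t)
    G_r = np.linalg.inv(E * np.eye(L) - H - sigma_L - sigma_R)
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return np.real(np.trace(gamma_L @ G_r @ gamma_R @ G_r.conj().T))

# For a clean chain (onsite = 0) this gives T ~ 1 inside the band.
rng = np.random.default_rng(0)
E, L, W = 0.2, 100, 1.0                 # energy, chain length, disorder strength (assumed)
samples = [conductance(E, rng.uniform(-W, W, size=L)) for _ in range(200)]
print(f"<G> = {np.mean(samples):.3f} e^2/h,  dG = {np.std(samples):.3f} e^2/h")
```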
As shown in Fig. 3c, when the magnetic flux per unit cell, normalized with the magnetic flux quanta (\(\phi/\phi_{0}\)) goes beyond a critical value, the UCF magnitude decreases by \(\sqrt{2}\), both close to and away from the charge neutrality point. Thus, our observed reduction in the magnitude of UCF by a factor of \(\sqrt{2}\) is consistent with that predicted theoretically for Dirac materials. It is important to emphasize that although the parameters used in the model are not realistic for the experiments, it is sufficient to capture the correct transitions of the intrinsic UCF magnitude between symmetry classes, since the transitions between UCF magnitudes depend only on the symmetry indices that are not dependent on the details of the Hamiltonian parameters, such as the strength of spin-orbital coupling and the effective band mass, etc. However, our theory is unable to capture the critical magnetic field beyond which the UCF changes its intrinsic magnitude as it depends on various system parameters other than the symmetry indices. We also note that previous experiments on mesoscopic Cd\({}_{3}\)As\({}_{2}\) channels showed a \(2\sqrt{2}\) reduction [23]. This may be caused by factors such as magnetic-field induced gap or decoherence [26]. Quantum confinement, which can open up a small gap at the Dirac point in films of the thickness we used (\(\approx 20\) nm), may also play a role here since the thickness of our films is much smaller than the nanowires investigated in the previous experiment (\(\approx 100\) nm). The role of disorder is also not clear which leads to the removal of valley degeneracy and can affect the UCF magnitude.
Figure 3: **Magnetic-field dependence of UCF:** (a) Conductance fluctuations obtained by sweeping the magnetic field at different gate voltages. (b) The magnitude of the fluctuations, normalized with the value at \(B=0\) T, shows a reduction by a factor of \(\sqrt{2}\) in Dev I. The dashed line corresponds to \(\sqrt{2}\) reduction. (c) Calculated UCF magnitude as a function of the normalised magnetic flux \(\phi/\phi_{0}\) when the Fermi energy is both at the Dirac point and away from it. When the magnetic flux goes beyond a critical value, UCF magnitude decreases by \(\sqrt{2}\), consistent with the experimental data.
To evaluate the impact of an externally applied electric field on UCF, we extracted the rms-magnitude as a function of \(V_{g}\), which is plotted in Fig. 4(a). We observe a strong suppression of UCF near the charge neutrality point. The suppression of UCF at the charge neutrality point was observed in prior studies in topological insulators and Dirac semimetals. This was attributed to an increase in \(l_{\phi}\) at high carrier densities due to the screening of impurity scattering potential [23; 38; 52], although the trend within a phase coherent box was not investigated. In the context of single-layer graphene, although \(\delta G\) shows a decrease or increase at higher Fermi energy (\(E_{f}\)), the magnitude within a phase coherent box (\(\delta G_{\phi}\)) reduces by a factor of four away from the Dirac point, due to the removal of valley degeneracy [53]. To probe this further, we evaluated the UCF magnitude within a phase coherent box by using \(\delta G_{\phi}^{2}=LW\frac{\delta G^{2}}{L_{\sigma}^{2}}\)[36]. We observe an increase in the magnitude (normalized with the value at the Dirac point, \(\delta G_{\phi,DP}\)) as \(V_{g}\) is tuned away from the Dirac point (Fig. 4(b)), by a factor of 2.5 in this device (See Appendix B). This trend of increasing magnitude of UCF with increasing \(E_{f}\) is also captured in our numerical calculation, as shown in Fig. 4c, where we plot in the normalized UCF magnitude as a function of \(E_{f}\). We attribute this increase of UCF to Fermi surface anisotropy. The strong anisotropy of the Fermi surface around the Dirac point is an important feature of Cd\({}_{3}\)As\({}_{2}\) and can be described correctly by the effective low-energy Hamiltonian \(H_{0}(\mathbf{k})\) in Eq. III [41; 54]. The effects of Fermi surface anisotropy can be intuitively understood as follows: we neglect the \(D\mathbf{k}_{\pm}\) terms in \(H_{0}(\mathbf{k})\) and locate the Dirac point at \((0,0,\sqrt{M_{0}/M_{z}})\). The states around the Dirac point have much longer wavelength \(\lambda_{\alpha}=\frac{2\pi}{k_{\sigma}}\) along the \(\alpha=x,y\) direction than the \(z\) direction. Thus the electrons effectively travel in a 1-dimensional system rather than in a 3-dimensional system. UCF are suppressed through the reduction of the prefactor from \(c_{d=3}\) to \(c_{d=1}\) in the formula \(\delta G=c_{d}\sqrt{\frac{k\pi^{2}}{\beta}}\). Away from the Dirac point, electrons recover the 3-dimensional transport and the UCF increase due to the increase of the prefactor \(c_{d}\). This is validated in the numerical calculation, where we have used a cube of size \(L_{x}=L_{y}=L_{z}=12\) to describe a phase-coherent region inside the material, which has been probed experimentally here. We emphasize here that the effect of Fermi surface anisotropy depends on realistic parameters and can only be described qualitatively through the model assumed in this manuscript; a quantitative understanding of the influence of the Fermi-surface anisotropy on the UCF still needs further study. Another possibility is the change in values of \(k\) and \(s\). Due to the electric field introduced by \(V_{g}\), the energy band of Cd\({}_{3}\)As\({}_{2}\) around one Weyl point splits into two bands (\(k=2\)) while still satisfying time-reversal symmetry (\(s=2\)). Thus, at higher carrier densities, the UCF of quasi-particles are characterized by \(k=2\) and \(s=2\), while close to the Dirac point, \(k=1\) and \(s=1\) take place due to the pronounced charge impurity scattering [55]. Hence, from Eq. 
2, we get a factor of \(2\sqrt{2}\) increase in UCF magnitude at higher \(E_{f}\). The disagreement with the experimental observations is possibly due to the Fermi surface anisotropy.
Finally, we extracted the \(V_{G}\)-dependence of \(l_{\phi}\) from two independent methods: (a) we determine via the magnetoresistance by using fits to the HLN equation (Eq. 1); (b) we directly determine \(l_{\phi}\) from the UCF by analyzing the auto-correlation function [23; 56]:
\[F(\Delta B)=\frac{\langle\delta G(B)\delta G(B+\Delta B)\rangle_{B}}{\langle \delta G^{2}\rangle}.\]
We use this to obtain the correlation field \(B_{0}\) using the equation \(F(B_{0})=0.5F(0)\) and then determine \(l_{\phi}=2.4(\frac{h}{eB_{0}})^{\frac{1}{2}}\).
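A minimal sketch of this procedure is given below: compute the normalized autocorrelation of a background-subtracted conductance trace, read off the correlation field \(B_{0}\) at which it falls to half its zero-lag value, and convert to \(l_{\phi}\). The synthetic trace and its correlation scale are assumptions used only to demonstrate the steps.

```python
import numpy as np

e, h = 1.602176634e-19, 6.62607015e-34
rng = np.random.default_rng(1)

# Synthetic, smoothly correlated conductance fluctuations delta G(B) (demonstration only).
B = np.linspace(0, 2.0, 2001)                       # tesla, 1 mT steps
raw = rng.normal(size=B.size)
kernel = np.exp(-np.linspace(-3, 3, 301) ** 2)      # correlation scale of a few tens of mT
dG = np.convolve(raw, kernel, mode="same")
dG -= np.polyval(np.polyfit(B, dG, 3), B)           # subtract a smooth background

def autocorr(x):
    x = x - x.mean()
    c = np.correlate(x, x, mode="full")[x.size - 1:]
    return c / c[0]

F = autocorr(dG)
B0 = (B[1] - B[0]) * np.argmax(F < 0.5)             # correlation field: F(B0) = 0.5 F(0)
l_phi = 2.4 * np.sqrt(h / (e * B0))
print(f"B0 = {B0*1e3:.1f} mT,  l_phi = {l_phi*1e9:.0f} nm")
```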
We find that the values of \(l_{\phi}\) determined from magnetoresistance and UCF differ in magnitude by a factor of three over the \(V_{g}\) range that has been investigated (Fig. 4(d)). We also find \(l_{\phi}\) increases away from the Dirac point in both cases. Enhanced screening of electromagnetic fluctuations at higher number densities leads to a larger \(l_{\phi}\) away from the Dirac point, while inhomogeneity from electron-hole puddles leads to lower \(l_{\phi}\) around the Dirac point [38; 57; 58]. The factor of three difference in \(l_{\phi}\) obtained from the two methods has two possible explanations. First, the phase breaking time \(\tau_{\phi}\) relevant for weak localization (WL) is related to the Nyquist dephasing rate, while the scattering time scale for UCF depends on the out-scattering time which is related to the inverse of the inelastic collision frequency [59; 60]. The influence
Figure 4: Gate-voltage dependence of UCF. (a) RMS-magnitude of the fluctuations as a function of gate-voltage (\(V_{G}\)) (b) UCF magnitude within a phase coherent box, showing an increase by a factor of 2.5 as \(V_{g}\) tuned away from the charge neutrality point. (c) Normalized UCF magnitude within a phase coherent box as a function of the Fermi energy.(d) Comparison of the phase breaking length (\(l_{\phi}\)) obtained from different methods.
of these two mechanisms on dephasing can differ and this difference has been investigated in a variety of samples [48, 61, 62, 63, 64, 38, 65, 66]. Further investigation is required to identify if both WL and UCF are governed by the same scattering rates in Dirac materials.
## IV Conclusion
In conclusion, we have investigated UCF in mesoscopic transport channels of the Dirac semimetal Cd\({}_{3}\)As\({}_{2}\). We find that the UCF magnitude is reduced by a factor of \(\sqrt{2}\) as the magnetic field is increased due to the removal of time reversal symmetry. Our observations are consistent with a topological phase transition from the symplectic to unitary symmetry class. We also find that the magnitude of UCF increases as the Fermi energy in the system is increased, which we attribute to Fermi surface anisotropy rather than broken inversion symmetry. Our experiments establish UCF to be the intrinsic source of fluctuations in these systems and emphasize their importance in probing phase coherent transport. The good concurrence between theoretical predictions and experimental observations indicates that measurements of UCF provide a promising route for rigorously probing the influence of broken symmetry on the band structure of topological quantum materials.
We thank Yayun Hu for invaluable technical contributions to the theoretical calculations in this paper. This project was supported by the Institute for Quantum Matter under DOE EFRC Grant No. DESC0019331. The Penn State Two-Dimensional Crystal Consortium Materials Innovation Platform (2DCC-MIP) under NSF Grant No. DMR-2039351 provided support for ARPES measurements.
## Appendix A Key features of UCF.
Figure 5(a) shows the run-to-run reproducibility of the fluctuations as well as their aperiodic nature, two of the key signatures of UCF. The amplitude of the fluctuations also reduces with increasing \(T\) as shown in Fig. 5(b), as the interference effect gets suppressed due to thermal fluctuations, another key feature of UCF.
## Appendix B Magnetic field and gate-voltage dependence of UCF in additional device (Dev II)
Figures 6(a) and 6(b) show the UCF for Dev II as a function of magnetic field and gate-voltage. The behavior of UCF in Dev II is similar to that observed in Dev I.
|
2301.08958 | A Practical Introduction to Regression Discontinuity Designs: Extensions | This monograph, together with its accompanying first part Cattaneo, Idrobo
and Titiunik (2020), collects and expands the instructional materials we
prepared for more than $50$ short courses and workshops on Regression
Discontinuity (RD) methodology that we taught between 2014 and 2023. In this
second monograph, we discuss several topics in RD methodology that build on and
extend the analysis of RD designs introduced in Cattaneo, Idrobo and Titiunik
(2020). Our first goal is to present an alternative RD conceptual framework
based on local randomization ideas. This methodological approach can be useful
in RD designs with discretely-valued scores, and can also be used more broadly
as a complement to the continuity-based approach in other settings. Then,
employing both continuity-based and local randomization approaches, we extend
the canonical Sharp RD design in multiple directions: fuzzy RD designs, RD
designs with discrete scores, and multi-dimensional RD designs. The goal of our
two-part monograph is purposely practical and hence we focus on the empirical
analysis of RD designs. | Matias D. Cattaneo, Nicolas Idrobo, Rocio Titiunik | 2023-01-21T14:39:00Z | http://arxiv.org/abs/2301.08958v2 | # A Practical Introduction to Regression Discontinuity Designs: Extensions
###### Abstract
This monograph, together with its accompanying first part Cattaneo, Idrobo and Titiunik (2020), collects and expands the instructional materials we prepared for more than 50 short courses and workshops on Regression Discontinuity (RD) methodology that we taught between 2014 and 2023. In this second monograph, we discuss several topics in RD methodology that build on and extend the analysis of RD designs introduced in Cattaneo, Idrobo and Titiunik (2020). Our first goal is to present an alternative RD conceptual framework based on local randomization ideas. This methodological approach can be useful in RD designs with discretely-valued scores, and can also be used more broadly as a complement to the continuity-based approach in other settings. Then, employing both continuity-based and local randomization approaches, we extend the canonical Sharp RD design in multiple directions: fuzzy RD designs, RD designs with discrete scores, and multi-dimensional RD designs. The goal of our two-part monograph is purposely practical and hence we focus on the empirical analysis of RD designs. |
2306.12019 | Design of Energy Harvesting based Hardware for IoT Applications | Internet of Things (IoT) devices are rapidly expanding in many areas,
including deep mines, space, industrial environments, and health monitoring
systems. Most of the sensors and actuators are battery-powered, and these
batteries have a finite lifespan. Maintaining and replacing these many
batteries increases the maintenance cost of IoT systems and causes massive
environmental damage. Energy-harvesting devices (EHDs) are the alternative and
promising solution for these battery-operated IoT devices. These EHDs collect
energy from the environment and use it for daily computations, like collecting
and processing data from the sensors and actuators. Using EHDs in IoT reduces
overall maintenance costs and makes the IoT system energy-sufficient. However,
energy availability from these EHDs is unpredictable, resulting in frequent
power failures.
Most of these devices use volatile memories as storage elements, implying
that all collected data and decisions made by the IoT devices are lost during
frequent power failures, resulting in two possible overheads. First, the IoT
device must execute the application from the beginning whenever power comes
back. Second, IoT devices may make wrong decisions by considering incomplete
data, i.e., data-inconsistency issues. To address these two challenges, a
computing model is required that backs up the collected data during power
failures and restores it for later computations; this type of computing is
defined as intermittent computing. However, this computing model doesn't work
with conventional processors or memories. Non-volatile memory and processors
are required to design a battery-less IoT device that supports intermittent
computing. | Satyajaswanth Badri, Mukesh Saini, Neeraj Goel | 2023-06-21T05:08:06Z | http://arxiv.org/abs/2306.12019v1 | # Design of Energy Harvesting based Hardware for IoT Applications
###### Abstract.
Internet of Things (IoT) devices are rapidly expanding in many areas, including deep mines, space, industrial environments, and health monitoring systems. Most of the sensors and actuators are battery-powered, and these batteries have a finite lifespan. Maintaining and replacing these many batteries increases the maintenance cost of IoT systems and causes massive environmental damage. Energy-harvesting devices (EHDs) are the alternative and promising solution for these battery-operated IoT devices. These EHDs collect energy from the environment and use it for daily computations, like collecting and processing data from the sensors and actuators. Using EHDs in IoT reduces overall maintenance costs and makes the IoT system energy-sufficient. However, energy availability from these EHDs is unpredictable, resulting in frequent power failures.
Most of these devices use volatile memories as storage elements, implying that all collected data and decisions made by the IoT devices are lost during frequent power failures, resulting in two possible overheads. First, the IoT device must execute the application from the beginning whenever power comes back. Second, IoT devices may make wrong decisions by considering incomplete data, i.e., data-inconsistency issues. To address these two challenges, a computing model is required that backs up the collected data during power failures and restores it for later computations; this type of computing is defined as intermittent computing. However, this computing model doesn't work with conventional processors or memories. Non-volatile memory and processors are required to design a battery-less IoT device that supports intermittent computing.
Energy-Harvesting, Data-Inconsistency, Intermittent Computing, Internet-of-Things, and Non-Volatile Memory.
capacitor with the IoT devices makes them battery-free devices (Kumar et al., 2017). However, the energy available from these EHDs is unstable, which may result in frequent power failures. During power failures, the execution of these IoT applications may become irregular, resulting in inconsistent output. As a result, a computing model is required that enables IoT devices to backup and restore all the computed results that are executed during these frequent power failures. This computing model is known as intermittent computing (Kumar et al., 2017; Li et al., 2018; Li et al., 2019). This chapter discusses the challenges associated with these devices and the hardware required for these battery-free IoT devices that use harvested energy for executing the IoT applications.
## 2. Energy Harvesting in IoT Devices & Environment
This section discusses the available energy sources in our surroundings and how these EHDs benefit IoT devices and the environment. As shown in Figure 1, several energy sources are available in our surroundings, including solar energy from the sun, mechanical energy from a nearby windmill, RF energy from a Wi-Fi router, kinetic (mechanical) energy from water flow, vibration (mechanical) energy from the road, and thermal energy from the heat produced by the human body and the car. Human breathing and body movements can also generate a small amount of energy.
Deploying EHDs for each energy source may accumulate a good amount of energy, which can be helpful in executing IoT applications. EHDs such as solar voltaic cells and PV panels harvest solar energy, extracting energy in the range of \(15-100mW/cm^{2}\). The primary benefit of solar energy is its consistency during the day, when it harvests a large amount of energy from its surroundings; its drawbacks are its unpredictable availability (at night or in bad weather) and deployment constraints. EHDs like anemometers are used to harvest wind energy, which extracts approximately \(1200mWh/day\). The main advantage of wind energy is its ease of deployment, and it can be used in open areas. The only disadvantage of these EHDs is their maintenance cost.
EHDs such as piezoelectric materials collect energy from human movements, footfalls, and car vibrations on the road, extracting approximately \(2.1-5W\) for human finger movements/footfalls and \(200\mu W/cm^{2}\) for human motion. Piezoelectric materials are fully controllable and produce energy based on our needs. However, piezoelectric materials extract the least amount of energy. EHDs such as ratchet flywheels collect energy from human breathing, extracting approximately \(0.42W\). This energy-harvesting source is advantageous because it is always available in our surroundings, but it produces little energy.
Figure 1. Different Energy Harvesting Resources Available in the Surroundings
For extracting thermal energy (body heat, car engine heat), EHDs such as thermocouple batteries collect approximately \(50mW/cm^{2}\). The main advantage of these EHDs is that they are more reliable, require less maintenance, and last longer. However, the energy conversion is very low and insufficient for an IoT environment without additional support. EHDs such as rectennas extract RF energy, collecting about \(1W/cm^{2}\). The main advantage of these EHDs is their mobility and high energy density. However, the RF radiation involved may be harmful to people nearby.
### Internals of Energy-Harvesting-based IoT devices
Many of the EHDs are unpredictable and uncontrollable. Using energy-harvesting resources in an IoT environment needs a proper energy management scheme. Figure 2 shows the three main components of an energy-harvesting system (Han et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2019).
1. **Energy-Harvesting Resources:** Resources that gather energy from their surroundings.
2. **Energy-Monitoring System:** Once the energy is extracted, a storage mechanism is required to store and manage it.
3. **IoT devices & Environment:** Effective utilization of stored energy is needed for these IoT devices & the environment.
Figure 2. An Overview of Energy-Harvesting-based Architecture for IoT Environment
After gathering energy from the environment, efficient mechanisms for storing the collected energy are required. Rechargeable batteries, supercapacitors, and thin-film batteries, among others, can store the harvested energy. Supercapacitors and thin-film batteries experience significant energy leakage during charging and discharging cycles. As a result, management techniques are also required to use the accumulated energy efficiently by allocating sufficient energy to IoT sensors and actuators based on their needs. An efficient scheme or technique is required to monitor these leaks and implement a near-zero leakage policy. Figure 2 shows an overview of the energy-harvesting architecture used in the IoT environment.
The energy-monitoring system distributes energy to all IoT devices deployed in the field. The majority of IoT devices and environments are composed of three subsystems (a minimal sketch of how these subsystems interact is given after the list):
1. The data collection unit contains many sensors and actuators that collect data based on the IoT application.
2. A processing unit with processors and memory is required to store the computed results and make the correct decisions based on the collected data.
3. Once a decision is made in a specific situation, the final decision must be communicated to the other IoT device or end user via the communication unit.
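As a concrete illustration, the sketch below shows how these three subsystems might interact in software; the function names (read_sensors, decide, transmit) and the temperature threshold are hypothetical placeholders, and a real deployment would implement the equivalent logic in the device firmware.

```python
import random
import time

def read_sensors():
    """Data collection unit: return one reading from the (simulated) sensors."""
    return {"temperature_c": 20 + random.random() * 15}

def decide(reading):
    """Processing unit: turn the collected data into a decision."""
    return "ALERT" if reading["temperature_c"] > 30 else "OK"

def transmit(decision):
    """Communication unit: forward the decision to another IoT device or the end user."""
    print(f"sending decision: {decision}")

if __name__ == "__main__":
    for _ in range(3):               # a continuously powered device would loop forever
        transmit(decide(read_sensors()))
        time.sleep(0.1)
```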
### Challenges associated with these energy-harvesting-based IoT devices
This section discusses the challenges associated with the energy-harvesting-based IoT architecture shown in Figure 2.
* _Challenge 1:_ Energy availability is unpredictable and irregular. These EHDs have no control over when and how much energy they collect.
* _Challenge 2:_ In the IoT environment, Challenge 1 causes frequent power failures. Because of these power failures, the execution model of the IoT application becomes irregular.
* _Challenge 3:_ Figure 2 shows how supercapacitors are used as energy storage mechanisms in an energy monitoring system. Once capacitors are connected to IoT devices, the size of the capacitors is fixed, as is the amount of energy stored in these capacitors. As a result, this constant amount of energy should be used efficiently for IoT computations.
* _Challenge 4:_ There will be energy leakage from these capacitors/batteries and an effective method is needed to monitor these energy leakage issues.
* _Challenge 5:_ Traditional processors and memory storage models are volatile. In these energy-harvesting-based IoT architectures, data collected by IoT sensors and computation results may be lost during power failures. When power is restored, the IoT application must start over, consuming more energy by repeating the same procedures.
This chapter contributes to the discussion of the major problems related to the energy-harvesting-based IoT architecture. This chapter also discusses potential hardware for addressing the challenges listed above.
## 3. Intermittent Computing
When the energy harvester directly provides enough energy, the IoT application runs as usual, drawing energy straight from the harvester source; otherwise, the IoT device must rely on the energy stored in the capacitor to perform essential tasks. Figure 3 shows that around 01:00 PM there is enough solar energy for the IoT device to run the application without intervention. In the evening, around 05:00 PM, the IoT device must use the capacitor's energy to complete important tasks before turning off.
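A minimal sketch of this power-source decision is given below; the threshold and the example numbers are illustrative assumptions rather than measurements from a real harvester or capacitor.

```python
# Hypothetical energy manager: run directly from the harvester when possible,
# fall back to the capacitor for essential tasks, and shut down otherwise.
CAPACITOR_THRESHOLD_J = 0.5          # assumed minimum energy needed for one essential task

def choose_power_mode(harvester_power_w, demand_w, capacitor_energy_j):
    if harvester_power_w >= demand_w:
        return "RUN_FROM_HARVESTER"              # e.g., around 01:00 PM in Figure 3
    if capacitor_energy_j >= CAPACITOR_THRESHOLD_J:
        return "RUN_ESSENTIAL_FROM_CAPACITOR"    # e.g., early evening, before 05:00 PM
    return "POWER_OFF"                           # e.g., at night

if __name__ == "__main__":
    for scenario in [(0.8, 0.5, 2.0), (0.1, 0.5, 1.0), (0.0, 0.5, 0.1)]:
        print(scenario, "->", choose_power_mode(*scenario))
```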
Because energy is not always available to harvest, execution in such devices is intermittent, resulting in power failures in the IoT environment (Han et al., 2015; Li et al., 2017; Li et al., 2018). Even when energy is readily available, it takes time to accumulate enough energy to perform valuable work. Incorporating new memory technologies and additional procedures into the execution and memory model of a conventional processor is required to create an intermittence-aware design.
### Memory Model for Intermittently Powered Devices
The hardware of an intermittently functioning device may include general-purpose computing units such as a CPU or a microcontroller unit (MCU), a group of sensors, and one or more radios to communicate with the sensor array. Almost all of these devices use a volatile memory model. Figure 4 (a) shows the conventional memory hierarchy, which includes the register file, caches, main memory, and secondary storage. All other memories are volatile except for secondary storage and can hold data only whenever power is available. As illustrated in Figure 3, the IoT device will shut down when no energy is available from the harvester source, i.e., at night (after 7:00 PM); during this time, data stored in registers, caches, and main memory is lost.
There are two alternatives to keep the volatile contents safe. One approach is to save all computed results and decisions made during the execution phase to secondary storage before the MCU enters the power OFF state. The second approach is to restart the IoT application whenever the energy harvester provides enough energy to the MCU. Both alternatives are inefficient because backup/recovery to/from secondary storage and re-executing the same application take more time and energy. As a result, it is necessary to have a memory model that can store the memory contents when the energy source is unavailable.
Non-volatile memory (NVM) is a relatively new memory technology that can retain the system state while consuming no power. NVM technologies are being developed to overcome the drawbacks of volatile memory technologies. Flash, spin transfer torque RAM (STT-RAM), phase-change memory (PCM), resistive RAM (ReRAM), and ferroelectric RAM (FRAM) are examples of emerging NVM technologies. Because of their physical properties, NVMs have the potential to consume very little power while providing significantly greater density than conventional memory technologies. A standard SRAM cell, for example, has a size of \(125-200F^{2}\), whereas PCM and Flash cells have sizes of \(4-12F^{2}\) and \(4-6F^{2}\), respectively, where F refers to the smallest lithographic feature size in a given technology node. Because of their advantages, NVMs have become more common in products. Flash memory, for example, is used as a cache in Intel Turbo Memory.
Figure 3. An Overview of Intermittent Computing for Solar-based Harvesting Systems
These NVMs, however, have some limitations. NVMs, for example, have a higher latency and consume more energy than volatile memory technology. Write endurance is the property that determines how many writes a memory block can withstand before it becomes ineffective. NVMs have significantly lower write endurance than traditional memory technologies. Table 1 provides detailed comparisons of various properties with various memory technologies. Access granularity is defined as the minimum size of data read/written in each access. Furthermore, they can store data for many years without requiring standby power under regular circumstances. As shown in Table 1, the common insight from all NVM technologies is that write latency/energy is greater than read latency/energy [1, 4, 5, 19, 24].
In terms of characteristic features, STT-RAM outperforms all other NVM technologies. Table 1 shows that STT-RAM outperforms other NVMs in terms of write endurance, latency, and energy consumption. As a result, STT-RAM is a promising candidate for cache, main memory, and scratchpad memory. However, because STT-RAM is more expensive than other NVMs, it is unsuitable for use at the main memory level. PCM is the next better NVM technology after STT-RAM because its
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline
**Property** & **SRAM** & **DRAM** & **HDD** & **SLC Flash** & **PCM** & **STT-RAM** & **ReRAM** & **FRAM** \\ \hline
**Cell Size (\(F^{2}\))** & \(120-200\) & \(6-12\) & NA & \(4-6\) & \(4-12\) & \(6-50\) & \(4-10\) & \(12-15\) \\ \hline
**Read Latency** & \(\sim\)1 ns & \(\sim\)110 ns & 5 ms & \(25\mu\)s & 50 ns & \(<\)10 ns & \(<\)10 ns & 50 ns \\ \hline
**Write Latency** & \(\sim\)1 ns & \(\sim\)110 ns & 5 ms & \(500\mu\)s & 500 ns & 10 ns & \(<\)10 ns & 50 ns \\ \hline
**Write Energy** & \(\sim 10^{-15}\) & \(\sim 10^{-14}\) & \(\sim 10^{-14}\) & \(\sim 10^{-9}\) & \(\sim 10^{-11}\) & \(\sim 10^{-13}\) & \(\sim 10^{-12}\) \\ \hline
**Leakage Power** & High & Medium & Medium & Low & Low & Low & Low \\ \hline
**Erase Latency** & NA & NA & NA & 2 ms & NA & NA & NA & NA \\ \hline
**Access Granularity** & 64 & 64 & 512 & 4K & 64 & 64 & 64 & 64 \\ \hline
**Endurance** & \(>10^{16}\) & \(>10^{16}\) & \(>10^{15}\) & \(10^{4}-10^{5}\) & \(10^{8}-10^{15}\) & \(>10^{15}\) & \(10^{8}-10^{12}\) & \(10^{10}-10^{12}\) \\ \hline
**Standby Power** & \(0.6-1.2W\) & Refresh Power & \(1-2W\) & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular}
\end{table}
Table 1: Comparisons between different NVM Technologies for different Features
Figure 4: **Differences between Conventional and Non-Volatile Memory Models**
size and write endurance are better than those of the remaining NVMs. As a result, PCM is a promising candidate for use in cache and main memory. However, because STT-RAM has a longer lifespan than PCM, PCM is less suitable at the cache level and is instead the better candidate for main memory. ReRAM and FRAM share some characteristics; both are promising candidates for main memory. ReRAM, on the other hand, has lower latency and energy consumption than FRAM. For example, the MSP430FR6989 is a recent TI-based microcontroller with 2KB SRAM and 128KB FRAM at the main memory level.
Incorporating NVMs at each level protects the data in these IoT devices from frequent power failures, turning the traditional processor into a non-volatile processor (NVP). Figure 4 (b) shows the memory hierarchy of non-volatile flip-flops, non-volatile caches (STT-RAM/PCM), and non-volatile main memory (PCM, ReRAM, and FRAM). Thus, replacing the volatile memory model with non-volatile memory supports intermittent computing and reduces the time and energy required to back up/retrieve volatile contents.
#### 3.1.1. Designing NVM-based Processor for Intermittently Powered IoT Devices
A non-volatile processor (NVP) is designed by replacing volatile memory with NVM at each level. Figure 5 shows two distinct architectures that demonstrate the differences between pure NVM architecture and hybrid NVM architecture. There is a pure NVM technology at each level in architecture-1, particularly STT-RAM at both the L1 and last-level cache (LLC) levels and PCM at the main memory level. In architecture-2, hybrid NVM technology is used at each level, with SRAM+STT-RAM at the L1 and LLC levels and SRAM+PCM at the main memory level [2, 3, 12, 13, 25].
The main advantage of hybrid architecture over pure NVM architecture is that it takes advantage of both SRAM and NVM, i.e., performance benefits from SRAM and non-volatility and density benefits from NVM. Figure 5 (b) shows a design that allows experimenting with various combinations; the designer must choose between hybrid NVM and pure NVM architectures as needed. The main disadvantage of using hybrid NVM over pure NVM architecture is that the volatile contents in the hybrid architecture must be stored in NVM during frequent power failures, which increases backup time and energy.
Figure 5. (a) Pure-NVM based architecture, STT-RAM at cache levels & PCM at main memory level and (b) Hybrid-NVM based architecture, SRAM+STT-RAM at cache levels & SRAM+PCM at main memory level
### Execution Model for Intermittently Powered Devices
Despite several significant differences between the intermittent execution model and the conventional embedded execution model, designers of today's intermittently powered devices use a standard, C-like embedded computing abstraction. The application on an intermittently powered device runs until the device's energy has been drained. When energy is restored, the application resumes execution from a specific point in its execution history, such as the start of the main() function or a safe point. The primary distinction between conventional and intermittent execution models is that a normally executing program is expected to run until it is completed. In contrast, an intermittent execution model must complete the program execution despite multiple power interruptions. Various system components, such as languages, application run-time behavior, and program semantics, must be modified to create an intermittence-aware design.
Among all of these changes, we highlight three significant changes required in the application execution flow for an intermittence-aware design.
**Checkpoint():** When a checkpoint is triggered, all volatile contents are copied to NVM in order to preserve the system state. The existing literature in this area focuses on when and where to place a checkpoint. Recent research proposes two methods for determining when to perform this checkpoint procedure.
1. To monitor the energy source and capacitor, specially designed hardware is required. When it falls below a certain threshold, the system sends an interrupt, which stops the application and starts the backup() procedure. Thus, checkpoints can occur at any time.
2. Instead of maintaining hardware to monitor the energy requirements, existing solutions track changes in the application state. Some solutions determine the variation at checkpoint time, either through hash comparisons or by comparing main memory word-by-word with the most recent checkpoint data, which is already in NVM. When there are enough changes to justify a backup() procedure, the system issues an interrupt.
**Backup():** Whenever a backup() procedure is initiated, it copies the volatile contents to NVM, which means that it reads the contents from volatile memory technology and writes them to NVM technology.
**Restore():** Whenever a restore() procedure is initiated, it copies the backed-up contents from NVM to volatile memory, which means it reads the contents from NVM technology and writes them to volatile memory technology.
Adding procedures such as checkpoint(), backup(), and restore() to the conventional execution model supports intermittent computing and completes the execution flow sketched in Figure 3. However, these additional procedures may incur extra costs. To reduce the overheads of the intermittent execution model, efficient checkpointing, backup, and rollback policies are required. Together, the execution and memory models enable intermittently powered devices and intermittent computing on them.
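On real hardware these procedures run in the MCU firmware with FRAM or another NVM as the backup region; the Python sketch below only illustrates the control flow, using a pickle file as a stand-in for NVM and a capacitor-voltage threshold as the assumed checkpoint trigger.

```python
import os
import pickle

NVM_FILE = "nvm_backup.pkl"          # stand-in for the non-volatile backup region
BACKUP_THRESHOLD_V = 2.2             # assumed capacitor voltage that triggers a checkpoint

def backup(volatile_state):
    """backup(): copy the volatile contents (variables, progress marker) to NVM."""
    with open(NVM_FILE, "wb") as f:
        pickle.dump(volatile_state, f)

def restore():
    """restore(): after power returns, read the backed-up state from NVM, if any exists."""
    if os.path.exists(NVM_FILE):
        with open(NVM_FILE, "rb") as f:
            return pickle.load(f)
    return {"pc": 0}                 # no checkpoint yet: start from the beginning

def checkpoint(capacitor_voltage, volatile_state):
    """checkpoint(): trigger a backup when the monitored energy drops below a threshold."""
    if capacitor_voltage < BACKUP_THRESHOLD_V:
        backup(volatile_state)

if __name__ == "__main__":
    state = restore()
    print("resuming from", state)
    state["pc"] += 1                 # do some work
    checkpoint(capacitor_voltage=2.0, volatile_state=state)   # low voltage: backup happens
```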
### Challenges associated with these Intermittently Powered IoT devices
Intermittent computing introduces a number of challenges. These challenges show how the system can become inconsistent and fail to make progress; understanding them helps in developing an efficient intermittence-aware design that handles them properly.
#### 3.3.1. Application Progress
If the application's system state is not backed up completely across all power failures, the application will re-execute from the beginning, i.e., from the main() function (Han et al., 2015; Wang et al., 2016; Wang et al., 2017; Wang et al., 2018). Figure 6 (b) shows how the execution progress is affected by frequent power failures. In Figure 6 (a), there is a backup() after the first couple of instructions, and a power failure occurs after processing(result). When power is restored, execution begins with sensing() rather than main(), saving energy and time. However, when a power failure occurs again after processing(result), the application enters an infinite loop, as illustrated in Figure 6 (b). It never advances past that instruction, and as a result, the application never completes its execution and returns no results.
Specific checkpointing techniques must be incorporated into the application in order to make consistent progress. Figure 6 (c) illustrates how inserting a checkpoint after each instruction improves execution progress over the previous approach (Figure 6 (b)). The programmer must account for the fact that each checkpoint itself consumes some of the capacitor's energy, and must choose the distance between checkpoints so that the capacitor can recharge sufficiently between them.
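The simulation sketch below mirrors the scenario of Figure 6: power failures are injected at random, and because a checkpoint follows each step, execution always resumes from the last completed step rather than from main(). The step names follow the sensing()/processing() example; the failure probability and the in-memory stand-in for NVM are illustrative assumptions.

```python
import random

nvm = {"pc": 0}                          # stand-in for FRAM: survives simulated power failures
STEPS = ["sensing", "processing", "reporting"]

class PowerFailure(Exception):
    """Simulated power failure injected while a step is executing."""

def run_step(name):
    if random.random() < 0.3:            # assume a 30% chance of losing power in any step
        raise PowerFailure(name)
    print(f"completed {name}()")

def run_application():
    pc = nvm["pc"]                       # restore(): resume from the last checkpoint, not main()
    while pc < len(STEPS):
        try:
            run_step(STEPS[pc])
            pc += 1
            nvm["pc"] = pc               # checkpoint after each step guarantees forward progress
        except PowerFailure:
            print("power failure; rebooting from the last checkpoint")
            pc = nvm["pc"]

if __name__ == "__main__":
    run_application()
```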
#### 3.3.2. Memory Consistency
Figure 6. A detailed Example of Irregular Execution Progress during Intermittent Power Supply (a) Application-1 without any Power failures, (b) Execution of Application-1 during frequent Power failures, and (c) Execution of Application-1 during frequent Power failures using the checkpointing technique.
Most intermittently powered MCUs can have a mix of volatile memory and NVMs; the MSP430FR6989, for example, has SRAM and FRAM. A naive combination of volatile memory and NVMs could lead to inconsistency in both memories [10, 15, 16, 23]. Figure 7 shows the application performing two tasks: incrementing an NVM variable and saving its data into array1, and adding the array elements and saving the final value in a result variable, where all array elements and results are stored in NVM.
Assume there is a power failure between the increment operation and saving the data into the array. In that case, the application's entire system state is lost, but not the data stored in the NVM, such as "D," "arr1[]," and "result," as shown in Figure 7. When power is restored, the application begins again, unaware of the fact that the data in NVM has already been computed/incremented. As a result, in execution-2, the application increments the variable "D" once more and computes the variable "result" using the incorrectly stored array values, as illustrated in Figure 7 (b). During frequent power failures, these memory inconsistencies increase, as illustrated in Figure 7 (b). If the run-time system permits certain parts of the program to be re-executed, it must also monitor and recover memory addresses that could cause inconsistencies.
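The sketch below addresses the mixed old/new NVM state of Figure 7 with one common remedy: all work is done in volatile scratch memory, and the NVM copies of D, arr1, and result are updated together behind a commit flag, so a power failure at any point leaves NVM either fully old or fully new. The commit mechanism is an illustrative assumption, not a specific scheme from the literature cited above.

```python
# Stand-in for NVM contents that survive power failures (names follow Figure 7).
nvm = {"D": 0, "arr1": [0, 0, 0], "result": 0, "committed": True}

def task(scratch):
    """Compute entirely in volatile scratch memory; nothing touches NVM yet."""
    scratch["D"] = nvm["D"] + 1                 # increment the NVM variable
    scratch["arr1"] = [scratch["D"]] * 3        # save the variable's data into arr1
    scratch["result"] = sum(scratch["arr1"])    # add the array elements into result

def commit(scratch):
    """Publish all updates to NVM together, guarded by a commit flag."""
    nvm["committed"] = False                    # NVM is mid-update; old values still recoverable
    nvm.update({k: scratch[k] for k in ("D", "arr1", "result")})
    nvm["committed"] = True                     # only now does the new state become valid

def run_once():
    scratch = {}                                # volatile memory: lost on a power failure
    task(scratch)
    # A power failure anywhere before commit() leaves NVM at the old, mutually
    # consistent values, so re-execution starts from a clean base instead of the
    # mixed state of Figure 7 (b).
    commit(scratch)

if __name__ == "__main__":
    run_once()
    run_once()
    print(nvm)   # {'D': 2, 'arr1': [2, 2, 2], 'result': 6, 'committed': True}
```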
#### 3.3.3. Preserving Program Semantics
Even if an application has well-suited checkpoints and a system that maintains a consistent state between NVM and volatile memory, it may perform differently than the designer expected. When the capacitor discharges and a power failure occurs, the energy harvesting device is turned off for a defined period of time, and all peripherals and their system states are cleared. When compared to continuously powered devices, this behavior may violate the designer's assumptions about the atomicity of operations and the timeliness of data.
Figure 7. A detailed Example of Memory Inconsistency behavior during Intermittent Power Supply (a) Application-2 without any Power failures and (b) Outputs after Executing Application-2 during frequent Power failures.
1. **Atomicity:** Certain code regions must execute sequentially (with no checkpoints in the middle), ensuring the application runs correctly. Before reading from a sensor device, for example, the device driver checks that the sensing device is turned on (enough energy is provided to the sensor) and that the bus is ready to use because many sensor devices use this bus for communication with other devices. Assume there is a checkpoint between these program assertions and the sensor read function, and a power failure occurs. When power is restored, the application returns to the previously saved checkpoint and executes the sensor's read function directly without performing the initial checks. Designers should be able to specify which atomic regions must be completed within a single energy peak.
2. **Timeliness:** Some information loses value over time. Because the device may remain turned off for a prolonged period when power goes away, placing checkpoints between gathering the data and using it restricts its usefulness. Consider an application that monitors a specific system's temperature and sends a radio alert indicating whether the temperature is acceptable or exceeds a certain threshold. The application takes a temperature reading but loses power before finishing the task. While the device is turned off, the temperature changes dramatically. Whenever the application is restarted, the device continues to process the stale data and incorrectly reports that the system's temperature is still acceptable. Programmers must be able to communicate which events must occur in a timely fashion; a small sketch illustrating both the atomicity and timeliness concerns is given below.
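The sketch below illustrates both concerns under simple assumptions: an atomic() marker that a runtime could consult to avoid checkpointing inside a critical region, and a freshness window after which a stored reading is treated as stale.

```python
import time
from contextlib import contextmanager

checkpointing_enabled = True         # a runtime would consult this flag before checkpointing
MAX_AGE_S = 5.0                      # assumed freshness window for a temperature reading

@contextmanager
def atomic():
    """Designer-marked region that must complete within a single energy burst,
    with no checkpoint taken in the middle."""
    global checkpointing_enabled
    checkpointing_enabled = False
    try:
        yield
    finally:
        checkpointing_enabled = True

def read_temperature():
    with atomic():                   # power-up check, bus check, and read stay together
        # sensor_on(); bus_ready()   # the initial assertions from the example above
        return {"value_c": 28.0, "taken_at": time.time()}

def report(reading):
    if time.time() - reading["taken_at"] > MAX_AGE_S:
        return "STALE: re-sample before reporting"   # timeliness violated; do not trust
    return "OK" if reading["value_c"] < 30 else "ALERT"

if __name__ == "__main__":
    r = read_temperature()
    print(report(r))                 # fresh reading, acted upon
    r["taken_at"] -= 10              # pretend the device was powered off for 10 s
    print(report(r))                 # stale reading, rejected
```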
## 4. Future Research Directions
Intermittent computing in energy-harvesting-based IoT devices is a new research area that has the potential to enable truly battery-free devices.
**Programming Intermittent Devices:** Despite the growing popularity of intermittent devices, recent approaches to using them have several significant drawbacks: (a) increased developer effort in defining the tasks that IoT devices must perform, (b) difficulty in deciding the optimal size of each task, (c) additional overheads due to run-time exceptions and memory models, and (d) inaccurate modeling of application-level characteristics (e.g., I/O atomicity).
The limitations motivate more investigations to address the challenges mentioned above by developing new programming models that reduce overheads and the developer's burden while maintaining developer control over atomicity.
**Designing Distributed environment of intermittent devices:** A communication platform is required for distributed intermittent computing devices to communicate with one another. Unlike traditional communication platforms, these devices cannot maintain a fixed communication schedule; each exchange must fit within the limited energy available between power failures. Synchronizing the collected data from these devices is an unresolved issue, and monitoring the differences between synchronized and unsynchronized data from these devices is also an open research problem.
The development of distributed systems that use intermittent devices enables new battery-free applications. The lack of programming tools and environments, programming-language specifications, and memory and execution model abstractions increases the complexity of designing an accurate, efficient, distributed intermittent computing system. Recent measures have focused on intermittent distributed shared memory models and simulator-based support.
**Effective use of NVMs:** Incorporating NVMs into intermittent systems consumes more energy. Therefore, efficient management and prediction policies are required to reduce the overheads caused by NVMs and to use them efficiently. When using NVMs for hybrid caches/main memories, an optimal placement policy that predicts which block should be placed in which memory region is a crucial research objective. Recent research has centered on this objective, proposing novel placement and prediction policies.
**Efficient Checkpointing Policies:** Checkpointing techniques are required to keep the execution progress consistent. In an intermittent application, the programmer inserts checkpoints wherever they are needed. More checkpoints in an application increase overheads due to unnecessary backup/restore to/from NVMs and an increasing number of NVM accesses. As a result, the checkpointing overhead consumes more energy than standard backup/restore procedures. Determining when to checkpoint and which parts of an application to back up is an unexplored research area. Reducing the number of unnecessary checkpoints helps to reduce the time and energy required to back up/restore volatile contents during a power failure.
## 5. Conclusions
Battery-free devices are required for today's IoT applications. Integrating EHDs into an IoT environment introduces new challenges. In order to address these challenges and support intermittent computing, it is necessary to change the conventional execution and memory model. The NVM model, which replaces the traditional volatile memory model, supports intermittent computing. NVMs can store data even during a power failure, making them useful as a backup/restore region for intermittent computing. Many new NVM technologies have been proposed and are now on the market. However, NVMs alone are insufficient to handle intermittent computing and the challenges arising from unstable energy availability from harvesting resources. The intermittent execution model therefore includes additional application procedures that handle when and what to back up/restore to/from NVMs during frequent power failures.
This chapter covers all aspects of intermittent computing for EHD-based IoT devices. It addresses the modifications and hardware required to replace existing models with energy-harvesting-based architectures in the IoT environment, and it discusses how to turn battery-powered IoT devices into battery-less IoT devices.
## Acknowledgement
This work is supported by the grant received from the Department of Science and Technology (DST), Govt. of India, for the Technology Innovation Hub at the IIT Ropar in the framework of the National Mission on Interdisciplinary Cyber-Physical Systems.
|
2305.13657 | ChatGPT as your Personal Data Scientist | The rise of big data has amplified the need for efficient, user-friendly
automated machine learning (AutoML) tools. However, the intricacy of
understanding domain-specific data and defining prediction tasks necessitates
human intervention making the process time-consuming while preventing full
automation. Instead, envision an intelligent agent capable of assisting users
in conducting AutoML tasks through intuitive, natural conversations without
requiring in-depth knowledge of the underlying machine learning (ML) processes.
This agent's key challenge is to accurately comprehend the user's prediction
goals and, consequently, formulate precise ML tasks, adjust data sets and model
parameters accordingly, and articulate results effectively. In this paper, we
take a pioneering step towards this ambitious goal by introducing a
ChatGPT-based conversational data-science framework to act as a "personal data
scientist". Precisely, we utilize Large Language Models (ChatGPT) to build a
natural interface between the users and the ML models (Scikit-Learn), which in
turn, allows us to approach this ambitious problem with a realistic solution.
Our model pivots around four dialogue states: Data Visualization, Task
Formulation, Prediction Engineering, and Result Summary and Recommendation.
Each state marks a unique conversation phase, impacting the overall user-system
interaction. Multiple LLM instances, serving as "micro-agents", ensure a
cohesive conversation flow, granting us granular control over the
conversation's progression. In summary, we developed an end-to-end system that
not only proves the viability of the novel concept of conversational data
science but also underscores the potency of LLMs in solving complex tasks.
Interestingly, its development spotlighted several critical weaknesses in the
current LLMs (ChatGPT) and highlighted substantial opportunities for
improvement. | Md Mahadi Hassan, Alex Knipper, Shubhra Kanti Karmaker Santu | 2023-05-23T04:00:16Z | http://arxiv.org/abs/2305.13657v1 | # ChatGPT as your Personal Data Scientist
###### Abstract.
The rise of big data has amplified the need for efficient, user-friendly automated machine learning (AutoML) tools. However, the intricacy of understanding domain-specific data and defining prediction tasks necessitates human intervention making the process time-consuming while preventing full automation. Instead, envision an intelligent agent capable of assisting users in conducting AutoML tasks through intuitive, natural conversations without requiring in-depth knowledge of the underlying machine learning (ML) processes. This agent's key challenge is to accurately comprehend the user's prediction goals and, consequently, formulate precise ML tasks, adjust data sets and model parameters accordingly, and articulate results effectively. In this paper, we take a pioneering step towards this ambitious goal by introducing a ChatGPT-based conversational data-science framework to act as a "personal data scientist". Precisely, we utilize Large Language Models (ChatGPT) to build a natural interface between the users and the ML models (Scikit-Learn), which in turn, allows us to approach this ambitious problem with a realistic solution.
Our model pivots around four dialogue states: Data Visualization, Task Formulation, Prediction Engineering, and Result Summary and Recommendation. Each state marks a unique conversation phase, impacting the overall user-system interaction. Multiple LLM instances, serving as "micro-agents", ensure a cohesive conversation flow, granting us granular control over the conversation's progression. In summary, we developed an end-to-end system that not only proves the viability of the novel concept of conversational data science but also underscores the potency of LLMs in solving complex tasks. Interestingly, its development spotlighted several critical weaknesses in the current LLMs (ChatGPT) and highlighted substantial opportunities for improvement.
Automatic Machine Learning (AutoML) tools aim to make machine learning accessible for non-machine learning experts (domain experts), improve the efficiency of machine learning, and accelerate machine learning research. However, the current AutoML process still requires a staggering amount of human involvement at a number of vital steps, as shown in Figure 1. For example, a typical AutoML user would be expected to: 1) Deeply understand the data at their disposal, 2) Know how to create training/testing sets from their data, and 3) Select a promising machine learning technique suitable for their goals. But domain experts (experts in a particular domain other than machine learning working with big data) often lack these understandings and rely on someone well-versed in data science, e.g., a data scientist, to do these tasks [40]. These things often still require a prolonged
back-and-forth between the domain expert (end-user) and the data scientist. This makes the process rather inefficient for both parties involved and keeps so-called "AutoML systems" from being truly automatic [20].
The overall goal of this work is to streamline this lengthy back-and-forth process by making use of a conversational agent which will facilitate the democratization of data science across a wider range of audiences. By doing this, the AI system will be able to help guide users to express their analytics goals via natural conversation and, subsequently, translate the goal into a well-defined machine learning problem, which, once done, can be automatically executed and interpreted by existing AutoML solutions. Our proposed system, which we will henceforth refer to as VIDS - a "Virtual Interactive Data Scientist" - aims to be the first of its kind: a true AutoML pipeline and a generalized, well-versed data science assistant. This innovative solution will help lead to the establishment of a higher level of autonomy for AutoML systems, where end-to-end automation is achieved by interfacing large language models with existing AutoML solutions. To be more specific, our dialog-based system aspires to reach the apex of automation, echoing the concept of a Level 6 AutoML system as outlined by Karmaker et al. [20]. This high level of automation, enabled by a consistent, intuitive dialogue with the user, oversees the end-to-end machine learning process, from initial task formulation to comprehensive interpretation of results and subsequent recommendations.
Figure 1: Karmaker et al. [20]: A flowchart showing the machine learning process. This chart highlights points of interaction between domain experts and data scientists, along with bottlenecks. In this paper, we focus on automating three steps in the chat cycle with the largest communication bottleneck: Task Formulation (TF), Prediction Engineering (PE), and Result Summarization and Recommendation (RSR).
There's no denying the complexity of the task at hand--automating a technical and loosely structured dialogue while concurrently extracting essential information to formulate a cogent machine learning problem. Some critics might view this endeavor as overly ambitious or even unrealistic. However, with the advent of various large language models (LLMs)[7, 11, 31, 46, 58, 60], such as ChatGPT 1, this problem becomes demonstrably more feasible. These larger models have become very proficient in providing personalized guidance tailored to each user's specific context, ensuring that individual concerns are addressed and any unknown outcomes are effectively explained and interpreted - a level of personalized support that is challenging to achieve with traditional tools. As ChatGPT and similar LLMs continue to evolve, we foresee a future where these models are closely integrated with various industries & applications, helping to automate tasks, enhance decision-making processes, and assist users in exploring new avenues of innovation.
Footnote 1: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)
In this context, the potential of LLMs like ChatGPT extends to even more complex use cases, allowing users to intuitively express their needs and engage in meaningful conversations with an automated system. This affords the potential for creating seamless natural language interfaces for various complex systems, like the aforementioned conversational agent. If done well, this potential greatly simplifies the automation of interactions between the user and the system. By harnessing the potential of these LLMs, we aim to realize the full VIDS system, revolutionizing the way users interact with and benefit from data science & machine learning, thereby making these technologies available to a far broader audience. Potential use cases of VIDS become more compelling in dynamic situations where hands-free tools are essential, such as driving, cooking, or battlefield scenarios. This "natural conversation" solution allows users to interact with automated machine learning pipelines safely and effectively in these dynamic situations, as well as provides a more "human" way to interact with data at scale. Furthermore, the conversational aspect helps accommodate users who may not have complete knowledge of the underlying data and/or have limited access to it. The dialogue naturally helps users understand what tasks are feasible for what reasons and helps them make informed decisions when working with their data.
In summary, breakthroughs in NLP research and language understanding through LLMs, such as ChatGPT, have equipped us with viable technology to realize our ambitious goal of automating the machine learning pipeline through conversational data science. Our solution (VIDS) offers a communication interface that supports natural conversations, enhanced accessibility, personalized guidance, adaptability to dynamic situations, and accommodation for users with limited data knowledge. This innovative solution will empower individuals from diverse backgrounds to harness the power of advanced machine-learning pipelines with ease. As we move forward, these advancements open up the possibility of introducing a new paradigm that will utilize LLMs like ChatGPT to build a virtual interactive data scientist, revolutionizing the way users interact with and benefit from data science and machine learning.
## 2 Related Works
### Large Language Models
Large Language Models (LLMs) [7, 11, 31, 46, 58, 60] have been increasingly recognized as powerful tools in dialog systems. They are widely applied due to their ability to generate human-like text, understand complex language patterns, and provide contextually appropriate responses.
In the context of dialog systems, GPT-3, developed by OpenAI, has been a prominent example in recent literature [7]. It demonstrated significant improvements over its predecessors in terms
of fluency and context understanding. By leveraging a Transformer-based architecture, it's able to generate more coherent and contextually appropriate responses compared to earlier models [30, 37].
Another relevant research area is the application of LLMs for multi-turn dialogues. Here, models like DialoGPT have shown promising results in maintaining conversational context over extended interactions [61]. They operate by refining the previous response generation models to better maintain the context of the conversation, which significantly enhances the coherence and relevancy of their responses.
Fine-tuning of LLMs for specific domains or tasks within dialog systems is another active area of research. Several studies have focused on techniques such as prompt engineering, rule-based post-processing, or incorporating external knowledge into these models to increase their efficiency and accuracy [10, 54].
Recent works have also begun exploring the integration of LLMs into larger dialog system architectures. For example, studies on systems like HuggingGPT have examined how these models can be leveraged to act as a controller for other AI models [41].
However, despite the progress made, challenges remain in managing the complexity of multi-turn conversations, ensuring the consistency of responses, and mitigating the tendency of LLMs to generate implausible or "hallucinated" information [1, 49, 55, 59]. Therefore, further research is needed to optimize the use of LLMs in dialog systems.
### Dialog Systems
In Dialog Systems research, significant progress has been achieved through advancements in Conversation Topic Prediction [23] and Dialogue State Tracking (DST) [14, 15]. DST improvements involve a range of approaches, including schema guidance for better structure [9, 19, 20], recursive inference for deeper understanding [25], generalization and value normalization for more adaptability [50, 52], zero-shot transfer learning for data efficiency [8, 35, 40], and attention modulation for improved focus during inference [48]. Open-domain dialogue systems have also seen significant advancements. GODEL's [33] grounded pre-training adapts to diverse downstream tasks, FusedChat [57] combines task-oriented and open-domain dialogue for natural conversations, & ChatGPT further enhances conversational agent performance across various applications.
### AutoML Research
The ML community as well as the systems community have put a lot of effort in the past decade into automating different _Data Science_ pipelines. Major efforts towards automation include _Data Cleaning and visualization_, _Feature Engineering_, _Learning and Parameter Tuning_, _Alternative Models Exploration, Testing and Validation_.
* **Data Cleaning and visualization:** This step involves identifying relevant data, handling missing values, "_joining_" multiple data sets, and creating visualizations for improving the quality of the data set. The _Data Mining_ and _Databases_ community has spent significant effort to automate this step, which has been nicely summarized in [17] and [12].
* **Feature Engineering:** a Data Scientist would attempt to construct useful (informative) features from raw data. Later, these features can be directly fed to ML models to train them and make predictions. In the past 5 years, a number of efforts have focused on automating "_Feature engineering_" ([18, 21, 22, 24, 29, 47, 51]).
* **Learning and Parameter Tuning:** These include basic machine learning techniques like decision trees, support vector machines, linear regression, neural networks, etc. which have current implementations like scikit-learn [32], weka [53] etc. Machine learning models often contain multiple hyperparameters whose values are critical to obtaining good performance. Automation efforts for hyperparameter tuning include [3, 42, 44, 16, 28, 4, 3, 6].
* **Alternative Models Exploration, Testing, and Validation:** Automating the process of selecting models, validating them, and finalizing them is critical to the large-scale deployment of ML models. Major automation efforts in this direction include [2, 13, 26, 27, 34, 36, 43, 45, 56, 62, 63]. However, it is evident that both communities have been reluctant to automate two of the most crucial tasks: _Task Formulation_ and _Prediction Engineering_. One particular reason for such reluctance may be attributed to the human-centric nature of these problems. Indeed, both tasks demand significant human interaction during the process and an interactive dialog with the user is necessary to automate this process.
## 3 Model Architecture
This section delves into pioneering methodology of VIDS, illuminating the intricate interplay between overarching structures and localized nuances. Central to our model are four distinct dialogue states - Data Visualization, Task Formulation, Prediction Engineering, and Result Summary and Recommendation, with each representing a unique phase in the conversation and contributing significantly to the overall user-system interaction. VIDS employs stateless global micro-agents, functioning independently of any state-related data or history, to create an overarching structure that enables fluid transitions throughout the dialogue, irrespective of the specific state. This stateless design ensures a smooth narrative flow and avoids complications of state-dependent biases or entanglements, thus bolstering the versatility and adaptability of our dialogue system. Alongside these global agents, local micro-agents, each tailored to a specific dialogue state, proficiently handle the nuances of user utterances and conversation contexts, facilitating smooth transitions between states in line with the evolving dialogue. VIDS' strength lies in this symbiotic relationship between the global and local micro-agents across the different dialogue states. Through this state-oriented, multi-layered approach, we aim to provide a dynamic, user-friendly, and efficient conversation experience, facilitating a streamlined process for automating the machine learning pipeline and fostering improved interaction between users and data science tools.
### Global Micro-agents
#### 3.1.1 State Detector:
The dialog state is a fundamental element, essential in identifying the current phase of the conversation. Its primary task is to ascertain if the user wishes to transition to the next state in the conversation. As shown in Figure 2, VIDS integrates a variety of well-defined states, each corresponding to the different stages of a conversation. The initial state is "data visualization", which centers around the presentation of data in a comprehensible and approachable manner. This transitions into the "task formulation" state, wherein the focus shifts to defining and structuring the task or problem the user wishes to address. Following this, the system moves into the "prediction engineering" state. In this phase, the system focuses on constructing and implementing predictive models that are based on the tasks as defined in the previous stage. Finally, the conversation arrives at the "result summarization and recommendation" state. Here, the system offers a succinct summary of the results, coupled with relevant recommendations based on the outcomes.
The system, considering the immediate context, the current dialog state, and the user's utterance, dynamically determines the user's intent. With this information, the micro-agent decides whether the user wants to proceed to the next state of the dialog. This approach ensures a smooth flow of conversation, accurately aligning with the user's needs and objectives while offering a user-friendly and engaging experience. The system's design, thus, focuses not only on addressing the user's needs, but also on enriching their interaction with the system. Table 1 presents the unified prompt design employed to guide ChatGPT to correctly identify the current state of the conversation and the intent of the user.
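A sketch of how this micro-agent can be driven programmatically is given below. The call_llm stub stands in for the actual chat-completion call to ChatGPT, and the prompt string paraphrases the directive of Table 1; the JSON keys follow the example responses shown there.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for the chat-completion call used by VIDS; returns the raw model text.
    Replace this stub with a real API call in an actual deployment."""
    return '{"intent": "Select problem", "current_state": "dataset_understanding", "next_state": "problem_selection"}'

def detect_state(context: str, state: str, utterance: str) -> dict:
    """State Detector micro-agent: build a Table 1-style prompt and parse the JSON reply."""
    prompt = (
        f"Taking into account the given context {{{context}}}, "
        f"the conversation state {{{state}}} and the utterance {{{utterance}}}, "
        "identify my current intent and next state of conversation. "
        "Respond only in JSON with the keys intent, current_state and next_state."
    )
    reply = call_llm(prompt)
    try:
        return json.loads(reply)
    except json.JSONDecodeError:
        # LLM output is not guaranteed to be valid JSON; fall back to staying in the same state.
        return {"intent": "Chitchat", "current_state": state, "next_state": state}

if __name__ == "__main__":
    print(detect_state("user is exploring the flights dataset",
                       "dataset_understanding",
                       "I want to predict if a flight will be delayed or not"))
```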
Figure 2: The state diagram of the dialog system. The gray boxes are the different states of the conversation, the dark yellow boxes are global micro-agents, and the purple boxes are local micro-agents.
#### 3.1.2 **Dialogue Summarizer:**
This micro-agent generates concise summaries of the ongoing conversation, enabling effective communication between different micro-agents. By considering the latest user utterance, previous conversation history, and the current response from a micro-agent, this component creates a new dialogue summary that maintains coherence and context throughout the conversation. Table 2 presents the unified prompt design employed to guide ChatGPT to summarize interactions between the user and VIDS.
\begin{table}
\begin{tabular}{p{34.1pt} p{284.5pt}} \hline \hline \multicolumn{2}{c}{**Prompt Design**} \\ \hline User & Taking into account the given context [ In this dialogue, the AI assistant provided information on suitable machine learning tasks for three different datasets: airlines, flights, and airports. For the flights dataset, the assistant suggested that classification and regression would be suitable tasks. Classification could be used to predict flight delays or cancellations, while regression could be used to predict the amount of delay. The user expressed interest to know more about the dataset.], the conversation state { dataset_understanding } the utterance { What details are included in the flight delay dataset? }, identify my current intent and next state of conversation. Please remember to only response in following format predefined json format without any additional information. Carefully examine the utterance and think about how the context might influence the current utterance, leading you to determine my present intent and next state. \\ \hline ChatGPT & [“intent”: “Get dataset info”, “current_state”: “dataset_understanding”, “next_state”: “dataset_understanding”] \\ \hline User & Taking into account the given context { In this dialogue, the AI assistant provided information on suitable machine learning tasks for three different datasets: airlines, flights, and airports. For the flights dataset, the assistant suggested that classification and regression would be suitable tasks. Classification could be used to predict flight delays or cancellations, while regression could be used to predict the amount of delay. The user expressed interest in the flights dataset and asked if it could be formulated as a time series problem, but the assistant did not provide a response to this question. }, the conversation state { dataset_understanding } the utterance { I want to predict if a flight will be delayed or not }, identify my current intent and next state of conversation. Please remember to only response in following format predefined json format without any additional information. Carefully examine the utterance and think about how the context might influence the current utterance, leading you to determine my present intent and next state. \\ \hline ChatGPT & [“intent”: “Select problem”, “current_state”: “dataset_understanding”, “next_state”: “problem_selection”] \\ \hline \multicolumn{2}{p{34.1pt}}{**Directive**} \\ \hline Taking into account the given context {context}, the conversation state { conversation state } the utterance {user input}, identify my current intent and next state of conversation. Please remember to only response in following format predefined json format without any additional information. Carefully examine the utterance and think about how the context might influence the current utterance, leading you to determine my present intent and next state. \\ \hline \hline \end{tabular}
\end{table}
Table 1: The details of prompt design for the Intent and State Detector micro-agent. In the prompt, the {context}, {conversation state}, and {user input} are placeholders which will be replaced dynamically at different stages of the conversation.
#### 3.1.3 **Conversation Manager:**
The conversation management micro-agent integrates input from the appropriate micro-agents to create a coherent, overarching dialogue. This component ensures a seamless user experience and effective task execution by maintaining the dialogue's structure and context throughout the conversation. Table 3 presents the unified prompt design employed to guide ChatGPT.
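The sketch below illustrates one plausible way to wire the global and local micro-agents together for a single conversation turn; the state names, function signatures, and toy stand-ins are assumptions for illustration rather than the exact VIDS implementation.

```python
def conversation_turn(utterance, state, summary, micro_agents, manager, summarizer):
    """One VIDS turn: detect intent/state, run the relevant local micro-agent,
    let the Conversation Manager compose the reply, then refresh the summary."""
    detected = micro_agents["state_detector"](summary, state, utterance)
    next_state = detected["next_state"]

    local_agent = micro_agents[next_state]            # local micro-agent for the new state
    agent_response = local_agent(summary, utterance)

    reply = manager(summary, next_state, utterance, detected["intent"], agent_response)
    new_summary = summarizer(summary, utterance, reply)
    return reply, next_state, new_summary

if __name__ == "__main__":
    # Toy stand-ins so the control flow can be exercised without a live LLM.
    agents = {
        "state_detector": lambda s, st, u: {"intent": "Select problem",
                                            "current_state": st,
                                            "next_state": "task_formulation"},
        "task_formulation": lambda s, u: "Suggested task: classify whether a flight is delayed.",
    }
    manager = lambda s, st, u, intent, resp: f"[{st} | {intent}] {resp}"
    summarizer = lambda s, u, r: (s + " | " + u + " -> " + r).strip(" |")

    reply, state, summary = conversation_turn(
        "I want to predict if a flight will be delayed or not",
        "data_visualization", "", agents, manager, summarizer)
    print(reply)
```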
\begin{table}
\begin{tabular}{l} \hline \hline
**Prompt** \\ \hline \hline \multicolumn{2}{l}{Given the dialog between user and assistant, the AI assistant summarizes the dialog summary. The AI agent should not leave out any crucial information. The goal of this summary generation is not being precise, rather the goal should be to contain all crucial information. if the previous dialog is empty then you should return the current user utterance. \\ \hline \multicolumn{2}{l}{**Directive**} \\ \hline \multicolumn{2}{l}{Summarize the following dialog. You should not exclude any important information. [history]} \\ \hline \hline \end{tabular}
\end{table}
Table 2: The details of prompt design for the Dialogue Summarizer microprocess. In the prompt, the {history} is a placeholder which will be replaced dynamically during the conversation.
\begin{table}
\begin{tabular}{l} \hline \hline
**Prompt Design** \\ \hline \hline \multicolumn{2}{l}{**System setup**} \\ \hline The AI assistant serves as a virtual data scientist, designed to engage with users and comprehend their objectives. The purpose of this interaction is to develop a machine learning task tailored to the user’s data. To achieve this, the assistant will collaborate with various micro agents, each performing specialized tasks to support the primary agent. The assistant will receive context, utterances, dataset summaries, and micro agent responses as input, and should aim to steer the conversation towards the goal. The following micro agents will aid the assistant, providing their output as input to the AI agent for further processing and integration. Depending on the current conversation state, different micro agents will be activated to provide their respective responses: \\ Intent Detector: Identifies the user’s intent from a list including ‘Get dataset info’, ‘Get dataset trend’, ‘Select problem’, ‘Formulate problem’, ‘Problem execution’, and ‘Chitchat’. The detected intent will be used to determine the direction of the conversation. \\ State Selector: Determines the conversation state, choosing from “data_visualization”, “task_selection”, “task_formulation”, or “task_execution”. The chosen state helps the AI agent to adapt its responses and maintain a coherent discussion flow. \\ Task Selector: Selects an appropriate ML task from options such as “classification”, “regression”, “clustering”, “dimensionality reduction”, “anomaly detection”, and “time series”. The selected task guides the AI agent in suggesting relevant solutions to the user. \\ Task Formulator: Constructs the ML task by utilizing a slot-value filling process. The formulated problem, complete with specified parameters, is then provided to the AI agent, which can assist the user in refining or executing the task. \\ \hline \multicolumn{2}{l}{**Directive**} \\ \hline Taking into account the given context [context], the conversation state {state} the utterance {input}, current intent {intent} and the response from the {micoprocess} microprocess {mp_resp}, provide appropriate response to the user to carry the conversation to its goal which is formulating a ML task based on user demands. \\ \hline \multicolumn{2}{l}{Table 3: The details of prompt design for the Conversation Manager microprocess. In the prompt, {state}, {input}, {micoprocess}, and {mp_resp} are placeholders which will be replaced dynamically during the conversation.} \\ \hline \hline \end{tabular}
\end{table}
Table 3: The details of prompt design for the Conversation Manager microprocess. In the prompt, {state}, {input}, {micoprocess}, and {mp_resp} are placeholders which will be replaced dynamically during the conversation.
### Data Visualization
The interaction pathway of VIDS commences with the Data Visualization stage. Here, users are presented with the option to upload their dataset or choose from an array of pre-existing demonstration datasets. This flexibility fosters an environment of exploration and discovery, enabling users to engage with datasets that align with their specific interests and requirements.
Once a dataset is selected, VIDS embarks on a two-step process to unlock valuable insights from the data. Initially, the system generates a condensed version of the dataset, a maneuver designed to optimize computational resources and streamline subsequent processing. The next step leverages the power of ChatGPT, guided by finely-tuned prompts, to dive deep into the dataset and extract a wealth of insights.
These insights, extracted via the Dataset Summarizer micro-agent, offer users a comprehensive understanding of the dataset, including its overall structure, individual row and column descriptions, and potential visualization ideas. Simultaneously, the Task Suggestor micro-agent analyzes the dataset summary to propose suitable Machine Learning tasks. These interconnected micro-agents ensure a seamless and informative exploration of the dataset, setting the stage for the next phase of interaction.
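One simple way to realize the condensed version of the dataset mentioned above is sketched below with pandas: only the schema, a few sample rows, and summary statistics are kept, so the text passed to ChatGPT stays within the prompt budget. The exact reduction used by VIDS is not specified here, so this is an illustrative assumption.

```python
import pandas as pd

def condense_dataset(df: pd.DataFrame, n_rows: int = 5) -> str:
    """Build a small, prompt-friendly text description of a (possibly large) dataset."""
    parts = [
        f"shape: {df.shape[0]} rows x {df.shape[1]} columns",
        "columns and dtypes:\n" + df.dtypes.to_string(),
        f"first {n_rows} rows:\n" + df.head(n_rows).to_string(index=False),
        "numeric summary:\n" + df.describe().to_string(),
    ]
    return "\n\n".join(parts)

if __name__ == "__main__":
    flights = pd.DataFrame({
        "carrier": ["AA", "DL", "UA", "AA"],
        "dep_delay_min": [5, 32, 0, 110],
        "distance_km": [950, 1200, 600, 950],
    })
    # This condensed text is what the Dataset Summarizer prompt receives as {dataset}.
    print(condense_dataset(flights))
```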
#### 3.2.1 **Dataset Summarizer micro-agent:**
The Dataset Summarizer micro-agent functions as the heart of the Data Visualization stage. Utilizing a precisely designed prompt, it delves into the reduced version of the dataset, extracting a range of insights that provide users with a comprehensive understanding of the dataset's content, structure, and potential applications. The unified prompt design, presented in Table 4, guides ChatGPT in this extraction process to ensure the data analysis is thorough and user-friendly.
#### 3.2.2 **Task Suggestor micro-agent:**
The Task Suggestor micro-agent complements the Dataset Summarizer by proposing suitable Machine Learning tasks based on the dataset summary. This micro-agent employs a unified prompt design, as illustrated in Table 5, to guide ChatGPT in generating effective task suggestions. This task suggestion capability enriches the Data Visualization stage, effectively laying the groundwork for the subsequent Task Formulation stage.
**Table 4: Prompt design for the Dataset Summarizer microprocess.** In the prompt, {dataset} is a placeholder which is replaced by a miniature version of the user-provided dataset.

**System setup.** You are an AI agent who will provide a comprehensive summary of a given dataset. Your task is to provide a comprehensive summary of a given dataset in a strict "JSON" format. The summary MUST include the following information:

1. dataset summary: the summary of the given dataset in natural language
2. column: it will list all columns and give a brief description of each column
3. row: the AI agent will select a row at random and describe what the row means in natural language
4. trend: in natural language the AI agent will write the trends that can be found from the given dataset.

The response should be in a strict JSON format as follows: {"summary": "...", "columns": [{"name": "col1", "description": "..."}, {"name": "col2", "description": "..."}], "row": "description of a random row", "trend": "..."}. Please make sure to provide clear and concise descriptions in natural language to facilitate understanding for non-technical users.

**Directive.** Please provide a comprehensive summary of the given dataset. The response MUST be in JSON format and NOTHING ELSE. Use the following dataset: {dataset}.
### Task Formulation
Following the Data Visualization stage, VIDS proceeds to Task Formulation. This section is broken down into two interconnected components: Task Selection and PeTEL Construction, each managed by specialized micro-agents to ensure a thorough and user-oriented formulation of the machine learning task.
#### 3.3.1 **Task Selection:**
Task Selection is the cornerstone of defining the machine learning task. Drawing from the dataset summary and user objectives, this step generates suitable ML tasks for the user to consider. Users have the freedom to select from the suggested tasks or propose an alternative from a pool of common tasks such as "classification", "regression", "clustering", and more. Throughout the dialogue, the system iteratively refines the user's understanding and requirements until a task is selected. The Task Selection micro-agent (detailed in Section 3.3.3) manages this exchange, guiding the user and ensuring the chosen task aligns with their dataset and objectives. The conversation continues until the user is confident in their task choice, promoting effective problem-solving and better outcomes.
#### 3.3.2 **PeTEL Construction:**
Following task selection, the system employs the Prediction Task Expression Language (PeTEL) [20] for concise representation of the selected machine learning task. PeTEL uses slot-value pairs to encapsulate the task's essential components, presenting a precise yet comprehensible task description. A complete PeTEL includes the task's desired outcome and search parameters, offering a clear directive for the subsequent ML task.
The PeTEL Construction micro-agent group (detailed from Section 3.3.4 to Section 3.3.6) assists in populating necessary values for PeTEL slots based on the chosen ML task. This iterative process guarantees an accurate representation of user requirements, leading to superior results.
The PeTEL Construction concludes with a comprehensive task representation that is user-specific and efficient for further processing. A sample populated PeTEL, demonstrating the iterative process of filling out the different components, is available in Listing 1.
```
{
  problem_type: classification,
  target_variable: delay_severity,
  features: [departure_airport, arrival_airport, airline, scheduled_departure_time, scheduled_arrival_time, weather_conditions],
  dataset_size: 10000/Default,
  performance_metrics: [accuracy, precision, recall, f1_score, confusion_matrix],
  validation_method: cross_validation,
  classification_methods: [logistic_regression, decision_tree_classifier, random_forest_classifier, svm_classifier, knn_classifier, xgboost_classifier, naive_bayes],
  data_filters: [
    {column: delay_duration, condition: greater_than, value: 15},
    {column: departure_airport, condition: equals, value: JFK}
  ],
  business_goals: [reduce customer complaints, optimize scheduling, improve airport operations],
  additional_requirements: [robust to outliers, handle class imbalance],
  model_preferences: interpretable
}
```

Listing 1: Sample populated PeTEL for a classification task based on the FlightDelay dataset (one of our demo datasets).

**Table 5: Prompt design for the Task Suggestor, a sub-process of the Dataset Summarizer microprocess.** In the prompt, {summary} is a placeholder which is replaced by the dataset summary of the user-provided dataset.

**System setup.** The AI agent must analyze the provided dataset summary and recommend appropriate machine learning (ML) tasks. Based on the summary, column descriptions, row information, and any observed trends, the agent should suggest at least two suitable ML tasks from the following task list: ["classification", "regression", "clustering", "dimensionality reduction", "anomaly detection", "time series"]. For each ML task the agent chooses, a clear rationale must be provided, which may include an explanation of why the chosen task aligns with the dataset and a concrete example of how the task can be formulated.

**Directive.** Suggest ML tasks based on the following dataset summary: {summary}
#### 3.3.3 **Task Selector micro-agent:**
The task selection micro-agent guides users through a conversation to identify an appropriate machine learning problem from a pool of available options, while also assisting them in choosing a suitable model tailored to their needs. By understanding their requirements and considering the dataset's characteristics, user's objectives, and the dialog context, the assistant is capable of selecting from an array of model types, such as "classification", "regression", "clustering", "dimensionality reduction", "anomaly detection", etc. This micro-agent facilitates user engagement and ensures the chosen problem and model align with the dataset's properties and the user's goals, offering personalized recommendations that seamlessly integrate into the micro-agent framework. Table 6 presents the unified prompt design employed to guide ChatGPT to select an appropriate ML task from the conversation summary until the user is fixated on a ML task.
**Table 6: Prompt design for the Task Selector microprocess.** In the prompt, [context] and [input] are placeholders which are replaced dynamically during the conversation.

**System setup.** The AI assistant is designed to comprehend the user's needs through conversation and assist them in selecting a suitable machine learning model for formulating a Machine Learning problem. The assistant must choose the appropriate model from the provided list: ["classification", "regression", "clustering", "dimensionality reduction", "anomaly detection"]. The assistant should consider the user's problem, requirements, and dataset, which may be found in the dialog context, to recommend the best model tailored to their specific needs.

**Demonstration.**

User: I want to predict whether a flight will be delayed or not based on factors like weather conditions and previous delays.

ChatGPT: {'model': 'classification', 'reason': 'A classification model can be used to categorize flights as delayed or not delayed based on the input features, such as weather conditions and previous delays.'}

User: I need to find groups of flights with similar delay patterns, considering variables like departure airport, airline, and time of day.

ChatGPT: {'model': 'clustering', 'reason': 'A clustering model can help identify groups of flights with similar delay patterns by analyzing variables like departure airport, airline, and time of day, without requiring labeled data.'}

**Directive.** Please choose the most suitable model given the conversation context: [context] and my latest utterance [input]. The output must be in a strict JSON format: {"model": "model name", "reason": "your detail reasons for the choice"}.
#### 3.3.4 **Seeker micro-agent:**
The Seeker micro-agent, part of the PeTEL Construction micro-agent group, converses with the user to populate the next slot in the PeTEL representation. It effectively guides the user through each unfilled slot, ensuring a complete and accurate task formulation. Table 7 presents the unified prompt design employed to guide ChatGPT for asking questions about a specific unfilled slot from the PeTEL expression effectively.
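The bookkeeping behind the Seeker can be pictured as a search for the next unfilled slot in the PeTEL dictionary, as in the hypothetical sketch below; the nested handling of `data_filters` is an assumption about how partially filled filter entries are treated.

```python
# Sketch of the bookkeeping behind the Seeker micro-agent: locate the next
# PeTEL slot that is still unfilled so a targeted question can be asked.
# The PeTEL is held as a plain dict; slot names follow Listing 1.

def next_unfilled_slot(petel: dict):
    for slot, value in petel.items():
        if value is None:
            return slot
        if isinstance(value, list) and any(
            isinstance(v, dict) and None in v.values() for v in value
        ):
            return slot  # e.g. a data_filters entry with empty column/condition/value
    return None  # every slot is filled; formulation is complete

# The returned slot name is then spliced into the Seeker prompt of Table 7.
```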
**Table 7: Prompt design for the Seeker microprocess.** In the system setup, {dataset_summary} is a placeholder for the current dataset summary the user is working with. In the prompt, {petel} and {context} are placeholders which are replaced dynamically during the conversation.

**Demonstration.**

User: In order to obtain information about the next unidentified slot in the given JSON object

```
{
  problem_type: time_series_forecasting,
  target_variable: null,
  forecast_horizon: null,
  business_goals: null,
  granularity: null,
  features: null,
  time_range: null,
  model_preferences: null,
  performance_metrics: null,
  validation_method: null,
  additional_requirements: null,
  time_series_methods: null,
  data_filters: [
    {column: null, condition: null, value: null},
    {column: null, condition: null, value: null}
  ]
}
```

the agent should ask the user a clear and specific question. Considering the provided context (User wants to formulate a time series forecasting problem by using the flight delay dataset. He is interacting with the agent and the agent is now asking questions to the user to formulate the problem), please formulate a suitable query to gather the necessary data for the missing slot.

ChatGPT: Based on the provided context and the current state of the JSON object, the next unidentified slot is the **target_variables** slot. What specific variable are you trying to forecast using the flight delay dataset? This is also known as the target variable. For example, are you trying to forecast the number of flights that will be delayed, or the total duration of delays in minutes?

**Directive.** In order to obtain information about the next unidentified slot in the given JSON object ([petel]), the agent should ask the user a clear and specific question. Considering the provided context ([context]), please formulate a suitable query to gather the necessary data for the missing slot.

#### 3.3.5 **Feeder micro-agent:**

The Feeder micro-agent, another part of the PeTEL Construction micro-agent group, populates the PeTEL representation with user responses. It ensures the user's utterance is correctly interpreted and integrated into the problem formulation. Table 8 presents the unified prompt design employed to guide ChatGPT to fill the PeTEL expression's unfilled slots from the user interactions effectively.
**Table 8: Prompt design for the Feeder microprocess.** In the prompt, {input} and {petel} are placeholders which are replaced dynamically during the conversation.

**System setup.** The AI assistant serves as a virtual data scientist, striving to complete a given JSON object by incorporating missing values through user input. Update the JSON object with the relevant unknown slot, considering that the user may also wish to modify existing filled slots. Ensure that the output is strictly in JSON format. If it is not possible to update the JSON object, return the original input JSON object.

**Demonstration.**

User: Using the provided information (I'd like to forecast the total number of flight delays.), please update the appropriate slot in the given JSON object: { problem_type: time_series_forecasting, target_variable: null, forecast_horizon: null, business_goals: null, granularity: null, features: null, time_range: null, model_preferences: null, performance_metrics: null, validation_method: null, additional_requirements: null, time_series_methods: null, data_filters: [ {column: null, condition: null, value: null}, {column: null, condition: null, value: null} ] }. Ensure that you fill only the slot that is directly related to this information. Your response should consist solely of the updated JSON file.

ChatGPT: { problem_type: time_series_forecasting, target_variables: [flight_delays], forecast_horizon: null, business_goals: null, granularity: null, features: null, time_range: null, model_preferences: null, performance_metrics: null, validation_method: null, additional_requirements: null, time_series_methods: null, data_filters: [ {column: null, condition: null, value: null}, {column: null, condition: null, value: null} ] }

User: Using the provided information (for time horizon I would like to use one month I guess.), please update the appropriate slot in the given JSON object: { problem_type: time_series_forecasting, target_variables: [flight_delays], forecast_horizon: null, business_goals: null, granularity: null, features: null, time_range: null, model_preferences: null, performance_metrics: null, validation_method: null, additional_requirements: null, time_series_methods: null, data_filters: [ {column: null, condition: null, value: null}, {column: null, condition: null, value: null} ] }
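The Feeder's fallback rule, return the original JSON object when the reply cannot be used, can be wrapped around the LLM response as in the sketch below; the merge policy shown is an illustrative assumption, not the exact VIDS implementation.

```python
import json

def apply_feeder_reply(petel: dict, llm_reply: str) -> dict:
    """Merge the Feeder's strict-JSON reply into the current PeTEL.

    Mirrors the fallback rule in the Feeder system setup: if the reply cannot
    be parsed as JSON, the original PeTEL is returned unchanged.
    """
    try:
        updated = json.loads(llm_reply)
    except json.JSONDecodeError:
        return petel
    # Only overwrite slots the reply actually filled, so user corrections to
    # existing slots are kept and untouched slots stay as they were.
    merged = dict(petel)
    for slot, value in updated.items():
        if value is not None:
            merged[slot] = value
    return merged
```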
#### 3.3.6 PeTEL Descriptor micro-agent:
The PeTEL Descriptor micro-agent, the final part of the PeTEL Construction micro-agent group, communicates the filled PeTEL expression to the user in natural language. It ensures the user fully understands the formulated task, reinforcing the user-centric approach of VIDS. Table 9 presents the unified prompt design employed to guide ChatGPT.
**Table 9: Prompt design for the PeTEL Descriptor microprocess.** In the prompt, {petel} is a placeholder which is replaced by a fully filled PeTEL expression.

**System setup.** As the AI agent, your primary objective is to carefully examine the problem illustrated within the json object and then develop a comprehensive, yet clear, natural language description. This explanation should effectively communicate the underlying issue to the reader, ensuring that they can grasp the key aspects and implications of the problem at hand. By providing an elaborative and accessible interpretation, the AI agent will enable a better understanding of the situation for individuals who may not be familiar with the json format or the specific context of the issue.

**Demonstration.**

User: Please provide a clear and concise natural language description of the problem formulation presented here:

```
{
  problem_type: classification,
  target_variable: delay_severity,
  features: [departure_airport, arrival_airport, airline, scheduled_departure_time, scheduled_arrival_time, weather_conditions],
  dataset_size: 10000/Default,
  performance_metrics: [accuracy, precision, recall, f1_score, confusion_matrix],
  validation_method: cross_validation,
  classification_methods: [logistic_regression, decision_tree_classifier, random_forest_classifier, svm_classifier, knn_classifier, xgboost_classifier, naive_bayes],
  data_filters: [
    {column: delay_duration, condition: greater_than, value: 15},
    {column: departure_airport, condition: equals, value: JFK}
  ],
  business_goals: [reduce customer complaints, optimize scheduling, improve airport operations],
  additional_requirements: [robust to outliers, handle class imbalance],
  model_preferences: interpretable
}
```

ChatGPT: This is a time series forecasting problem that aims to predict the total delay duration of flights for the next month. The prediction will be based on various features such as departure and arrival airports, airline, weather conditions, and delay duration. The model should be interpretable and capable of handling seasonality, trends, and nonlinear relationships. The problem also aims to understand the impact of weather on flight delays and to forecast delay duration per airline. The dataset is at hourly granularity and covers the past three years. The performance metrics used to evaluate the model include mean squared error, mean absolute error, and R2 score. The model will be validated using time series cross-validation. The data filters used in this problem include only flights departing from JFK airport and operated by Delta airlines.

**Directive.** Please provide a clear and concise natural language description of the problem formulation presented here: {petel}.
### Prediction Engineering
Following Task Formulation, the journey progresses to Prediction Engineering, a fundamental stage where the system transforms the problem representation into a tangible prediction model. This phase is composed of three primary steps: PeTEL to Feature, Data Cleaning and Preparation, and AutoML interfacing. Each step is crucial in bridging the gap between the problem's conceptual representation and its practical implementation.
#### 3.4.1 PeTEL to Attribute Converter:
The PeTEL to Feature conversion is the first step in the Prediction Engineering process. Here, the PeTEL representation, which succinctly describes the machine learning task, is translated into features that can be used by the prediction model. This process ensures that the machine learning algorithms can interpret and work with the problem definition, turning the abstract task representation into concrete, computable features.
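A hypothetical sketch of this conversion for the classification PeTEL of Listing 1 is given below; the operator table and function name are illustrative, and a real converter would need to cover every condition PeTEL allows.

```python
import pandas as pd

# Hypothetical illustration of the PeTEL-to-Attribute step: keep the declared
# features/target and apply the data_filters.

_OPS = {
    "greater_than": lambda s, v: s > v,
    "less_than": lambda s, v: s < v,
    "equals": lambda s, v: s == v,
}

def petel_to_frame(df: pd.DataFrame, petel: dict):
    for f in petel.get("data_filters", []):
        if f.get("column") in df.columns and f.get("condition") in _OPS:
            df = df[_OPS[f["condition"]](df[f["column"]], f["value"])]
    X = df[petel["features"]]          # columns the PeTEL names as features
    y = df[petel["target_variable"]]   # prediction target
    return X, y
```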
#### 3.4.2 Data Prepper Micro-Agent:
Once the features are defined, the next step is Data Cleaning and Preparation. This stage involves pre-processing the dataset to ensure it's suitable for the prediction model. Common procedures during this phase include handling missing data, dealing with outliers, and encoding categorical variables. The goal is to produce a clean, well-structured dataset that can be readily consumed by downstream machine learning algorithms, maximizing the potential for accurate and meaningful predictions.
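The sketch below illustrates one plausible cleaning pass with pandas (imputation, percentile clipping, one-hot encoding); the specific thresholds are assumptions, and in practice the rules would be driven by the PeTEL's additional_requirements.

```python
import pandas as pd

def prepare(X: pd.DataFrame) -> pd.DataFrame:
    """Illustrative cleaning pass: impute, clip outliers, one-hot encode."""
    X = X.copy()
    num = X.select_dtypes(include="number").columns
    cat = X.columns.difference(num)

    # Missing values: median for numeric, mode for categorical columns.
    X[num] = X[num].fillna(X[num].median())
    for c in cat:
        X[c] = X[c].fillna(X[c].mode().iloc[0] if not X[c].mode().empty else "missing")

    # Mild outlier handling: clip numeric columns to the 1st-99th percentile.
    X[num] = X[num].clip(X[num].quantile(0.01), X[num].quantile(0.99), axis=1)

    # Categorical variables become one-hot indicator columns.
    return pd.get_dummies(X, columns=list(cat))
```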
#### 3.4.3 AutoML Interfacer Micro-Agent:
The final step in the Prediction Engineering phase is interfacing with AutoML systems. AutoML platforms automate the process of applying machine learning to real-world problems, making the technology accessible to non-experts and improving efficiency of experts. In this step, the prepared dataset is fed into an AutoML system, which automatically selects the most suitable machine learning algorithm, optimizes its parameters, and trains the model. The result is a robust prediction model that is ready to generate insights from new data, bringing the conceptual machine learning task to fruition.
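As a sketch of this interfacing step, the snippet below hands the prepared data to Auto-SKLearn; the time budgets are arbitrary example values, and the train/test split is an assumption rather than the validation method a given PeTEL would specify.

```python
from sklearn.model_selection import train_test_split
from autosklearn.classification import AutoSklearnClassifier  # assumes auto-sklearn is installed

def run_automl(X, y, minutes=10):
    # Hold out a test split; the PeTEL's validation_method could replace this.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    automl = AutoSklearnClassifier(
        time_left_for_this_task=minutes * 60,  # overall search budget (seconds)
        per_run_time_limit=60,                 # cap on any single candidate fit
    )
    automl.fit(X_tr, y_tr)
    return automl, automl.score(X_te, y_te)   # ensemble accuracy on held-out data
```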
### Result Summary and Recommendation
A data scientist's work typically culminates in consolidating any findings and suggesting optimal approaches to domain experts. These recommendations can span diverse levels, such as models, features, or computational overhead. However, this crucial stage is primarily manual and lacks systematic structuring in the current landscape. In response to this, we aim to enhance and refine the final phase of VIDS, known as the Result Summary and Recommendation, in upcoming iterations. We anticipate incorporating two primary processes within this phase: Result Summarization and Result Visualization. These proposed enhancements aim to bolster users' comprehension and capacity to make informed decisions, thereby streamlining the intricate process of data science.
#### 3.5.1 Result Summarizer Micro-Agent:
Currently, we have implemented the Result Summarization micro-agent, where the system produces a comprehensive summary of the findings once the machine learning tasks have been executed. Utilizing an AutoML library such as Auto-SKLearn, the system trains all specified models, equipping users with a broad comparison to discern the most effective solution. This process distills the results into an accessible format, enabling users to grasp the essence of the findings quickly.
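A simplified sketch of such a comparison, using scikit-learn cross-validation over the methods named in a PeTEL, is shown below; the estimator mapping and the macro-F1 scoring are illustrative assumptions, and an actual run would honour the PeTEL's performance_metrics and validation_method.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical mapping from PeTEL method names to estimators; only a subset
# of the names appearing in Listing 1 is shown.
CANDIDATES = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree_classifier": DecisionTreeClassifier(),
    "random_forest_classifier": RandomForestClassifier(),
}

def summarize_results(X, y, petel: dict) -> str:
    rows = []
    for name in petel.get("classification_methods", []):
        if name not in CANDIDATES:
            continue
        scores = cross_val_score(CANDIDATES[name], X, y, cv=5, scoring="f1_macro")
        rows.append((scores.mean(), name))
    if not rows:
        return "No candidate model from the PeTEL could be evaluated."
    rows.sort(reverse=True)
    report = "\n".join(f"{name}: mean macro-F1 = {score:.3f}" for score, name in rows)
    best_score, best_name = rows[0]
    return f"{report}\nRecommended model: {best_name} ({best_score:.3f} macro-F1)."
```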
#### 3.5.2 **Result Visualizer Micro-Agent (Future Work):**

Looking forward, we aim to implement the Result Visualization micro-agent. Visualizing the outcomes can significantly aid users' understanding and facilitate more informed decision-making. We plan to develop a process that generates suitable visualizations based on the results, such as performance metrics or feature importance, offering a more intuitive perspective of the findings.
Additionally, we aspire to further optimize the decision-making process, by having the system engage in an interactive dialogue with the user, recommending the most suitable model based on the conversation. This personalized approach would empower users to make informed decisions, streamline the process, and enhance the effectiveness of their machine learning endeavors.
## 4 Qualitative Examples
The purpose of this section is to conduct a thorough investigation of the qualitative aspects of Large Language Models (LLMs) in complex tasks, with a specific focus on three key areas. This study will use the Student Performance (SP)2 dataset. First, we will examine the overall chat cycle, which involves the interactions between the user and VIDS, consisting of well-designed assemblies of LLM agents. This analysis will provide insights into the user experience and highlight the usability and effectiveness of VIDS' LLMs in facilitating seamless communication. Secondly, we will explore the interaction among the micro-agents themselves, each equipped with its own LLM instance. This exploration will deepen our understanding of the internal mechanics of the micro-agents and their role in executing complex tasks. Lastly, we will conduct a comprehensive evaluation of Prompt Taxonomy testing on these micro-agents to assess their adaptability and responsiveness in diverse scenarios, further emphasizing the practicality and usability of LLMs in complex tasks. By conducting this comprehensive qualitative analysis, our goal is to provide a holistic understanding of the role and effectiveness of LLMs in managing complex tasks.
Footnote 2: [https://www.kaggle.com/datasets/larsen0966/student-performance-data-set](https://www.kaggle.com/datasets/larsen0966/student-performance-data-set)
### Overall Chat Cycle
In this subsection, we turn our attention to the overall chat cycle, a crucial component of the interaction between the user and the system. This cycle, facilitated by VIDS' well-designed assembly of Large Language Model (LLM) agents, is the cornerstone of the model's user-system communication. We will delve into the intricacies of this cycle, examining how the LLMs contribute to a seamless and effective communication experience. Our analysis will provide valuable insights into the user experience, highlighting the usability and effectiveness of the system's LLMs in complex tasks.
**Table 10: Interaction of user and system, where utterance_* are the user utterances and response_* are the overall system responses to the user. In system_1, {summary} and {ML_task} are from Table 11.**

- system_1: Welcome, I am your personal data scientist. I have processed the provided dataset; for your convenience I present the summary here: {summary}. I propose the following ML tasks for this dataset: {ML_task}
- utterance_1: Ok, from the description it seems like classification is a good choice.
- response_1: Great! Let's move forward with the classification task. Now that we have decided on the classification task, let's move on to formulating the problem. Can you provide me with more details on what you would like to achieve with this task? For example, what is the target variable you would like to predict?
- utterance_2: I think I will use final grade
- response_2: Based on the provided context, the next unidentified slot in the JSON object is 'dataset_size'. To obtain this information, the agent could ask the user: Can you please provide the size of your dataset for the Classification model you are building?
- utterance_3: Lets use 10000 samples for this task
- response_n: The task is to predict the final_grade of a student based on various features like study_hours, attendance, participation, homework_scores, and test_scores. The dataset size is hypothetical. Performance is measured using accuracy, F1 score, and the confusion matrix. K-fold cross-validation is used for validation. Three classification methods are considered: random forest, SVM, and logistic regression. The data is filtered to only include records where attendance is greater than 75 (assuming attendance is measured as a percentage) and study_hours is more than 1. The business goal is to predict student performance to implement early interventions for those at risk of failing. Model interpretability is highlighted as an important additional requirement, and there's a preference for models with higher accuracy, but interpretability is also considered important.
- utterance_{n+1}: That seems all right to me. go ahead with this task.

### Interaction Between Micro-Agents

In this subsection, we delve into the interactions within the micro-agents themselves, each equipped with its own instance of a Large Language Model (LLM). These interactions occur in four distinct states within the system: data visualization, task formulation, prediction engineering, and result generation and summarization. By examining the interplay among these micro-agents in each state, we aim to enhance our understanding of the internal mechanics of the LLMs and their role in executing complex tasks.

#### 4.2.1 **Data Visualization State:**

The first state, data visualization, involves the presentation of data in a visual format that aids in understanding the complexities and patterns within the data. The interaction between micro-agents in this state is crucial as it determines the effectiveness of the data representation. In Table 11, we show the specific roles and interactions of the micro-agents in this state.
**Table 11: Interaction of the different micro-agents in the Data Visualization state.** In this qualitative study we work with the Student Performance dataset.

**#** & **Input** & **Micro-Agent** & **Output**
2 & _Min\_ds_ & Dataset Summarizer & _summary_ : \{ \\ & & & dataset\_summary_ : This dataset contains information about students in a school. It includes their demographic information such as sex, age, address, family size, and parents’ education level. It also includes information about their academic performance such as grades, study time, and number of failures. Additionally, it includes information about their social life such as alcohol consumption, going out with friends, and having a romantic relationship. \\ & & columns : list of column names and description \\ & & sample row : This row represents a female student who is 16 years old, lives in an urban area with a family size greater than 3, and her parents are living together. Her mother has a higher education level than her father and works in the health sector, while her father works in the services sector. She chose this school because of its reputation and her father is her guardian. It takes her less than 15 minutes to travel from home to school. She studies for 2 to 5 hours per week and has not failed any classes in the past. She receives no extra educational support from school or family members, and does not pay for extra classes. She participates in extracurricular activities and attended nursery school. She wants to take higher education and has internet access at home. She is not in a romantic relationship and has a good quality of family relationships. She has a moderate amount of free time and goes out with friends occasionally. She has a low workday alcohol consumption and a very low weekend alcohol consumption. Her current health status is good and she has had 6 school absences. Her grades are 12 in the first period, 12 in the second period, and 13 in the final period. \\ & & trend : From the dataset, it can be observed that female students tend to have higher grades than male students. Additionally, students who receive extra educational support from school or family members tend to have higher grades than those who do not. Students who have a higher amount of study time per week also tend to have higher grades. On the other hand, students who have a higher amount of alcohol consumption tend to have lower grades. Finally, students who have a higher number of school absences tend to have lower grades. \} \\ \hline \end{tabular}
#### 4.2.2 **Task Formulation State:**
The second state, task formulation, is the stage where the task to be performed is defined and structured. The interaction between micro-agents in this state is critical, as it shapes the formulation of the task and ultimately influences the success of the task's execution. In Table 12, we delve into the specifics of these interactions.
**Table 12: Interaction of the different micro-agents in the Task Formulation state.** In the table, utterance_1, utterance_2, and utterance_3 are from Table 10.

**#** & **Input** & **Micro-Agent** & **Output**
1 & \(utterance_{1}\), \(summary\), \(ML\_task\) & Dialog summarizer & \(context_{1}:\) The dialog provides a summary of a dataset containing information about students in a school, including their demographic information, academic performance, and social life. The dataset summary highlights trends such as female students having higher grades and students who receive extra educational support having higher grades. Based on this information, the user suggests that classification would be a suitable machine learning task to predict whether a student will pass or fail based on their demographic information, academic performance, and social life. The user’s rationale for choosing classification is to identify students who are at risk of failing and provide early intervention to improve their academic performance. \\ \hline
2 & \(context_{1}\), \(utterance_{1}\) & State Selector & \(ds_{1}:\) Task Formulation \\ \hline \hline \end{tabular}
Table 12, continued.

**#** & **Input** & **Micro-Agent** & **Output**
3 & \(utterance_{1}\), & Task Selector & \(ML\_task:\) {“model”: “classification”, “reason”: “Based on the dataset summary, the AI agent suggested two suitable machine learning tasks: classification to predict whether a student will pass or fail, and regression to predict a student’s final grade. The user agreed that classification would be a good choice based on the dataset summary. Therefore, a classification model is a good choice for this task.” \\ \hline
4 & \(utterance_{2}\), & Dialog Summarizer & \(context_{2}:\) In this dialogue, the AI assistant provides a summary of a dataset containing information about students in a school, including their demographic information, academic performance, and social life. The assistant notes that female students tend to have higher grades than male students, and students who receive extra educational support from school or family members tend to have higher grades. The assistant suggests two suitable ML tasks for the dataset: classification and regression. The user decides to use classification to predict whether a student will pass or fail based on their demographic information, academic performance, and social life. The assistant asks for more details on the target variable, and the user decides to use the final grade as the target variable. \\ \hline
5 & context_2, utterance_2 & State Selector & ds_2: Task Formulation
6 & utterance_2, PeTEL_1 & Feeder & PeTEL_2: { problem_type: classification, target_variable: final grade, features: None, dataset_size: None, performance_metrics: None, validation_method: None, classification_methods: None, data_filters: [ {column: final grade, condition: less than, value: passing grade}, {column: None, condition: None, value: None} ], business_goals: None, additional_requirements: None, model_preferences: None }
\end{table}
Table 12: continued from previous page
\begin{table}
\begin{tabular}{c|c|c|c} \hline
**\#** & **Input** & **Micro-Agent** & **Output** \\ \hline
7 & \(utterance_{2}\), & Seeker & \(Q_{1}:\) Based on the provided context, the next unidentified slot in the JSON object is ‘dataset_size’. To obtain this information, the agent could ask the user: Can you please provide the size of your dataset for the Classification model you are building? \\ \hline
8 & \(utterance_{3}\), & Dialog Summarizer & \(context_{3}:\) The AI assistant provided a summary of the dataset, including information about the columns and a sample row. The assistant also mentioned trends observed in the dataset, such as the correlation between study time and grades, and the negative correlation between alcohol consumption and grades. The user asked for suggestions on machine learning tasks, and the assistant suggested classification and regression. The user chose classification and specified that the target attribute would be the final grade. The assistant then asked for more information about the problem to help define it more precisely. The user decided to use 10,000 samples for the task \\ \hline
9 & \(utterance_{3}\), & Feeder & \(PeTEL_{3}:\) \\ & \(context_{3}\), & \(\{\) \\ & \(PeTEL_{3}\) & & \\ & & & \\ & & & \\ & & & \\ & & & \\ \hline
9 & utterance_3, context_3, PeTEL_3 & Feeder & PeTEL_3: { ... }
#### 4.2.3 Prediction Engineering State:
The third state, prediction engineering, is an integral part of the AutoML pipeline. This state takes the formulated task and prepares the dataset accordingly. The interaction between micro-agents in this state is essential, as it directly influences the preparation of the dataset, which in turn impacts the accuracy and reliability of the predictions made. In this state, the micro-agents work collaboratively to interpret the task requirements, adjust the dataset to align with these requirements, and set the stage for accurate prediction generation. We delve into the specifics of these interactions in Table 13, which outlines the interactions between the micro-agents during prediction engineering. This discussion provides a comprehensive understanding of the role and effectiveness of the micro-agents in this crucial state of the AutoML pipeline.
#### 4.2.4 Model Training, Result Summary, and Recommendation State:
After the task is formulated, VIDS interfaces with AutoML tools (e.g. AutoSKLearn) and trains downstream model(s) based on the task formulation determined beforehand. As shown in Table 14, from training performance of different models, VIDS generates summaries, including the results and recommendations based on user preferences defined in the task formulation. Our future work will be to interact with the user in this stage and evaluate different models based on the user's business goals.
**Table 13: Interaction of the different micro-agents in the Prediction Engineering state.**

**#** & **Input** & **Micro-Agent** & **Output**
1 & PeTEL_3 & PeTEL to Attribute Converter & List of attributes
2 & PeTEL_3 & Data Prepper & Prepares data with the conditions in PeTEL
3 & PeTEL_3 & AutoML Interfacer & Calls the AutoML interface
\begin{tabular}{c|l|l} \hline \# & **Input** & **Micro-Agent** & **Output** \\ \hline n & \(PeTEL^{*}\), & PeTEL Descriptor & \(Description:\) \\ & \(context_{n}\) & & The task is to predict the final\_grade of a student based on various features like study\_hours, attendance, participation, homework\_scores, and test\_scores. The dataset size is hypothetical. \\ & & & Performance is measured using accuracy, F1 score, and the confusion matrix. K-fold cross-validation is used for validation. Three classification methods are considered: random forest, SVM, and logistic regression. \\ & & & The data is filtered to only include records where attendance is greater than 75 (assuming attendance is measured as a percentage) and study\_hours is more than 1. The business goal is to predict student performance to implement early interventions for those at risk of failing. Model interpretability is highlighted as an important additional requirement, and there’s a preference for models with higher accuracy, but interpretability is also considered important. \\ \hline n+1 & \(context_{n+1}\), & State Selector & \(ds_{n+1}:\) Prediction Engineering \\ & \(utterance_{n+1}\) & & \\ \hline \end{tabular}
Table 12: **continued from previous page**
### Prompt Engineering Taxonomy
The successful collaboration between humans and artificial intelligence in complex tasks necessitates a comprehensive understanding of the various levels of interaction that occur between them. These levels span from Level 0, where AI is solely responsible for data processing, to Level 5, which involves the integration of evaluation criteria. Building upon the foundational work on taxonomy of prompt engineering (TELeR) by Santu and Feng [38], we put forward the notion of considering the depth of information that the System Role discloses to the Large Language Model (LLM). To illustrate, if a system role is well-delineated, it precludes its prompt from being classified as Level 0. This study will specifically focus on three micro-agents: the Intent and State Detector, the Dialogue Summarizer, and the Conversation Manager. Each of these micro-agents plays a unique and integral role in fostering a dynamic and functional dialogue between the user and the AI, leading to a more streamlined and efficient system overall. The revised taxonomy for these interaction levels is as follows:
**Level 0**: No directive is given. The focus is solely on the exchange of data.
**Level 1**: A simple one-sentence directive is provided, expressing the high-level goal of the task.
**Level 2**: A multi-sentence (paragraph-style) directive is given, expressing the high-level goal and the sub-tasks needed to achieve this goal.
**Level 3**: A complex directive is provided, expressing the high-level goal along with a bulleted list of subtasks that need to be performed.
**Level 4**: This level includes a complex directive that encompasses the following: 1) A description of the high-level goal, 2) A detailed bulleted list of subtasks, and 3) An explicit statement asking the LLM to explain its response.
**Level 5**: This level includes a complex directive that encompasses the following: 1) A description of the high-level goal, 2) A detailed bulleted list of subtasks, 3) An explicit statement asking the LLM to explain its response, and 4) A guideline about how the response should be evaluated.
By understanding these levels of interaction, we can maximize the potential benefits of AI and guide future research into user experience, system performance, and ethical considerations in AI applications.
#### 4.3.1 **Intent and State Detector micro-agent:**
In terms of the taxonomy of prompts, the data for this micro-agent is as follows:
1. **context:** The user and the AI assistant discussed the summary of a dataset containing information about students in a school, including their demographic information, academic performance, and social life. The AI assistant suggested two suitable machine learning tasks based on the dataset: classification and regression. The user agreed that classification is a good choice to identify students who are at risk of failing and provide early intervention to improve their academic performance.
**Table 14: Interaction of the different micro-agents in the Model Training, Result Summary, and Recommendation state.**

**Step** & **Input** & **Micro-Agent** & **Output**
1 & context_n & Result Summarizer & Result: performance of each model based on the evaluation criteria set in the problem formulation.
2 & context_n & Result Visualizer & Output: description of the results in natural language.
2. **state:** dataset_understanding
3. **utterance:** Ok, from the description it seems like classification is a good choice.
By following the prompt taxonomy we discussed earlier, Table 15 represents the response from the ChatGPT agent for each level of prompt. For this task, prompts of Level 4 and 5 are not applicable as the output should be strict JSON so that the output is used by other micro-agents.
#### 4.3.2 **Dialogue Summarizer micro-agent:**
In terms of the taxonomy of prompts, the data for this micro-agent is:
* **history:**
  1. assistant: summary of Student Performance dataset
  2. user: From the summary you provided can you suggest me any machine learning task?
  3. assistant: context_1 from Table 12
  4. user: Ok, from the description it seems like classification is a good choice.
By following the prompt taxonomy we discussed earlier, Table 16 represents the response from the ChatGPT agent for each level of prompt. Similar to Intent and State Detector, prompts of Level 4 and 5 are not applicable for this task as the output is used by other micro-agents
**Prompt Level** & **Output From ChatGPT**
0 & Great! Classification is a suitable choice for predicting whether a student will pass or fail based on their demographic information, academic performance, and social life. You can use various classification algorithms such as logistic regression, decision trees, random forests, and support vector machines to build a model that can accurately predict the target variable. Additionally, you can use techniques such as feature selection and hyperparameter tuning to improve the performance of the model. \\ \hline
1 & Great! Classification is a suitable choice for predicting whether a student will pass or fail based on their demographic information, academic performance, and social life. This can help identify students who are at risk of failing and provide early intervention to improve their academic performance. \\ \hline
2 & The user and assistant discussed the summary of a dataset containing information about students in a school, including their demographic information, academic performance, and social life. The assistant suggested two suitable machine learning tasks based on the dataset: classification and regression. The user decided that classification would be a good choice to identify students at risk of failing and provide early intervention. \\ \hline
3 & Description of results in natural language.

Table 15: Intent and State Detector micro-agent responses for different levels of prompt engineering input.
#### 4.3.3 **Conversation Manager micro-agent:**
In terms of the taxonomy of prompts, the data for this micro-agent is as follows:
1. **context:** The user and assistant discussed the dataset containing information about students in a school, including their demographic information, academic performance, and social life. The assistant suggested that the suitable machine learning tasks for this dataset are classification and regression, with classification being a good choice to identify students at risk of failing and provide early intervention to improve their academic performance. The user agreed that classification is a good choice.
2. **state:** Dataset Visualization
3. **utterance:** Ok, from the description it seems like classification is a good choice.
4. **intent:** Select problem
5. **mp_resp:** ["model": "classification", "reason": "Classification is a good choice for identifying students at risk of failing and providing early intervention to improve their academic performance. The dataset contains demographic information, academic performance, and social life, which can be used as input features to predict the likelihood of a student failing or passing." ]
By following the prompt taxonomy we discussed earlier, Table 17 represents the response from the ChatGPT agent for each level of prompt.
**Prompt Level** & **Output From ChatGPT**
0 & The assistant then asked the user about the specific features in the dataset that could be used for classification. The user mentioned that features such as gender, age, ethnicity, parental education level, and previous academic performance could be used to predict whether a student is at risk of failing. The assistant agreed and suggested that the dataset could be split into training and testing sets to build and evaluate a classification model. The user agreed and asked the assistant for recommendations on which classification algorithms to use. The assistant suggested using logistic regression, decision trees, and random forests, as they are commonly used for classification tasks and can handle both categorical and numerical data. The user thanked the assistant for the suggestions and said they would explore these algorithms further. \\ \hline
1 & Yes, based on the information you provided, classification seems like a suitable machine learning task for the dataset containing information about students in a school. It can help identify students who are at risk of failing and provide early intervention to improve their academic performance. Regression can also be a good choice to predict the academic performance of students based on their demographic and social information. \\ \hline
2 & Great! Now that we have agreed that classification is a suitable task for your dataset, let’s move on to formulating the problem. Can you please provide me with more details about the specific problem you want to solve using this dataset? For example, do you want to predict which students are at risk of failing a particular subject or identify factors that contribute to academic success? This information will help us formulate a more precise ML task that meets your needs. \\ \hline
3 & N/A

Table 16: Dialog Summarizer micro-agent responses for different levels of prompt engineering input.
## 5 Discussion
As we navigate the complex task of automating task formulation and prediction engineering for an AutoML pipeline, ChatGPT emerges as a crucial ally. Its capabilities have streamlined this intricate process, paving the way for more accessible automation. Integral to this is the decomposition of larger tasks into smaller, specific micro-agents. This strategy enhances ChatGPT's performance and results in superior accuracy. We've observed that using targeted prompts, as opposed to a single overarching one, offers increased precision and control, leading to a more focused AI response and reducing the risk of errors and misunderstandings.
**Prompt Level** & **Output From ChatGPT**
4 & Based on the given context, it seems like we have reached a consensus that classification is a suitable machine learning task for the dataset containing information about students in a school. Now, we need to formulate the problem to proceed further. To do that, I suggest we move to the next conversation state, which is “problem_formulation”. In this state, we can discuss the specific details of the classification task, such as the target variable, the features to be used, and the algorithm to be employed. So, to move forward, can you please provide me with more information about the problem you want to solve using classification? This will help me formulate the problem and suggest relevant solutions to you. \\ \hline
5 & Based on the current conversation state, it seems like we are in the Task_Formulation state, and the user has agreed that classification is a good choice for the given dataset. Therefore, we can proceed with formulating the classification problem. To do so, we need to determine the target variable that we want to predict. In this case, we want to identify students who are at risk of failing and provide early intervention to improve their academic performance. Therefore, we can use the “final grade” variable as our target variable, and classify students into two categories: those who are at risk of failing and those who are not. To achieve this, we can use a binary classification algorithm, such as logistic regression or decision tree. We can also use feature selection techniques to identify the most important features that contribute to the prediction of the target variable. Based on this, we can formulate the ML task as follows: Develop a binary classification model to predict whether a student is at risk of failing based on their demographic information, academic performance, and social life. The target variable is the final grade, and the model should be trained on a labeled dataset. Feature selection techniques can be used to identify the most important features. This task can be further refined based on the users specific requirements and preferences. For example, the user may want to use a different classification algorithm or include additional features in the model. Evaluation criteria for the generated response: - The response provides a clear and concise formulation of the ML task based on the users’ requirements and preferences. - The response takes into account the dataset description and the previous discussion between the user and the assistant. - The response suggests an appropriate classification algorithm and feature selection techniques based on the problem requirements. - The response is open-ended and allows for further refinement and customization based on the users’ specific needs \\ \hline \end{tabular}
Table 17: Conversation Manager micro-agent responses for different levels of prompt engineering input.
Progressing with ChatGPT as a cornerstone in our project, we find that it simplifies natural language processing and advances us towards our broader goal of democratizing Machine Learning. This pivotal role that ChatGPT plays fuels VIDS' capacity to tackle more nuanced and intricate tasks, guiding our trajectory for future endeavors.
Turning to our existing system, it is designed to engage with the user in active dialog for problem formulation and subsequently present the results in a user-friendly conversational format. However, at this stage, our focus remains on the front-end process, and we do not yet facilitate in-depth discussion or analysis of these results. Looking ahead, our vision for continuous improvement involves augmenting VIDS to assess the performance of various models based on the user's unique business requirements. This enhancement will elevate our capacity to cater to individual needs, improving user understanding and empowering more informed decision-making. This commitment to continuous evolution drives us closer to our ambition of democratizing Machine Learning.
### _Fail cases_
When assessing the Intent and State Detection micro-agent, we confronted a notable area of failure in the ChatGPT model's performance. This issue manifested itself primarily in its inability to accurately decipher highly specific prompts, as described in Table 1 for the state detection task. Though the prompts distinctly defined both the current and subsequent states, ChatGPT consistently failed to correctly identify the intended state. One glaring example is the user utterance, "Ok, from the description it seems like classification is a good choice", taken from the dataset descriptions in Table 11. Here, the user's clear intent to select a Machine Learning task (classification) should have led to the identification of "Task Selection" as the selected state. Yet, ChatGPT mistakenly identified 'Model Training' as the selected state. In an attempt to mitigate this failure, we introduced a modification to the prompt design to specify potential next states: "Next state should be from the following states - {next_states}". In this case, {next_states} should have included [data_visualization, task_selection]. This remedial action has shown promise in enhancing the accuracy of the state selector.
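To illustrate the remedial prompt change concretely, the following minimal sketch shows how a constrained state-detection prompt could be assembled; the transition map and helper function are illustrative assumptions rather than the exact VIDS implementation.

```python
# Illustrative sketch (not the exact VIDS prompt): restrict the model to a
# whitelist of admissible next states so it cannot jump to e.g. "model_training".
ALLOWED_TRANSITIONS = {
    "data_visualization": ["data_visualization", "task_selection"],
    "task_selection": ["task_selection", "task_formulation"],
}  # assumed transition map, for illustration only

def build_state_detection_prompt(current_state: str, user_utterance: str) -> str:
    next_states = ALLOWED_TRANSITIONS[current_state]
    return (
        f"Current state: {current_state}\n"
        f"User utterance: {user_utterance}\n"
        f"Next state should be from the following states - {next_states}\n"
        "Answer with exactly one state name."
    )

print(build_state_detection_prompt(
    "data_visualization",
    "Ok, from the description it seems like classification is a good choice"))
```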
Additionally, we encountered a significant number of failures during the development of the dialog summarization micro-agent. Specifically, ChatGPT exhibited a propensity to generate unrelated, or "hallucinated", content when given few-shot learning examples. Our original process involved supplying a sample dialog between a user and an agent, along with its summary, in the expectation that ChatGPT would replicate this summarization approach. However, during the testing phase, it became evident that ChatGPT failed to understand the task correctly, treating the few-shot examples as part of the source text for summarization, rather than concentrating on the latest user input.
In conclusion, these cases represent significant challenges encountered in the development and testing phases of the ChatGPT model. Despite its advanced capabilities, the model displayed critical areas of failure in both the Intent and State Detection and dialog summarization micro-agents. Although we have introduced modifications to mitigate these issues and have seen some improvement, it is crucial to acknowledge these failures as opportunities for further research and development. The ability to accurately identify and rectify such errors is paramount in enhancing the model's robustness, efficiency, and overall performance. This analysis is instrumental in guiding our future efforts towards optimizing the ChatGPT model and bringing us closer to our ultimate goal of creating an AI that can effectively understand and engage with its users.
## 6 Conclusion
In this research, we have ventured into the realm of Large Language Models (LLMs) as a personal data scientist (VIDS), with language acting as the pivotal interface linking LLMs and machine learning models. VIDS is architected around four distinct dialogue states - Data Visualization,
Task Formulation, Prediction Engineering, and Result Summary and Recommendation. Each of these states signifies a unique phase in the conversation and plays a substantial role in the overall user-system interaction.
We have introduced the concept of global micro-agents, which form an overarching structure, maintaining a cohesive narrative throughout the dialogue, irrespective of the specific state. Complementing these are the local micro-agents, which are integral to each state and play a crucial role in VIDS' functionality.
Despite the advanced capabilities of VIDS, it is crucial to acknowledge the areas of failure, particularly in the Intent and State Detection and dialog summarization micro-agents. While we have implemented modifications to mitigate these issues and have observed some improvements, these shortcomings highlight the need for further research and development. The identification and rectification of such errors are paramount in enhancing the model's robustness, efficiency, and overall performance.
In conclusion, this research serves as a significant milestone towards our ultimate goal of creating an AI data science assistant that can effectively understand and engage with its users. The insights gleaned from this study will steer our future efforts in optimizing the ChatGPT model, edging us closer to harnessing the full potential of AI in the field of data science. We are confident that the continued refinement of these models will pave the way for more intuitive and effective human-AI interactions, revolutionizing the way we approach complex tasks and data analysis.
|
2301.10871 | Qualitative Analysis of a Graph Transformer Approach to Addressing Hate
Speech: Adapting to Dynamically Changing Content | Our work advances an approach for predicting hate speech in social media,
drawing out the critical need to consider the discussions that follow a post to
successfully detect when hateful discourse may arise. Using graph transformer
networks, coupled with modelling attention and BERT-level natural language
processing, our approach can capture context and anticipate upcoming
anti-social behaviour. In this paper, we offer a detailed qualitative analysis
of this solution for hate speech detection in social networks, leading to
insights into where the method has the most impressive outcomes in comparison
with competitors and identifying scenarios where there are challenges to
achieving ideal performance. Included is an exploration of the kinds of posts
that permeate social media today, including the use of hateful images. This
suggests avenues for extending our model to be more comprehensive. A key
insight is that the focus on reasoning about the concept of context positions
us well to be able to support multi-modal analysis of online posts. We conclude
with a reflection on how the problem we are addressing relates especially well
to the theme of dynamic change, a critical concern for all AI solutions for
social impact. We also comment briefly on how mental health well-being can be
advanced with our work, through curated content attuned to the extent of hate
in posts. | Liam Hebert, Hong Yi Chen, Robin Cohen, Lukasz Golab | 2023-01-25T23:32:32Z | http://arxiv.org/abs/2301.10871v3 | Qualitative Analysis of a Graph Transformer Approach to Addressing Hate Speech: Adapting to Dynamically Changing Content
###### Abstract
Our work advances an approach for predicting hate speech in social media, drawing out the critical need to consider the discussions that follow a post to successfully detect when hateful discourse may arise. Using graph transformer networks, coupled with modelling attention and BERT-level natural language processing, our approach can capture context and anticipate upcoming anti-social behaviour. In this paper, we offer a detailed qualitative analysis of this solution for hate speech detection in social networks, leading to insights into where the method has the most impressive outcomes in comparison with competitors and identifying scenarios where there are challenges to achieving ideal performance. Included is an exploration of the kinds of posts that permeate social media today, including the use of hateful images. This suggests avenues for extending our model to be more comprehensive. A key insight is that the focus on reasoning about the concept of context positions us well to be able to support multi-modal analysis of online posts. We conclude with a reflection on how the problem we are addressing relates especially well to the theme of dynamic change, a critical concern for all AI solutions for social impact. We also comment briefly on how mental health well-being can be advanced with our work, through curated content attuned to the extent of hate in posts.
## Introduction
Online social platforms have allowed vast amounts of communication between individuals at an unprecedented scale. Platforms such as Facebook have over 2.9 billion monthly active users who share opinions and connect with other users1. A central tenet of these platforms is the removal of traditional editorial barriers to reach a wider audience. Opinions or commentary do not need to be regulated by editors before they can be published and shared. However, this open approach to free speech has also led to the explosion of propaganda, violence, and abuse against users based on their race, gender, and religion [15]. In addition, widespread dissemination of hateful speech has resulted in traumatizing mental health effects for the victims [16] and has ignited social tensions and polarization between groups [20]. To combat this trend, social platforms have created rigorous community guidelines which describe the kinds of content that can be shared2. These guidelines are then enforced by teams of human moderators who manually allow or disallow content. While effective, this approach can be insufficient for coping with the growing scale of these platforms.
Footnote 1: [https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/](https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/)
In an effort to improve moderation at scale, platforms have also turned to automated methods to detect hate speech [15, 16, 17], aiming to classify the text that comprises a comment as either hate speech or non-hate speech. However, we argue that this comment-only scope is becoming increasingly limited and ineffective, due to the importance of capturing context when deciding whether speech is hateful or not.
To this end, we have designed an approach that goes beyond current hate speech labelling efforts in three distinct ways [1]. First, we analyze entire discussions following a post, to detect hate speech. Second, we support predicting when hate speech will occur, rather than simply reacting to hateful posts once they are detected. Third, we achieve all of this using graph transformer networks, coupled with modelling attention and BERT natural language processing. In so doing, we can capture the discussion context and anticipate upcoming anti-social behaviour. This allows us to analyze the conversational dynamics of different communities, being sensitive to cases where the usage of a slur can be re-appropriated to appear to be non-abusive. For example, the usage of certain slurs has been largely re-appropriated in African American culture as a normal part of their vernacular [14].
In this paper, we offer a detailed qualitative analysis of this solution for hate speech detection in social networks, leading to insights into where the method has the most impressive outcomes in comparison with competitors and identifying scenarios where there are challenges to achieving ideal performance. We draw out the key observation that comments on social platforms have evolved to include images and external articles. These additional elements can provide essential context to properly understanding the content that follows.
We will conclude with a discussion on how to extend our model to be more comprehensive, including support for multi-modal posts. We will also turn to the concern of mental health well-being and discuss how our more comprehensive automated solution for hate speech prediction can be the basis for some significant steps forward. The social impact that we anticipate coming from the research presented in this paper will be on social media environments and their users.
## Data and Methods
### Comment-Only Hate Speech Models
To evaluate recent work in hate speech detection, we selected MuRIL by Das et al. (2022) and Bert-HateXplain by Mathew et al. (2021). Both of these systems are based on the BERT transformer architecture, which can create rich embeddings of text toward classification tasks Devlin et al. (2019). We refer to these methods as comment-only hate speech models.
The main difference between the two methods is the data that both systems were trained on. For Bert-HateXplain, the authors collected a combined dataset of 20,148 hateful tweets and posts from social platforms Twitter and Gab. For MuRIL, the authors combined the HateXplain dataset with Founta et al. (2018) (85,775 tweets) and Davidson et al. (2017) (24,783 tweets). This combined approach was found to outperform HateXplain to become state-of-the-art in hate speech detection Das et al. (2022).
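As a rough illustration of how such comment-only scoring operates, the sketch below loads a BERT-style sequence classifier with the Hugging Face transformers library and scores a single comment in isolation; the checkpoint name is an assumed placeholder, and the released HateXplain and MuRIL models may differ in labels and preprocessing.

```python
# Sketch of comment-only hate scoring: each comment is classified on its own,
# with no access to the surrounding discussion. The checkpoint below is an
# assumed placeholder; substitute the fine-tuned model actually used.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "Hate-speech-CNERG/bert-base-uncased-hatexplain"  # assumption

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def comment_only_score(comment: str) -> float:
    """Return a hatefulness probability for one comment, ignoring context."""
    inputs = tokenizer(comment, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1).squeeze()
    # Assumes the last label index is the hateful class; check the model config.
    return float(probs[-1])

print(comment_only_score("Ah ok! I'll have to check it out!"))
```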
### Graph Hate Speech Models
To study the usage of Graph Networks for hate speech detection, we focus on our Graphormer approach proposed in Hebert et al. (2022). This model was novel in its ability to advance the study of hate speech detection in social media by a) predicting where hate may arise rather than simply reacting to posts that have been labelled as hate and b) leveraging graph transformers for capturing contextual attention between comments in discussion graphs.
Core to this model is the Graphormer architecture (Figure 1), which was originally created by Ying et al. (2021) to predict molecular properties. Graphormer uses a transformer model to create embeddings of atoms by computing self-attention relationships between each atom of the molecule in relation to their structure. The key to this approach is that self-attention can be computed between all nodes of a graph irrespective of their structural distance. This is contrasted by previous graph neural networks, which are constrained to computing relationships between immediate neighbours of a node Wu et al. (2020).
To adapt Graphormer to social media analysis, we proposed reformulating hate speech detection as a graph prediction task. Under this approach, comments are represented as nodes and edges are the reply-to relationships between them. We first initialize this graph by creating BERT embeddings of each comment. Then, we aggregate and process the embeddings in relation to the discussion structure using Graphormer, creating hate predictions for each node in the graph. To focus on proactive predictions, the label of each node is an ordinal value (0-4) based on how prevalent and encouraged the hate speech that follows that comment is. This training objective requires the model to reason about the degree of hate throughout the entire discussion rather than being isolated to the comment itself.
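A minimal sketch of this reformulation is given below: comments become nodes carrying BERT embeddings, reply-to relations become edges, and each node carries an ordinal 0-4 label. The Graphormer aggregation itself is omitted; this only illustrates the graph construction and is not the authors' released code.

```python
# Sketch (not the released code): build a discussion graph whose nodes are
# comments with BERT embeddings and whose edges are reply-to relations.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Mean-pooled BERT embedding of one comment."""
    tokens = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = bert(**tokens).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

# Toy discussion: (comment_id, parent_id, text, ordinal hate label 0-4)
comments = [
    (0, None, "How to get banned from r/Feminism [image]", 3),
    (1, 0, "Feminism is cancerous anyways", 3),
    (2, 0, "Absolutely horrible.", 1),
]

node_features = torch.stack([embed(text) for _, _, text, _ in comments])
edges = [(cid, pid) for cid, pid, _, _ in comments if pid is not None]
labels = torch.tensor([lab for _, _, _, lab in comments])
# node_features, edges and labels would then be passed to a graph transformer
# (e.g. Graphormer) that outputs one ordinal prediction per node.
```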
In this study, we also evaluate a baseline approach that uses Graph Attention Networks (GAT) Velickovic et al. (2017). GAT models have previously been adapted for hate speech detection by Parmentier and Cohen (2019); Parmentier et al. (2021). Like Graphormer, GAT models utilize attention to create node embeddings in a graph structure. However, this attention is constrained to direct node neighbours, a limitation overcome by Graphormer by utilizing transformers. This can result in predictions that focus more on the immediate discussion context rather than the larger global context. Other work has examined graph-based approaches for hate speech detection Mishra et al. (2019); Tian et al. (2022); we confine our attention here to comparisons with the models already described above.
### Reddit
We focus on the social platform Reddit to capture examples for this study. On Reddit, discussions take place in topic-oriented communities called subreddits. Each discussion is organized in tree-like structures where users can create branching threads by replying to any comment in the tree. Prior work analyzing Reddit communities has found that these communities exhibit significant differences in their social makeup and communal behaviours Waller and Anderson (2021). For example, communities such as r/conservatives demonstrate a right-leaning bias, whereas r/politics contains a polarizing left-leaning bias.
We analyzed 32 discussions from 16 different communities centred around contentious topics and unique communities. Each discussion was chosen based on the number of comments it had. We also draw from Reddit conversations sampled in Kurrek et al. (2020) for examples of reclaimed speech. For this paper, we select five interesting examples to display in the section that follows.
Figure 1: Graphormer Architecture
## Analysis
In this study, we focus on capturing examples from two categories of conversational hate speech. First, we start with samples of contextual hate speech, harmful comments that directly refer to or respond to the prior discussion. Examples of this kind of hate speech would be a harmful commentary on contentious topics, such as responding negatively to gay rights. We hypothesize that comment-only methods would fail to capture the contextual nuances that underpin the hatefulness of the target text. This can result in false positives or false negatives when comments are judged in isolation.
The second category of hateful speech we study is inciteful hate speech: comments that at first glance appear to be neutral but are designed to prompt harmful discourse by other users. These examples aim to evaluate the ability of graph hate speech models to proactively predict the direction of conversations toward hatefulness, rather than only detect the hate of individual comments. As a result, this direction also evaluates the ability of graph methods to capture the dynamic nature of social media discussions, where conversations are not static but grow over time as users add replies to the content posted by other users. Examples of this kind of speech would be comments concerning US president Biden in right-leaning political subreddits, which can prompt hateful comments and threats.
For each comment in the discussion, we predict labels between zero and four using comment-only and graph hate speech models. To match the ordinal predictions given by Graphormer, we follow Hebert, Golab, and Cohen (2022) and map the zero to one prediction given by comment-only models to bins of width 0.20 ([0-0.20], [0.20-0.40], [0.40-0.60], [0.60-0.80], [0.80-1]). To capture the ability of graph methods to adapt to evolving conversations, we initialize the discussion graph with the initial post and immediate replies (depth 1). We then iteratively predict the labels of each comment by gradually increasing the depth of the discussion tree provided to the graph models. As such, graph methods are constrained to make predictions about the direction of conversations without seeing future comments.
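The evaluation protocol just described can be summarised by the short sketch below: comment-only scores in [0, 1] are mapped to the five ordinal bins, and the graph model is re-run as the visible depth of the discussion tree grows. The `discussion.up_to_depth` and `predict_graph_labels` calls are hypothetical stand-ins for the dataset interface and the Graphormer/GAT model, not real APIs.

```python
# Illustrative sketch of the evaluation protocol.

def to_ordinal(score: float) -> int:
    """Map a comment-only score in [0, 1] to an ordinal bin of width 0.20."""
    return min(int(score / 0.20), 4)

assert to_ordinal(0.05) == 0 and to_ordinal(0.85) == 4

def depthwise_predictions(discussion, predict_graph_labels, max_depth):
    """Re-run the graph model as deeper replies become visible.

    `discussion.up_to_depth(d)` and `predict_graph_labels` are hypothetical
    helpers standing in for the dataset interface and the graph model.
    """
    history = []
    for depth in range(1, max_depth + 1):
        visible_subtree = discussion.up_to_depth(depth)
        history.append(predict_graph_labels(visible_subtree))
    return history
```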
### Contextual Hate Speech
We start by analyzing a conversation that took place on the subreddit /r/gay, a community centred around LGBTQ topics (Table 1). In this conversation, users are discussing a tweet from openly gay pop artist Lil Nas X. Our analysis of this discussion thread demonstrates how a lack of context can lead to false predictions about hatefulness.
The conversation begins with the initial post "Anyone else loving Lil Nas X meming on biggots", referring to his tweet stating "i thought y'all didn't like political correctness. what happened?". This comment resulted in a high hatefulness prediction from comment-only Bert-HateXplain and a moderate prediction from the Graph methods. However, the hate predictions from the graph methods quickly neutralize as the conversation leads into a discussion of his latest song, Montero (depths 5 and 6). Given this context, it is clear to these methods that the conversation is not hateful but rather just discussing the lyrics of the song. However, this context is not available to comment-only methods. The inaccuracy introduced without this important context can be seen by the comment-only methods, in which both systems predicted high hateful scores for both comments. This illustrates the ability of graph methods to maintain conversational context when predicting hate scores.
Next, we turn to an example of a discussion where context is needed to detect hate speech. For this, we focus on the subreddit /r/MensRights. This community advocates for increased men's rights by discussing social issues that adversely impact them, which frequently devolves into harmful misogyny [16].
This pattern of abuse can be seen in the discussion presented in Table 2. Here, the user posts an image of a brief exchange they had in the r/Feminism subreddit. In this exchange, the user was banned from that community for arguing that women do not have the right to _feel_ safe, only the right to _be_ safe, regardless of reassurances and comfort. By posting this exchange on r/MensRights, the user aimed to frame the r/Feminism community in a negative light by stating that they were banned in response to sharing a valid point.
In this discussion, graph methods were able to accurately understand the negative context of feminism towards the hateful comments that followed. Most surprisingly, none of the comments that followed were labelled as hate speech by the comment-only methods. Such examples include "Feminism is cancerous anyways", which was appropriately labelled as hateful by graph methods, receiving a prediction of 3 from Graphormer, but mislabeled as innocuous by text methods. We assume that this large difference in prediction comes uniquely from the graph structure of the discussion.
Finally, we examine a conversation that contains reclaimed language that was previously perceived as harmful to a given community. For this, we sample a discussion from /r/rupaulsdragrace, a community dedicated to the LGBTQ drag competition Ru Paul's Drag Race (Table 3). In this conversation, users are commenting on one competitor's manufactured drag outfits. In this community, slurs such as
Figure 2: Photo of the Drag Queen discussed in Table 3
\begin{table}
\begin{tabular}{c|l|c|c|c|c|c} Depth & Text & Graphormer & GAT & Bert-HateXplain & MuRIL \\ \hline \hline
0 &
\begin{tabular}{c} Anyone else loving Lil Nas X meming on biggots? [image] \\ \end{tabular} & 2 & 1 & 3 & 0 \\ \hline
1 & What was this in reference too? & 1 & 1 & 0 & 0 \\ \hline
2 & Biggots are mad about his most recent song/music video & 1 & 2 & 1 & 0 \\ \hline
3 & Ah ok! I’ll have to check it out! & 0 & 0 & 0 & 0 \\ \hline
4 & The song is called Montero (call me by your name). & 0 & 0 & 0 & 0 \\ \hline
5 & My favorite line is “I wanna f*ck you cuz I envy you”, and that hit hard & 0 & 0 & 2 & 4 \\ \hline
6 & \multicolumn{1}{l}{’I want that jet leg from f*ckin’ and flyin’ call me in} & 0 & 1 & 3 & 4 \\ \end{tabular}
\end{table}
Table 1: Conversation on r/gay containing contextual speech leading to false positive predictions
\begin{table}
\begin{tabular}{c|l|c|c|c|c} Depth & Text & Graphormer & GAT & Bert-HateXplain & MuRIL \\ \hline \hline
0 & How to get banned from r/Feminism [image] & 3 & 3 & 0 & 0 \\ \hline
1a & i guess they were on their period and they want their feelings to matter at that point and you got banned for it & 3 & 3 & 1 & 0 \\ \hline
1b & Feminism is cancerous anyways & 3 & 2 & 1 & 0 \\ \hline
1c & Wow, they are almost as fragile as the donald. I guess all special snow flakes need their own safe space. & 1 & 2 & 1 & 0 \\ \hline
1d & Absolutely horrible. & 1 & 0 & 1 & 1 \\ \end{tabular}
\end{table}
Table 2: Conversation on /r/MensRights containing contextual affiliation-directed hate speech
\begin{table}
\begin{tabular}{c|l|c|c|c|c|c} Depth & Text & Graphormer & GAT & Bert-HateXplain & MuRIL \\ \hline \hline
0 & \multicolumn{3}{l|}{*SPOILERS* Always and Forever, paparazzi who? [image - Figure 2]} & 1 & 1 & 1 & 0 \\ \hline
1a & Am I the only f*ggot that LIVED for this look? & 1 & 2 & 4 & 3 \\ \hline
2a & I honestly truly thought it was an amazing concept and I love the final result & 0 & 0 & 1 & 0 \\ \hline
2b & Not at all. F*ck fashion. I want fashion I’ll by the Vogue fall guide. Give me something creative I haven’t seen before. & 0 & 0 & 2 & 4 \\ \hline
1b & Say what you will about the look, but can we appreciate the fact that this b*tch got over a dozen Canon DSLRs plus the lenses on this dress? That shit ain’t cheap. & 1 & 1 & 3 & 4 \\ \hline
1c & This look is ridiculous. I love it, but my God, who thinks of this shit. & 0 & 0 & 1 & 4 \\ \end{tabular}
\end{table}
Table 3: Conversation on r/rupaulsdragrace containing reclaimed language and multi-modal context
"f*ggot" and "b*tch" are reclaimed as positive terms to refer to LGBTQ members and competitors3. However, these slurs are more often used in a hateful context, proving a challenge for methods that do not consider specific contexts.
Footnote 3: A common slogan on the show is ”Yass b*tch”, which is used to cheer on competitors
Analyzing the predictions of each method, we see that comment-only methods assign a high hate score for comments at depths 1a and 2b. However, both comments are in fact positive and supportive of the competitor mentioned in the initial post. Inspecting the content of these comments, we can infer that the false positive prediction can likely be attributed to the usage of reclaimed slurs without contextualizing them to the community and prior discussion. This behaviour is contrasted with graph methods, which predicted more accurate scores, likely due to the context provided by other comments concerning the fashion of the dress.
It is important to note that both graph and comment-only methods can only infer context by inspecting the text of comments. In this conversation, graph methods were likely only able to infer context due to the other comments, which discuss the fashion of the dress (depths 1b and 1c). However, upon looking at the initial post, we see that the user accompanied their post with an image of a drag queen (Figure 2). This picture provides immense context to the discussion, such as the focus on fashion and the LGBTQ community. As such, we hypothesize that future work that would include multi-modal posts could provide important context to the comments that follow.
### Inciteful Hate Speech
For our next set of examples, we investigate the ability of graph networks to predict the direction of conversations. We start by analyzing a conversation that took place on /r/conservatives, a community for discussing right-wing policies (Table 4). These policies often include advocacy for gun rights and strong support for election denial [1]. Discussions in these political communities have become increasingly polarized and hateful towards members of the opposite party [23].
In this discussion, users are referring to a tweet from a black user concerning the difficulty for people to purchase weapons due to a lack of governmental IDs. The conversation is centred around confounding the Black Lives Matter movement with gun advocacy and election denial ("Black Guns Matter, as does election integrity"). The discussion then devolves into affiliation-based hate as users claim left-leaning users are racists and associate black activist groups ("the black panthers" and "BLM") with armed violence.
Looking at the predictions of both groups of methods, we see that both graph methods predict very high hatefulness scores for many of the comments within the discussion. This can especially be seen in the comments at depth 2, in which users delve into race accusations. However, it is important to note that these comments are more akin to debate rather than explicit hate speech. This is reflected in the predictions from the comment-only methods, which consistently predict low hatefulness scores for these comments. As such, it can be inferred that these high predictions originate from a belief that the conversation will head in a hateful direction given the context thus far. Indeed, this is the case as later comments (depth 5) intensify the discussion and accuse members of the Black Lives Matter movement of armed violence. However, it is also important to note that the earlier predictions appear to conflate polarizing political discourse with hate (depth 1) with minimal discussion context.
To further examine the ability of Graphormer to predict the direction of conversations, we investigate a conversation from /r/politics. This community is known to have strong left-leaning political views and to have a distinct and polarized user base from /r/conservatives [23]. Inside the sampled conversation (Table 5), users are discussing comments made by former president Trump in relation to the Ukraine war. The conversation begins benignly but becomes combative, with one user shifting the blame for the Ukrainian war onto the current left-leaning president. This trend culminates in a climax at depth 4, where the user escalates to using hateful language against Trump.
Investigating the predictions, we see a similar trend from the previous example where each of the predictions from the comment-only methods remains mostly neutral apart from the hateful comment at depth 4. However, the contentiousness of the conversation given the previous instigating comments is captured by graph methods, resulting in high predictions by both methods. This can especially be seen with the prediction at depth 1, which seemingly captured the contentious relationship between comments concerning Biden and Trump. Indeed, we can see that the conversation did turn hateful later in the conversation, validating this prediction.
## Discussion
In our analysis, we focused on analyzing two types of difficult hate speech: contextual and inciteful. Contextual hate speech requires conversational context to understand, and inciteful hate speech is not inherently hateful but is designed to incite further hateful comments. Both types of speech present difficulties for current comment-only approaches due to the heavy reliance on the context in order to make correct predictions.
Starting with contextual hate speech, we analyzed three different conversations originating from /r/gay, /r/MensRights, and /r/rupaulsdragrace. Using comment-only methods, we found many predictions that were false positives or false negatives depending on the text of the comment in isolation. For false positives, we found that comment-only methods tended to predict high hate scores for comments that contained slurs (Table 1 and Table 3). However, upon reading the rest of the conversation, it becomes clear that many of these slurs are utilized in a non-derogatory context. The same can be said for false negatives, where antagonistic replies can lose their hateful context when considered in isolation (Table 2). However, in each of these conversations, we found that graph methods perform well at capturing the vital discussion context that is required to appropriately understand these comments. Between the two graph methods we evaluated, both GAT and Graphormer performed similarly well in the examples we explored.
\begin{table}
\begin{tabular}{c|l|c|c|c|c} Depth & Text & Graphormer & GAT & Bert-HateXplain & MuRIL \\ \hline \hline
0 & Black Guns Matter, as does election integrity [image] & 3 & 1 & 0 & 1 \\ \hline
1 & That’s Hilarious. I think its interesting how the left keeps trying to rub ”BLACK PEOPLE ARE BUYING GUNS” & 4 & 3 & 1 & 2 \\ & in the faces of conservatives, like we would somehow be opposed to that. [...] glad that gun ownership is expanding among all demographics & & & \\ \hline
2a & It’s on every conservative news and media platform as a big positive, but the left doesn’t pay attention to that literally all they think is conservatives racist, therefore black guns bad for them. & 3 & 2 & 1 & 2 \\ \hline
2b & They want us to be divided by race like they are. They are so racist they can’t imagine us being united by our love of fundamental rights & 4 & 3 & 0 & 0 \\ \hline
4 & I agree. The black panthers had the right idea. BLM members should arm too. & 2 & 0 & 0 & 0 \\ \hline
5 & They have been.... smh & 4 & 1 & 0 & 0 \\ \end{tabular}
\end{table}
Table 4: Conversation from r/conservatives containing inciteful speech regarding Black Lives Matter
\begin{table}
\begin{tabular}{c|l|c|c|c|c} Depth & Text & Graphormer & GAT & Bert-HateXplain & MuRIL \\ \hline \hline
0 & Trump, who was impeached for withholding nearly \$400 & 2 & 2 & 0 & 0 \\ & million in military aid from Ukraine, said ’this deadly Ukraine situation would never have happened’ if he were in office [article] & & & \\ \hline
1 & This happened under Biden’s watch. That is a fact & 3 & 3 & 0 & 0 \\ \hline
2a & Russia has been threatening Ukraine for the last 8 years & 2 & 0 & 0 & 0 \\ \hline
3 & Whatever, still happened under Biden’s watch. Not Trump’s. & 3 & 2 & 0 & 0 \\ \hline
4 & Right, I’m sure the situation would’ve been so much better under the leadership of a failed jackass lapdog for Putin. & 4 & 3 & 3 & 0 \\ \hline
2b & You might want to do a little reading about Ukraine. Your comment is completely ludicrous & 3 & 3 & 0 & 0 \\ \end{tabular}
\end{table}
Table 5: Conversation from r/politics requiring long range forecasting from community cues
To examine the ability of graph and comment-only methods to capture inciteful speech, we analyzed two discussions from /r/conservatives and /r/politics, communities with polarizing user-bases [23]. We found that graph methods are sensitive to counter-speech as evidence of inciting hateful discourse. This can be especially seen in Table 5, where users disagreeing on the cause of the Ukraine war lead to a high hatefulness prediction from the two graph methods. While there may be some validity to these predictions regarding their contentiousness, it does raise a concern about how to moderate these heated debates. However, in the case of the example in Table 4, the most inciteful comments (depth 2a, 2b, and 5) are appropriately labelled as such. In each of these cases, comment-only methods predicted each comment as neutral, even if the comments were hateful (depth 2b). We found that Graphormer was more sensitive to higher predictions than GAT when faced with these types of comments.
In each of the examples, we also evaluated the ability of graph networks to adapt to evolving social media conversations. Coping with this dynamic change is essential for the successful real-life implementation of AI for social impact. We evaluate this behaviour by iteratively predicting comments in the discussion graph in a depth-wise fashion, differing from Hebert et al. (2022). As a result, we constrain graph models to predict labels from only the context provided by previous comments, mirroring how the system would be deployed in real situations. Despite this constraint, we still see that graph systems are able to make accurate predictions. This can best be seen in Table 1, where the graph models adapted their predictions to be less hateful once the conversation developed.
We also found that many examples we retrieved were centred around multi-modal posts. Such examples include the discussion in Table 1, involving an image of a tweet, and the discussion in Table 5, involving an article concerning Trump and the Ukraine war, among others. When investigating contextual hate speech, Table 3 presents an example where the image (Figure 2) provides important context to the comments that followed.
By analyzing this picture, it would be possible to understand that the discussion concerns an LGBTQ drag queen competing in an elaborate dress. However, without this context, we found that comment-only methods misclassified supportive speech using reclaimed LGBTQ vernacular as hateful. This is especially concerning given that these predictions could serve to suppress communities that are vitally important to the mental health of minority populations [10, 23]. Furthermore, memes sent on online platforms are often only hateful if one considers both the image and the text caption together, as seen in Figure 3 [11]. By taking a holistic view of conversations and encoding images, text, and discussion structure together, we hypothesize that hate speech detection methods would be able to avoid many false predictions, such as the ones incurred in Table 3. Furthermore, following Tian et al. (2022), it would be possible to include user-level information in this graph representation.
Finally, it is also important to analyze the mental health impact of a graph approach to hate speech. By reformulating hate speech as a graph prediction task, we are able to train systems that can leverage discussion context toward predicting the direction of conversations. This can allow moderators on social platforms to be alerted to potentially harmful comments and deploy mitigation strategies to shield users who are susceptible to mental health effects. We see an example of such a discussion in Table 4, where users that are susceptible to trauma from guns and race can be warned ahead of time by utilizing the proactive graph predictions. Furthermore, by utilizing an increasing ordinal scale (from zero to four) for predicting hate, users can select their level of comfort by choosing the intensity of contentious comments they are comfortable viewing. As the conversation develops, these predictions can then be updated with further context and revised accordingly. An example of where this would be useful is the discussion in Table 3, where further comments add credence to the innocence of previous comments. By providing these scores, platform owners can allow users to have control over the content they see through self-moderation. Another valued opportunity for deployment of our methods shown by qualitative analysis is in assisting platforms to curtail hate speech proliferation: greater prediction of impending escalating harm and caution in imposing penalties when a discussion is not hateful can both be addressed.
## Conclusion
In this work, we explored the impact of Graph Transformer Networks on hate speech detection [1]. To do this, we performed an extensive qualitative analysis of graph and comment-only methods on conversations sampled from different communities on Reddit. When examining contextual hate speech, we found that Graph Transformer Networks can prevent both false positives and false negatives incurred by comment-only methods. In these cases, context played a key role in understanding the nature of analyzed comments. We also found similar gains in performance when analyzing discussions that concerned inciteful speech. However, we also found that debates were prone to high hate predictions despite being mostly civil.
Guided by this study, one promising direction for future work is to include more modalities to better contextualize comments. Among the examples we retrieved, many were centred around an image or article. We hypothesize that utilizing a holistic view of conversations by including all modalities can help prevent false positives. Most importantly, this approach could help catch the most pervasive hate speech of all - discourse.
Figure 3: Examples of Multi-Modal Hate Speech |
2305.02982 | Preliminary results of a therapeutic lab for promoting autonomies in
autistic children | This extended abstract describes the preliminary qualitative results coming
from a therapeutic laboratory focused on the use of the Pepper robot to promote
autonomies and functional acquisitions in highly functioning (Asperger)
children with autism. The field lab, ideated and led by a multidisciplinary
team, involved 4 children, aged 11-13, who attended the laboratory sessions
once a week for four months. | Cristina Gena, Rossana Damiano, Claudio Mattutino, Alessandro Mazzei, Andrea Meirone, Loredana Mazzotta, Matteo Nazzario, Valeria Ricci, Stefania Brighenti, Federica Liscio, Francesco Petriglia | 2023-05-04T16:47:05Z | http://arxiv.org/abs/2305.02982v1 | # Preliminary results of a therapeutic lab for promoting autonomies in autistic children
###### Abstract
This extended abstract describes the preliminary quantitative and qualitative results coming from a therapeutic laboratory focused on the use of the Pepper robot to promote autonomies and functional acquisitions in highly functioning (Asperger) children with autism. The laboratory1 started in February 2021 and lasted until June 2021; the weekly meetings lasted two hours and were led by one or two therapists (educators, speech therapists, psychologists, etc.), helped by 2-3 trainee master students. The participants recruited were four highly functioning (Asperger) children, aged between 11 and 13 years. There were 16 lab sessions in total, all recorded by a fixed camera, in addition to Pepper's 2D cameras. Furthermore, trainees filled out evaluation forms provided by psychotherapists, noting the children's progress in autonomy in a diary with the help of rating scales [1]. These notes were then reworked to draw up shared reports, reflecting on the evolution of the children's behavior and their progress meeting by meeting.
Footnote 1: Ethical approval for this study was obtained from the bioethical committee of the University of Turin, with approval number: 0664572
The setting of the lab was an elegant apartment furnished as a real home in the city center. Each meeting had a similar structure: 1) welcome in the apartment; 2) social moment: dialogue with the robot; 3) moment of snack preparation; 4) moment of post-snack dialogue; 5) final feedback and goodbye.
The snack preparation was one of the most stimulating moments for the children, dedicated to the preparation, in the kitchen or directly on the dining room table, of some increasingly complex snacks. The group was led both by Pepper, instructed to organize and coordinate the activity, and by the therapists, ready to intervene when required.
The goal of the activity was to gradually reduce the therapists' assistance, so that they would only make suggestions from time to time. Pepper, with the help of video-modeling [5], encouraged the participants to schematically organize themselves, listing the ingredients, illustrating the procedures with images, animations, and videos, and giving the children time to manage the preparation, as well as the possibility of reviewing the steps. The activity gave good results: the children appreciated the help of Pepper, showing increasing levels of autonomy, even if, at times, difficulties in maintaining concentration affected the scores reported by the trainees.
During the social moment, Pepper conversed with the children. On the one hand, the robot responded to their curiosity about itself; on the other, it guided a dialogue called "making friends", in which the robot attempted to establish a link with each participant, according to their previously declared interests, as advocated in the design of
social and educational robots [2],[3], also targeted at autistic users [4]. Indeed, from time to time, Pepper re-proposed the topics the children had previously liked most. In the first case, the scenario envisaged that the children took turns facing the robot, waiting for it to catch their gaze and listen, and then asked it any question. In the second case, the robot called the children one by one and began to talk with them about a previously liked topic (e.g., music, video games, etc.), following a script manually updated week by week, with the robot trying to guide the conversation. However, both attempts led to unsatisfactory results, often arousing frustration among participants.
In fact, it could be argued that the problem arose at the roots of the design, since both activities were very far from what we could really call a "conversation" and amounted to little more than a simple transmission of information. As the sociologist Sherry Turkle observes [6], conversations convey much more than the details of an argument: it is not just a question of answers, but of what they mean. As the developers did not implement real dialogue autonomy in the robot, Pepper showed no progress in the interaction, leaving the trainees to take note of the children's inclinations and to plan, from one meeting to the next, a new dialogue that considered what had emerged.
The results from this experience revealed some critical issues to be addressed in future work. Concerning the dialogue system, at least two related features need to be strengthened. On the one hand, the dialogue system needs to be improved in robustness and precision. The actual conversations show a high degree of expectation from the children about the robot's knowledge: to fulfill this expectation, one needs a correct and precise semantic representation of the children's questions in encyclopedic and commonsense domains. A possible improvement could be based on the construction of an annotated corpus using a Wizard of Oz approach [7]. In this way, one could train a machine-learning, frame-based natural language understanding system, starting from the annotation of user intents. The other improvement concerns the preparation of back-up dialogue strategies that the dialogue system can adopt in the case of non-comprehensible questions or utterances from the child, or of sentences not strictly related to the current topic. An interesting possibility for building a back-up strategy is using large language models such as BERT [8].
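As one possible direction for the back-up strategy mentioned above, the sketch below uses an off-the-shelf zero-shot classifier from the transformers library to map a child's utterance onto a small set of coarse intents and to fall back to a clarification request at low confidence; the intent labels, threshold, and choice of an English-only model are illustrative assumptions (a multilingual model would be needed in practice).

```python
# Illustrative back-up intent detection (not the deployed Pepper system):
# a zero-shot classifier maps an utterance to a coarse intent, and low
# confidence triggers a clarification request instead of a wrong answer.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
INTENTS = ["question about the robot", "talk about video games",
           "talk about music", "snack preparation", "other"]

def detect_intent(utterance: str, threshold: float = 0.5):
    result = classifier(utterance, candidate_labels=INTENTS)
    label, score = result["labels"][0], result["scores"][0]
    if score < threshold:
        return "clarify", "Sorry, can you tell me more about what you mean?"
    return label, None

print(detect_intent("Do you like Minecraft?"))
```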
We also defined an ontology for the possible topics of children's interests, whose classes and properties were extracted from DBpedia2. As future work, we will integrate the robot's dialogue with this knowledge base to make the robot able to navigate the ontology and reason over it, thus enriching its dialogue strategies.
Footnote 2: [https://www.dbpedia.org/](https://www.dbpedia.org/)
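As a hint of how the robot could navigate such a knowledge base at dialogue time, the sketch below queries the public DBpedia SPARQL endpoint for resources sharing a category with a topic a child has liked; the seed resource and the shared-category heuristic are illustrative assumptions rather than the project's actual ontology code.

```python
# Sketch: look up topics related to a child's declared interest via DBpedia.
# The seed resource and the shared-category heuristic are illustrative choices.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX dct: <http://purl.org/dc/terms/>
SELECT DISTINCT ?related WHERE {
  <http://dbpedia.org/resource/Minecraft> dct:subject ?category .
  ?related dct:subject ?category .
  FILTER (?related != <http://dbpedia.org/resource/Minecraft>)
} LIMIT 5
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["related"]["value"])  # candidate topics to re-propose in dialogue
```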
Focusing on the specific features of autistic children, we must note that autistic functioning alters the essence of conversation. The exchange of utterances does not produce the pleasure of sharing but serves to obtain something more concrete, such as information. If the goal is to reach a typical conversation, the results may always be unsatisfactory in this context. At the same time, this calls for the collection of conversational data targeted at this specific group of interactants.
|
2302.09870 | Exploring the hidden Universe: A novel phenomenological approach for
recovering arbitrary gravitational-wave millilensing configurations | Since the first detection of gravitational waves in 2015, gravitational-wave
astronomy has emerged as a rapidly advancing field that holds great potential
for studying the cosmos, from probing the properties of black holes to testing
the limits of our current understanding of gravity. One important aspect of
gravitational-wave astronomy is the phenomenon of gravitational lensing, where
massive intervening objects can bend and magnify gravitational waves, providing
a unique way to probe the distribution of matter in the universe, as well as
finding applications to fundamental physics, astrophysics, and cosmology.
However, current models for gravitational-wave millilensing - a specific form
of lensing where small-scale astrophysical objects can split a gravitational
wave signal into multiple copies - are often limited to simple isolated lenses,
which is not realistic for complex lensing scenarios. In this paper, we present
a novel phenomenological approach to incorporate millilensing in data analysis
in a model-independent fashion. Our approach enables the recovery of arbitrary
lens configurations without the need for extensive computational lens modeling,
making it a more accurate and computationally efficient tool for studying the
distribution of matter in the universe using gravitational-wave signals. When
gravitational-wave lensing observations become possible, our method can provide
a powerful tool for studying complex lens configurations, including dark matter
subhalos and MACHOs. | Anna Liu, Isaac C. F. Wong, Samson H. W. Leong, Anupreeta More, Otto A. Hannuksela, Tjonnie G. F. Li | 2023-02-20T10:01:57Z | http://arxiv.org/abs/2302.09870v2 | Exploring the hidden Universe: A novel phenomenological approach for recovering arbitrary gravitational-wave millilensing configurations
###### Abstract
Since the first detection of gravitational waves in 2015, gravitational-wave astronomy has emerged as a rapidly advancing field that holds great potential for studying the cosmos, from probing the properties of black holes to testing the limits of our current understanding of gravity. One important aspect of gravitational-wave astronomy is the phenomenon of gravitational lensing, where massive intervening objects can bend and magnify gravitational waves, providing a unique way to probe the distribution of matter in the universe, as well as finding applications to fundamental physics, astrophysics, and cosmology. However, current models for gravitational-wave millilensing - a specific form of lensing where small-scale astrophysical objects can split a gravitational wave signal into multiple copies - are often limited to simple isolated lenses, which is not realistic for complex lensing scenarios. In this paper, we present a novel phenomenological approach to incorporate millilensing in data analysis in a model-independent fashion. Our approach enables the recovery of arbitrary lens configurations without the need for extensive computational lens modeling, making it a more accurate and computationally efficient tool for studying the distribution of matter in the universe using gravitational-wave signals. When gravitational-wave lensing observations become possible, our method could provide a powerful tool for studying complex lens configurations in the future.
keywords: gravitational waves - gravitational lensing: micro - gravitational lensing: strong
## 1 Introduction
Gravitational waves (GWs) were predicted by Albert Einstein's theory of general relativity in 1916. However, it took nearly a century before their direct detection by the LIGO and Virgo Collaborations in 2015, opening a new era of gravitational-wave astronomy (Abbott et al., 2016, 2016). Since then, numerous GW detections have been made with the Advanced LIGO (Aasi et al., 2015), Advanced Virgo (Acernese et al., 2014), and KAGRA (Akutsu et al., 2019) detectors, including binary black hole (BBH) mergers, binary neutron star (BNS) mergers, and black hole-neutron star (BH-NS) mergers (Abbott et al., 2019, 2021).
Gravitational lensing, the bending of light or GWs by a massive object, was first observed in 1919 by Sir Arthur Eddington during a total solar eclipse (Dyson et al., 1920), which confirmed Einstein's theory of general relativity. In recent years, the search for GW lensing signatures has become a fast-developing field. Multiple lensed signals can arrive at the observer, described as time-separated de/magnified GW signals with a phase shift relative to the unlensed GW signal (Takahashi and Nakamura, 2003; Cao et al., 2014; Dai and Venumadhav, 2017). The current gravitational-wave data has been used to search for GW lensing but, so far, there has been no widely accepted detection.
The challenges in making GW lensing detections are significant, including the need for new tools to detect strong lensing (Haris et al., 2018; Dai et al., 2020; Liu et al., 2020; Lo and Hernandez, 2021; Janquart et al., 2021, 2022), micro- and millilensing (Lai et al., 2018; Dai et al., 2018; Liao et al., 2018; Christian et al., 2018; Pagano et al., 2020; Kim et al., 2020; Seo et al., 2021; Wright and Hendry, 2021), and wave optics lensing (Lai et al., 2018; Christian et al., 2018; Bulashenko and Ubach, 2021; Oguri and Takahashi, 2022; Basak et al., 2022; Tambalo et al., 2022), beyond the need for statistical modelling of both gravitational lenses and binary black holes (Dai et al., 2016; Ng et al., 2017; Smith et al., 2017, 2018; Oguri, 2018; Wierda et al., 2021; Xu et al., 2021; More and More, 2021; Smith et al., 2022). Furthermore, the low expected rates of lensed GW events make detection difficult (Caliskan et al., 2022). Nevertheless, recent studies have shown that the detection of lensed GW events is possible with current and future GW detectors, assuming the current best models for the strong lensing galaxy population and presuming that binary black holes trace the star-formation rate density (Ng et al., 2017; Li et al., 2018; Oguri, 2018; Xu et al., 2021; Wierda et al., 2021; Wempe et al., 2022; Smith et al., 2022; Ma et al., 2022). Indeed, the era of gravitational wave lensing is imminent, and the detection of lensed
GW events would provide valuable information about the properties of the lensing objects, test the theory of general relativity in a new regime, probe high-redshift cosmology, study dark matter, search for primordial black holes and MACHOs, precisely localize merging black holes, and more (Takahashi, 2005; Itoh et al., 2009; Baker and Trodden, 2016; Collett and Bacon, 2017; Liao et al., 2017; Fan et al., 2017; Lai et al., 2018; Dai et al., 2018; Mukherjee et al., 2019, 2020; Diego, 2019; Oguri and Takahashi, 2020; Goyal et al., 2020; Hannuksela et al., 2020; Finke et al., 2021; Iacoelli et al., 2022; Chung and Li, 2021; Sereno et al., 2010; Bolejko et al., 2012; Magana Hernandez, 2022; Tambalo et al., 2022b; Basak et al., 2022b,a) and the searches for gravitational-wave lensing have started recently Hannuksela et al. (2019); Abbott et al. (2021b); Li et al. (2019); Dai et al. (2020); Liu et al. (2020); Pang et al. (2020); The LIGO Scientific Collaboration et al. (2021); Kim et al. (2022).
Depending on the lens mass, gravitational lenses can cause different types of lensing. Massive objects such as galaxies and galaxy clusters can produce strong gravitational lensing, resulting in multiple signals with varying time separations and effects on the GW signal (Takahashi and Nakamura, 2003; Dai and Venumadhav, 2017; Smith et al., 2017, 2018; Hairs et al., 2018; Liu et al., 2020; Robertson et al., 2020; Ryczanowski et al., 2020; Dai et al., 2020; Wang et al., 2021; Ezquiaga et al., 2021; Lo and Hernandez, 2021; Janquart et al., 2021, 2021, 2022; Vilajekumar et al., 2022; Calskan et al., 2022; Cao et al., 2022). On the other hand, smaller-mass lenses such as stars or stellar-mass compact objects can cause microlensing, resulting in potentially observable beating patterns in the frequency evolution of the GW waveform (Deguchi and Watson, 1986; Nakamura, 1998; Takahashi and Nakamura, 2003; Christian et al., 2018; Jung and Shin, 2019; Diego et al., 2019; Mishra et al., 2021; Meena and Bagla, 2020; Cheung et al., 2020; Bultashenko and Ubach, 2021; Cremonese et al., 2021; Seo et al., 2021; Yeung et al., 2021; Qiu et al., 2022; Wright and Hendry, 2021; Kim et al., 2022). In this paper, microlensing refers to lens masses of tens to hundreds of solar masses, while millilensing refers to lens masses in the range of \(10^{2}-10^{6}M_{\odot}\), covering dark matter subhalos, primordial black holes, massive compact halo objects (MACHOs), and lenses with an Einstein radius approximately the size of a milliarcsecond. The distinction between microlensing and millilensing is based on the lens mass range and the corresponding time delays between individual lensed GW signals. The analysis of microlensing requires the wave optics approximation in the low-mass regime, where the GW wavelength matches the Schwarzschild radius of the lens, while we assume millilensing follows the geometrical optics approximation in this work, which holds for the corresponding mass range (Takahashi and Nakamura, 2003).
To date, several lens models have been suggested for GW micro- and millilensing studies. These models typically assume a particular lens mass distribution. Examples include the point-mass lens (PML) model and singular isothermal sphere (SIS) model (Nakamura (1998); Takahashi and Nakamura (2003); Congdon and Keeton (2018)), where the lenses are assumed to be isolated from any astrophysical objects. However, past electromagnetic lensing observations have shown that astrophysical objects, typically galaxies, can have a significant impact on the lens morphology, making it difficult to treat the lens as an isolated object (e.g. Diego et al., 2019; Seo et al., 2021; Oguri and Takahashi, 2022). For example, when a gravitational wave is strongly lensed by a galaxy and micro- or millilensed by the small-scale structure within the galaxy, the millilens would experience additional effects, including gravitational shear due to the galaxy's potential (see Fig. 1) (Mishra et al., 2021; Seo et al., 2021; Oguri and Takahashi, 2022). This scenario could lead to a non-symmetric distribution of the millilens mass, as shown in Fig. 2. The widely used isolated spherically symmetric lens assumption is then physically unrealistic, at least in scenarios with significant shearing effects. In such a scenario, the current parameter estimation tools could obtain a significantly biased result or even miss the signal altogether (Yeung et al., 2021).
One potential way to improve our ability to recover these more complex lenses is to implement a collection of more intricate lens models in our parameter estimation framework, including extensions that account for galaxy shearing effects and fields of lenses, as well as several millilensing mass profiles. However, this approach would require a significant amount of effort to extend the parameter estimation framework to every plausible lensing scenario. Additionally, to implement this approach, we would need to perform parameter estimation on each signal for each lens model, making the process computationally expensive. Instead of relying on complex and computationally intensive lens models to account for physically realistic effects of astrophysical objects on millilensed gravitational-wave signals, we propose a different approach in this work.
Figure 1: An example configuration of millilenses within a galaxy producing multiple millisignals and influencing each other gravitationally (figure not to scale); (a) as a gravitational wave passes near a massive object such as a galaxy or a galaxy cluster, its path gets bent, which can result in multiple strongly-lensed GW signals (represented as curved black lines) reaching the observer; (b) additionally, if the signals encounter smaller compact objects, e.g., stars, dark matter subhalos, or massive compact halo objects, acting as millilenses, further splitting of the signals occurs, which results in multiple lensed BBH _images_ arising in the lens plane. The circular curves around each lens are the (tangential) critical curves. Throughout this work, we use the _thin lens approximation_ in which the source and lens objects are confined to two-dimensional planes.
In particular, we suggest using a general, model-independent description of millilensed signals that parameterizes "image parameters" instead of the lensing system itself. Every millilensing system produces an integer number of gravitational-wave milli-images, each with their own set of magnifications, time delays, and arrival times, regardless of the lens configuration (Takahashi & Nakamura, 2003; Dai & Venumadhav, 2017). By parameterizing the properties of these milli-images, we can target any millilensing configuration, covering signals from different types of lenses and configurations without imposing limits on the system setup or the number of lensed GW signals formed. This phenomenological approach also allows us to recycle the results from the analysis of multiple data sets, without performing separate analyses with different lens models independently. Furthermore, we demonstrate that the results obtained from the phenomenological analysis parameterizing the lensed GW signal can be mapped to specific lens models to select the most favorable lens mass profile.
## 2 Methods
### Lens models
In order to demonstrate the effectiveness of our phenomenological model for searching generic millilensing configurations, we need to test it by injecting lensed GW signals into LVK detector data. To simulate these signals, we must choose a specific lens model, and for this exercise, we will focus on well-understood models that can be easily tested.
Therefore, we consider the point mass lens (PML) and singular isothermal sphere (SIS) models as our millilenses (Nakamura, 1998; Takahashi & Nakamura, 2003; Congdon & Keeton, 2018). We also incorporate an external shearing effect when these models are embedded into a galaxy. This effect stretches the caustic curves and can lead to additional milli-images (Diego et al., 2019; Diego, 2019). While the PML model with a shearing effect is commonly used to approximate compact objects acting as gravitational lenses, the SIS model assumes a spherically symmetric mass distribution and is often used as a toy model to represent galaxies, star clusters, or dark matter subhalos. Note that more complex models, such as tidally stripped Navarro-Frenk-White lenses, may be required to accurately model dark matter subhalos, but our primary goal is to demonstrate a phenomenological search that can be applied to any lens model. We use the Lenstronomy package, a multi-purpose Python package for gravitational lensing, to perform all lens modeling (Birrer & Amara, 2018).
In addition to choosing a lens model, simulating the lensed GW waveform is also necessary. A lensed GW can be mathematically expressed as \(\tilde{h}_{\rm L}(f)=F(f)\tilde{h}(f)\), where \(F(f)\) is the amplification factor, which is a function of frequency \(f\) and specific lens parameters, and \(\tilde{h}(f)\) represents the unlensed GW. In the case of the PML and SIS models, the amplification factor depends on two lens parameters: the redshifted lens mass \(M_{Lz}\) and the relative position of the source with respect to the lens, denoted by \(y\). The addition of an external shearing effect, as described in the previous paragraph, would introduce three additional parameters: two components of the shearing effect, denoted as \(\gamma_{1,2}\), and a convergence value denoted as \(\kappa\).
In the case of an isolated millilens, using the thin lens approximation and solving the lens equation for the PML model gives rise to two solutions of lensed GW signals (see the left panel in Fig. 2). For relatively short time delays, the millilensed signals will overlap, leading to a single lensed GW signal arriving at the detector (see Fig. 3). However, assuming the lens is embedded within a galaxy and including the external shear from the galaxy, the number of the component millilensed GW signals can be different (Diego et al., 2019; Diego, 2019). Indeed, a generic lens potential may even include a field of small-scale lenses.
### Model-independent inference of millilensed GW signals
When considering millilensing, there are a multitude of potential lensing configurations, each with varying levels of complexity. This can include different types of millilenses, such as PMLs, SISs, or more complex lens structures, as well as a varying number of such lenses. Additionally, the effect of the strong lensing galaxy can further complicate the system. Due to this complexity, it is impractical to implement every gravitational lens model directly in parameter estimation. However, regardless of the specific millilens model, the observable signal will always consist of an integer number of GW millisignals, or "milli-images," each with their own properties such as time delay, magnification, and Morse phase. These signals overlap at the GW detector. In this work, we present a phenomenological search approach that directly targets the milli-images, allowing for an arbitrary configuration of the lensing system.
When considering a millilens embedded in a macrosystem such as a galaxy, it is important to take into account the effects of the
Figure 3: Illustration of the millilensing effect on the GW waveform. A GW signal passing near a millilens is split into two millisignals that overlap, resulting in a single GW signal at the detector with frequency-dependent beating patterns. One can discern the millilensing effect by locating the beating patterns in GWs. The thin lens approximation used confines the source, the lens and the observer to a 2-dimensional plane each. The deflection of the GW occurs instantaneously at the point where the GW crosses the lens plane.
Figure 2: Schematic representation of point-mass lens model and phenomenological approach in the lens plane: (a) in the point-mass lens model, the lens is represented as a point-like mass (purple) producing two GW millisignals from the background BBH source. This model is often used in microlensing analysis, where the size of the lens is small compared to the relative size of the system and the critical curve of the lens is circular; (b) in the phenomenological approach presented here, we assume multiple millilenses can influence each other gravitationally, producing an overall gravitational shearing effect, hence breaking the spherical symmetry of the millilenses. Additionally, unlike the point-mass lens model, we assume each millilens can produce an arbitrary integer number of millisignals.
macrosystem. In contrast to previous gravitational wave analyses that have focused primarily on isolated point mass lenses and singular isothermal sphere profiles described by physical parameters of the lensing system (such as lens mass \(M_{L,z}\) and source position \(y\)) (e.g. The LIGO Scientific Collaboration et al., 2021), we parameterize each individual millilensed GW image using a set of millilensing parameters: relative magnification \(\mu\), time delay \(t\), and Morse factor \(n\). This parameterization can account for non-symmetric effects due to the macrosystem without making prior assumptions about the system configuration and mass distribution. Each individual millilensed GW signal is treated independently of the other millisignals.
Before the individual millisignals reach the detector, they overlap due to relatively short time separations (order of milliseconds). The resultant GW signal can therefore be described as a sum of the millisignals with the waveform shape characterised by frequency-dependent beating patterns (see Fig. 3).
To allow for the most general case where any integer number of millisignals may be formed due to lensing, we introduce an additional lensing parameter, \(K\), which corresponds to the number of millisignals. Unlike traditional models that assume a fixed number of millisignals based on a specific lensing scenario, our phenomenological approach does not limit the number of signals. By relaxing this assumption, we provide a more flexible framework for characterizing the complex physical configuration of the lensing system. The mathematical formulation of our approach is described below.
### A phenomenological formulation
A gravitationally lensed waveform can be separated into an unlensed part (unlensed complex strain amplitude) multiplied by an _amplification factor_\(F(f,\ \mathbf{\theta_{L}})\) which contains lensing information described by lensing parameters \(\mathbf{\theta_{L}}\). A millilensed waveform \(\tilde{h}_{L}(f;\mathbf{\theta},\ \mathbf{\theta_{L}})\) can thus be expressed as
\[\tilde{h}_{L}(f;\mathbf{\theta},\ \mathbf{\theta_{L}})=F(f,\mathbf{\theta_{L}})\cdot \tilde{h}_{U}(f;\mathbf{\theta}), \tag{1}\]
where \(\tilde{h}_{U}(f;\mathbf{\theta})\) represents a frequency-domain GW waveform in the absence of millilensing with \(\mathbf{\theta}\) corresponding to unlensed BBH source parameters, \(F(f,\mathbf{\theta_{L}})\) is an amplification function dependent on parameters associated with millilensing \(\mathbf{\theta_{L}}=(\mathbf{\mu},\mathbf{t},\mathbf{n})\) and defined as:
\[F(f,\mathbf{\mu},\mathbf{t},\mathbf{n})=\sum_{j=1}^{K_{\rm max}}\left|\mu_{j}\right|^{1/2} \exp\left[2\pi ift_{j}-i\pi n_{j}\right], \tag{2}\]
where the expression is summed over the total number of millisignals up to a chosen maximum number \(K_{\rm max}\) and \((\mu_{j},t_{j},n_{j})\) correspond to the lensing parameters of the \(j^{\rm th}\) millisignal. Such a summation corresponds to the _geometrical optics approximation_, which applies when the GW wavelength \(\lambda_{\rm GW}\) is shorter than the Schwarzschild radius \(R_{\rm S}\) corresponding to the lens mass: \(\lambda_{\rm GW}<R_{\rm S}\) (Deguchi and Watson, 1986; Takahashi and Nakamura, 2003). For ground-based GW detectors, the geometrical optics approximation can be applied down to masses \(M_{L,z}\sim\mathcal{O}(10^{2})\ M_{\odot}\) (see Appendix A for an explanation of the validity of geometrical optics approximation). Throughout this work, we use geometrical units (\(c=G=1\)).
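For illustration, the following minimal Python sketch (our own, not part of any released analysis code) evaluates the amplification factor of Eq. (2) on a frequency grid and applies it to a stand-in unlensed strain; all numerical values are placeholders chosen only for demonstration.

```python
import numpy as np

def amplification_factor(freqs, magnifications, time_delays, morse_factors):
    """Phenomenological amplification factor of Eq. (2):
    F(f) = sum_j |mu_j|^{1/2} exp(2*pi*i*f*t_j - i*pi*n_j)."""
    F = np.zeros_like(freqs, dtype=complex)
    for mu_j, t_j, n_j in zip(magnifications, time_delays, morse_factors):
        F += np.sqrt(np.abs(mu_j)) * np.exp(2j * np.pi * freqs * t_j - 1j * np.pi * n_j)
    return F

# Toy example: two milli-images with a 5 ms relative delay (illustrative values only)
freqs = np.linspace(20.0, 1024.0, 4096)               # Hz
h_unlensed = np.exp(-1j * 2 * np.pi * freqs * 0.01)   # stand-in for h_U(f)
F = amplification_factor(freqs, magnifications=[1.8, 0.6],
                         time_delays=[0.0, 0.005], morse_factors=[0.0, 0.5])
h_lensed = F * h_unlensed                              # Eq. (1): h_L(f) = F(f) h_U(f)
```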
The notation introduced in Eq. (2) uses the conventional notation of lensing magnification \(\mu\). However, to overcome degeneracy between consecutive millilensed signals, we use the effective luminosity distance notation which relates to the true luminosity distance from the source \(d_{L}\) and the magnification of \(j^{\rm th}\) lensed GW signal by:
\[d^{\rm eff}_{j}=\frac{d_{L}}{\sqrt{\mu_{j}}}. \tag{3}\]
This effective luminosity distance notation allows for a better separation of individual millilensed signals and reduces degeneracy between the binary orbital and millilensing parameters.
The individual millilensed GW signals must be time-ordered to avoid degeneracy between their arrival times. To achieve this, we define the time delay between each consecutive signal such that the arrival time of each signal increases with \(j\), that is, \(t_{j+1}>t_{j}\). We can describe the millilensed GW signals with respect to the first signal, which arrives earliest, by choosing \(t_{1}=0\). Using the effective luminosity distance notation introduced in Eq. (3), we can express the amplification factor
\[\begin{split}& F(f,d^{\rm eff}_{j},t_{j},n_{j})\\ &=d_{L}\left(\frac{1}{d^{\rm eff}_{1}}\exp[-i\pi n_{1}]+\frac{1}{d^{\rm eff}_{2}}\exp\left[2\pi ift_{2}-i\pi n_{2}\right]+\cdots\right)\end{split} \tag{4}\]
where the first signal (arriving the earliest) is described by its Morse factor \(n_{1}\) and the consecutive millilensed signals (\(j\geq 2\)) are described by their effective luminosity distance \(d^{\rm eff}_{j}\) (magnification) and time delay \(t_{j}\) w.r.t the arrival time and the luminosity distance of the first signal.
Consequently, the millilensed GW signal can be expressed as
\[\begin{split}\tilde{h}_{L}(f)&=\sum_{j=1}^{K_{\rm max}}\frac{d_{L}}{d^{\rm eff}_{j}}\exp\left[2\pi ift_{j}-i\pi n_{j}\right]\tilde{h}_{U}(f;\mathbf{\theta},d_{L},t_{\rm col})\\ &=\sum_{j=1}^{K_{\rm max}}\exp\left[-i\pi n_{j}\right]\tilde{h}^{\rm eff}_{U}(f;\mathbf{\theta},d_{L},t_{\rm col},d^{\rm eff}_{j},t_{j})\end{split} \tag{5}\]
where in the second line we have defined an effective GW signal dependent on the source parameters, as well as the millilensing parameters (\(d^{\rm eff}_{j},t_{j}\)) of \(j^{\rm th}\) component signal.
The resultant millilensed GW signal, which is a sum of \(K\) individual millilensed GW signals, can hence be expressed in terms of the amplification function as
\[\tilde{h}_{L}(f;\mathbf{\theta},\mathbf{\theta}^{L}_{K_{\rm max}},K_{\rm max})=F(f;\bm {\theta}^{L}_{K_{\rm max}},K_{\rm max})\cdot\tilde{h}(f;\mathbf{\theta}) \tag{6}\]
where \(\mathbf{\theta}^{L}_{K_{\rm max}}\) represents the lensing parameters of the sum of all \(K_{\rm max}\) millilensed signals.
### Problem statement in Bayesian framework
To perform parameter estimation and model selection on real GW data, we adopt a Bayesian framework. Our goal is to obtain a result with a specified number of millilensed GW signals, \(K\), while considering varying dimensionality of the models due to the changing number of signals. To address this problem, we use joint Bayesian model selection, where the posterior distribution is defined on a union of subspaces with different dimensions, each corresponding to a model with a fixed number of signals. Assuming a countable set of \(K_{\rm max}\) models, indexed by the number of millisignals \(k\), we define the total union of parameters as the BBH parameters \(\mathbf{\theta}\) and lensing parameters \(\mathbf{\theta}^{L}_{K_{\rm max}}\), with \(K_{\rm max}\) being the maximum number of allowed signals specified by the user in each parameter estimation run. Our aim is to estimate the unknowns \(k\) and \(\mathbf{\theta}^{L}_{k}\), where \(k\in 1,...,K_{\rm max}\), given the data set \(\mathbf{d}\). We use the nested sampling algorithm (Skilling, 2006) to draw these unknowns from corresponding prior distributions.
We express the joint distribution of all variables as
\[p(k,\mathbf{\theta},\mathbf{\theta}_{k}^{L},\mathbf{d})=p(\mathbf{d}|\mathbf{\theta},\mathbf{\theta}_{k}^{L},k)\,p(\mathbf{\theta})\,p(\mathbf{\theta}_{k}^{L}|k)\,p(k) \tag{7}\]
where \(p(\mathbf{d}|\mathbf{\theta},\mathbf{\theta}_{k}^{L},k)\) is the likelihood, \(p(\mathbf{\theta}_{k}^{L}|k)\) is the prior distribution of lensing parameters, \(p(\mathbf{\theta})\) is the prior distribution of the BBH source parameters and \(p(k)\) is the prior distribution of the number of millilensed signals.
We can express the marginal likelihood as follows (for a detailed derivation, see Appendix B):
\[p(\mathbf{d})=\sum_{k=1}^{K_{\text{max}}}\int p\left(\mathbf{d}\mid\mathbf{\theta},\mathbf{ \theta}_{k}^{L},k\right)p(\mathbf{\theta})p\left(\mathbf{\theta}_{k}^{L}\mid k\right)p (k)d\mathbf{\theta}d\mathbf{\theta}_{k}^{L}. \tag{8}\]
We can obtain the joint posterior distribution \(p(\mathbf{\theta}_{k}^{L},k|\mathbf{d})\) and the posterior distribution for the number of images can be expressed as
\[p(k\mid\mathbf{d})=\frac{\int p\left(\mathbf{d}\mid\mathbf{\theta},\mathbf{\theta}_{k}^{L},k \right)p(\mathbf{\theta})p(\mathbf{\theta}_{k}^{L}\mid k)p(k)d\mathbf{\theta}d\mathbf{\theta} _{k}^{L}}{\sum_{k=1}^{K_{\text{max}}}\int p\left(\mathbf{d}\mid\mathbf{\theta},\mathbf{ \theta}_{k}^{L},k\right)p(\mathbf{\theta})p(\mathbf{\theta}_{k}^{L}\mid k)p(k)d\mathbf{ \theta}d\mathbf{\theta}_{k}^{L}}. \tag{9}\]
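In practice, Eq. (9) amounts to weighting the evidence obtained for each fixed-\(k\) sub-model by the prior on \(k\). The snippet below is a minimal sketch of that bookkeeping, assuming hypothetical log-evidence values returned by separate nested-sampling runs (or by marginalizing a transdimensional run); the numbers are illustrative only.

```python
import numpy as np

def posterior_number_of_images(log_evidences, prior_k=None):
    """p(k|d) proportional to Z_k p(k), Eq. (9), from log-evidences for k = 1..K_max."""
    log_Z = np.asarray(log_evidences, dtype=float)
    if prior_k is None:                      # discrete uniform prior on k
        prior_k = np.ones_like(log_Z) / len(log_Z)
    log_post = log_Z + np.log(prior_k)
    log_post -= log_post.max()               # avoid overflow before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical log-evidences for K_max = 6 sub-models (made-up numbers)
p_k = posterior_number_of_images([-1205.3, -1198.7, -1196.2, -1192.0, -1193.1, -1194.5])
print(dict(enumerate(p_k, start=1)))         # peaks at k = 4 in this toy example
```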
## 3 Model-independent simulations
### Simulations
We begin by simulating a lensed GW waveform with varying numbers of millisignals using the lenstronomy package to model the physical setup of the system. The simulated signal is injected with BBH parameters from GW190408 and added to the detector Gaussian noise of the network of three ground-based detectors (LIGO Livingston, LIGO Hanford, and Virgo) using the bilby Bayesian inference library. We perform parameter estimation of both the source and millielsing parameters using the phenomenological waveform approximant IMRPhenomPv2 and the nested sampling technique with the dynesty sampler. The priors used in the simulation are listed in Table 2, with the time delay and magnification specified with continuous uniform priors and the Morse factor and number of images having discrete uniform priors. We simulate three injections with different signal-to-noise ratios (SNRs): 20, 30, and 50.
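For context, injections at a fixed signal-to-noise ratio are controlled through the standard optimal-SNR integral, \(\rho^{2}=4\int|\tilde{h}(f)|^{2}/S_{n}(f)\,df\). The snippet below is a simplified, numpy-only stand-in for the corresponding bilby functionality, assuming generic frequency-domain signal and PSD arrays rather than the actual detector data products.

```python
import numpy as np

def optimal_snr(h_f, psd, delta_f):
    """Single-detector optimal SNR: rho^2 = 4 * df * sum |h(f)|^2 / S_n(f)."""
    return np.sqrt(4.0 * delta_f * np.sum(np.abs(h_f) ** 2 / psd))

def rescale_to_network_snr(h_f_per_ifo, psd_per_ifo, delta_f, target_snr):
    """Scale an injected signal so the quadrature-summed network SNR reaches a
    target value (e.g. 20, 30 or 50), mimicking how fixed-SNR injections are set up."""
    rho2 = sum(optimal_snr(h, s, delta_f) ** 2
               for h, s in zip(h_f_per_ifo, psd_per_ifo))
    return [h * target_snr / np.sqrt(rho2) for h in h_f_per_ifo]
```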
### Lens configuration
We start with a GW signal strongly lensed by a galaxy, resulting in two GW signals deflected at different angles; each of the two strongly lensed signals then passes near a millilens located within the same plane as the strong lens, and the millilens experiences a gravitational shearing effect due to the presence of the strong-lens galaxy. The modelling is divided into two steps: first, solving the lens equation for the strong lens to obtain the shear components (\(\gamma_{1},\gamma_{2}\)), convergence components (\(\kappa_{1},\kappa_{2}\)) and magnifications (\(\mu_{1}^{S},\mu_{2}^{S}\)) of the two strongly lensed GW signals; secondly, using the shear values from the strong lens as a macro-environment input into the millilens models to obtain the magnifications, time delays and Morse factors for the resultant millisignals, which are then recovered in the injection-run parameter estimation. In this example, we assume that one of the strongly lensed signals is further split into four millilensed signals, but the choice of the number of millisignals is arbitrary and can take different integer values depending on the system setup, convergence, and shear values.
The strong lens is modelled as a galaxy with a singular isothermal ellipsoid (SIE) mass distribution located at redshift \(z_{\text{lens}}=0.3\), with eccentricity components \(e_{1}=0.05\) and \(e_{2}=0\) and an Einstein radius of \(\theta_{E}=1\). The millilens is modelled as an SIS mass distribution, and the source is a BBH emitting GWs at typical redshift \(z_{\text{source}}=1.5\) (Wierda et al., 2021). A flat \(\Lambda\)-CDM cosmological model is assumed throughout the simulation with \(H_{0}=70~\mathrm{km\,s^{-1}\,Mpc^{-1}}\) and \(\Omega_{\text{M}}=0.3\). Results of the injection run are presented in the following section.
### Example: 4 millisignals
The case reported here illustrates a GW signal lensed into 4 millisignals injected into detector noise. Fig. 4 shows the posterior distribution for parameter \(K\), corresponding to the number of millielsed GW signals recovered. All three runs with three different SNR values lead to consistent results, but for the clarity of the figure, only results from the injection run with SNR 20 are shown. As can be seen from
| BBH source parameter | Value |
| --- | --- |
| Mass 1, \(m_{1}\) | \(31.6\,M_{\odot}\) |
| Mass 2, \(m_{2}\) | \(23.7\,M_{\odot}\) |
| Luminosity distance, \(d_{L}\) | 1598 Mpc |
| Dimensionless spin 1, \(a_{1}\) | 0.35 |
| Dimensionless spin 2, \(a_{2}\) | 0.36 |
| Tilt angle 1, \(\theta_{1}\) | 1.7 rad |
| Tilt angle 2, \(\theta_{2}\) | 1.6 rad |
| Right ascension, RA | 6.68 rad |
| Declination, \(\delta\) | 0.92 rad |
| Polarization, \(\psi\) | 3.16 rad |
| Inclination, \(\theta_{\rm m}\) | 0.73 rad |
| Azimuthal angle of \(\widetilde{L}\), \(\theta_{\rm j}\) | 2.9 rad |
| Azimuthal angle difference, \(\phi_{12}\) | 3.1 rad |
| Phase, \(\phi\) | 3.134 rad |

| Millilensing parameter | Value |
| --- | --- |
| Effective luminosity distance, \(d_{1}^{\rm eff}\) | 1598 Mpc |
| Effective luminosity distance, \(d_{2}^{\rm eff}\) | 1577 Mpc |
| Effective luminosity distance, \(d_{3}^{\rm eff}\) | 2570 Mpc |
| Effective luminosity distance, \(d_{4}^{\rm eff}\) | 2758 Mpc |
| Time delay, \(t_{2}\) | 0.0066 s |
| Time delay, \(t_{3}\) | 0.0467 s |
| Time delay, \(t_{4}\) | 0.0512 s |
| Morse phase, \(n_{1}\) | 0 |
| Morse phase, \(n_{2}\) | 0 |
| Morse phase, \(n_{3}\) | 0.5 |
| Morse phase, \(n_{4}\) | 0.5 |
| Number of millisignals, \(K\) | 4 |

Table 1: Injected parameters for the SNR \(=20\) parameter estimation run. BBH source parameter values correspond to the median values of GW190408 (Abbott et al., 2021); millilensing parameters are obtained from simulating a lensing system with the lenstronomy package (Birrer and Amara, 2018; Birrer et al., 2021).
| Parameter | Prior distribution |
| --- | --- |
| Luminosity distance, \(d_{L}\) | Uniform \(\mathcal{U}\)(50 Mpc, 20000 Mpc) |
| Effective luminosity distance, \(d^{\rm eff}\) | Uniform \(\mathcal{U}\)(50 Mpc, 20000 Mpc) |
| Time delay, \(dt\) | Uniform \(\mathcal{U}(10^{-3},\,10^{-3})\) |
| Morse factor, \(n\) | Discrete uniform \(\mathcal{U}\)(0, 0.5, 1) |
| Number of millisignals, \(K\) | Discrete uniform \(\mathcal{U}\)(1, 2, 3, 4, 5, 6) |

Table 2: Prior distributions used for injection runs.
the plot, the number of millisignals has been recovered in agreement with the injected value of \(K=4\).
Fig. 5a presents a corner plot of the effective luminosity distance parameters obtained for the injection run with SNR 20. It can be seen from the figure that the injected values (orange lines) are well recovered in the posterior distributions for each parameter. Consistent results for higher SNR values were obtained. Similarly, the posterior distributions of time delays of the consecutive millilensed component signals are presented in Fig. 5b. The injected values are recovered within the \(1\sigma\) region of the posterior distributions for \(t_{2}\) and \(t_{4}\), and within \(2\sigma\) for \(t_{3}\). As tested with higher SNR runs, the accuracy of the recovered parameters increases with SNR. The recovered BBH source parameters are shown in Fig. 6.
## 4 Lens Mapping
The methodology developed so far aims to provide a physically realistic picture of the lensing system, accounting for the fact that the lens located within a galaxy can be gravitationally affected by it. Having performed parameter estimation with the phenomenological approach, the results obtained can be mapped to specific lens models. We present an example mapping to an SIS model below. We choose the SIS model for its simplicity, which allows for analytical mapping from the phenomenological to SIS model parameters and vice versa. We use it as a testing example to demonstrate the accuracy of the method, noting that the method is not sensitive to physical assumptions and can be applied to more complex models. However, the drawback associated with the use of the SIS model is that it is a simplified model, which can not be applied to physically generic mass distributions if we want to take into account other physical effects, such as galactic shear.
### Singular Isothermal Sphere
In the geometrical optics limit, the time delay and magnification of two millilensed signals can be related to SIS lens parameters (\(y\), \(M_{Lz}\)) by
\[t_{d}=8M_{Lz}y\] \[\mu_{\pm}=\pm 1+1/y \tag{10}\]
where \(t_{d}\) is the time delay between two signals and \(\mu_{\pm}\) are the magnifications of the two lensed signals which can be related to effective luminosity distances following Eq. (3), all expressed in \(c=G=1\) units (Takahashi & Nakamura (2003)).
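As a numerical illustration (with placeholder lens values, not those used in our injections), the forward mapping of Eq. (10), combined with the effective-distance definition of Eq. (3), can be sketched as follows; the only unit conversion is restoring SI seconds for the redshifted lens mass.

```python
import numpy as np

G_OVER_C3 = 4.925490947e-6   # G * M_sun / c^3 in seconds per solar mass

def sis_observables(M_lz_solar, y, d_L):
    """Map SIS lens parameters (redshifted mass, source position) to the
    phenomenological image parameters via Eq. (10) and Eq. (3)."""
    t_d = 8.0 * M_lz_solar * G_OVER_C3 * y              # time delay between images [s]
    mu_plus, mu_minus = 1.0 + 1.0 / y, -1.0 + 1.0 / y   # magnifications (valid for y < 1)
    d_eff = (d_L / np.sqrt(mu_plus), d_L / np.sqrt(mu_minus))  # Eq. (3), true d_L assumed
    return t_d, (mu_plus, mu_minus), d_eff

# e.g. a 10^4 M_sun millilens with y = 0.3 and d_L = 1598 Mpc (illustrative values)
print(sis_observables(1e4, 0.3, 1598.0))
```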
The first step of the mapping is to extract the necessary data from the multi-signal millilensing analysis. If we consider \(y\leq 1\), the SIS model predicts that double-lensed GW signals are formed. Restricting ourselves to this case, we perform a millilensing injection run injecting two millilensed GW signals into detector noise, but
Figure 4: Number of millilensed signals, \(K\), recovered from the injection run with uniform prior. The posterior distribution (shown in purple) is in agreement with the injected value \(K=4\) (orange line).
Figure 5: Results recovered from parameter estimation of the injected millilensed GW with an SNR 20 and GW190408 BBH parameters: (a) effective luminosity distances, (b) time delays of component millilensed signals w.r.t. the earliest arriving signal (\(t_{1}=0\)). The millilens model is an SIS embedded into a strong lensing galaxy, which we model with the SIE lens model. The colours of the two-dimensional corner plots indicate \(1\sigma\), \(2\sigma\) and \(3\sigma\) credible regions. The orange lines show injected values for each parameter and the dashed lines in individual histogram plots correspond to \(1\sigma\) credible intervals. The effective luminosity distances and time delays are recovered with good accuracy and peak at the appropriate injected values.
allowing their number \(K\) to take on values up to and including 6. From the analysis result, we select the posteriors of \((d_{2}^{\rm eff},t_{2})\) corresponding to the \(K=2\) case in order to map it to the predicted two-signal SIS model. Then, we construct the likelihood marginalized over all BBH parameters \(\theta\) except for \((d_{2}^{\rm eff},t_{2})\):
\[p(\mathbf{d}|d_{2}^{\rm eff},t_{2})=\int p(\mathbf{d}|d_{2}^{\rm eff},t_{2},\theta)p( \theta)d\theta. \tag{11}\]
Knowing the relation between two sets of parameters (Eq. (10)), we assume that the likelihood can be expressed as
\[p(\mathbf{d}|d_{2}^{\rm eff},t_{2})=p(\mathbf{d}|d_{2}^{\rm eff}(y),t_{2}(M_{Lz},y))= p(\mathbf{d}|y,M_{Lz}). \tag{12}\]
Then, using the likelihood, as well as assuming uniform prior distributions for (\(y\), \(M_{Lz}\)), we perform a nested sampling over the two SIS parameters (\(y\), \(M_{Lz}\)) in order to obtain corresponding posterior distributions and evidence for the SIS model. For the purpose of implementing the mapped likelihood into the Bayesian inference library bilby, we used the likelihood ratio to perform the nested sampling with the results presented in the following section.
Following Eq. (10), it is also possible to write down the inverse relations
\[\begin{split}& y=\frac{\mu_{\rm rel}-1}{\mu_{\rm rel}+1}\\ & M_{Lz}=\frac{t_{d}}{8}\frac{\mu_{\rm rel}+1}{\mu_{\rm rel}-1} \end{split} \tag{13}\]
where \(\mu_{\rm rel}\) is the relative magnification \(\mu_{+}/\mu_{-}\) between two lensed GW signals. For validation of the results obtained from nested sampling, we construct posterior distributions for the SIS parameters (\(y\), \(M_{Lz}\)) by taking samples from the posterior distributions \(p(t_{d}|\mathbf{d})\), \(p(d_{2}^{\rm eff}|\mathbf{d})\) and using the analytical relations Eq. (13).
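A minimal sketch of this analytical validation step is given below. It assumes the earlier-arriving image is the brighter '+' image (as expected for an SIS) and adopts the convention that the first image is described by \(d_{L}\), so that \(\mu_{\rm rel}=(d_{2}^{\rm eff}/d_{L})^{2}\); the posterior samples used here are synthetic placeholders, not our actual posteriors.

```python
import numpy as np

G_OVER_C3 = 4.925490947e-6   # G * M_sun / c^3 in seconds per solar mass

def sis_from_image_parameters(t_d, d_eff_2, d_L):
    """Invert Eq. (13): phenomenological image parameters -> SIS (y, M_Lz)."""
    mu_rel = (d_eff_2 / d_L) ** 2                       # mu_+ / mu_- under the assumptions above
    y = (mu_rel - 1.0) / (mu_rel + 1.0)
    M_lz = (t_d / 8.0) * (mu_rel + 1.0) / (mu_rel - 1.0) / G_OVER_C3  # in M_sun
    return y, M_lz

# Vectorized over hypothetical samples drawn from p(t_2|d) and p(d_2^eff|d)
t_samples = np.random.uniform(0.05, 0.07, 5000)        # s, illustrative only
d2_samples = np.random.normal(2600.0, 150.0, 5000)     # Mpc, illustrative only
y_post, M_post = sis_from_image_parameters(t_samples, d2_samples, d_L=1598.0)
```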
### Results
Figure 7 represents the posterior distributions of SIS parameters \(M_{Lz}\) and \(y\), obtained from mapping phenomenological results. As described above, we used two methods to obtain the posterior distributions of the SIS parameters. Firstly, we performed a nested sampling algorithm (plots shown in purple in Fig. 7). Secondly, to validate the results, we also used analytical relations from Eq. (13) to construct the posteriors for \(M_{Lz}\) and \(y\) (plots in yellow in Fig. 7). The prior ranges of \((M_{Lz},y)\) used in the two methods were different, hence the shape and position of the peak of the two distributions are not exactly the same (Fig. 7). Nevertheless, the distributions obtained with the two methods lead to consistent results.
## 5 Discussion and Conclusion
### Discussion
In this work, we have presented a new phenomenological approach to GW millilensing analysis, applicable to astrophysical lenses with masses in the range \(10^{3}-10^{6}\,M_{\odot}\). The novel approach, unlike the currently most widely used models, parameterizes the lensed GW signals, which not only makes it possible to include the physical effects due to the presence of other massive objects in the vicinity of the gravitational lens, but also provides an efficient lens model selection tool. Having tested the feasibility of the approach, we applied the method to simulated injections of millilensed GW signals into detector noise. The parameter estimation of the lensed BBH parameters recovered results that are in agreement with injected values.
Figure 6: Black hole source parameters posterior distributions recovered from parameter estimation of the injected millilensed GW: chirp mass \(\mathcal{M}\), dimensionless spin parameters \(a_{1}\), \(a_{2}\), tilt angles \(\theta_{1}\), \(\theta_{2}\). The orange lines represent the injected values for each parameter. The posterior distributions recovered are consistent with injected values.
Figure 7: Posterior distribution of the redshifted lens mass \(M_{Lz}\) (top) and source position parameter \(y\) (bottom), obtained from mapping phenomenological results to an SIS lens model. The mapping was performed: i) analytically (histogram in yellow), ii) by nested sampling algorithm (histogram in purple). The results confirm that we are able to map the results from observables to lensing parameters with nested sampling.
The question raised by this study aims to provide a physically realistic description of an astrophysical lens within a galaxy. However, the generalizability of the results is subject to the limitation of assuming a single-lens system. More broadly, the study should be repeated addressing systems with multiple lenses located in the vicinity of each other (within the lens plane) or located along the line of sight. This would be a fruitful area for further work, providing a more generic description of gravitationally millilensed systems in different physical configurations.
Moreover, further research should be undertaken to investigate the distinction of millilensing effects on the gravitational waveform from other physical processes, such as spin precession or non-GR effects, which may also lead to frequency-dependent beating patterns in the waveform, mimicking gravitational lensing. It would be of crucial importance to distinguish between those effects once a GW signal with a potentially lensed waveform is detected.
The methodology can be applied to a range of problems within theoretical physics, astrophysics and cosmology. A potentially interesting example application is the problem of dark matter subhalos. Despite many observations supporting the presence of dark matter, the nature of dark matter remains one of the key open questions within the field (Zackrisson and Riehm, 2010; Ellis, 2010; Bertone and Hooper, 2016; Bertone and Tait, 2018). To date, a number of dark matter models have been proposed and some of them predict dark matter halos to be formed hierarchically from smaller subhalos which can further be formed from even smaller subhalos (Moore et al., 1999; Metcalf and Madau, 2001; Liao et al., 2018; Dai et al., 2018). However, current dark matter models show discrepancies at the smallest scales, often referred to as the substructure crisis (Metcalf and Madau, 2001; Somerville, 2002; Moore et al., 2006). In particular, some models predict dark matter subhalos cannot be present below a certain scale (Kravtsov, 2010; Oguri and Takahashi, 2020). Therefore, in order to test the most feasible dark matter models, it is necessary to study dark matter subhalos down to the smallest scales. Moreover, if dark matter subhalos with masses of order \(10^{3}-10^{6}\)\(M_{\odot}\) exist within galaxies as predicted by some models, they could potentially act as millilenses, directly influencing distant GW signals and producing lensing distortions. The probe of dark matter subhalos with gravitational millilensing has been proposed within EM studies of lensed quasars (Wambsganss and Paczynski, 1992; Mao and Schneider, 1997; Metcalf and Madau, 2001), however, due to telescope resolution and propagation effects, EM observations can be subject to uncertainties. GW millilensing is subject to different systematics and could potentially be used as a direct measurement of the unknown dark matter subhalo mass function. Therefore, the dark matter substructures problem is a promising direction to study with the millilensing approach developed, which could become an alternative probe of the nature of dark matter at small scales. However, more investigation is needed to study the detailed capabilities of GW millilensing as a probe of dark matter subhalos.
### Conclusions
The present study was designed to develop a phenomenological approach to gravitational millilensing studies which accounts for effects not included in currently used lens models. The simulations confirmed the feasibility of the method and the results support the idea that it is possible to study millilens configurations embedded in macro systems with mutual gravitational interactions. These findings suggest that in general, we can analyse non-isolated and non-symmetric gravitational lenses. Furthermore, results obtained from millilensing analysis can be mapped to existing lens models, providing a useful tool to distinguish between the most feasible models. These results add to the rapidly expanding field of gravitational-wave lensing and will prove useful in developing analysis tools for observational GW data with lensed GW detections predicted to take place in the coming years. The major limitation of this study is the geometrical optics approximation which limits the millilens mass range considered and the target lens population. Notwithstanding the relatively limited lens sample, this work offers valuable insights into the studies of gravitational millilensing analysis and can be further developed into population studies of potential millilens candidates. Non-observation of millilensing of predicted lenses at typical redshifts could also shed light on lens populations. The approach can also be expanded into generic millilensing studies of multiply lensed GW signals. Moreover, the millilensing framework could be applied to existing problems in astrophysics and cosmology, such as studies of the dark matter subhalos and primordial black holes with masses expected to lie within the corresponding millilensing mass range. More work will be needed, however, to study the detailed science case.
## Acknowledgements
The work is partially supported by grants from the Research Grants Council of the Hong Kong (CUHK 14306218), The Research Foundation - Flanders (G086722N) and KU Leuven (STG/21/061). The analysed data and the corresponding power spectral densities are publicly available at the online Gravitational-Wave Open Science Center (Abbott et al., 2021). The authors are grateful for computational resources provided by the CIT cluster of the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation. This manuscript has LIGO DCC number P2200365.
## Data Availability
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan. |
2310.19872 | Investigating APOKASC Red Giant Stars with Abnormal Carbon to Nitrogen
Ratios | The success of galactic archaeology and the reconstruction of the formation
history of our galaxy critically relies on precise ages for large populations
of stars. For evolved stars in the red clump and red giant branch, the carbon
to nitrogen ratio ([C/N]) has recently been identified as a powerful diagnostic
of mass and age that can be applied to stellar samples from spectroscopic
surveys such as SDSS/APOGEE. Here, we show that at least 10\% of red clump
stars and red giant branch stars deviate from the standard
relationship between [C/N] and mass. We use the APOGEE-_Kepler_ (APOKASC)
overlap sample to show that binary interactions are responsible for the
majority of these outliers and that stars with indicators of current or
previous binarity should be excluded from galactic archaeology analyses that
rely on [C/N] abundances to infer stellar masses. We also show that the DR14
APOGEE analysis overestimates the surface gravities for even moderately
rotating giants (vsini$>2$ km/s). | Erica Bufanda, Jamie Tayar, Daniel Huber, Sten Hasselquist, Richard Lane | 2023-10-30T18:00:01Z | http://arxiv.org/abs/2310.19872v1 | # Investigating APOKASC Red Giant Stars with Abnormal Carbon to Nitrogen Ratios
###### Abstract
The success of galactic archaeology and the reconstruction of the formation history of our galaxy critically relies on precise ages for large populations of stars. For evolved stars in the red clump and red giant branch, the carbon to nitrogen ratio ([C/N]) has recently been identified as a powerful diagnostic of mass and age that can be applied to stellar samples from spectroscopic surveys such as SDSS/APOGEE. Here, we show that at least 10% of red clump stars and red giant branch stars deviate from the standard relationship between [C/N] and mass. We use the APOGEE-_Kepler_ (APOKASC) overlap sample to show that binary interactions are responsible for the majority of these outliers and that stars with indicators of current or previous binarity should be excluded from galactic archaeology analyses that rely on [C/N] abundances to infer stellar masses. We also show that the DR14 APOGEE analysis overestimates the surface gravities for even moderately rotating giants (vsini\(>2\) km/s).
Erica Bufanda, Jamie Tayar, Daniel Huber, Sten Hasselquist, Richard R. Lane
## 1 Introduction
Galactic archaeology aims to reconstruct the formation and evolution of the Milky Way and other galaxies. Critical ingredients to achieve this are ages, chemistry, and kinematics of stars to trace them back to their birth locations (Hogg et al., 2016; Freeman & Bland-Hawthorn, 2002). Evolved red giant stars are prime candidates for constructing age maps across the galaxy because they are intrinsically bright. Thus knowledge of their masses and metallicities can be used to infer ages from theoretical evolutionary tracks. While metallicities can be measured spectroscopically, masses are more challenging to obtain. The most precise method of obtaining stellar masses of single field stars is asteroseismology, the study of stellar oscillations (Kjeldsen & Bedding, 1995; Miglio et al., 2013). However, the detection of oscillations requires high-precision and high-cadence photometry for each individual star, and thus is impractical for very large populations (Pinsonneault et al., 2018).
Alternatively, spectroscopic surveys such as LAMOST, APOGEE and GALAH (Zhao et al., 2012; Holtzman et al., 2015; De Silva et al., 2015) observe many more stars than we have asteroseismic detections. With this in mind, a promising alternative to measuring masses of red giant stars directly has been the use of mass-dependent mixing diagnostics measured from stellar spectra.
As stars ascend the red giant branch, the growing surface convection zone dredges up material that has undergone nuclear processing via the CNO cycle. Since both the maximum depth of the surface convection zone and the rate of CNO burning are temperature dependent, it is expected that the ratio of the amount of carbon to the amount of nitrogen on the surface of red giants should correlate with stellar mass (Iben, 1964). Masseron & Gilmore (2015) showed that the observed carbon-to-nitrogen ratio [C/N] was indeed mass dependent, and Martig et al. (2016) and Ness et al. (2016) have used these empirical relationships, which were calibrated using asteroseismic masses, to estimate masses and ages for thousands of giants across the galaxy.
A limitation of these relationships is that a substantial fraction of red giants deviate from the expected relationship between [C/N] and mass. Non-canonical mixing in the stellar interior may affect [C/N] in evolved stars. For example, lower [C/N] than expected given a star's mass may be explained by extra mixing, which is common in low-metallicity stars above the red giant branch bump (e.g. Gratton et al., 2000; Shetrone et al., 2019; Masseron et al., 2017). However, stars with higher
[C/N] than expected are harder to explain with stellar interior processes. They could theoretically be formed through mass accretion from an unprocessed companion (e.g., a large planet, or a star that has not yet undergone the first dredge-up). Alternatively, it has been suggested that the high nitrogen abundances observed in some red clump stars (Masseron et al., 2017) could be connected to the helium flash, although such a mechanism is challenging to understand theoretically.
Since the [C/N]-mass relationship is used as a tool to systematically calculate the masses for hundreds of thousands of evolved giant stars (e.g Ness et al., 2016; Ho et al., 2017; Mackereth et al., 2019), it is critical to physically understand the source and prevalence of these outliers. In particular, creating a diagnostic which indicates whether a [C/N] measurement will accurately predict mass is key to applying this method on a very large scale. In this paper we analyze evolved stars that do not follow the typical [C/N]-mass relation using _Kepler_, APOGEE, and _Gaia_ data. Our goal is to provide recommendations on the constraints of using this relation to measure mass and age for large stellar populations.
## 2 Observations
### APOGEE Data
Our main data set is the APOGEE-_Kepler_-2 catalog (Pinsonneault et al., 2018, APOKASC-2), a large stellar catalog of over 6000 giant stars with stellar properties and evolutionary states derived from APOGEE spectroscopic parameters (Elsworth et al., 2017; Holtzman et al., 2018) and asteroseismic data. The APOKASC-2 sample uses spectroscopic parameters and uncertainties taken from the fourteenth data release (hereafter DR14, Abolfathi et al. (2018)) of the Sloan Digital Sky Survey (SDSS) (Eisenstein et al., 2011) from the Apache Point Observatory Galactic Experiment (APOGEE) (Majewski et al., 2017), which were obtained during SDSS-IV (Blanton et al., 2017) operating on the Sloan 2.5 meter telescope (Gunn et al., 2006). APOGEE has acquired over half a million high-resolution (R \(\sim\) 22,500) infrared spectra (Wilson et al., 2012, 2019).
Updated spectroscopic parameters for these stars are now available from the more recent Data Release 17 (DR17) (Abdurro'uf et al., 2022), which includes improvements to the spectroscopic pipeline (Jonsson et al., 2020) including updated line-lists (Smith et al., 2021), improved atmospheric models, and so forth. For consistency with the asteroseismic analysis, we continue to use DR14 spectroscopic parameters for this work, although initial investigations indicate that the majority of our outliers are still anomalous in DR17, and the overall fraction of outliers is likely to be roughly comparable.
The APOGEE Stellar Parameters and Abundances Pipeline (Nidever et al., 2015; Garcia Perez et al., 2016, ASPCAP) derives stellar parameters through a global chi-squared minimization to the entire spectrum and then individual chemical abundances are derived using windows around the relevant lines for each species. For the DR14 catalog, chemical abundances are measured for C, N, O, Na, Mg, Al, Si, K, Ca, Ti, V, Mn, Fe, Co, and Ni. These initial results are then calibrated using data from stars of known parameters, including asteroseismic stars, open clusters, and low extinction fields (Holtzman et al., 2018).
### Kepler Light Curves
We use the masses and other seismically derived properties reported in the APOKASC-2 catalog, which were measured from analyzing light curves from the _Kepler_ mission (Borucki et al., 2010; Gilliland et al., 2010).
For the APOKASC-2 sample, members of the Kepler Asteroseismic Science Consortium analyzed the _Kepler_ long cadence photometry using five independent methods (known in the literature as A2Z, CAN, COR, OCT, and SYD) to estimate \(\nu_{\rm max}\) and \(\Delta\nu\) (Garcia et al., 2011; Handberg and Lund, 2014); see Serenelli et al. (2017) for a detailed overview of these methods. Systematics between the analysis pipelines were then corrected, and theoretical corrections to \(\Delta\nu\) were applied. The results were then put on an empirical scale using the results from open clusters in the _Kepler_ field (Pinsonneault et al., 2018). It is worth noting that the corrections applied were different for core helium burning (red clump) stars and shell hydrogen burning (RGB) stars, and that evolutionary state was determined directly from the asteroseismic analysis (Elsworth et al., 2019).
## 3 Outlier Diagnostics
### Identifying Outliers
Figure 1 shows the relation between [C/N] and asteroseismic mass for the stars in our sample. We expect that the mixing as a function of mass should depend on metallicity and evolutionary state. Our initial investigations confirmed that the [C/N]-mass trends were indeed weakly metallicity sensitive, and significantly dependent on evolutionary phase. We therefore first separated the sample into two bins based on their asteroseismic evolutionary state: core helium burning, including stars labeled as primary or secondary clump (also known as'red clump' [RC] stars), and those in shell burning phases, both those marked as first ascent red giants (RGB) and those whose energy source is ambiguous (RGB/AGB). We removed stars with unidentified or ambiguous evolutionary states from our analysis.
Due to the limited number of stars with very low (\(\leq-0.6\)) and high metallicities (\(\geq 0.2\)) in our sample, we exclude these stars from our analysis. Based on this and the weak evolution of the [C/N]-mass slope as a function of metallicity (see Figure 1) we divide our sample into one 'low' (-0.6 to -0.1) and 'high' metallicity (-0.1 to 0.2) bin. This yields 2222 RC and 3166 RGB stars in our final sample, with 2564 and 2615 low and high metallicity giants respectively. The resulting [C/N] relationship is shown for each of the populations in Figure 2.
To separate outliers from the [C/N] trend for each of our populations we binned the stars in terms of mass in steps of 0.05 solar masses from 0.7 to 3 solar masses. We calculated the average in each bin, and defined outliers as 1.5 sigma away from the calculated average. The four resulting outlier populations are shown in Figure 2. The [C/N]-mass relation flattens above 2.0 solar masses, so we concentrate on the mass regime 0.7-2.0 solar masses for this paper.
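A minimal sketch of this selection (not the exact catalog code, and with array inputs assumed) is:

```python
import numpy as np

def flag_cn_outliers(mass, c_n, mass_bins=np.arange(0.7, 3.0 + 0.05, 0.05), nsigma=1.5):
    """Flag stars whose [C/N] lies more than nsigma standard deviations from the
    mean [C/N] of their 0.05 M_sun mass bin, as described in the text."""
    mass, c_n = np.asarray(mass), np.asarray(c_n)
    outlier = np.zeros(mass.size, dtype=bool)
    bin_index = np.digitize(mass, mass_bins)
    for b in np.unique(bin_index):
        in_bin = bin_index == b
        if in_bin.sum() < 2:
            continue
        mu, sigma = c_n[in_bin].mean(), c_n[in_bin].std()
        outlier[in_bin] = np.abs(c_n[in_bin] - mu) > nsigma * sigma
    return outlier
```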
### Known Binaries
We explore different diagnostics as a cause for a star to become an outlier in the [C/N] - stellar mass diagram. We first flag binary stars in our outlier sample because binarity may affect the measurement of spectral parameters and chemical abundances. Line blending in a double lined spectroscopic binary, for example, could impact the inferred temperature, gravity, abundances and rotation rates. It is also possible that binary evolution could impact the expected chemical evolution, either through the accretion of unprocessed material, or by driving additional mixing (e.g. Casey et al., 2019). For example, Jofre et al. (2016, 2023) show that seemingly young and massive stars show evidence of binarity through radial-velocity analyses and do not follow the expected [C/N]-mass trend. We flag stars that are known eclipsing binaries (Slawson et al., 2011), although most known binaries may have already been eliminated from the APOKASC-2 sample (Tayar et al., 2015).
We additionally flag binaries identified using radial velocity variations, known as "vscatter," from multi-epoch APOGEE spectroscopy. The vscatter parameter measures the maximum radial velocity difference between the available epochs, divided by the square root of the number of observations. High vscatter correlates with large radial-velocity variations, and thus can be used as a flag for evidence of binarity (Jofre et al., 2023).
The detection limit of the APOGEE spectrograph and the expected radial velocity jitter for evolved stars are both around 0.5 km s\({}^{-1}\)(Badenes et al., 2018). Hence we consider a vscatter above 1 km s\({}^{-1}\) as a significant detection of a multiple star system. While the number of observations for each star is not enough to derive an orbital solution for a companion, the variability of radial velocities between observations is sometimes enough to reveal a companion (Badenes et al., 2018).
We note that an insignificant or low vscatter does not necessarily mean the star is not a binary, as the max difference of radial velocities between observations may not reflect the true maximum amplitude of the velocity of the star. To partially combat this bias, Price-Whelan et al. (2020) has identified a sample of binary candidates in APOGEE through a more sophisticated Monte Carlo search of the available radial velocity measurements. We match their data with stars in our sample to identify 28 additional binaries in the sample.
Lastly, Berger et al. (2020) calculated the Renormalised Unit Weight Error (RUWE) for each star in the _Kepler_ field using information from _Gaia_. RUWE \(>1.2\) indicates the presence of a close (\(<1''\)) binary that degrades the astrometric solution from _Gaia_ (Evans, 2018). We thus flag additional binaries in our sample that have _Gaia_ RUWE \(>1.2\).
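For reference, combining these binarity indicators into a single flag can be sketched as follows; the column names and inputs are illustrative, with thresholds as quoted in the text.

```python
import numpy as np

def binary_flag(vscatter, ruwe, is_kepler_eb, in_pricewhelan_sample):
    """Combine the binarity indicators discussed above into one boolean mask."""
    return (
        (np.asarray(vscatter) > 1.0)          # km/s, multi-epoch APOGEE RV scatter
        | (np.asarray(ruwe) > 1.2)            # Gaia RUWE (Berger et al. 2020)
        | np.asarray(is_kepler_eb)            # Kepler eclipsing binary catalog match
        | np.asarray(in_pricewhelan_sample)   # Price-Whelan et al. (2020) RV candidates
    )
```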
### Asteroseismic Measurement Bias
Stars may also deviate from the observed empirical [C/N]-mass trend because of ill-defined asteroseismic parameters. In particular, Pinsonneault et al. (2018) noted that high-luminosity red giants (\(\nu_{\rm max}<10\,\mu\)Hz) only oscillate in a small number of radial orders. This makes measurements of traditional asteroseismic observables difficult (Stello et al., 2014) and complicates the mapping from \(\nu_{\rm max}\) and \(\Delta\nu\) to stellar parameters. We therefore flag all such stars in our analysis.
Figure 1: The [C/N]-mass relation color-coded by metallicity [Fe/H] for evolved stars in the APOKASC sample. Due to the slight dependence of the slope of the relationship on metallicity, we divide the data into low metallicity (\(-0.6<\)[Fe/H]\(<-0.1\)) and high-metallicity ([Fe/H]\(>-0.1\)) samples.
### Rotation
#### 3.4.1 Spectroscopic Rotation Velocities
Figure 3: Left: The _Kepler_ light curve for KIC 11808481 after subtracting long-period sinusoidal signatures (\(>200\) days) that are likely due to telescope systematics. The photospheric modulation of the light curve from spots is clear, with a rotation period of 54 days measured from the Lomb-Scargle periodogram (right).
Figure 2: A 2D histogram binned in 0.05 solar masses and 0.05 dex in [C/N] displaying the population density of carbon-to-nitrogen as a function of mass for our different populations, in bins of metallicity (top:low metallicity, bottom: high metallicity) and evolutionary state (left:RC, right:RGB). Red points represent the average and standard deviation in each bin; the red trendline is a moving average. Outliers are marked as white points and are at least \(1.5\sigma\) away from the bin average. Because of small numbers, we have excluded from our analysis RGB stars above \(2.0\rm M_{\odot}\) and RC stars above \(2.5\rm M_{\odot}\).
Because of angular momentum conservation, the large radii of red giant stars suggest that most stars in this regime should be rotating slowly (\(v\sin i\)\(<\)1 km s\({}^{-1}\); Tayar et al., 2015). Additionally, for core-helium burning stars, helium flashes have the potential to slow surface rotation rates even further (Sills and Pinsonneault, 2000). At these slow rotation rates, the effect of rotational broadening on spectral lines is rendered unmeasurable by the significantly larger microturbulent and macroturbulent broadening in these stars. Hence detectable rotational broadening for evolved stars may be indicative of additional evolutionary or physical processes such as merger or accretion events, as these have the ability to cause the spin rate of the star to increase (e.g. Patton et al., 2023).
Tayar et al. (2015) measured \(v\sin i\) values for the APOGEE DR10 sample through a cross correlation with broadened versions of the APOGEE template spectra. We perform a similar analysis on the DR14 data, reporting detections for \(v\sin i\)\(>\)5 km s\({}^{-1}\), and flagging as potentially rotating any stars with inferred rotation velocities between 2 and 5 km s\({}^{-1}\).
#### 3.4.2 Rotational Modulation
Rotational modulation due to starspots causes brightness variations that can be used to infer rotation periods. To determine photometric rotation periods we analyzed the Simple Aperture Photometry (SAP) Kepler light curves for our sample (Jenkins et al., 2010). We apply a high-pass filter of 200 days to remove systematic trends while preserving star-spot signals, and correct for quarter gaps and other discontinuities with a simple linear fit.
Given the correlation between rotation rate and star-spot activity (Noyes et al., 1984; Mamajek and Hillenbrand, 2008) we expect the rapidly rotating stars will be more likely to show star-spot modulation. We visually inspect the 66 stars with measurable rotation velocities. We estimated these stars' rotation periods with a Lomb-Scargle periodogram, and estimated the error by fitting a Gaussian to the significant peak (Figure 3). The results are shown in Table 1.
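A simplified sketch of this period measurement, assuming the light curve has already been filtered as described above (and not reproducing our exact implementation), is:

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

def rotation_period(time_days, flux, min_period=10.0, max_period=180.0):
    """Estimate a rotation period from a filtered light curve with a Lomb-Scargle
    periodogram; a Gaussian fit to the strongest peak gives a rough uncertainty."""
    freq, power = LombScargle(time_days, flux).autopower(
        minimum_frequency=1.0 / max_period, maximum_frequency=1.0 / min_period)
    peak = np.argmax(power)
    window = slice(max(peak - 50, 0), peak + 50)
    gauss = lambda f, a, f0, s: a * np.exp(-0.5 * ((f - f0) / s) ** 2)
    p0 = [power[peak], freq[peak], 0.01 * freq[peak]]
    (a, f0, s), _ = curve_fit(gauss, freq[window], power[window], p0=p0)
    return 1.0 / f0, s / f0 ** 2        # period [days] and its propagated uncertainty
```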
Figure 4 compares the values obtained from our method with literature results obtained using auto-correlation and wavelet analysis (Ceillier et al., 2017). We removed from our sample 6 stars that were flagged in the Ceillier et al. (2017) analysis as likely to be contaminated. Every star in our velocity-broadened sample that overlapped with Ceillier et al. (2017) had comparable measured rotation and detectable star-spots (Table 1). For 20 stars in common, we find that 18 stars (90 percent) agree to better than 1-\(\sigma\). We therefore expect that the additional 12 stars identified by our analysis as new spotted stars are likely real detections of stellar rotation. There are two exceptions: one where we measured twice the rotation period reported by Ceillier et al. (2017), a common systematic challenge in star-spot modulation measurements (see Aigrain et al., 2015). One target in our sample does not match the value calculated by Ceillier et al. (2017) or a 2:1 or 1:2 harmonic of it. We currently cannot explain the discrepancy in measured rotation for this star.
More recently, Gaulme et al. (2020) searched a subset of the Kepler stars for periodic rotation signals. We have three stars that overlap with that sample and our periods are in good agreement with the published rotation periods (Table 1). We also ensured that all our measured rotation periods are longer than the critical period below which a star would be ripped apart by the centrifugal force (Ceillier et al., 2017):
\[T_{crit}=\sqrt{\frac{27\pi^{2}R^{3}}{2GM}} \tag{1}\]
where R and M are the radius and the mass of the star, respectively. In general, the critical period for our stars is between 7 and 10 days, which is not close to any of our measured rotation periods.
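For reference, the minimal snippet below evaluates Equation 1 for an assumed, typical red-clump-like star (R of about 11 R\({}_{\odot}\), M of about 1.2 M\({}_{\odot}\)); the input values are illustrative only, not a specific star from the sample.

```python
import numpy as np
from astropy import units as u
from astropy.constants import G, R_sun, M_sun

def critical_period(radius, mass):
    """Breakup period from Eq. 1: T_crit = sqrt(27 pi^2 R^3 / (2 G M))."""
    return np.sqrt(27.0 * np.pi ** 2 * radius ** 3 / (2.0 * G * mass)).to(u.day)

# Illustrative red-clump-like values.
print(critical_period(11.0 * R_sun, 1.2 * M_sun))       # roughly 7 days
```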
### Chemical Anomalies
In addition to looking at the carbon and nitrogen abundances, we check for offsets in other abundances that could represent either data processing problems or some unusual event. In particular, Weinberg et al. (2019) found that the individual abundances derived by APOGEE can be well predicted using just the [Fe/H]
Figure 4: Our rotation periods in comparison to Ceillier et al. (2017). In red is the 1:1 line. Dashed lines show 1:2 and 2:1 ratios.
and [\(\alpha\)/Fe] measurements. This is because these values are intimately related to the ratio of enrichment from core-collapse supernovae to Type Ia supernovae, which depends on a star's formation environment and age. We therefore suggest that individual elemental abundances that deviate significantly from the expectation given [Fe/H] and [\(\alpha\)/Fe] could be the result of contamination from a binary companion (e.g. barium stars, McClure et al., 1980) or due to some unusual mixing event. We show this analysis in Figure 5, where we plot [C/N] vs mass (top) and [Co/Fe] vs [Fe/H] (bottom) for the general population (blue, top; black, bottom) and for a star from our outlier sample, KIC 3735699 (red cross). The green crosses represent four stars that do follow the [C/N]-mass trend and have stellar parameters similar to KIC 3735699, and were chosen by minimizing the combined weighted difference (or chi-square statistic) of metallicity, alpha abundance, surface gravity and temperature. We include surface gravity and temperature as criteria for star-matching because these parameters have a strong influence on spectral lines, and thus could impact the inferred chemical abundances. KIC 3735699 has a significantly lower [Co/Fe] abundance than the four matched stars.
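A hedged sketch of this matching step is shown below: comparison stars are ranked by a weighted chi-square over metallicity, alpha abundance, surface gravity, and temperature. The column names, weighting scales, and toy catalogue are assumptions for illustration and would need to be replaced by the actual APOGEE quantities.

```python
import numpy as np
import pandas as pd

def best_matches(target, catalogue, n_match=4, scales=None):
    """Rank catalogue stars by a weighted chi-square distance to the target."""
    scales = scales or {"FE_H": 0.05, "ALPHA_FE": 0.03, "LOGG": 0.05, "TEFF": 50.0}
    chi2 = np.zeros(len(catalogue))
    for col, scale in scales.items():
        chi2 += ((catalogue[col] - target[col]) / scale) ** 2
    return catalogue.assign(chi2=chi2).nsmallest(n_match, "chi2")

# Toy catalogue standing in for the parent sample of [C/N]-normal stars.
rng = np.random.default_rng(1)
cat = pd.DataFrame({"FE_H": rng.normal(-0.1, 0.2, 1000),
                    "ALPHA_FE": rng.normal(0.05, 0.05, 1000),
                    "LOGG": rng.normal(2.4, 0.2, 1000),
                    "TEFF": rng.normal(4800, 120, 1000)})
target = {"FE_H": -0.25, "ALPHA_FE": 0.10, "LOGG": 2.45, "TEFF": 4750.0}
print(best_matches(target, cat))
```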
While we see no obvious evidence that either the goodness-of-fit metric or the radial velocities of the chemically anomalous stars are correlated in a way that would indicate mechanical issues with the spectroscopic analysis (e.g. improperly subtracted skylines), further investigations are encouraged to determine whether these trends are astrophysical or related to correlated issues in the abundance determinations. We flag stars that have greater than 3-\(\sigma\) offsets in any elemental abundance measured by APOGEE compared to the four matched stars that follow the [C/N]-mass trend (for example, Figure 5, bottom panel).
### Alpha/Age Inconsistencies
Figure 6 shows [\(\alpha\)/Fe] versus asteroseismic age for the APOKASC sample. For most stars we observe a strong correlation between \(\alpha\)-element abundances and age, which originates from galactic chemical evolution (Fuhrmann, 1998). There are two regions that have stars
Figure 5: Carbon-to-Nitrogen abundances versus stellar mass (top panel) and Cobalt-to-Iron abundances versus metallicity (bottom panel) for our sample. Colored crosses mark the same stars in both panels. The red cross marks an outlier star (KIC 3735699) which is depleted in Cobalt compared to the rest of the sample.
Figure 6: [\(\alpha\)/Fe] as a function of age for the APOKASC sample. Ages were calculated from the asteroseismic data by Pinsonneault et al. (2018). The \(\alpha\)-rich and \(\alpha\)-poor galactic populations are both visible, as is the slight evolution in \(\alpha\)-element abundance with time in the thin disk population. We have marked anomalous stars that fall significantly outside this standard trend, shown by the blue arrows, including young \(\alpha\)-rich stars (e.g. Martig et al., 2015), as well as old \(\alpha\)-poor stars. We have also flagged stars with quoted ages older than the age of the universe (region to the right of the red line).
whose \(\alpha\)-element abundances do not follow these age trends (Figure 6). One of the arrows points to a region that includes the so-called “young” \(\alpha\)-rich population (\(\alpha>0.1\), age \(<6\) gigayears), identified as potentially the result of stellar mergers (Martig et al., 2015; Sun et al., 2020) or as stars that have migrated from an unusual part of the inner galaxy (Chiappini et al., 2015). The other arrow points to the old \(\alpha\)-poor stars (\(\alpha<0.1\), age \(>10\) gigayears), which are also significantly offset from the normal chemical evolution trend. We flag these stars in our sample.
Next, to the right of the red dashed line in Figure 6 are stars whose age estimates from Pinsonneault et al. (2018) are older than the age of the universe (13.8 Gyr). While some of these stars may be pushed to older ages by random or systematic uncertainties in their age calculations, we suspect that many of these stars have lost mass, and are therefore being inferred to be older than they truly are (Li et al., 2022). This may be because the models used to calculate the ages do not include mass loss, or are not correct for these stars. We also flag these stars in our sample.
## 4 Results
### Impact of APOGEE log(g) offset on \(v\sin i\)
APOGEE performs a simultaneous chi-squared minimization to estimate the temperature, gravity, and bulk composition. Thus, offsets in one parameter could produce incorrect measurements of other properties inferred from the spectra, including the measured [C/N] ratio. Furthermore, an error in the calculation of the base spectrum can also affect the way we measure rotation from spectra. Generally speaking, the projected rotation velocity (vsini) is estimated by broadening the template fit to the APOGEE spectrum. If the lines in this spectrum are over-broadened, for example, because the assumed gravity is too high, then we would expect to underestimate the rotational broadening.
In general, the asteroseismic and spectroscopic estimates of surface gravity agree extremely well (see Fig. 7, left) because the spectroscopic results are quite precise and are calibrated on the asteroseismology (Holtzman et al., 2018; Jonsson et al., 2020). However, there are two issues specifically associated with red clump stars. The first is that all of the APOGEE surface gravities of these stars are systematically offset from their asteroseismic counterparts (Figure 7, middle). While there are offsets in the raw spectroscopic results (Masseron and Hawkins, 2017), these are supposed to be corrected by a calibration to the asteroseismic scale (Holtzman et al., 2018). However, in this particular data release, there were some errors in that calibration that cause the systematic offset seen in the middle panel. More interesting from our perspective, however, is that even after accounting for this offset, we find an additional offset that seems to be present mostly in rapidly rotating stars (Fig. 7, right).
The measured spectroscopic gravity of the clump is strongly correlated with stellar mass, and significant outliers from this trend are often rapidly rotating outliers from the [C/N]-mass trend, suggesting correlated errors between the spectroscopic parameters when rotation is not included in the fits.
Since rotation is not being fit directly in the giant regime, the APOGEE ASPCAP pipeline is instead trying to fit the broader lines by incorrectly increasing the surface gravity. Thus, a fraction of rotating giants are likely to have poor-quality spectral fits, which potentially contributes to their offsets from the [C/N]-mass relation. To investigate whether this is generally true for the APOKASC sample, we used star-spot modulation periods and asteroseismic radii to calculate equatorial rotation velocities, which combined with the measured vsinis inform us about the line-of-sight inclination of each star. In a field population, we expect the inclination angles to be distributed isotropically (see e.g. Ceillier et al., 2017). For our sample, we observe a significantly skewed distribution of inclination angles, peaking around sin(i)=0.7 (Figure 8, top). This is consistent with overestimated surface gravities causing rotation velocities to be underestimated.
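The inclination estimate itself is simple: the equatorial velocity follows from the spot period and the asteroseismic radius, and sin(i) is the ratio of the spectroscopic vsini to that velocity. The sketch below illustrates the calculation with made-up input values.

```python
import numpy as np
from astropy import units as u
from astropy.constants import R_sun

def sin_inclination(p_rot_days, radius_rsun, vsini_kms):
    """sin(i) = vsini / v_eq, with v_eq = 2 pi R / P_rot."""
    v_eq = (2.0 * np.pi * radius_rsun * R_sun / (p_rot_days * u.day)).to(u.km / u.s)
    return float((vsini_kms * u.km / u.s) / v_eq)

# Made-up values: a 40-day spot period, 11 R_sun radius and vsini = 8 km/s.
print(sin_inclination(40.0, 11.0, 8.0))                 # roughly 0.6
```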
To fix our underestimated vsini values, we adopt the asteroseismic log(g) as the true surface gravity and match each star that has an offset APOGEE logg with an ASPCAP star template. The template is associated with a star that is not rotating detectably but has a surface gravity that matches the asteroseismic gravity of our target, as well as its temperature, metallicity, and \(\alpha\)-element abundance (Figure 9). Next, we recalculated the surface rotation velocity as described in Section 3.4.2, and compared that to our original measurement. We do this for RC stars that deviate by at least 1 sigma from the APOGEE log(g) vs Kepler log(g) trend (Figure 7, middle panel), which includes both stars from our outlier sample and the parent sample (stars that follow the [C/N]-mass trend).
As expected, we find that the \(v\sin i\)s calculated with the correct surface gravity are systematically higher, by 45% on average (Figure 9) and that the inferred inclination angles are now consistent with being randomly distributed (Figure 8, bottom). Furthermore, we identify 6 new stars with measurable vsinis from spectra that were originally hidden by the over-broadened lines in the template. Of these new RC rotators, all but one are
stars that fit the [C/N]-mass trend. The new vsinis for stars in the outlier sample with overestimated APOGEE log(g)s are in Table 1. We found that 62% of our stars with measured rotation needed a log(g) and vsini correction. Overall, we estimate that about 20% of outliers had over-estimated log(g)s in comparison to only 5% of stars in the parent sample, suggesting that [C/N] outliers and rotation are correlated.
### Stellar Interactions
The main categorizations of outliers are summarized in Figure 10. We compared the characteristics of our outlier sample to the non-outlier sample (i.e. the stars that do follow the [C/N]-mass trend) in Table 2. For the low-metallicity red clump population (Figure 10d), we find that a significant number of outliers from the [C/N]-mass relation also have rapid rotation, disagreements between their seismic and spectroscopic surface gravities, and ages that are younger than their \(\alpha\)-element abundances would suggest. These properties are more prominent in the outlier sample than in the non-outlier sample (Table 2, bold items). Many of these signatures are consistent with binary evolution, in line with close binaries being more common in low-metallicity stars (Badenes et al., 2018; Moe et al., 2019) and with red clump stars having already passed through the tip of the red giant branch, where their large size would have increased the cross-section for interaction.
For the high-metallicity red clump population (Figure 10) we find a still significant number of outliers with \(\alpha\)-element abundances higher than their age would suggest, but this is comparable to the non-outlier population, and there are fewer stars showing rapid rotation, which could be consistent with suggestions that the binary mass ratio and semi-major axis distributions depend on metallicity (Moe and Di Stefano, 2017). We also note that the significant excess of stars above and to the right of the normal [C/N]-mass relationship could be the result of the accretion of unprocessed material, which would increase a star's mass while simultaneously decreasing its carbon-to-nitrogen ratio (see, for comparison, simulations by e.g. Izzard et al., 2018).
We show the results for the low-metallicity red giant branch population in Figure 10c. In this case, outliers are dominated by low-mass stars with low \(\nu_{\rm max}\) values and ages that are much older than would be expected given their \(\alpha\)-element abundances. These stars account for 50% of outliers but only 5% of the total RGB population (Table 2). Given the challenges of extending asteroseismic techniques to the low-\(\nu_{\rm max}\) regime (see e.g. Stello et al., 2014; Pinsonneault et al., 2018), we suggest that these could be stars whose seismic analysis needs to be treated more carefully than the ensemble analysis of Pinsonneault et al. (2018) could allow. On the other hand, if their seismic parameters are reliable, these could be extremely interesting objects for further study of binary stellar evolution, since their large sizes would have increased their interaction cross-sections, and their low masses could suggest that material from the envelope has been lost, an interaction that may be expected theoretically to result in lower [C/N] abundances (Izzard et al., 2018). If these old, low-metallicity stars do turn out to survive more careful asteroseismic analyses, it would also be interesting to determine whether any of them have chemical anomalies and kinematic properties consistent with formation in
Figure 7: Left: Surface gravity measured by APOGEE vs Surface gravity measured by Kepler through asteroseismology for the APOKASC sample (black-grey density, light grey = most dense). The red line shows a 1:1 correspondence. Center: Same axes and 1:1 trend for only the red clump stars in the APOKASC sample. Right: Surface gravity measured by APOGEE vs Mass measured by Kepler for the Red clump sample, differentiating between stars that follow the [C/N]-mass trend (“parent sample”, circles) and stars from the outlier sample (boxes). The stars are color coded by vsini.
another galaxy and accretion into the Milky Way (e.g. Grunblatt et al., 2021).
Finally, we show in Figure 10a the results for high-metallicity red giant branch stars. In this population, we have a smaller population of outliers (10% versus 7% in the low-metallicity red clump). As in the low-metallicity red giant branch, the majority of outliers are low-mass, old giants identified as low \(\nu_{\rm max}\), which may be the result either of asteroseismic measurement error, or mass loss.
Lastly, we find that eclipsing binaries or stars with large vscatter do not make up a large fraction of any of our outlier populations. This is consistent with previous results that suggested that the APOKASC analysis preferentially excluded binary systems (Tayar et al., 2015), likely due to the complexity of analysis and decrease in oscillation amplitude in some of these systems (Gaulme et al., 2014). We do detect two new binary systems from high vscatter, KIC 5446355 and KIC 8127707.
There are rarely conclusive signs that any individual star has undergone an interaction with another star or planet. However, from our analyses we find that most outliers have a signature that is more consistent with binary interactions than internal mixing processes. Using the different diagnostics that we discussed in Section 3, we find strong evidence that many of the outliers from the [C/N]-mass relation are the result of stellar interactions.
### [C/N] correlation with Iron Peak Elements
Figure 8: Histogram of the number of stars as a function of sini (inclination angle) before (top) and after (bottom) the surface gravity correction for 32 rapidly rotating stars. The black line shows the expected histogram for simulated randomly distributed inclination angles for a sample of the same size. The observed distribution is skewed to small angles, peaking at sini\(=0.4\).
Figure 9: Top: Flux versus Wavelength for an individual star observed by APOGEE for a portion of its spectral region. In black is the measured spectrum, red shows the spectral line template by ASPCAP with an incorrect surface gravity parameter (Template 1), orange shows the same template with a corrected surface gravity using the logg measurement from Kepler (Template 2), and blue shows the line fit after using Template 2 and correcting for rotational broadening (vsini). Bottom: Corrected vsini versus original vsini for stars flagged to have different loggs measured by APOGEE and Kepler. The blue line shows a 1:1 correspondence. Vsinis for our sample are higher on average by 1.7 km s\({}^{-1}\).
The results in the previous section suggest that many outliers in the [C/N] - stellar mass relation may have gained mass from an interaction with a companion. We therefore search for other unusual abundances which could indicate the source of the accreted mass. If the source of the additional material is an unevolved low-mass star or substellar companion, then we do not expect offsets in other elements available in the infrared (APOGEE), although optical measurements of lithium abundance could be informative (e.g. Aguilera-Gomez et al., 2016; Soares-Furtado et al., 2020). On the other hand, material gained from an AGB companion is likely to be enhanced in s-process elements, with an increased total of carbon and nitrogen (Han et al., 1995), and pollution from a Type Ia supernova explosion may enhance the abundances of iron-peak elements such as nickel at the stellar surface (Gonzalez Hernandez et al., 2009).
Figure 10 shows stars that are offset from similar stars by more than 0.5 dex in O, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe, Co, and Ni, using the procedure discussed in Section 3.5. Approximately 20-30% of outliers have chemical offsets in comparison to 5-10% of non-outlier stars. In Figure 11 (right) we compare the fraction of stars offset in each element between stars that follow the standard relationship between [C/N] and mass (blue) and those we identify as outliers (red).
The most significant chemical offset is observed for cobalt abundances. We show in Figure 12 one example of a star whose cobalt abundance is significantly lower than that of stars with similar metallicity, \(\alpha\) abundance, temperature, and gravity. However, we caution that the wavelength windows used to infer the cobalt abundance in the ASPCAP pipeline have significant overlap with the windows used to derive carbon and nitrogen. Therefore, it is possible that the cobalt anomalies in stars with anomalous carbon and nitrogen abundances could be due to correlated measurement errors between abundances within the ASPCAP pipeline. More generally, it is also possible that some offset in one or more of the bulk parameters of the star, such as temperature or surface gravity due to e.g. enhanced rotation, may propagate into offsets in all of the chemical abundances that are subsequently fit using the assumed (erroneous) parameter.
A line-by-line analysis, as opposed to the spectral synthesis used by the ASPCAP pipeline, could clarify whether these offsets in cobalt, as well as those in nickel,
Figure 10: A summary of the additional anomalies identified in our outliers from the [C/N]-mass relation for high-metallicity (top) and low-metallicity (bottom) stars as well as first ascent red giants (left) and red clump stars (right).
chromium, and manganese, which have similar issues, are real. We also note a slight excess of stars whose sodium abundance is slightly offset from expectations. However, we caution that there are only two spectral windows being used by the ASPCAP pipeline to compute the sodium abundance, and that both are close to telluric features in the spectrum, which suggests that this may be an analysis issue rather than a true physical offset. More generally, we find that none of our stars show conclusive evidence of pollution by a particular source, such as an AGB star or a nearby supernova, although we cannot rule out the possibility that more careful and exhaustive searches for abundance anomalies may clarify the sources of the mass gained in some cases.
## 5 Conclusions
Galactic Archaeology depends on precise measurements of stellar ages, which requires a robust measurement of stellar mass. In this paper, we have investigated outliers in the [C/N] - stellar mass relation, which is a promising tool to measure ages for large stellar populations. Our main conclusions are as follows:
* \(\approx\) 10 % of red clump stars and first ascent giants do not follow the expected relationship between the carbon-to-nitrogen ratio and mass; there are more deviations at lower metallicities and in the red clump.
* For low-metallicity red clump stars that are outliers to the [C/N]-mass trend, we tend to underpredict their mass and thus overpredict their age. For first ascent giants that are outliers to the [C/N]-mass trend, we tend to overpredict the mass and thus underpredict the age.
* Stars on the upper giant branch (log(g)\(<\) 1) seem to deviate from the expected relationship, although we caution that this could be issues with measuring global asteroseismic parameters for evolved stars or known issues for the spectroscopic parameters of the coolest most luminous giants in Data Release 14 (Jonsson et al., 2020) rather than a physical deviation from the [C/N]-mass relation.
* Many of the stars that deviate from the [C/N]-mass relation have other properties that are strongly suggestive of having undergone some sort of interaction with another star or sub-stellar companion. Stars with indications of current binarity or past interactions, including rapid rotation, activity, radial velocity variability, and chemical anomalies, are less likely to follow the expected relationship between [C/N] and mass and should be excluded from galactic population studies using [C/N] to estimate masses and ages
* We also note that rapidly rotating red clump stars in the APOKASC sample tend to have significantly overestimated spectroscopic surface gravities, affecting 20% of our outliers and 5% of the non-outlier sample. Although we have not demonstrated it here, previous work indicates that this may also be correlated with errors in the other stellar parameters measured spectroscopically (Dixon et al., 2020; Patton et al., 2023).
Figure 11: We compare the fraction of stars anomalous in each element for our outlier sample (red) as well as the stars that fall within the expected [C/N]-mass relationship (blue). Our outlier stars are significantly more likely to have e.g. cobalt anomalies than the stars that fit the trend, but it is not clear whether this is due to surface pollution or correlated measurement errors.
Figure 12: For one of our stars KIC 7553192 with an anomalous cobalt abundance, we show a small region around the most highly weighted cobalt line (grey band) and show that the outlier star (blue) has significantly shallower absorption in that region than the four spectroscopically similar stars with standard cobalt abundance (grey lines)
This is an exciting time for galactic archaeology studies, because the combination of precise asteroseismic data and large spectroscopic surveys will continue to provide the opportunity to estimate the masses and ages of large numbers of stars across the galaxy. Machine learning techniques provide a useful tool for extending what can be computed for small samples of stars to estimate the ages of large samples across the galaxy (Ness et al., 2016; Mackereth et al., 2019; Hon et al., 2021). However, as shown here these tools should be applied carefully, as stars that deviate from the normal trend can bias the inferred properties if not taken into account, and we provide some guidance here on how to exclude those stars.
The pathways and outcomes of binary stellar evolution are among the largest uncertainties in our understanding of low-mass stars and of the outliers in the [C/N]-stellar mass relation. Phases that involve mass transfer and common envelopes are particularly hard to model from first principles. Future combinations of spectroscopic, photometric, and asteroseismic diagnostics may provide a reliable way to identify stars that have undergone a recent interaction, and thus provide more robust tools to better constrain the physics of this important phase.
## Acknowledgments
We thank D. Schneider for pointing out helpful references. We thank the anonymous referee for helpful comments that improved this manuscript. E.B. and D.H. acknowledge support from the National Aeronautics and Space Administration (80NSSC19K0597). D.H. also acknowledges support from the Alfred P. Sloan Foundation. Support for this work was provided by NASA through the NASA Hubble Fellowship grant No.51424 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut fur Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observatario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de Mexico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ Kepler ID} & \(\nu_{max}\) & \(\Delta_{\nu}\) & T\({}_{eff}\) & logg (A) & logg (K) & Mass & P\({}_{rot}\) & \(\sigma_{P_{rot}}\) & P\({}_{rot}\) (C) & P\({}_{rot}\) (G) & vsini (old) & vsini (new) \\ & & & (K) & & & (M\({}_{\odot}\)) & (days) & (days) & (days) & (days) & (km/s) & (km/s) \\ \hline \hline \end{tabular}
**Notes:** P\({}_{rot}\), P\({}_{rot}\) (C) and P\({}_{rot}\) (G) refer to the rotation periods from this paper, Ceillier et al. (2017) and Gaulme et al. (2020) respectively. Old vsini and new vsini refer to vsini from this paper before and after the logg correction (see Section 4.1).
\end{table}
Table 1: APOKASC-2 outlier stars with measured rotation periods |
2310.07494 | Impact of filaments on galaxy cluster properties in The Three Hundred
simulation | Galaxy clusters and their filamentary outskirts reveal useful laboratories to
test cosmological models and investigate Universe composition and evolution.
Their environment, in particular the filaments of the Cosmic Web to which they
are connected, plays an important role in shaping the properties of galaxy
clusters. In this project, we analyse the gas filamentary structures present in
324 regions of The Three Hundred hydrodynamical simulation extracted with the
DisPerSE filament finder. We estimate the number of gas filaments globally
connected to several galaxy clusters, i.e. the connectivity k, with a mass
range of $10^{13} \leq M_{200} \, h^{-1} \, M_{\odot} \leq 10^{15} $ at
redshift $z=0$. We study the positive correlation between the connectivity and
mass of galaxy clusters. Moreover, we explore the impact of filaments on the
dynamical state of clusters, quantified by the degree of relaxation parameter
$\chi$. | Sara Santoni, Marco De Petris, Antonio Ferragamo, Gustavo Yepes, Weiguang Cui | 2023-10-11T13:39:37Z | http://arxiv.org/abs/2310.07494v1 | # Impact of filaments on galaxy cluster properties in the Three Hundred simulation
###### Abstract
Galaxy clusters and their filamentary outskirts provide useful laboratories to test cosmological models and investigate the composition and evolution of the Universe. Their environment, in particular the filaments of the Cosmic Web to which they are connected, plays an important role in shaping the properties of galaxy clusters. In this project, we analyse the gas filamentary structures present in 324 regions of The Three Hundred hydrodynamical simulation, extracted with the DisPerSE filament finder. We estimate the number of gas filaments globally connected to each galaxy cluster, i.e. the connectivity \(k\), for clusters in the mass range \(10^{13}\leq M_{200}\,h^{-1}\,M_{\odot}\leq 10^{15}\) at redshift \(z=0\). We study the positive correlation between the connectivity and mass of galaxy clusters. Moreover, we explore the impact of filaments on the dynamical state of clusters, quantified by the degree of relaxation parameter \(\chi\).
## 1 Introduction
Clusters of galaxies, the largest gravitationally bound systems in the Universe, reside at the nodes of the Cosmic Web [1] and are connected by a multitude of filamentary structures. In the outskirts of clusters, matter and galaxies are funneled towards the centre through filaments. A comprehensive knowledge of the filaments connected to galaxy clusters is essential to understand the influence of the environment on galaxy cluster properties and evolution. One way to quantify the filamentary skeleton around a cluster is through the so-called connectivity \(k\) [2], defined as the number of filaments globally connected to the galaxy cluster, estimated at a specific aperture. This project aims to investigate the impact of filaments on the main galaxy cluster properties; in this work we focus on their masses and dynamical state.
## 2 The Three Hundred **project**
In this work, we analyse the multiple zoom-in regions of The Three Hundred hydrodynamical simulation [3]. The Three Hundred project2 aims to model 324 regions whose volumes have cubic side lengths of \(30\,h^{-1}\,Mpc\), centered on massive galaxy clusters, with a mass
\(M_{200}>6.42\times 10^{14}\,h^{-1}\,M_{\odot}\). The Three Hundred regions are re-simulated with higher resolution from the 324 most massive galaxy clusters at \(z=0\) of the \(1\,h^{-1}\,Gpc\) Dark Matter-only MDPL2 MultiDark [4] simulation. The cosmological parameters used in the MDPL2 and The Three Hundred simulations are those measured by the _Planck_ mission [5]. The regions have been re-simulated with different baryonic models: Gadget-MUSIC [6], Gadget-X [7], which is used in this work, and more recently Gizmo-Simba [8]. For each region, 128 snapshots are available from redshift \(z=17\) to \(z=0\). The simulated regions were analysed using the AHF halo finder [9] which self-consistently includes both gas and stars in the halo finding process. The halo finder extracts haloes and estimates their properties, such as the radius \(R_{\Delta}\)2, mass \(M_{\Delta}\) and density profile.
Footnote 2: The subscript \(\Delta\) indicates the overdensity, i.e. value of the ratio between the density of the cluster at that radius and the critical density of the Universe \(\rho_{c}=3H^{2}/(8\pi G)\) at the cluster’s redshift.
## 3 Methods
### Cosmic Web extraction
The gas particle distribution of The Three Hundred regions are analysed with DisPerSE [10], a topological structure finder, designed to extract the structures of the Cosmic Web. The finder identifies topologically significant features in the input density field, which is obtained through a Delaunay tesselation in the case of a 2D or 3D discrete distribution. The noise introduced by the finite sampling of the distribution is quantified and reduced with the persistence and topological simplification theories. The persistence parameter, which quantifies the robustness of a topological pair, is defined as the difference of the values of the critical points in the pair and is used to filter low significant filaments. As an output, DisPerSE provides the positions of the extreme points found in the distribution: maxima, minima, saddle points and bifurcation points, where a filament splits in two. The filaments are given as a set of segments connecting a maximum and a saddle point.
For our analysis, we first binned the gas particle distribution of each region at redshift \(z=0\) in a three-dimensional grid of \(30\,h^{-1}\,Mpc\) per side, and within each region we give a pixel resolution of \(150\,h^{-1}\,kpc\). Then, to avoid sharp variations from one pixel to another, we applied a Gaussian smoothing with a \(\sigma\) of 4 pixels. Finally we applied an absolute persistence cut of 0.2, to focus on significant filaments connecting clusters and haloes. The node distribution extracted from DisPerSE was compared with the AHF halo catalogue of each region, to match the DisPerSE maximum points to the simulated haloes and clusters. To avoid a possible low-resolution contamination near the borders, we consider only the haloes inside a sphere of \(13\,h^{-1}\,Mpc\) radius from the centre of each region. The final data set includes \(3\times 10^{3}\) haloes and clusters with a mass range from \(10^{13}\leq M_{200}\,h^{-1}\,M_{\odot}\leq 5\times 10^{15}\).
The gas skeleton extracted from The Three Hundred simulated regions is a good tracer of the overall matter distribution and accretion to galaxy clusters. In particular, throughout the 324 simulated regions there is a good spatial agreement between the gas filaments and both the Dark Matter and mock galaxy filaments, both from 3D and 2D extractions [11; 12].
### Connectivity measurements
For each halo we estimated the connectivity \(k\), which is defined in [2] as the number of filaments globally connected to a cluster. Different definitions are used in the literature to estimate this parameter. In this work, we compute the connectivity as the number of filaments crossing a specific spherical surface at a radius \(R_{\Delta}\) from the centre of the halo. In particular, we estimate the connectivity at \(R_{200}\) and at \(R_{500}\), defined respectively as \(k_{200}\) and \(k_{500}\). With this definition, we also take into consideration the filaments coming from substructures and bifurcation points that lie within the sphere, which contribute to the cluster's properties and therefore also to its connectivity.
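The sketch below illustrates this counting, assuming each filament is available as an ordered list of 3D sample points (a simplified stand-in for the DisPerSE output): a filament contributes to \(k\) if its polyline crosses the sphere of radius \(R_{\Delta}\) centred on the halo.

```python
import numpy as np

def connectivity(filaments, centre, r_delta):
    """Count filaments whose sampled polyline crosses the sphere |x - centre| = r_delta."""
    k = 0
    for points in filaments:                          # points: (n, 3) samples along one filament
        r = np.linalg.norm(np.asarray(points) - centre, axis=1)
        inside = r < r_delta
        if np.any(inside != inside[0]):               # at least one crossing of the surface
            k += 1
    return k

centre = np.array([15.0, 15.0, 15.0])
radial = np.column_stack([np.linspace(15.0, 25.0, 50),
                          np.full(50, 15.0), np.full(50, 15.0)])
faraway = np.full((50, 3), 28.0)
print(connectivity([radial, faraway], centre, r_delta=1.5))   # -> 1
```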
## 4 Results
In this work we investigate whether the number of filaments connected to a cluster, quantified by the connectivity, is correlated with the main properties of the cluster itself, mainly its mass and dynamical state.
### Connectivity and galaxy cluster mass
The number of filaments connected to a cluster is expected to correlate with the mass of the cluster itself, as many studies show [13; 14; 15; 16]. The expected trend is for the connectivity to increase with the mass of the cluster.
Taking advantage of a cluster sample spanning a large mass range, we analyse the connectivity of a set of haloes and clusters extracted from The Three Hundred simulation at redshift \(z=0\). Figure 1 shows the values of the connectivity \(k_{200}\) as a function of the cluster mass \(M_{200}\). We measured the mean and the standard deviation in each mass bin, with bins chosen by taking into account the overall mass distribution in The Three Hundred simulation at \(z=0\), shown in the bottom panel of the figure. We performed a linear fit, whose parameters are shown in Table 1, of the form \(\log k_{200}=A\cdot\log M_{200}+B\).
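The fit itself is a straight line in log-log space; a minimal sketch with mock data standing in for the halo sample is shown below.

```python
import numpy as np

# Mock arrays standing in for the halo sample.
rng = np.random.default_rng(3)
log_m = rng.uniform(13.0, 15.0, 3000)                       # log10(M200 / h^-1 Msun)
log_k = 0.3 * log_m - 3.8 + rng.normal(0.0, 0.1, 3000)      # mock log10(k200)

A, B = np.polyfit(log_m, log_k, deg=1)                      # log k200 = A log M200 + B
print(f"A = {A:.3f}, B = {B:.2f}")
```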
We compare our estimates of the connectivity to similar measurements from literature studies, as shown in Figure 2. We compare our sample both to simulated data, as in [13], and to observational data, as in [14; 15; 16]. The trend of connectivity with mass is in good agreement, within the errors, with those from the literature, despite the differences summarized in Table 2, and extends over a larger mass range. More specifically, the connectivity is estimated at a fixed aperture in [13; 14], while the connectivity from our work and [15; 16] is estimated at an overdensity radius. Other differences can arise from the Cosmic Web component chosen to extract the filamentary skeletons, such as gas, Dark Matter or galaxy particles, and from the different filament finders used in the analyses. We refer to [13; 14; 15; 16] for a more detailed description of the datasets used.
\begin{table}
\begin{tabular}{l c} \hline \multicolumn{2}{c}{\(\log k_{200}-\log M_{200}\)} \\ \hline A & 0.298 \(\pm\) 0.016 \\ B & -3.78 \(\pm\) 0.23 \\ \hline \end{tabular}
\end{table}
Table 1: The fitted parameters for the \(\log k_{200}-\log M_{200}\) relation
\begin{table}
\begin{tabular}{l l l l l} \hline \multicolumn{2}{c}{Data} & \(k_{R}\) & M & CW extraction \\ \hline The300 & Hydro simulation & \(R_{200}\) & \(M_{200}\) & 3D gas particles \\ AC+10 [13] & DM simulation & 3 Mpc & \(M_{Vir}\) & 3D DM particles \\ S+19 [14] & Observations & 1.5 Mpc & \(M_{200}\) & 2D galaxies \\ DF+19 [15] & Observations & 1.5 \(R_{Vir}\) & \(M_{200}\) & 2D galaxies \\ M+20 [16] & Observations (Coma Cl.) & \(R_{Vir}\) & \(M_{200}\) & 3D z-slice galaxies \\ \hline \end{tabular}
\end{table}
Table 2: A summary of the parameters of previous studies compared to this work.
### Connectivity and galaxy cluster dynamical state
In this subsection we investigate the correlation between the connectivity and the dynamical state of The Three Hundred clusters. We quantify the dynamical state with the degree of relaxation \(\chi\), as defined by [17]:
\[\chi_{\Delta}=\left(\frac{(\frac{f_{s}}{0.1})^{2}+(\frac{\Delta_{r}}{0.04})^{2} +(\frac{\left|1-\eta\right|}{0.15})^{2}}{3}\right)^{-1/2}\]
where \(f_{s}\) is the sub-halo mass fraction, \(\Delta_{r}\) is the centre-of-mass offset and \(\eta\) is the virial ratio. The threshold values for these parameters were chosen following [18]. A cluster is considered dynamically relaxed when \(\chi\geq 1\). To study the effect of connectivity on the dynamical state of clusters, independently of their masses, we divide the data set into three connectivity and three mass sub-samples. Respectively, we consider weakly connected (\(k_{200}<4\)), medium connected (\(k_{200}=4\)) and highly connected (\(k_{200}>4\)) clusters, along with low-mass (\(M_{200}<7\times 10^{13}\,h^{-1}\,M_{\odot}\)), medium (\(7\times 10^{13}\leq M_{200}\,h^{-1}\,M_{\odot}<5.5\times 10^{14}\)) and massive (\(M_{200}\geq 5.5\times 10^{14}\,h^{-1}\,M_{\odot}\)) clusters.
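For illustration, the snippet below evaluates the relaxation parameter defined above for assumed input values; \(\chi\geq 1\) marks a relaxed cluster.

```python
import numpy as np

def chi_relaxation(f_s, delta_r, eta):
    """Degree of relaxation from the sub-halo mass fraction, centre-of-mass offset
    and virial ratio, with thresholds 0.1, 0.04 and 0.15."""
    s = (f_s / 0.1) ** 2 + (delta_r / 0.04) ** 2 + (np.abs(1.0 - eta) / 0.15) ** 2
    return np.sqrt(3.0 / s)

print(chi_relaxation(f_s=0.05, delta_r=0.02, eta=0.95))   # > 1: relaxed
print(chi_relaxation(f_s=0.30, delta_r=0.10, eta=1.40))   # < 1: disturbed
```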
In the left and right panels of Figure 3 we display the degree of relaxation as a function of the mass and connectivity, respectively, for three connectivity and mass bins. At a fixed mass, the left panel of Figure 3 shows that there is no evident correlation between connectivity
Figure 1: The connectivity of haloes and clusters \(\log k_{200}\) plotted as a function of the mass \(\log\left(M_{200}/h^{-1}\,M_{\odot}\right)\) (grey points). In the top panel the mean and standard deviation values are also plotted. The bottom panel shows the mass distribution of haloes and clusters of The Three Hundred hydrodynamical simulation at redshift \(z=0\) analysed in this work.
and the degree of relaxation, as the three sub-samples overlap. On the other hand, at fixed connectivity, the right panel of the figure shows a slight correlation between the mass and \(\chi\), indicating that less massive haloes are on average more dynamically relaxed. This result is in disagreement with that found by [19], where at fixed mass, weakly connected clusters are on average more relaxed than highly connected clusters. These differences may depend on the different data set and Cosmic Web extraction analysed in their work, to which we refer the reader for more details.
Figure 3: _Left panel_: mean degree of relaxation as a function of the mass \(M_{200}\) for three connectivity \(k_{200}\) bins (\(k<4\), \(k=4\) and \(k>4\)). _Right panel_: mean degree of relaxation as a function of the connectivity \(k_{200}\) for three mass \(M_{200}\) bins. In both panels, the errors bars represent the errors on the mean values.
Figure 2: Connectivity of The Three Hundred clusters, compared with literature values. For each work, we plot the mean values of the connectivity. The error bars refer to the standard deviation values for this work, [13; 14; 16], while for [15] they represent the errors on the mean values.
## 5 Conclusions
In this work we analysed the gas filamentary structures connected to The Three Hundred hydrodynamical simulation clusters and their impact on galaxy cluster properties. We extracted the gas skeletons at \(z=0\) with the DisPerSE filament finder in the 324 regions of the simulation and we estimated the connectivity \(k_{200}\) of haloes and clusters.
The main conclusions of this work can be summarized in the following:
1. The connectivity is correlated with the mass of haloes and clusters, with more massive clusters being on average more connected. This result is compatible with previous results from the literature, both from simulations and observations;
2. We do not find a correlation between the connectivity and the dynamical state of clusters, quantified in terms of the degree of relaxation \(\chi\).
|
2305.03444 | Local Gaussian Modifiers (LGMs): UAV dynamic trajectory generation for
onboard computation | Agile autonomous drones are becoming increasingly popular in research due to
the challenges they represent in fields like control, state estimation, or
perception at high speeds. When all algorithms are computed onboard the uav,
the computational limitations make the task of agile and robust flight even
more difficult. One of the most computationally expensive tasks in agile flight
is the generation of optimal trajectories that tackles the problem of planning
a minimum time trajectory for a quadrotor over a sequence of specified
waypoints. When these trajectories must be updated online due to changes in the
environment or uncertainties, this high computational cost can leverage to not
reach the desired waypoints or even crash in cluttered environments. In this
paper, a fast lightweight dynamic trajectory modification approach is presented
to allow modifying computational heavy trajectories using Local Gaussian
Modifiers (LGMs), when recalculating a trajectory is not possible due to the
time of computation.
Our approach was validated in simulation, being able to pass through a race
circuit with dynamic gates with top speeds up to 16.0 m/s, and was also
validated in real flight reaching speeds up to 4.0 m/s in a fully autonomous
onboard computing condition. | Miguel Fernandez-Cortizas, David Perez-Saura, Javier Rodriguez-Vazquez, Pascual Campoy | 2023-05-05T11:43:52Z | http://arxiv.org/abs/2305.03444v1 | # Local Gaussian Modifiers (LGMs): UAV dynamic trajectory generation for onboard computation
###### Abstract
Agile autonomous drones are becoming increasingly popular in research due to the challenges they represent in fields like control, state estimation, or perception at high speeds. When all algorithms are computed onboard the UAV, the computational limitations make the task of agile and robust flight even more difficult. One of the most computationally expensive tasks in agile flight is the generation of optimal trajectories, which tackles the problem of planning a minimum-time trajectory for a quadrotor over a sequence of specified waypoints. When these trajectories must be updated online due to changes in the environment or uncertainties, this high computational cost can lead to missing the desired waypoints or even crashing in cluttered environments. In this paper, a fast lightweight dynamic trajectory modification approach is presented to allow modifying computationally heavy trajectories using Local Gaussian Modifiers (LGMs), when recalculating a trajectory is not possible due to the time of computation.
Our approach was validated in simulation, being able to pass through a race circuit with dynamic gates with top speeds up to 16.0 m/s, and was also validated in real flight reaching speeds up to 4.0 m/s in a fully autonomous onboard computing condition.
## Supplementary Material
Video of the experiments: [https://vimeo.com/683638197](https://vimeo.com/683638197). Released code : [https://github.com/miferco97/dynamic_trajectory_generator](https://github.com/miferco97/dynamic_trajectory_generator)
## I Introduction
Multirotors are highly versatile and agile aerial robotic platforms, thanks to their maneuverability and simplicity. Nowadays, these vehicles are being used in several tasks such as inspection, delivery, cinematography, or search-and-rescue [1]. Most drone applications, however, still require a human pilot who is in charge of controlling them. The research community and the industry are working to achieve a higher level of autonomy in drones, which will allow them to perform complex tasks without needing human intervention.
Drones can carry on-board computers, which allows the drone to perform complex tasks like interpreting the environment, generating a map, or computing complex trajectories, without relying on stable and fast communication between the aircraft and the ground, which improves the robustness of the system. However, due to the limited weight that a drone is capable of carrying, the power of these computers is limited.
Autonomous drone flight needs various components working together in a coordinated way, such as state estimation, control, environment perception, or planning components [2]. When all of these components must run onboard the drone in real time, computing resources become an important limitation.
When UAVs fly in the real world, they have to deal with many uncertainties in self-localization, environment recognition, and dynamic modeling, which are often combined with changes in the environment; this is why being able to adapt to different conditions is fundamental.
For the control modules, the most popular strategies, like geometric controllers [3], quaternion-based controllers [4] or Model Predictive Controllers (MPC) [5], rely on a previously computed dynamically feasible sequence of states and inputs to track. The problem of generating this sequence is called trajectory generation.
Generating trajectories that change continuously can be a computationally expensive task, involving an important amount of time. During the time spent modifying the trajectory, the drone keeps flying, following the previous trajectory. If the time for the trajectory generation is too long, it can result in a collision because the new trajectory is computed too late. When all computations are done on a small computer onboard the drone, the calculation time increases.
All of this makes it necessary to have a computationally cheap and fast way to modify the trajectory when there is not enough time to generate a new trajectory. In this work, we focus on developing a fast and lightweight dynamic trajectory generation approach that is able to adapt to environmental changes while the drone is flying through the environment.
### _Related work_
The formulation of the trajectory planning problem for multirotors has evolved from the simple shortest path approach to complex minimum time optimization problems. For simple point-mass systems, time optimal trajectories can be computed in closed-form, resulting in bang-bang acceleration trajectories, which can easily be sampled over multiple waypoints. However, multirotors are under-actuated systems [6][7], which means that there is a coupling between linear and rotational accelerations. This coupling causes problems at the moment of generating time optimal trajectories [8].
There are two main approaches for trajectory generation of drones. On the one hand, there is polynomial trajectory planning [9][10], which is computationally efficient and exploits the differentially flat output states, but whose smooth polynomials cannot take advantage of the full actuator potential of the aircraft. On the other hand, there are discretized state-space formulations that use nonlinear optimization to plan in a time-discretized state space with a more complex quadrotor model, taking advantage of the full quadrotor dynamics, such as the Complementary Progress Constraints (CPC) trajectory generator [11]. These approaches are computationally demanding, taking on the order of minutes or even hours to generate a trajectory. Due to this high computational cost, these trajectories are precomputed offline for a fixed and invariant circuit.
Alternatively, there are other approaches that try to solve the problem of control and trajectory generation simultaneously, such as Model Predictive Contouring Control (MPCC) techniques [1].
In real-world applications, we have to deal with uncertainties and with changing or unknown environments in real time. This means the path will change during the flight, generating the need to modify the trajectory online.
### _Contribution_
In this work, we present a fast and lightweight methodology for generating adaptive trajectories that react to changes in the waypoint set in a smooth and agile way. Our approach consists of combining a polynomial trajectory generator, which produces a minimum-snap trajectory that constitutes the base trajectory (baseline), with Local Gaussian Modifiers (LGMs) that modify the baseline trajectory when recomputing it is not feasible. Moreover, we present a strategy for stitching two polynomial trajectories in a smooth way. Compared to other trajectory generators, this one presents an organic approach that takes into account that the drone is sampling its trajectory, so as to generate smooth trajectories in the simplest and most transparent way for the user.
## II Methodology
### _Notation_
In this paper, we use the global frame \(W\) to plan and generate all trajectories. For vectorial variables, functions, and constants, we use bold letters, as \(\mathbf{x}\). Tilde notation represents the updated value of a variable before it is incorporated into the trajectory, e.g. \(\mathbf{\tilde{w}}\) means the waypoint position update before the new trajectory is recalculated.
### _Problem formulation_
Given a set of \(N\) dynamic waypoints, we aim to compute agile dynamic trajectories \(\mathbf{F}(t)\) that traverse each waypoint as optimally as possible given the onboard computation limitations, being able to modify the position of the dynamic waypoints as the quadrotor flies along the trajectory.
Due to computational constraints, when approaching a waypoint there is a temporal threshold \(T_{security}\) beyond which we do not have enough time to recompute the trajectory using conventional methods.
Inside this threshold, we are flying blindly, so we are not able to correct the trajectory to ensure the waypoint is traversed. Our goal is to develop a cheap trajectory modification method that allows us to keep recalculating trajectories until the very last moment, increasing the success rate of reaching those waypoints, although these modifications can lead to following a suboptimal trajectory.
We define a dynamic waypoint \(\mathbf{w}=[x,y,z]^{t}\) as a 3D point with an ID, whose position can change over time. Each waypoint may have other restrictions such as the velocity \(\mathbf{\dot{w}}\) or acceleration \(\mathbf{\ddot{w}}\) that the aircraft must have when passing through it.
For generating a base trajectory \(\mathbf{P}(t)\) we rely on a polynomial trajectory generator based on the work of Richter et al. [10]. This approach could be used with other, more sophisticated trajectory generators, but in this work we decided to use a simple one with a good trade-off between performance and computational cost. In that work, piecewise polynomial minimum-snap trajectories are generated based on the differential flatness property of the quadrotor dynamics. This trajectory is expressed as:
\[\mathbf{P}(t)=\left\{\begin{aligned} &\sum_{i=0}^{n}c_{i,1}\,t^{i}&& t_{0}\leq t<t_{1}\\ &\sum_{i=0}^{n}c_{i,2}\,t^{i}&& t_{1}\leq t<t_{2} \\ &\vdots&&\\ &\sum_{i=0}^{n}c_{i,N}\,t^{i}&& t_{N-2}\leq t<t_{N-1} \end{aligned}\right. \tag{1}\]
where \(N\) represents the number of waypoints, \(n\) the order of the polynomial, and \(c_{i,j}\) ; \(i=0,..,n\) ; \(j=1,...,N\) the coefficients of each polynomial. More details about how to compute these trajectories can be found in [10][12].
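As an illustration of Eq. (1), the sketch below evaluates a piecewise polynomial given per-segment coefficients and waypoint times; the coefficients are toy values chosen to be continuous at the junction, not an actual minimum-snap solution.

```python
import numpy as np

def eval_piecewise(t, t_wp, coeffs):
    """Evaluate Eq. (1): coeffs[j][i] is c_{i,j+1}, valid for t_wp[j] <= t < t_wp[j+1]."""
    j = int(np.clip(np.searchsorted(t_wp, t, side="right") - 1, 0, len(coeffs) - 1))
    return sum(c * t ** i for i, c in enumerate(coeffs[j]))

t_wp = [0.0, 2.0, 4.0]                          # three waypoint times -> two segments
coeffs = [[0.0, 0.0, 0.5],                      # quadratic segment 1 (single axis only)
          [-2.0, 2.0, 0.0]]                     # segment 2, continuous in value and velocity at t=2
print(eval_piecewise(1.0, t_wp, coeffs), eval_piecewise(3.0, t_wp, coeffs))   # 0.5 4.0
```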
In our approach, we compute an adaptive trajectory that combines a polynomial trajectory \(\mathbf{P}(t)\) with Local Gaussian Modifiers (LGMs). We define a \(\mathbf{LGM}(t):\mathbb{R}^{+}\rightarrow\mathbb{R}^{3}\) as:
\[\mathbf{LGM}(t)=\mathbf{A}e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}} \tag{2}\]
where \(\mathbf{A}\in\mathbb{R}^{3}\) represents the magnitude of the modification in the position of the waypoint on each axis and \(\mu,\sigma\in\mathbb{R}^{+}\) are constants that are computed when each LGM is created. Each dynamic waypoint can have multiple LGMs associated with it.
With these components, we define the time evaluation of our dynamic trajectory \(\mathbf{F}(\mathbf{t})\) as:
\[\mathbf{F}(\mathbf{t})=\mathbf{P}(\mathbf{t})+\sum_{i=0}^{N_{w}}\sum_{j=0}^{N_{m_{i}}}\mathbf{LGM}_{i,j}(t) \tag{3}\]
where \(N_{w}\) represents the number of waypoints of the trajectory and \(N_{m_{i}}\) the number of modifications of the i-th waypoint.
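A minimal sketch of Eqs. (2)-(3) is shown below: the dynamic trajectory is the baseline plus the sum of all active Local Gaussian Modifiers. The straight-line baseline is a toy stand-in for the minimum-snap polynomial \(\mathbf{P}(t)\), and the modifier values are illustrative.

```python
import numpy as np

def lgm(t, A, mu, sigma):
    """Eq. (2): a Gaussian bump of per-axis amplitude A, centred at time mu."""
    return np.asarray(A) * np.exp(-((t - mu) ** 2) / (2.0 * sigma ** 2))

def dynamic_trajectory(t, baseline, modifiers):
    """Eq. (3): baseline P(t) plus every active LGM."""
    out = np.asarray(baseline(t), dtype=float)
    for A, mu, sigma in modifiers:
        out = out + lgm(t, A, mu, sigma)
    return out

baseline = lambda t: np.array([t, 0.0, 1.5])            # toy straight-line stand-in for P(t)
mods = [(np.array([0.0, 0.4, 0.0]), 3.0, 0.5)]          # waypoint at t=3 s displaced 0.4 m in y
print(dynamic_trajectory(3.0, baseline, mods))           # -> [3.  0.4 1.5]
```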
In this work, we assume that we are not reaching the UAV dynamics limits when the polynomial trajectories are computed, so we can afford not to take into account the limits in speed and acceleration when LGMs are applied, allowing us to reduce the computational cost even further.
### _Dynamic Trajectory Generation_
When a UAV is flying at high speed and a waypoint position changes, fast reactiveness is fundamental for avoiding collisions. This reactivity is limited by the computational cost involved in generating these trajectories. When these trajectories are computed on onboard computational systems, this effect is even more pronounced.
In this approach, we consider different ways to modify a trajectory depending on how fast a new trajectory can be generated in a safe way, ensuring that the trajectory generated will pass through this modified waypoint.
For this task, we define a security time \(T_{security}\):
\[T_{security}=C_{security}\cdot T_{computation}(n); \tag{4}\]
where \(T_{computation}(n)\) is an estimation of how much time an n-waypoint trajectory needs to be calculated, and \(C_{security}\) is a security constant to ensure that the UAV will have enough time to react after the trajectory is modified; in this work we use \(C_{security}=5\).
The estimation of \(T_{computation}(n)\) is calculated online, based on the average time that previous trajectories of \(n\) waypoints took to be calculated. This estimation is updated as new trajectories are calculated, taking into account the computational load of the onboard computer during the flight.
With this \(T_{security}\) we can define a security zone \(SZ\) as the period of time where the UAV is less time away from the next waypoint than the safety time \(T_{security}\), see Fig. 1.
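The timing bookkeeping of Eq. (4) can be sketched as follows: a running average of past generation times per waypoint count gives \(T_{computation}(n)\), and the UAV is inside the security zone whenever the next waypoint is closer in time than \(C_{security}\cdot T_{computation}(n)\). The class interface below is an illustrative assumption, not the API of the released implementation.

```python
from collections import defaultdict

class SecurityZone:
    """Keeps a running estimate of T_computation(n) and evaluates Eq. (4)."""
    def __init__(self, c_security=5.0):
        self.c_security = c_security
        self.samples = defaultdict(list)        # n_waypoints -> measured generation times [s]

    def record(self, n_waypoints, elapsed_s):
        self.samples[n_waypoints].append(elapsed_s)

    def t_computation(self, n_waypoints, default=0.5):
        times = self.samples[n_waypoints]
        return sum(times) / len(times) if times else default

    def inside(self, time_to_next_waypoint_s, n_waypoints):
        return time_to_next_waypoint_s < self.c_security * self.t_computation(n_waypoints)

sz = SecurityZone()
sz.record(5, 0.12)
sz.record(5, 0.16)
print(sz.t_computation(5), sz.inside(0.5, 5), sz.inside(1.0, 5))   # 0.14 True False
```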
In this work, we divide our problem into three subproblems depending on the state of the aircraft:
1. Generate base trajectories.
2. Modify trajectories outside security zone
3. Modify trajectories inside security zone
The following sections discuss how to deal with each of these problems in detail.
### _Generating base trajectories_
When no trajectory has been generated yet or the UAV finishes following the current trajectory, the next trajectory is generated from scratch, which means that the computation time is not taken into account.
For generating this trajectory, we use minimum snap polynomial-based multiwaypoint trajectory planning algorithms [10][12] due to their simplicity and computation speed.
For generating these trajectories, a set of ordered dynamic waypoints must be provided; the order of each waypoint represents the order in which the UAV will reach it. When a trajectory is generated from scratch, we constrain the maximum speed and acceleration to ensure the feasibility of the trajectory generated.
For generating trajectories in this way, it is necessary to know the state of the UAV when the trajectory generation process starts, so that the generated trajectories start from the current position of the UAV.
### _Modify trajectories outside security zone_
If the aircraft is outside the security zone, the trajectory can be modified by generating a new base trajectory \(\mathbf{\tilde{P}}(t)\) from scratch, updating the position of the dynamic waypoints, or adding new ones.
While the new trajectory is being generated, the UAV continues following the old trajectory until the new one is computed and the old one is replaced. To ensure smoothness over the whole track, this new trajectory must be generated taking into account a smooth stitching between the old trajectory and the new trajectory.
If both trajectories are too different at the swapping moment, the trajectory followed by the aircraft would have a discontinuity that breaks the smoothness of the whole trajectory. To minimize this discontinuity at the trajectory swap, the new trajectory is computed using a set of \(N_{smooth}\) waypoints (swapping waypoints) that smooth this swap, together with the set of waypoints through which the trajectory has to pass, see Fig. 2.
This set of swapping waypoints is obtained from the old trajectory to ensure that the new base trajectory is similar to
Fig. 1: Diagram representing the security zone based on \(T_{security}\) with respect to the next waypoint \(w_{i}\), when the UAV is at \(t_{uav}\)
Fig. 2: New base trajectory (blue) generated from another trajectory (red) using swapping waypoints (green circles) and modified waypoints (blue circles). Red crosses represent the previous location of each waypoint and the red triangle the position of the UAV when the new trajectory is calculated.
the former one during the computational time \(T_{computation}(n)\):
\[\mathbf{w_{i}}=\mathbf{F}(t_{gen}+i\cdot t_{offset}),\quad i=0,\ldots,N_{smooth} \tag{5}\] \[\mathbf{\dot{w}_{i}}=\mathbf{\dot{F}}(t_{gen}+i\cdot t_{offset}),\quad i=0,\ldots,N_{smooth} \tag{6}\] \[\mathbf{\ddot{w}_{i}}=\mathbf{\ddot{F}}(t_{gen}+i\cdot t_{offset}),\quad i=0,\ldots,N_{smooth} \tag{7}\]
where \(t_{gen}\) is the time at which the new trajectory starts to be generated and \(t_{offset}\) represents a temporal displacement between consecutive waypoints. To ensure that all these waypoints fall within the computation time of the new trajectory, \(t_{offset}=\alpha\cdot T_{computation}(n)/N_{smooth}\) with \(\alpha=1.5\). In this work, we tried different values for \(N_{smooth}\), finding that 1 and 2 are the most convenient values.
Finally, when the new trajectory is computed, it replaces the previous one from that point in time.
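A minimal Python sketch of the swapping-waypoint sampling of Eqs. (5)-(7) is given below. It samples the currently followed trajectory at evenly offset times so that the new trajectory starts consistently with the old one; the finite-difference derivatives and the names used are purely illustrative.

```
import numpy as np

def swapping_waypoints(traj, t_gen, t_comp, n_smooth=2, alpha=1.5):
    # traj   : callable F(t) returning the position in R^3 on the old trajectory.
    # t_gen  : time at which generation of the new trajectory starts.
    # t_comp : estimated computation time T_computation(n) of the new trajectory.
    # Returns position, velocity and acceleration samples for the swapping
    # waypoints (derivatives approximated numerically here for illustration).
    t_offset = alpha * t_comp / n_smooth
    dt = 1e-3
    samples = []
    for i in range(n_smooth + 1):
        t = t_gen + i * t_offset
        p = np.asarray(traj(t), dtype=float)
        p_prev = np.asarray(traj(t - dt), dtype=float)
        p_next = np.asarray(traj(t + dt), dtype=float)
        v = (p_next - p_prev) / (2 * dt)
        a = (p_next - 2 * p + p_prev) / dt ** 2
        samples.append((p, v, a))
    return samples
```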
### _Modify trajectories inside security zone_
When the aircraft is inside the security zone, it is not able to recompute a base trajectory in a safe way, and it therefore loses the possibility of correcting the trajectory in a robust way. In this situation, we have to use faster but sub-optimal approaches in exchange for being able to make these modifications.
In order to perform small modifications to the trajectory, we propose to apply local modifications near the modified waypoint so that the trajectory passes through it in a smooth and agile way. For these modifications we use LGMs, whose computation time is more than two orders of magnitude lower than that of generating a base trajectory.
Each LGM has 3 constants related to it: \(\mathbf{A}\), \(\sigma\) and \(\mu\). To obtain the desired behavior, each constant must be computed before adding the modifier to the trajectory.
Given a trajectory \(\mathbf{F}(t)\) consisting of \(N\) waypoints, each waypoint \(w_{i}\) has a position \(\mathbf{x}_{i}\) and a time in the trajectory \(t_{w_{i}}\), which means that \(\mathbf{F}(t_{w_{i}})=\mathbf{w_{i}}\). When a modification \(\mathbf{\hat{w_{i}}}\) occurs at a time \(t_{mod}\), a new modifier \(\mathbf{LGM}_{i}(t)\) is generated, see Fig. 3. Each constant of the modifier is computed in this way:
\[\mathbf{A}=\begin{bmatrix}A_{x}\\ A_{y}\\ A_{z}\end{bmatrix}=\mathbf{\hat{w}_{i}}-\mathbf{w_{i}}=\begin{bmatrix}\hat{w}_{ix}-w_{ix}\\ \hat{w}_{iy}-w_{iy}\\ \hat{w}_{iz}-w_{iz}\end{bmatrix} \tag{8}\] \[\mu=t_{w_{i}} \tag{9}\] \[\sigma=\frac{|t_{mod}-\mu|}{3.5} \tag{10}\]
where \(\mathbf{A}\) represents the amplitude of the modification, \(\mu\) the temporal position of the waypoint in the current trajectory, and \(\sigma\) the standard deviation of the Gaussian. These parameters are chosen in such a way that \(\mathbf{LGM}(t_{mod})\approx\mathbf{0}\); with this we guarantee a smooth change in the trajectory when each modification is done. Due to the properties of the Gaussian function, \(99.98\%\) of the contribution of \(\mathbf{LGM}(t)\) lies in the range \(-3.5\sigma<t-\mu<3.5\sigma\), so choosing \(\sigma\) in this way lets us consider that adding these modifiers to \(\mathbf{F}(t)\) maintains the continuity and differentiability conditions of the trajectory.
The low computational cost of generating and evaluating LGMs allows us to append multiple of them to the same waypoint, obtaining high reactiveness to trajectory changes while maintaining the smoothness condition over the trajectory.
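The sketch below shows how the constants of Eqs. (8)-(10) could be computed in Python when a waypoint \(w_{i}\) scheduled at time \(t_{w_{i}}\) is moved to \(\hat{w}_{i}\) at time \(t_{mod}\); this is an illustrative fragment rather than the package's C++ code. The resulting tuple can be appended to the modifier list of the dynamic trajectory sketched earlier to reproduce the behaviour of Fig. 3.

```
import numpy as np

def make_lgm(w_i, w_i_new, t_wi, t_mod):
    # Compute the LGM constants for a modified waypoint (Eqs. 8-10).
    A = np.asarray(w_i_new, dtype=float) - np.asarray(w_i, dtype=float)  # Eq. (8)
    mu = t_wi                                                            # Eq. (9)
    sigma = abs(t_mod - mu) / 3.5                                        # Eq. (10)
    return A, mu, sigma
```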
### _Implementation details._
The implementation of this dynamic trajectory generator has been done in C++ and it is publicly accessible1. We use a library developed at ETH Zurich for generating polynomial trajectories2, based on the work of Richter et al. [10].
Footnote 1: [https://github.com/miferco97/dynamic_trajectory_generator](https://github.com/miferco97/dynamic_trajectory_generator)
Footnote 2: [https://github.com/ethz-asl/mav_trajectory_generation](https://github.com/ethz-asl/mav_trajectory_generation)
## III Experiments and Results
### _Experimental setup_
For the simulation experiments, a laptop with Ubuntu 20.04 and an Intel i7-10870H processor @2.2GHz has been used. In the real flights we use an NVIDIA Jetson Xavier NX with a 64-bit NVIDIA Carmel ARM v8.2 CPU @1.4GHz. We performed profiling tests on both computers.
All flight experiments have been performed using a ROS2 version of the recent framework for autonomous drone racing based on Aerostack 5.0 [2].
### _Local Gaussian Modification tests_
The first experiment consists of testing the performance of the dynamic trajectories generated using only LGMs to update the trajectory.
For this experiment, we generate a trajectory with 5 waypoints and, as the UAV follows it, some waypoints increase their distance from the former trajectory. Fig. 4 shows the different modifications applied over the main trajectory, and Fig. 5 shows the references obtained by the quadrotor during the whole track.
Fig. 3: One-dimensional representation of a generated trajectory (green) with a base trajectory (red) and a Local Gaussian Modification (blue) of the waypoint \(w\) with a displacement \(A\) in the x axis. The red triangle represents the position of the UAV when the modification is done, red crosses represent the original position of two waypoints and the blue circle the modification of the waypoint \(w\).
Some profiling tests of the time spent computing base trajectories (TABLE I) and of the time spent generating and evaluating LGMs (TABLE II) have been done both on the high-end computer and on the onboard computer.
### _Simulation flights_
For this experiment, 4 dynamic gates have been placed in the circuit. These gates moved from side to side up to 1 meter at a constant speed of 0.1 m/s. All simulations have been done in the Gazebo simulator, using ROS2 for the communication between modules.
To study the effect of the computation time of the system on the generation and modification of the trajectories, 3 rounds of experiments were carried out: one with the computation time actually taken by the computer to generate a trajectory, and two more with this time increased by 0.5 s and by 1 s, see TABLES III, IV and V.
### _Real flight experiment._
The aerial platform used for the real experiments was a custom quadrotor based on the DJI F330 frame, shown in Fig. 11. This platform was equipped with a Pixhawk 4 mini as the aircraft autopilot, an Intel RealSense T265 tracking module used for state estimation, and a USB fish-eye camera for gate detection. Additionally, the aerial platform was equipped with a Single Board Computer (SBC) NVIDIA Jetson Xavier NX with a 6-core ARM v8.2 CPU (Fig. 12).
To validate the approach in a real environment, an additional real-world experiment was done. The experiment consisted of passing through a small circuit of two gates, one lap faster each time, beginning with a maximum speed of 0.5 m/s and increasing it by 0.5 m/s each lap up to 3.5 m/s, see Fig. 13.
## IV Discussion
In the first testbed, we can see that the trajectory modified with the LGMs reaches all the waypoints in a smooth
\begin{table}
\begin{tabular}{c|c|c} & **Laptop computer** & **Jetson Xavier NX** \\
6 points & \(13.71\pm 0.14\ ms\) & \(51.47\pm 1.57\ ms\) \\
14 points & \(88.93\pm 3.11\ ms\) & \(321.07\pm 22.11\ ms\) \\
26 points & \(309.08\pm 10.20\ ms\) & \(1027.76\pm 19.36\ ms\) \\ \end{tabular}
\end{table} TABLE I: Time spent generating base polynomial trajectories for different numbers of points, both on a laptop computer and on an onboard computer.
Fig. 12: Position, velocity and acceleration references generated by the trajectory generation during the following of the dynamic trajectory shown above in Fig. 11.
\begin{table}
\begin{tabular}{c|c|c} & **Laptop computer** & **Jetson Xavier NX** \\
6 points & \(13.71\pm 0.14\ ms\) & \(51.47\pm 1.57\ ms\) \\
14 points & \(88.93\pm 3.11\ ms\) & \(321.07\pm 22.11\ ms\) \\
26 points & \(309.08\pm 10.20\ ms\) & \(1027.76\pm 19.36\ ms\) \\ \end{tabular}
\end{table} TABLE II: Time spent generating and evaluating the sum of multiple LGMs acting on the same trajectory.
way, generating smooth references for position, velocity and acceleration, as we can see in Fig. 4 and Fig. 5. Moreover, from the profiling tests we observe that modifying a trajectory with LGMs is more than two orders of magnitude faster than generating a new polynomial trajectory for the usual number of LGMs in a trajectory (between 8 and 64 LGMs).
In the simulated experiments, it has been possible to show that, with a very small computation time such as that of a high-capacity computer, the system is capable of constantly regenerating the trajectory each time it detects a change in the position of one of the waypoints. Moreover, when the computation time is increased to make it similar to that of computers with lower computational capacity, the system is no longer able to regenerate full trajectories and has to modify them using LGMs. This effect increases considerably with speed. With all this, speeds in excess of 14 m/s are achieved while consistently passing through the moving gates.
In the last experiment, we tested our approach in the real world, being able to achieve peak speeds of up to 4 m/s, with an average speed of 1 m/s during the experiment. We were not able to fly faster because of the uncertainties in the perception and state estimation modules, which added noise to the trajectory following and led to crashes.
## V Conclusions
In this work, a novel method was presented to modify base trajectories, whose calculation is computationally expensive, when the reaction time and computing resources are limited. Combined with a polynomial trajectory generator, it is able to achieve high speeds of up to 16 m/s in a simulated dynamic environment, while being robust to changes in the
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline speed limit & Max speed & Mean speed & Max speed & mean speed & time elapsed & success \\ \hline
5 & 4.71 & 2.25 & 4.7 & 2.29 & 45.90 & 0.92 \\
10 & 9.63 & 3.33 & 11.08 & 3.46 & 31.1 & 0.85 \\
15 & 13.61 & 4.77 & 13.82 & 4.87 & 21.80 & 0.90 \\
20 & 15.58 & 4.06 & 16.72 & 3.24 & 27.47 & 0.65 \\ \hline \end{tabular}
\end{table} TABLE V: Results for simulated flight with computing time + 1.0s
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline speed limit & Max speed & Mean speed & Max speed & mean speed & time elapsed & success \\ \hline
5 & 5.15 & 1.54 & 5.25 & 2.23 & 48.5 & 0.88 \\
10 & 8.59 & 2.34 & 9.07 & 2.94 & 35.96 & 0.85 \\
15 & 13.05 & 2.78 & 11.41 & 3.55 & 31.1 & 0.9 \\
20 & 10.28 & 1.9 & 11.19 & 2.45 & 44.7 & 0.88 \\ \hline \end{tabular}
\end{table} TABLE III: Results for simulated flight with computing time
Fig. 8: Plot per axis of the Trajectory Speed generated during the real flight (blue), and the trajectory speed followed by the UAV (orange)
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline speed limit & Max speed & Mean speed & Max speed & mean speed & time elapsed & success \\ \hline
5 & 4.47 & 1.87 & 5.10 & 2.39 & 42.30 & 0.90 \\
10 & 9.50 & 3.53 & 9.48 & 3.61 & 29.54 & 0.90 \\
15 & 13.63 & 3.27 & 13.80 & 3.35 & 31.25 & 0.95 \\
20 & 15.64 & 3.58 & 16.01 & 3.70 & 29.95 & 0.75 \\ \hline \end{tabular}
\end{table} TABLE IV: Results for simulated flight with computing time + 0.5s
Fig. 6: Quadrotor used for real flight experiments
Fig. 7: Trajectory generated during the real flight (blue), and the trajectory followed by the UAV (orange)
base trajectory computation time. In real flight, the lack of accuracy in gate estimation and state estimation results in poorer system performance. This work can be especially useful for devices with very low computational power as a main way to generate dynamic trajectories.
Although the use of the Local Gaussian Modifier as a modifier function has benefits, such as the increase in the reactivity of the trajectory generator under onboard computing constraints, this technique also shows some shortcomings. For example, the unbounded support of the Gaussian function can cause modifications at nearby waypoints to add up, causing the trajectory to be modified beyond the desired point. In addition, since the modified trajectory is not re-checked against the physical limits of the platform, the speed and acceleration limits may be exceeded. These problems can be addressed by looking for other modifier functions that take these limitations into account and modify the trajectory in a different way.
Another possibility worth studying is to scale this philosophy to trajectory generation algorithms with very long computation times, such as CPC [11], combined with a polynomial trajectory generator as the local modifier, in order to exploit the optimality of CPC together with the low computational cost of the polynomial trajectory.
## Acknowledgment
This work has been supported by the project COMCSE RTI2018-100847-B-C21, funded by the Spanish Ministry of Science, Innovation and Universities (MCIU/AEI/FEDER, UE), and the project "COPILOT: Control, Supervision y Operacion Optimizada de Plantas Fotovoltaicas mediante Integracion Sinergica de Drones, IoT y Tecnologias Avanzadas de Comunicaciones" Ref: Y2020/EMT6368, funded by the Madrid Government under the R&D Sinergic Projects Program.
|
2304.14211 | LLT: An R package for Linear Law-based Feature Space Transformation | The goal of the linear law-based feature space transformation (LLT) algorithm
is to assist with the classification of univariate and multivariate time
series. The presented R package, called LLT, implements this algorithm in a
flexible yet user-friendly way. This package first splits the instances into
training and test sets. It then utilizes time-delay embedding and spectral
decomposition techniques to identify the governing patterns (called linear
laws) of each input sequence (initial feature) within the training set.
Finally, it applies the linear laws of the training set to transform the
initial features of the test set. These steps are performed by three separate
functions called trainTest, trainLaw, and testTrans. Their application requires
a predefined data structure; however, for fast calculation, they use only
built-in functions. The LLT R package and a sample dataset with the appropriate
data structure are publicly available on GitHub. | Marcell T. Kurbucz, Péter Pósfay, Antal Jakovác | 2023-04-27T14:18:29Z | http://arxiv.org/abs/2304.14211v2 | # LLT: An R package for Linear Law-based Feature Space Transformation
###### Abstract
The goal of the linear law-based feature space transformation (LLT) algorithm is to assist with the classification of univariate and multivariate time series. The presented R package, called LLT, implements this algorithm in a flexible yet user-friendly way. This package first splits the instances into training and test sets. It then utilizes time-delay embedding and spectral decomposition techniques to identify the governing patterns (called linear laws) of each input sequence (initial feature) within the training set. Finally, it applies the linear laws of the training set to transform the initial features of the test set. These steps are performed by three separate functions called trainTest, trainLaw, and testTrans. Their application requires a predefined data structure; however, for fast calculation, they use only built-in functions. The LLT R package and a sample dataset with the appropriate data structure are publicly available on GitHub.
keywords: Software, Time series classification, Linear law, Feature space transformation, Artificial intelligence +
Footnote †: journal: arXiv
## 1 Introduction
Over the past decade, time series classification (TSC) has become a crucial task of machine learning and data mining. While its growing popularity is primarily due to the rapidly increasing amount of temporal data collected by widespread sensors (Marussy & Buza, 2013), TSC is extensively studied across a wide variety of fields, including finance (Chao, Zhipeng & Yuanjie, 2019; Kwon, Kim, Heo, Kim & Han, 2019; Fons, Dawson, Zeng, Keane & Iosifidis, 2020; Feo, Giordano,
Niglio & Parrella, 2022; Assis, Machado, Pereira & Carrano, 2018), activity recognition (Mocanu, Ammar, Lowet, Driessens, Liotta, Weiss & Tuyls, 2015; Karim, Majumdar, Darabi & Harford, 2019; Wang, Chen, Hao, Peng & Hu, 2019; Yang, Jiang & Guo, 2019; Kurbucz, Posfay & Jakovac, 2022a; Vidya & Sasikumar, 2022), and biology (Schafer & Leser, 2017; Rajan & Thiagarajan, 2018; Elsayed, Maida & Bayoumi, 2019; Tripto, Kabir, Bayzid & Rahman, 2020; Bock, Moor, Jutzeler & Borgwardt, 2021). Despite the large effort dedicated to this topic, it remains a challenging task due to the nature of time series data, which have large data sizes and high dimensionality and are continuously updated (Fu, Chung, Luk & Ng, 2008; Fu, 2011; Zhao, Lu, Chen, Liu & Wu, 2017; Gao, Murphey & Zhu, 2018).
Depending on whether one or more values (features) are observed at a given time, the TSC problem can be defined as a univariate (Sun, Yang, Liu, Chen, Rao & Bai, 2019; del Campo, Neri, Villegas, Sanchez, Dominguez & Jimenez, 2021; Khan, Wang, Riaz, Elfatyany & Karim, 2021) or multivariate (Baydogan & Runger, 2015; Ruiz, Flynn, Large, Middlehurst & Bagnall, 2021; Hao, Wang, Alexander, Yuan & Zhang, 2023) task. In the related literature, a number of approaches have been proposed to solve both tasks, and these approaches can be divided into feature-based and distance-based methods (see, e.g., Susto, Cenedese & Terzi, 2018; Hao et al., 2023). The most commonly used feature-based methods are the discrete wavelet transform (DWT) (Gupta, Seethalekshmi & Datta, 2021), wavelet packet transform (WPT) (Ray & Mishra, 2016), and discrete Fourier transform (DFT) (Kriegel, Kohler, Bayat-Sarmadi, Bayerl, Hauser, Niesner, Luch & Csesenyes, 2018), which are used in conjunction with a classification algorithm, where dynamic time warping with the one-nearest neighbor (DTW-1NN) (Berndt & Clifford, 1994) is a typical distance-based approach.
The recently published linear law-based feature space transformation (LLT) (Kurbucz et al., 2022a) aims to facilitate univariate and multivariate time series classification tasks by transforming the structure of the feature set (or the original time series) to make the data easier to classify. As a first step, this algorithm splits the instances into training and test sets. Then, it applies time-delay embedding and spectral decomposition techniques to identify the governing patterns (called linear laws) of each input sequence (initial feature) within the training set. Finally, it utilizes the linear laws of the training set to transform the initial features of the test set. This transformation procedure has low computational complexity and provides the opportunity to develop a learning
algorithm.
This paper presents an R package called LLT, which is the first implementation of the LLT algorithm. This package implements LLT in a flexible yet user-friendly way while using separate functions for each computational step, which facilitates the further development of the algorithm. In addition, it does not rely on functions written by the community, which results in low computational demand. The LLT R package and a sample dataset with the appropriate data structure are publicly available on GitHub (Kurbucz, Posfay & Jakovac, 2023). The metadata of the package is presented in Table 1.
The rest of this paper is organized as follows. Section 2 presents the concept of linear laws and briefly introduces the LLT algorithm. Section 3 and 4 describe the structure and use of the software in detail. In Section 5, the application of the software is presented on an electric power consumption dataset. Finally, Section 6 discusses the impacts of the software and provides conclusions.
## 2 LLT algorithm
This section briefly overviews the definition of linear laws and how this concept can be applied to feature space transformation. Note that the LLT algorithm is described in detail by Kurbucz et al. (2022), while derivations and proofs related to the linear laws can be found in Jakovac (2021).
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|} \hline **Metadata description** & **Metadata contents** \\ \hline Current code version & v0.1.0 \\ \hline Permanent link & [https://github.com/mtkurbucz/LLT](https://github.com/mtkurbucz/LLT) \\ \hline Legal code license & GNU General Public License v3.0 \\ \hline Code versioning system & Git \\ \hline Software code languages & R \\ \hline Operating environments and dependencies & R 4.2.2 or later. OS agnostic (Linux, OS X, MS Windows). \\ \hline Link to developer documentation and user manual & [https://github.com/mtkurbucz/LLT/blob/master/README.md](https://github.com/mtkurbucz/LLT/blob/master/README.md) \\ \hline Support email for questions & [email protected] \\ \hline \end{tabular}
\end{table}
Table 1: Metadata of the LLT package
### Linear laws of time series
First, consider a generic time series \(\mathbf{z}_{t}\) where \(t\in\{1,2,...,k\}\) represents the time. The \(l^{\text{th}}\) order (\(l\in\mathbb{Z}^{+}\) and \(l<k\)) time-delay embedding (Takens, 1981) of this series is defined by:
\[\mathbf{A}=\begin{pmatrix}\mathbf{z}_{1}&\mathbf{z}_{2}&\cdots&\mathbf{z}_{l}\\ \mathbf{z}_{2}&\ddots&\ddots&\vdots\\ \vdots&\ddots&\ddots&\vdots\\ \mathbf{z}_{k-l+1}&\cdots&\cdots&\mathbf{z}_{k}\end{pmatrix}. \tag{1}\]
Then, a symmetric \(l\times l\) matrix \(\mathbf{S}\) is generated from \(\mathbf{A}\) as follows:
\[\mathbf{S}=\mathbf{A}^{\intercal}\mathbf{A}. \tag{2}\]
The term law in our case implies that we are seeking those weights that transform the values of the \(\mathbf{S}\) matrix so that they are close to zero; that is, we seek the coefficients (\(\mathbf{v}\)) that satisfy the following equation:
\[\mathbf{S}\mathbf{v}\approx\mathbf{0}, \tag{3}\]
where \(\mathbf{0}\) is a column vector containing \(l\) elements of null value, \(\mathbf{v}\) is a column vector with \(l\) elements and \(\mathbf{v}\neq\mathbf{0}\). To find the \(\mathbf{v}\) coefficients of Eq. (3), we first perform eigendecomposition on the \(\mathbf{S}\) matrix. Then, we select the eigenvector that is related to the smallest eigenvalue. Finally, we apply this eigenvector as \(\mathbf{v}\) coefficients, and hereinafter, we refer to it as the linear law of \(\mathbf{z}_{t}\). Note that this logic is related to principal component analysis (PCA) (Pearson, 1901; Hotelling, 1933); however, in contrast to PCA, we look for components that minimize the variance of the projected data (see Jakovac, 2021; Jakovac, Kurbucz & Posfay, 2022; Kurbucz et al., 2022).
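The role of the smallest eigenvalue can be made explicit with a one-line computation: for any unit vector \(\mathbf{v}\),
\[\mathbf{v}^{\intercal}\mathbf{S}\mathbf{v}=\mathbf{v}^{\intercal}\mathbf{A}^{\intercal}\mathbf{A}\mathbf{v}=\|\mathbf{A}\mathbf{v}\|^{2},\]
so minimizing \(\|\mathbf{A}\mathbf{v}\|^{2}\) over unit vectors is a Rayleigh-quotient problem whose minimizer is the eigenvector of \(\mathbf{S}\) belonging to its smallest eigenvalue \(\lambda_{\min}\); for that choice, \(\|\mathbf{S}\mathbf{v}\|=\lambda_{\min}\|\mathbf{v}\|\), which is the sense in which Eq. (3) holds approximately.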
### Feature space transformation
Let us consider input data as \(\mathbf{X}=\{\mathbf{X}_{t}\mid t\in\{1,2,\ldots,k\}\}\) sets (time series), where \(t\) represents the observation times. The composition of this input data can be expressed as \(\mathbf{X}_{t}=\{\mathbf{x}_{t}^{i,j}\mid i\in\{1,2,\ldots,n\},\ j\in\{1,2,\ldots,m\}\}\), where \(i\) denotes the instances and \(j\) identifies the different input series (initial features) belonging to a given instance. The output \(\mathbf{y}\in\{1,2,\ldots,c\}\) is a vector that records the classes (\(c\)) of instances (\(\mathbf{y}=\{y^{i}\in\mathbb{R}\mid i\in\{1,2,\ldots,n\}\}\)).
During the first step of the LLT algorithm, instances \((i)\) are separated into training (\(tr\in\{1,2,\ldots,\tau\}\)) and test (\(te\in\{\tau+1,\tau+2,\ldots,n\}\)) sets in such a way that ensures a balanced representation of the instance classes across both sets. (For transparency, we assume that the arrangement of the instances within the dataset meets this condition for the \(tr\) and \(te\) sets.) We then identify the linear law (see \(\mathbf{v}\) in Eq. (3)) of each input series of the training set \((\mathbf{x}_{t}^{1,1},\mathbf{x}_{t}^{2,1},\ldots,\mathbf{x}_{t}^{\tau,m})\), thus obtaining a total of \(\tau\times m\) laws (eigenvectors). These laws are grouped by input series and classes as follows: \(\mathbf{V}^{j}=\{\mathbf{V}_{1}^{j},\mathbf{V}_{2}^{j},\ldots,\mathbf{V}_{c}^{j}\}\), where \(\mathbf{V}_{c}^{j}\) refers to the laws of the training set associated with input series \(j\) and class \(c\).
In the next step, \(\mathbf{S}^{te,j}\) matrices (see Eq. (2)) are calculated from the input series of the test instance, which results in \(m\) matrices per instance (one for each initial feature). We then left-multiply the \(\mathbf{V}^{j}\) matrices obtained from the training set by the \(\mathbf{S}^{te,j}\) matrices of the test set related to the same initial feature \((\mathbf{S}^{\tau+1,1}\mathbf{V}^{1},\mathbf{S}^{\tau+1,2}\mathbf{V}^{2},\ldots,\mathbf{S}^{n,m} \mathbf{V}^{m})\). The laws of the \(\mathbf{V}^{j}\) matrices provide an estimate of whether the \(\mathbf{S}^{te,j}\) matrices of the test set belong to the same class as them. That is, only those columns of the \(\mathbf{S}^{te,j}\mathbf{V}^{j}\) matrices are in proximity to the null vector with relatively small variance, for which the classes of the corresponding training and testing data match.
Finally, the dimension of the resulting matrices is reduced by a function that selects the column vectors with the smallest variance and/or absolute mean from the \(\mathbf{S}^{te,j}\mathbf{V}^{j}\) matrices for each class. After these calculation steps, the transformed feature space of the test set has \(((n-\tau)l)\times((mc)+1)\) dimensions with the output variable.
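As a purely illustrative instance of this dimension count (the numbers below are not taken from any experiment in this paper): with \(n=100\) instances of which \(\tau=70\) are used for training, \(m=2\) initial features, \(c=3\) classes, and embedding dimension \(l=4\), the transformed test set has
\[((n-\tau)\,l)\times((mc)+1)=(30\cdot 4)\times(6+1)=120\times 7\]
entries, i.e. 120 rows and 6 new features plus the output variable.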
The calculation steps are illustrated in Fig. 1.
Figure 1: Steps of the LLT algorithm
## 3 Software description
The LLT R package is the first to implement the LLT algorithm. This package contains three main functions (trainTest, trainLaw, and testTrans) and two auxiliary functions (embed and linlaw). The auxiliary functions are called by the main functions, so the user does not need to use them to perform the LLT algorithm.
Description of the main functions:
* trainTest(path,test_ratio,seed) (_trainTest.R_): This function generates a two-level _list_ that splits the instances into training and test sets. The first level separates the training and test sets, and the second level groups the instances by class (see Fig. 1). It has two mandatory arguments and one optional user-defined argument as follows:
* path (_character_): The path to the directory that contains the instances grouped by class.
* test_ratio (_double_\(\in[0,1]\)): The ratio of instances in the training and test sets.
* seed (_integer_): The initial value of the random number seed. By default, it is not fixed.
* trainLaw(path,train_test,dim,lag) (_trainLaw.R_): This function creates a _data.frame_ containing the set of laws generated from the instances of the training set. It has three mandatory and two optional user-defined arguments as follows:
* path (_character_): The path to the directory that contains the instances grouped by class.
* train_test (_list_): A two-level list that splits the instances into training and test sets. It can be generated by the trainTest function or defined by the user manually. Fig. 1 presents an example of the appropriate structure of this object.
* dim (_integer_\(\in[2,k]\)): It defines the row and column dimension (\(l\)) of the symmetric matrix \(\mathbf{S}\). (The value \(k\) is the length of the input series.)
* lag (_integer_\(\in[1,l]\)): It defines the successive row lag of the \(\mathbf{A}\) matrix. By default, it is 1 (see Eq. (1)). (The value \(l\) is the order of the time-delayed embedding.)
* testTrans(path,train_test,train_law,lag,select) (_testTrans.R_): This function transforms the instances of the test set by using the LLT algorithm. It generates a _data.frame_
object in which columns are new features and rows are the dim-length time series created from the test instances and placed one below the other. It has three mandatory and two optional user-defined arguments as follows: * path (_character_): The path to the directory that contains the instances grouped by class. * train_test (_list_): A two-level list that splits the instances into training and test sets. It can be generated by the trainTest function or defined by the user manually. Fig. A1 presents an example of the appropriate structure of this object. * train_law (_data.frame_): The set of laws generated from the training instances. It can be generated by the trainLaw function. (For development purposes, e.g., for the creation of a learning algorithm, the user can easily modify this _data.frame_.) * lag (_integer_\(\in[1,l]\)): It defines the successive row lag of the \(\mathbf{A}\) matrix. By default, it is 1 (see Eq. (1)). (The value \(l\) is the order of the time-delayed embedding.) * select (_character_\(\in\) ["rank","var","mean"]): New features are defined based on this (\(f\)) function (see Feature space transformation section). The "var" option selects a column vector per class and input series with the smallest variance, while the "mean" option performs this selection based on the minimum absolute mean value. The "rank" minimizes both at the same time by ranking the columns by variance and absolute mean and selecting the column with the smallest sum of ranks. All three selection criteria result in as many new features as the number of classes multiplied by the number of input series. The default value is "rank".
Description of the auxiliary functions:
* embed(series,dim,lag) (_embed.R_): This function generates the \(\mathbf{S}\) matrix from a time series (see Eq. (2)). It has two mandatory arguments and one optional user-defined argument as follows:
* series (_numeric_): A time series in a column vector without missing values.
* dim (_integer_\(\in[2,k]\)): It defines the row and column dimension (\(l\)) of the symmetric matrix \(\mathbf{S}\). (The value \(k\) is the length of the input series.)
* lag (_integer_\(\in[1,l]\)): It defines the successive row lag of the \(\mathbf{A}\) matrix. By default, it is 1 (see Eq. (1)). (The value \(l\) is the order of the time-delayed embedding.)
* linlaw(series,dim,lag) (_linlaw.R_): By applying the embed function, it generates the law (\(\mathbf{v}\)) of a time series (see Eq. (3)). It has two mandatory arguments and one optional user-defined argument as follows:
* series (_numeric_): A time series in a column vector without missing values.
* dim (_integer_\(\in[2,k]\)): It defines the row and column dimension (\(l\)) of the symmetric matrix \(\mathbf{S}\). (The value \(k\) is the length of the input series.)
* lag (_integer_\(\in[1,l]\)): It defines the successive row lag of the \(\mathbf{A}\) matrix. By default, it is 1 (see Eq. (1)). (The value \(l\) is the order of the time-delayed embedding.)
The LLT R package and a sample dataset with the appropriate data structure are publicly available on GitHub (Kurbucz et al., 2023).
## 4 Usage
### Installation
The LLT can be installed by using the devtools R package as follows.
```
# install.packages("devtools")
# library(devtools)
devtools::install_github("mtkurbucz/LLT")
```
### Data preparation
After installation, the dataset to be transformed must be converted into a data structure in which instances are grouped by classes. Furthermore, time series features must be tab-separated column vectors with the name of the feature in the header. The appropriate data structure is presented in Fig. 2.
### Data transformation
A dataset with the appropriate structure can be transformed in the following way using the LLT package.
```
# Loading package
library(LLT)

# Setting parameters
path <- "./data"
test_ratio <- 0.30
dim <- 9
seed <- 12345
lag <- 9
select <- "var"

# Calculation
train_test <- LLT::trainTest(path, test_ratio, seed)
train_law <- LLT::trainLaw(path, train_test, dim, lag)
result <- LLT::testTrans(path, train_test, train_law, lag, select)
```
Figure 2: Appropriate data structure for 2 classes and 6 features
## 5 Illustrative examples
This section presents a simple example of using the LLT package. In this example, we employ the PowerCons dataset collected by the Research and Development branch of Electricite de France (EDF) in Clamart (France), which is publicly available in the UCR Time Series Classification Archive (Dau, Keogh, Kamgar, Yeh, Zhu, Gharghabi, Ratanamahatana, Yanping, Hu, Begum, Bagnall, Mueen, Batista & Hexagon-ML, 2018). It contains the individual household electric power consumption over the course of one year, categorized into two seasonal classes: "Warm" and "Cold", based on whether the power consumption was recorded during the warm seasons (from April to September) or the cold seasons (from October to March). Each instance in the dataset represents a day, with electric power consumption recorded at a sampling rate of ten minutes. Instances are associated with a class and comprise 144 consecutive values. Fig. 3 displays examples of daily power consumption from each class.
Before the transformation, we merged the training and test sets of instances that were previously separated by the authors. Then, we repeated the transformation 300 times based on the dim = 5 and test_ratio = 0.1 parameter setting. After each transformation, we calculated the mean absolute value of the resulting features for both classes and as a predicted class, we chose the class whose law resulted in a smaller absolute mean value. Based on the result of the repeated calculation procedure, we obtained an average accuracy of 87.204% with a standard deviation of 5.536%. The histogram of accuracies achieved after each transformation is shown in Fig. 4.
Figure 3: Examples of the time series belonging to each class
Note that in the case of more difficult classification tasks, it may be worthwhile to compute additional statistics (such as variance) from the new features and then apply a classification algorithm on the obtained feature space. Based on our preliminary results (see, e.g., Kurbucz et al., 2022a), we achieve the most accurate result with the least computational demand by combining the LLT and the k-nearest neighbor (KNN) (Fix, 1985; Cover and Hart, 1967) algorithms.
An additional application example is provided by Kurbucz et al. (2022a). In this paper, the efficiency of LLT combined with various classifiers is examined on a real-world human activity recognition (HAR) dataset called the Activity Recognition system based on Multisensor data fusion (AReM) (Palumbo, Gallicchio, Pucci and Micheli, 2016). According to the results, LLT vastly increased the accuracy of traditional classifiers, which outperformed state-of-the-art methods after the proposed feature space transformation.
## 6 Impact and conclusion
The goal of the linear law-based feature space transformation (LLT) algorithm is to assist with the classification of univariate and multivariate time series. The presented R package, called LLT, implements this algorithm in a flexible yet user-friendly way. This package first splits the instances into training and test sets. It then utilizes time-delay embedding and spectral decomposition techniques to identify the governing patterns (called linear laws) of each input sequence (initial feature) within the training set. Finally, it applies the linear laws of the training set to transform the initial features of the test set. These steps are performed by three separate functions called trainTest, trainLaw, and testTrans. Their application requires a predefined data structure; however, for fast calculation, they use only built-in functions.
Figure 4: Histogram of accuracies
A rudimentary version of the LLT R package has been utilized in Jakovac et al. (2022); Kurbucz et al. (2022), and Kurbucz, Posfay & Jakovac (2022). Both the package and a sample dataset with the appropriate data structure are publicly available on GitHub (Kurbucz et al., 2023).
In conclusion, the value of the LLT R package can be summarized as follows:
* The LLT package implements the linear law-based feature space transformation (LLT) algorithm in the R programming language.
* The calculation steps are performed by separate functions, which facilitate the further development of the algorithm.
* Despite the flexibility of the package, its functions have been designed in a user-friendly way and require only the most important parameters.
* To maintain low computational requirements, the LLT package only uses built-in functions.
## Data availability
The PowerCons dataset was collected by the Research and Development branch of Electricite de France (EDF) in Clamart (France). It is publicly available in the UCR Time Series Classification Archive (Dau et al., 2018) at [http://www.timeseriesclassification.com/description.php?D](http://www.timeseriesclassification.com/description.php?D)
ataset=PowerCons, retrieved: 5 May 2023.
## 7 Acknowledgements
Project no. PD142593 was implemented with the support provided by the Ministry of Culture and Innovation of Hungary from the National Research, Development, and Innovation Fund, financed under the PD_22 "OTKA" funding scheme. The research was supported by the Ministry of Innovation and Technology NRDI Office within the framework of the MILAB Artificial Intelligence National Laboratory Program. A.J. received support from the Hungarian Scientific Research Fund (OTKA/NRDI Office) under contract number K123815.
|
2305.15226 | Topological defects reveal the plasticity of glasses | Mixing theoretical topological structures with cutting-edge simulation
methods, a recent study in Nature Communications has finally confirmed the
existence of topological defects in glasses and their crucial role for
plasticity. | Matteo Baggioli | 2023-05-24T15:08:36Z | http://arxiv.org/abs/2305.15226v1 | # Topological defects reveal the plasticity of glasses
###### Abstract
**Mixing theoretical topological structures with cutting-edge simulation methods, a recent study in Nature Communications has finally confirmed the existence of topological defects in glasses and their crucial role for plasticity.**
In a crystalline solid, atoms are not randomly arranged. On the contrary, they sit in preferred positions, forming a periodic and ordered structure as depicted by the white sheep in Fig.1a. In technical words, this is known as long-range order, and it is the key behind several of the physical properties of crystals, from their rigidity to the propagation of sound and heat transport. Whenever such an organized configuration exists, it is almost immediate to spot a particle which does not follow the rules and breaks the order. In physics, we call such a rebel, like the blue sheep in Fig.1a, a defect.
If all solids were similarly well-behaved, the life of a physicist would be rather boring. Nevertheless, many solids in Nature are amorphous, and they do not respect the ordered and periodic atomic arrangement described above. Glasses are the most famous example of that sort. The structure of a glass is disordered and more akin to the colorful and heterogeneous herd of sheep shown in Fig.1b. As a consequence, identifying a defect therein appears as an almost impossible task, if not an ill-defined concept altogether.
The reason why we should care about finding defects in that mess is very practical. In crystalline solids, defects play a fundamental role in predicting mechanical failure and the onset of plasticity [1], the irreversible deformations which bring to the breakdown of the elastic response. In other words, they are critical to connect structure with dynamics, and to predict when and where a certain material will break. A common idea is that, even in glasses, plastic deformations take place in soft spots with abnormally low elastic constants and increased mobility, analogous to dislocations in crystalline systems. The remaining question is how to locate those weak zones from structural information, or even better how to relate them to the presence of defects which potentially control the plastic flow. In order to understand the difficulty behind this task, we need to dive into the mathematical tools that physicists use to define and quantify order and disorder.
Topological defects appear in many disparate areas of physics, from cosmic strings in the universe, to vortices in superfluids and even in the patterns of human fingerprints. They represent a beautiful bridge between physics and a branch of mathematics known as topology. Given an ordered medium, like the sheep in Fig.1a, its configuration can be mathematically described by
Figure 1: **Finding defects in disordered systems is a difficult task.** **(a)** It is rather simple to find a defect (blue sheep) in a periodic ordered structure (white sheep), as in crystalline matter with long-range order. **(b)** It is (almost) impossible to define a defect in a disordered structure like this herd of colored sheep, or like in a real glass.
an order parameter, which defines a map between static structures in the real space and the allowed ground states in the energy space of the system. Defects are singularities of this order parameter field which cannot be removed by a smooth deformation, without gluing and tearing parts. They are characterized by a winding number, which corresponds to the total angle through which the order parameter rotates as one surrounds the defect with a closed loop. This number can be thought of as a (topological) charge for the defects, by analogy with point-like charged particles in electromagnetism. More mathematically, defects can be classified by their homotopy group, which measures the topological properties of a certain manifold, such as the number of holes in it. A practical application of these concepts leads mathematicians, and quirky potters (see Fig.2a), to conclude that a coffee mug and a doughnut are equivalent, since they share the same type of defect.
To avoid confusion, let us clarify that with the word topological we do not mean defects in coordination or in the number of neighbors, as sometimes used in the context of disordered materials. In the mathematical terms outlined above, those are not topological since they can be removed by a continuous transformation. Beside all the jargon, the main point is that the definition of defects necessitates the existence of an unbroken subset of symmetries that leaves the ground state of the system invariant. For crystalline solids, these are just the rigid translations of the ordered lattice. For amorphous systems, there is no remaining isotropy group. The whole shebang collapses from the start. To iterate the concept, a certain degree of order is needed to define disorder.
The search for topological defects in amorphous solids has a long and controversial history. The idea that dislocation lines could exist in glasses is almost 50 years old [3], and has been extensively scrutinized and questioned using theoretical arguments and numerical simulations [4, 5, 6]. Studies have shown that the properties of amorphous systems are undeniably structure sensitive and that regions of high stress and low symmetry, resembling dislocation cores in crystals, can be identified in glasses [6].
The conviction that plasticity in glasses could still be related to structure led to the introduction of a plethora of structural, thermodynamic and mechanical indicators, including the local shear modulus, energetically favoured regions, linear and non-linear vibrational modes, local thermal energy and more abstract measures of softness. A thorough study [7] concluded that most of these indicators are excellent at locating plastic events over short strain scales, but they do not provide a first-principles understanding of plasticity, as in crystals. On the other hand, several successful theories such as shear transformation zones [8], and elasto-plastic models [9], have postulated the existence of such defects without any precise definition.
Against this background, one could just rely on these successful but phenomenological structural indicators or search for defects in glasses beyond their real-space structure. This second choice is what all these new developments are about. The first idea in Ref. [10] was to hunt for defects in the dynamical displacement field rather than in the static structure, and look for singularities upon deforming the system. The inspiration came by thinking about the incompatibility of the deformation, which naturally arises because of non-affinity, and which can be related using mathematical objects known as higher-form symmetries to a strain-free formulation of elasticity [11]. The results of [10] indicated that standard topological concepts applied to the dynamical displacement field allow for a precise identification of defects, which well correlate with the major plastic events and successfully predict the location of the global yielding instability.
Still, the study in [10] gave up on probing the static structure. In other words, the problem of relating structural defects to dynamics was bypassed focusing only on the dynamics itself. That was still unsatisfactory, until Wu and colleagues' recent
Figure 2: **The revealed link between topology and plasticity.** **(a)** A doughnut can be continuously deformed into a coffee mug, hence the two objects are topologically equivalent: they both have one hole. Ceramic model by Keenan Crane and Henry Segerman. **(b)** The identification of topological defects with positive (red) and negative (blue) charges in the normal-mode eigenvectors in the work by Wu et al. [2]. **(c)** The correlation between the plastic events (white crosses) and the density of topological defects with negative charge (color map) presented in the work by Wu et al. [2].
paper, published in Nature Communications [2], came out and provided a potential breakthrough in this story.
Wu and colleagues [2] discovered that the long-sought structural defects in glasses were hiding in the topology of the vibrational eigenmodes. Differently from the original idea of [10], this is still a property of the static and undeformed configuration, but not of the real-space structure. Wu and colleagues noticed that the spatial distribution of the eigenvectors displays a collection of whirls and curls and eye-visible vortex structures (see Fig.2b), with manifest singular behavior. By surrounding these defects with closed loops, and measuring the angular deficit of the vectorial field around them, they were able to obtain the corresponding topological charges and identify positive and negative defects. Positive ones are just perfect vortices, like those in the swirling water of your bathtub. Negative ones correspond to frustrated interfaces with a saddle shape. Using advanced statistical methods, Wu and colleagues were able to show a solid correlation between the density of negative defects and the location of the plastic events, defined using the widely accepted concept of non-affine displacement [8]. In Fig.2c, visual evidence of this result is presented, where the plastic events, shown with white crosses, nicely correlate with the darker areas with a higher density of negative defects.
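The charge assignment can be made concrete with a short numerical recipe: given the in-plane eigenvector field sampled along a discrete closed loop around a candidate defect, the winding number is the accumulated (wrapped) change of the field's angle divided by \(2\pi\). The Python sketch below is a generic implementation of this standard definition, not the code used by Wu and colleagues.

```
import numpy as np

def winding_number(vx, vy):
    # Topological charge of a 2D vector field sampled along a closed loop.
    # vx, vy : field components at consecutive points of the loop (ordered,
    #          e.g. counter-clockwise); the loop is closed implicitly.
    # Returns the total angle change divided by 2*pi, ideally an integer:
    # +1 for a vortex, -1 for a saddle-like (anti-vortex) defect.
    angles = np.arctan2(vy, vx)
    dtheta = np.diff(np.append(angles, angles[0]))   # close the loop
    dtheta = (dtheta + np.pi) % (2 * np.pi) - np.pi  # wrap each step to [-pi, pi)
    return dtheta.sum() / (2 * np.pi)
```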
To make it short, Wu and colleagues [2] managed for the first time to identify topological defects in the static structure of glasses and to provide a direct link between the properties of glasses before deformation and the plastic behavior during it. As for the case of dislocations in crystalline solids, this realizes a connection between structure and dynamics in amorphous systems which goes far beyond all the phenomenological structural indicators considered before and highlights the role of topology and geometry in the context of disordered systems and their plastic behavior.
The revealed topological information displays striking similarities with the quadrupolar Eshelby-like structures believed to be fundamental for the plasticity of amorphous solids [12], and the vortex-like formations possibly constituting the shear transformation zones in glasses [13]. Finally, these defects could be formally related to geometric charges in the metric formulation of elasticity [14], and to magnetic currents in the gauge theories for emergent elasticity in granular matter [15].
In summary, the topological defects discovered by Wu and colleagues could play a pivotal role in our understanding of glasses, from the boson peak feud, to the nature of the glass transition as a topological phase transition, up to the fundamental origin of yielding. We just need to sit with a coffee mug and a doughnut and wait to see how topology will help us to make order out of disorder.
## Acknowledgements
I would like to thank Alessio Zaccone, Michael Landry, Yuliang Jin, Deng Pang, Wanzhou Zhang, Yunjiang Wang and Jie Zhang for discussions about symmetries, topological defects and plasticity in amorphous systems, and for useful comments about a preliminary version of this manuscript. I acknowledge the support of the Shanghai Municipal Science and Technology Major Project (Grant No.2019SHZDZX01) and the sponsorship from the Yangyang Development Fund.
## Author contributions
M.B. did everything.
## Competing interests
The author declares no competing interests.
|
2307.05894 | On Maximal Functions Associated to Families of Curves in the Plane | We consider the $L^p$ mapping properties of maximal averages associated to
families of curves, and thickened curves, in the plane. These include the
(planar) Kakeya maximal function, the circular maximal functions of Wolff and
Bourgain, and their multi-parameter analogues. We propose a framework that
allows for a unified study of such maximal functions, and prove sharp $L^p\to
L^p$ operator bounds in this setting. A key ingredient is an estimate from
discretized incidence geometry that controls the number of higher order
approximate tangencies spanned by a collection of plane curves. We discuss
applications to the F\"assler-Orponen restricted projection problem, and the
dimension of Furstenberg-type sets associated to families of curves. | Joshua Zahl | 2023-07-12T03:51:18Z | http://arxiv.org/abs/2307.05894v1 | # On Maximal Functions Associated to Families of Curves in the Plane
###### Abstract
We consider the \(L^{p}\) mapping properties of maximal averages associated to families of curves, and thickened curves, in the plane. These include the (planar) Kakeya maximal function, the circular maximal functions of Wolff and Bourgain, and their multi-parameter analogues. We propose a framework that allows for a unified study of such maximal functions, and prove sharp \(L^{p}\to L^{p}\) operator bounds in this setting. A key ingredient is an estimate from discretized incidence geometry that controls the number of higher order approximate tangencies spanned by a collection of plane curves. We discuss applications to the Fassler-Orponen restricted projection problem, and the dimension of Furstenberg-type sets associated to families of curves.
## 1 Introduction
In this paper, we study the \(L^{p}\) mapping properties of maximal functions associated to families of curves in the plane. The prototypical example is the (planar) Kakeya maximal function
\[K_{\delta}f(e)=\frac{1}{\delta}\sup_{\ell\mid e}\int_{\ell^{ \delta}}|f|,\quad e\in S^{1}. \tag{1.1}\]
In the above expression, \(\delta>0\) is a small parameter; the supremum is taken over all unit line segments \(\ell\) parallel to the vector \(e\); and \(\ell^{\delta}\) denotes the \(\delta\) neighborhood of \(\ell\). Cordoba [10] obtained the estimate \(\|K_{\delta}f\|_{p}\leq C(\log 1/\delta)^{1/2}\|f\|_{p}\) for \(p\geq 2\). This is the sharp range of Lebesgue exponents, and the dependence of the operator norm on \(\delta\) is also best possible (up to the choice of constant \(C\)). In particular, the existence of measure zero Besicovitch sets (compact sets in the plane that contain a unit line segment pointing in every direction) shows that for \(p<\infty\) the operator \(K_{\delta}\) cannot be bounded in \(L^{p}\) with operator norm independent of \(\delta\).
A second Kakeya-type maximal function was introduced by Wolff [36]. Let \(C^{\delta}(x,y,r)\) denote the \(\delta\)-neighborhood of the circle centered at \((x,y)\) of radius \(r\), and define
\[W_{\delta}f(r)=\frac{1}{\delta}\sup_{(x,y)\in\mathbb{R}^{2}}\int _{C(x,y,r)^{\delta}}|f|,\quad r\in[1,2]. \tag{1.2}\]
Wolff [36] obtained the estimate \(\|W_{\delta}f\|_{p}\leq C_{\varepsilon}\delta^{-\varepsilon}\|f\|_{p}\) for \(p\geq 3\). This is the sharp range of Lebesgue exponents, and the existence of measure zero Besicovitch-Rado-Kinney sets (compact sets in the plane that contain a circle of every radius \(r\in[1,2]\)) shows that for \(p<\infty\), the operator \(W_{\delta}\) cannot be bounded in \(L^{p}\) with operator norm independent of \(\delta\).
A second class of maximal functions contains the Bourgain circular maximal function and its generalizations. For \((x,y)\in\mathbb{R}^{2}\), let
\[Bf(x,y)=\sup_{1\leq r\leq 2}\int_{C(x,y,r)}|f|. \tag{1.3}\]
Bourgain [6, 7] proved that \(B\) is bounded from \(L^{p}\to L^{p}\) for \(p>2\). This is the sharp range of Lebesgue exponents for \(L^{p}\to L^{p}\) bounds (the full range of exponents for which \(B\) is bounded from \(L^{p}\to L^{q}\) is slightly more complicated; see [32] for details). As a consequence, if \(K\subset\mathbb{R}^{2}\) has positive measure and if \(X\subset\mathbb{R}^{2}\) contains a circle centered at every point of \(K\), then \(|X|>0\), i.e. there are no analogues of measure-zero Besicovitch sets or Besicovitch-Rado-Kinney sets in this setting.
Finally, we recall the Erdogan elliptic maximal function
\[Ef(x,y)=\sup_{W}\int_{W}|f|, \tag{1.4}\]
where the supremum is taken over all ellipses centered at \((x,y)\) whose semi-major and semi-minor axes have lengths in \([1/2,2]\). Erdogan [12] conjectured that \(E\) should be bounded from \(L^{p}\to L^{p}\) for \(p>4\). Prior to this work, the best-known bound is \(p>12\) by Lee, Lee, and Oh [28].
### The Setup
The above maximal functions can be described as follows: We have a family of plane curves \(\mathcal{C}\) (i.e. lines, circles, ellipses) and a projection \(\Phi\colon\mathcal{C}\to\mathbb{R}^{d}\) (i.e. the map sending a line to its slope, a circle to its radius, a circle to its center, etc.). For each \(z\in\mathbb{R}^{d}\), the maximal function \(Mf(z)\) is a maximal average of \(f\) taken over all (possibly thickened) curves \(\gamma\in\mathcal{C}\) with \(\Phi(\gamma)=z\); this is a subvariety of \(\mathcal{C}\) of codimension \(d\).
The above maximal functions exhibit two phenomena. First, when \(d=1\), we have examples of measure zero Besicovitch-type sets (and hence no operator norm bounds that are independent of \(\delta\)), while for \(d>1\) we have not seen such examples. Second, the dimension of the fibers \(\Phi^{-1}(z)\) determine the range of Lebesgue exponents for which \(L^{p}\to L^{p}\) bounds can hold.
Our first task is to describe the family of curves associated to our maximal function. It will be convenient to describe such curves as the graphs of functions.
**Definition 1.1**.: _Let \(\mathcal{C}\) be a \(m\)-dimensional manifold and let \(I\subset\mathbb{R}\) be an interval. Let \(h\colon\mathcal{C}\times I\to\mathbb{R}\) and define_
\[F_{t}^{h}(u)=\big{(}h(u;t),\ \partial_{t}h(u;t),\ldots,\partial_{t}^{m-1}h(u;t) \big{)}.\]
_We say that \(h\) parameterizes a \(m\)-dimensional family of cinematic curves if \(F_{t}^{h}\colon\mathcal{C}\to\mathbb{R}^{m}\) is a local diffeomorphism for each \(t\in I\)._
Next we will discuss a transversality condition that controls the behavior of the fibers \(\Phi^{-1}(z)\). Let \(1\leq s<m\). For \((u,t)\in\mathcal{C}\times I\), define
\[V_{u;t}=\{u^{\prime}\in\mathcal{C}\colon\partial_{t}^{j}h(u^{\prime};t)= \partial_{t}^{j}h(u;t),\quad j=0,\ldots,s\}. \tag{1.5}\]
The restriction of \(V_{u;t}\) to a small neighborhood of \(u\) is a \((m-s-1)\)-dimensional manifold.
**Definition 1.2**.: _We say a smooth function \(\Phi\colon\mathcal{C}\to\mathbb{R}^{m-s}\) is transverse to \(h\) if for each \((u,t)\in\mathcal{C}\times I\), the derivative of \(\Phi|_{V_{u;t}}\) has maximal rank (i.e. rank \(m-s-1\)) at \(u\). Note that this condition is vacuously satisfied if \(s=m-1\)._
With these definitions, we can now describe our class of maximal functions.
**Definition 1.3**.: _Let \(1\leq s<m\), let \(h\colon\mathcal{C}\times I\to\mathbb{R}\) parameterize a \(m\)-dimensional family of cinematic curves, and let \(\Phi\colon\mathcal{C}\to\mathbb{R}^{m-s}\) be transverse to \(h\). Fix a compact set \(\mathcal{C}_{0}\subset\mathcal{C}\), and a compact interval \(I_{0}\subset I\). Abusing notation, we restrict \(h\) and \(\Phi\) to \(\mathcal{C}_{0}\times I_{0}\) and \(\mathcal{C}_{0}\), respectively. For each \(u\in\mathcal{C}_{0}\), define the curve_
\[\gamma_{u}=\big{\{}\big{(}t,h(u;t)\big{)}\colon t\in I_{0}\big{\}}.\]
_We define the maximal functions \(M_{\delta}\) and \(M\) by_
\[M_{\delta}f(v)=\frac{1}{\delta}\sup_{u\in\Phi^{-1}(v)}\Big{|}\int_{\gamma_{u} ^{\delta}}f\Big{|}, \tag{1.6}\]
\[Mf(v)=\sup_{u\in\Phi^{-1}(v)}\Big{|}\int_{\gamma_{u}}f\Big{|}. \tag{1.7}\]
_We call these s-parameter maximal functions associated to a \(m\)-dimensional family of cinematic curves._
We remark that the \(L^{p}\) mapping properties of these operators remain unchanged if we replace the integrand \(f\) by \(|f|\), but for technical reasons (see Section 1.3) we adopt the formulation above. The Kakeya, Wolff, Bourgain, and Erdogan maximal functions can be re-written in the above framework, with \((m,s)\) equal to \((2,1),(3,2)\), \((3,1)\), \((5,3)\), respectively. This is a straightforward computation, which is described in Appendix A.
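To indicate how this goes in the simplest case (only the Kakeya operator is sketched here; the remaining verifications are carried out in Appendix A), one may take \(\mathcal{C}=\mathbb{R}^{2}\) with coordinates \(u=(u_{0},u_{1})\), \(I=[0,1]\), and

\[h(u;t)=u_{0}+u_{1}t,\qquad F_{t}^{h}(u)=\big{(}u_{0}+u_{1}t,\ u_{1}\big{)},\qquad\Phi(u_{0},u_{1})=u_{1}.\]

For each \(t\), the map \(F_{t}^{h}\) is linear with determinant \(1\), so \(h\) parameterizes a \(2\)-dimensional family of cinematic curves, and the curves \(\gamma_{u}\) with \(\Phi(u)=v\) are the line segments of slope \(v\) that are graphs over \(I_{0}\). Thus \((m,s)=(2,1)\), and \(M_{\delta}\) is a variant of the Kakeya maximal function (1.1), with unit segments replaced by segments that are graphs over \(I_{0}\).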
### Kakeya-type maximal functions
Our main result is a sharp \(L^{p}\to L^{p}\) bound for the Kakeya-type maximal function \(M_{\delta}\).
**Theorem 1.4**.: _Let \(m>s\geq 1\) be integers, and let \(M_{\delta}\) be a s-parameter maximal function associated to a \(m\)-dimensional family of cinematic curves. Let \(\varepsilon>0\). Then for all \(\delta>0\) sufficiently small, we have_
\[\|M_{\delta}f\|_{p}\leq\delta^{-\varepsilon}\|f\|_{p},\quad p\geq s+1. \tag{1.8}\]
Previous work in this setting has focused on the cases \(m=2,s=1\) [10]; \(m=3,s=1\) [34, 7]; and \(m=3,s=2\) [27, 31, 36, 41, 42]. The most interesting case is when \(s=m-1\); the case \(s<m-1\) can be reduced to \(s=m-1\) by slicing. The stated range of \(p\) in (1.8) is sharp. This can be seen by selecting \(\mathcal{C}=\mathbb{R}^{m}\), \(h(u;t)=(1,t,t^{2},\ldots,t^{m-1})\cdot u\); \(\Phi\) the projection to the first \(m-s\) coordinates; and \(f\) the characteristic function of the Knapp rectangle \([0,\delta^{1/s}]\times[0,\delta]\).
When \(s=m-1\), the existence of measure-zero Besicovitch sets shows that for \(p<\infty\) the operator \(M_{\delta}\) cannot in general be bounded in \(L^{p}\) with operator norm independent of \(\delta\). This can be seen by choosing \(\mathcal{C}\) and \(h\) as above; \(\Phi(u_{0},u_{1},\ldots,u_{m-1})=u_{1}\); and \(f\) the characteristic function of the \(\delta\)-thickening of a measure-zero Besicovitch set. More generally, Besicovitch and Rado [5] describe a procedure for constructing a measure-zero set that contains a translated copy of every algebraic curve from a one-parameter family.
### Bourgain-type maximal functions
In certain circumstances, Theorem 1.4 can be used to obtain sharp \(L^{p}\to L^{p}\) bounds for the maximal function \(Mf\) from Definition 1.3.
**Definition 1.5**.: _For \(f\colon\mathbb{R}^{2}\to\mathbb{C}\), let \(P_{k}f\) denote the Littlewood-Paley projection to the frequency annulus of magnitude \(\sim 2^{k}\). We say that a sublinear operator \(M\) has high frequency decay if there exists \(p<\infty\) and \(C,c>0\) so that_
\[\|M(P_{k}f)\|_{p}<C2^{-ck}\|f\|_{p},\quad f\in L^{p}(\mathbb{R}^{2}). \tag{1.9}\]
Bourgain [7] (see also [33]) observed that if a maximal function \(M\) has high frequency decay, then the estimate (1.9) can be interpolated with an estimate of the form (1.8) to obtain \(L^{p}\to L^{p}\) operator norm bounds for \(M\), for all \(p\) strictly larger than the range in (1.8). Bourgain [7] followed this strategy (with slightly different notation) to obtain sharp \(L^{p}\) bounds for his circular maximal function, and Chen, Guo, and Yang [8] followed this strategy to obtain sharp \(L^{p}\) bounds for the axis-parallel elliptic maximal function (see also [28] for previous results on this operator).
These maximal functions are translation invariant, in the sense that for each point \((x,y)\in\mathbb{R}^{2}\), the operator is a maximal average over a fixed family of curves that have been translated to the point \((x,y)\). We formalize this as follows:
**Definition 1.6**.: _Let \(M\) be an \(s\)-parameter maximal function associated to a \(s+2\)-dimensional family of cinematic curves. Let \(h\colon\mathcal{C}\times I\to\mathbb{R}\) and \(\Phi\colon\mathcal{C}\to\mathbb{R}^{2}\) be the associated parameterization and projection functions. We say that \(M\) is translation invariant if in a neighborhood of each point of \(\mathcal{C}\times I\), we can choose local coordinates \(u=(x,y,w_{1},\ldots,w_{s})\) so that \(\Phi\) has the form \(\Phi(u)=(x,y)\) and \(h\) has the form \(h(u;t)=g(w_{1},\ldots,w_{s};t-x)+y\)._
The Bourgain circular maximal function and the elliptic maximal function are translation invariant according to this definition.
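To illustrate Definition 1.6 in the case of the circular maximal function (a sketch, working in one local chart and ignoring the precise choice of compact sets): near the top arc of the circle \(C(x,y,r)\), i.e. for \(t\) close to \(x\), the circle is the graph of

\[h\big{(}(x,y,r);t\big{)}=y+\sqrt{r^{2}-(t-x)^{2}}=g(r;t-x)+y,\qquad g(r;\tau)=\sqrt{r^{2}-\tau^{2}},\]

which is exactly the form required by Definition 1.6 with \(s=1\), \(w_{1}=r\), and \(\Phi(x,y,r)=(x,y)\); the bottom arc is handled symmetrically.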
Lee, Lee, and Oh [28] recently proved a sharp local smoothing estimate for the elliptic and axis-parallel elliptic maximal functions, and in doing so they showed that these maximal functions have high frequency decay. Shortly thereafter, Chen, Guo, and Yang [8] proved that every translation invariant maximal function (in the sense of Definition 1.6) has high frequency decay (their result uses slightly different notation and applies to a slightly modified form of the maximal function (1.7); see Proposition 7.2 and the surrounding discussion for a precise statement). The Lee-Lee-Oh and Chen-Guo-Yang result has the following consequence.
**Theorem 1.7**.: _Let \(s\geq 1\) be an integer and let \(M\) be an \(s\)-parameter translation invariant maximal function associated to a \((s+2)\)-dimensional family of cinematic curves. Then_
\[\|Mf\|_{p}\leq C_{p}\|f\|_{p},\quad p>s+1. \tag{1.10}\]
The stated range of \(p\) is sharp, as can be seen by modifying an example due to Schlag [32]; see Appendix A.1 for details. In particular, Theorem 1.7 resolves Erdogan's conjecture by showing that the elliptic maximal operator is bounded from \(L^{p}\to L^{p}\) in the sharp range \(p>4\). Previously, Lee, Lee, and Oh [28] (in the elliptic and axis-parallel elliptic case) and Chen, Guo, and Yang [8] (in the general case) proved a variant of Theorem 1.7 for \(p>s(s+1)\).
We conjecture that when \(m=s+2\), every maximal function of the form (1.7) has high frequency decay. This was proved by Sogge [33] when \(s=1\). If true, such a result could be combined with Theorem 1.4 and a slicing argument (see Section 6) to yield the analogue of Theorem 1.7 for all \(m\geq s+2\) and all \(s\)-parameter maximal functions associated to a \(m\)-dimensional family of cinematic curves.
It is natural to ask about analogues of Theorems 1.4 and 1.7 for curves in \(\mathbb{R}^{d}\), in the spirit of the helical maximal function and its generalizations [4, 25, 26]. This appears to be rather difficult at present, since our proof of Theorem 1.7 uses Theorem 1.4, and the latter is at least as difficult as the Kakeya conjecture, which is open in dimension \(3\) and higher.
### A \(L^{p}\) estimate for collections of plane curves
To prove Theorem 1.4, we begin by establishing (1.8) when \(s=m-1\). This will be a consequence of a slightly more general maximal function estimate associated to collections of thickened curves in the plane. The setting is as follows.
**Definition 1.8**.: _We say that a set \(\mathcal{F}\subset C^{k}(I)\) forbids \(k\)-th order tangency if there exists a constant \(c>0\) so that for all \(f,g\in\mathcal{F}\), we have_
\[\inf_{t}\sum_{i=0}^{k}|f^{(i)}(t)-g^{(i)}(t)|\geq c\|f-g\|_{C^{k}(I)}. \tag{1.11}\]
_Examples._
1. On a compact interval, linear functions forbid 1st order tangency. More generally, polynomials of degree \(\leq k\) forbid \(k\)-th order tangency (a short justification is sketched after this list).
2. A \(m\)-dimensional family \(\mathcal{C}\) of cinematic curves restricted to a sufficiently small compact set forbids \((m-1)\)-st order tangency.
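To justify Example 1 (a short sketch; the constants depend on \(k\) and \(|I|\)): if \(f\) and \(g\) are polynomials of degree at most \(k\), then \(p=f-g\) is a polynomial of degree at most \(k\), so for every \(t\in I\) its Taylor expansion about \(t\) is exact,

\[p(x)=\sum_{i=0}^{k}\frac{p^{(i)}(t)}{i!}(x-t)^{i},\qquad x\in I.\]

Differentiating this identity and taking suprema over \(x\in I\) shows that \(\|f-g\|_{C^{k}(I)}\lesssim_{k,|I|}\sum_{i=0}^{k}|f^{(i)}(t)-g^{(i)}(t)|\); taking the infimum over \(t\in I\) gives (1.11).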
Recall that a set \(\mathcal{F}\subset C^{\infty}(I)\) is _uniformly smooth_ if \(\sup_{f\in\mathcal{F}}\|f^{(i)}\|_{\infty}<\infty\) for each \(i\geq 0\). The functions in Example 2 are uniformly smooth. The functions in Example 1 are uniformly smooth if we restrict the coefficients to a bounded set. With this definition, we can now state the main technical result of the paper.
**Theorem 1.9**.: _Let \(k\geq 1\), let \(I\) be a compact interval, and let \(\mathcal{F}\subset C^{\infty}(I)\) be uniformly smooth and forbid \(k\)-th order tangency. Let \(\varepsilon>0\). Then the following is true for all \(\delta>0\) sufficiently small. Let \(F\subset\mathcal{F}\) satisfy the non-concentration condition_
\[\#(F\cap B_{r})\leq r/\delta\quad\text{for all balls $B_{r}\subset C^{k}(I)$ of radius $r$}. \tag{1.12}\]
_Then_
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{\frac{k+1}{k}}\leq\delta^{- \varepsilon}(\delta\#F)^{\frac{k}{k+1}}, \tag{1.13}\]
_where \(f^{\delta}\) is the \(\delta\) neighborhood of the graph of \(f\)._
The bound (1.13) is a Kakeya-type estimate for families of curves that forbid \(k\)-th order tangency. The range of \(p\) is best-possible, and the existence of measure zero Besicovitch sets shows that the \(\delta^{-\varepsilon}\) term (or at least some quantity that becomes unbounded as \(\delta\searrow 0\)) is also necessary.
We will prove a slightly more technical version of Theorem 1.9, where the ball condition (1.12) is replaced by a Frostman-type condition, and the sets \(f^{\delta}\) are replaced by subsets that satisfy a similar Frostman-type condition. This more technical version will be called Theorem 1.9\({}^{\prime}\). Theorem 1.9\({}^{\prime}\) implies Theorem 1.4 in the special case \(s=m-1\). The result is also connected to questions in geometric measure theory. We discuss some of these connections below.
### Applications to geometric measure theory
**Restricted projections.** In [23], Kaenmaki, Orponen, and Venieri discovered a connection between maximal function estimates for families of plane curves, and Marstrand-type results for projections in a restricted set of directions; the latter question was first investigated by Fassler and Orponen in [14]. Accordingly, Theorem 1.9 is closely related to the following Kaufman-type estimate for the restricted projection problem. In what follows, "\(\dim\)" refers to Hausdorff dimension.
**Theorem 1.10**.: _Let \(\gamma\colon[0,1]\to\mathbb{R}^{n}\) be smooth and satisfy the non-degeneracy condition_
\[\det\big{(}\gamma(t),\gamma^{\prime}(t),\ldots,\gamma^{(n-1)}(t)\big{)}\neq 0,\quad t\in[0,1]. \tag{1.14}\]
_Let \(E\subset\mathbb{R}^{n}\) be Borel and let \(0\leq s\leq\min(\dim E,1)\). Then_
\[\dim\big{\{}t\in[0,1]\colon\dim\left(\gamma(t)\cdot E\right)<s\big{\}}\leq s. \tag{1.15}\]
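For example, the moment curve \(\gamma(t)=(1,t,t^{2},\ldots,t^{n-1})\) satisfies the non-degeneracy condition (1.14): the matrix with columns \(\gamma(t),\gamma^{\prime}(t),\ldots,\gamma^{(n-1)}(t)\) is lower triangular (its \((i,j)\) entry is \(\frac{i!}{(i-j)!}t^{i-j}\) for \(i\geq j\) and \(0\) otherwise, indexing from \(0\)), so

\[\det\big{(}\gamma(t),\gamma^{\prime}(t),\ldots,\gamma^{(n-1)}(t)\big{)}=\prod_{j=0}^{n-1}j!\neq 0.\]

When \(n=3\) this is the parabola \((1,t,t^{2})\) discussed below.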
We will comment briefly on the history of this problem. In [14], Fassler and Orponen introduced the non-degeneracy condition (1.14), and they conjectured that if a smooth curve \(\gamma\colon[0,1]\to\mathbb{R}^{3}\) satisfied (1.14), then \(\dim\left(\gamma(t)\cdot E\right)=\min(1,\dim E)\) for a.e. \(t\); they made partial progress towards this conjecture. In [23], Kaenmaki, Orponen, and Venieri used circle tangency bounds proved by Wolff to resolve this conjecture in the special case where \(\gamma(t)=(1,t,t^{2})\). In [31], Pramanik, Yang, and the author used a more general curve tangency bound (corresponding to \(k=2\)) to prove a mild generalization of Theorem 1.10 when \(n=3\); the result in [31] only requires that the curve \(\gamma\) be \(C^{2}\). In [17], Gan, Guth, and Maldague proved an estimate in a similar spirit to (1.15) (sometimes referred to as a "Falconer-type" exceptional set estimate) using techniques related to decoupling. Finally, in [16], Gan, Guo, and Wang proved a Falconer-type exceptional set estimate for general \(n\), again using decoupling.
**Furstenberg sets.** As noted above, a consequence of Cordoba's Kakeya maximal function bound is that Besicovitch sets in the plane must have Hausdorff dimension \(2\). Similarly, Wolff's circular maximal function bound implies that Besicovitch-Rado-Kinney sets must have Hausdorff dimension \(2\). Theorem 1.9 has a similar consequence; in fact a slightly stronger statement is true in the spirit of the Furstenberg set conjecture. We first define a Furstenberg set of curves.
**Definition 1.11**.: _Let \(\alpha,\beta\geq 0\) and let \(\mathcal{F}\subset C^{k}(I)\). We say a set \(E\subset\mathbb{R}^{2}\) is a \((\alpha,\beta)\) Furstenberg set of curves from \(\mathcal{F}\) if there is a set \(F\subset\mathcal{F}\) with \(\dim\left(F\right)\geq\beta\) (here "\(\dim\)" refers to Hausdorff dimension in the metric space \(C^{k}(I)\)) so that \(\dim\left(\operatorname{graph}(f)\cap E\right)\geq\alpha\) for each \(f\in F\)._
**Theorem 1.12**.: _Let \(k\geq 1\), let \(I\) be a compact interval, and let \(\mathcal{F}\subset C^{\infty}(I)\) be uniformly smooth and forbid \(k\)-th order tangency. Let \(0\leq\beta\leq\alpha\leq 1\). Then every \((\alpha,\beta)\) Furstenberg set of curves from \(\mathcal{F}\) has Hausdorff dimension at least \(\alpha+\beta\)._
We will comment briefly on the history of this problem. In [37], Wolff defined a class of Besicovitch-type sets, inspired by the work of Furstenberg [15], which he called Furstenberg sets. In brief, for \(0\leq\alpha\leq 1\), an \(\alpha\)-Furstenberg set is a compact set \(E\subset\mathbb{R}^{2}\) with the property that for each direction \(e\in S^{1}\), there is a line \(\ell\) parallel to \(e\) with \(\dim\left(E\cap\ell\right)\geq\alpha\). Wolff proved that every set of this type must have dimension at least \(\max\left\{2\alpha,\alpha+\frac{1}{2}\right\}\), and he constructed examples of such sets that have dimension \(\frac{3\alpha}{2}+\frac{1}{2}\). He conjectured that the latter bound is sharp. In [29], Molter and Rela introduced the related notion of an \((\alpha,\beta)\)-Furstenberg set. In the plane, their definition coincides with Definition 1.11, where \(\mathcal{F}\) is the set of linear functions. See [30] and the references therein for an up-to-date survey of progress on the problem, and [20] for variants in higher dimensions.
Recently, Fassler, Liu, and Orponen [13] considered the analogous problem where lines are replaced by circles; they formulated the analogous definition of a Furstenberg set of circles, and they proved that if \(0\leq\alpha\leq\beta\leq 1\), then every \((\alpha,\beta)\) Furstenberg set of circles must have dimension at least \(\alpha+\beta\). Theorem 1.12 generalizes the Fassler-Liu-Orponen result from circles to a larger class of curves. Theorem 1.12 is clearly sharp in the stated range \(0\leq\beta\leq\alpha\leq 1\). When \(\alpha<\beta\), it is not obvious what dimension bounds should hold for \((\alpha,\beta)\) Furstenberg sets of curves.
### Curve tangencies, and tangency rectangles
The main input to Theorem 1.9 is a new estimate in discretized incidence geometry that controls the number of approximate higher-order tangencies spanned by a collection of plane curves; this is Theorem 2.6 below. Theorem 2.6 requires several technical definitions. We will give an informal explanation of these definitions and then state an informal version of Theorem 2.6.
A _\((\delta;k)\) tangency rectangle_ \(R\) is the \(\delta\)-neighborhood of the graph of a function with \(C^{k}\) norm at most \(1\), above an interval \(I\) of length \(\delta^{1/k}\) (we are abusing notation slightly, since the set \(R\) need not be a rectangle in the usual geometric sense). If \(f\) is a function, we say that \(f\) is _tangent_ to \(R\) (denoted \(f\sim R\)) if the graph of \(f\), restricted to \(I\), is contained in \(R\). If \(F\) is a set of functions and \(\mu\geq 1\), we say a tangency rectangle is \(\mu\)-_rich_ with respect to \(F\) if it is tangent to at least \(\mu\) functions \(f\in F\). We say two \((\delta;k)\) tangency rectangles \(R_{1},R_{2}\) are comparable if they are contained in a common \((2^{k}\delta;k)\) tangency rectangle. Otherwise they are incomparable (the factor \(2^{k}\) simplifies certain parts of the proof, but any constant larger than \(1\) would suffice).
Observe that if two functions \(f_{1},f_{2}\) with \(C^{k}\) norm at most \(1\) are both tangent to a common \((\delta;k)\) tangency rectangle \(R\) above the interval \([a,a+\delta^{1/k}]\), then we have
\[|f_{1}(a+t)-f_{2}(a+t)|\lesssim t^{k}+\delta. \tag{1.16}\]
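One way to see (1.16): since \(f_{1}\) and \(f_{2}\) are both tangent to \(R\), we have \(|f_{1}-f_{2}|\leq 2\delta\) on \([a,a+\delta^{1/k}]\), and hence (by the interpolation inequality for derivatives recorded in Lemma B.3) \(|f_{1}^{(j)}(a)-f_{2}^{(j)}(a)|\lesssim\delta^{1-j/k}\) for \(j=0,\ldots,k-1\). Taylor expanding \(f_{1}-f_{2}\) around \(a\) and using \(\|f_{i}\|_{C^{k}}\leq 1\) then gives

\[|f_{1}(a+t)-f_{2}(a+t)|\lesssim\sum_{j=0}^{k-1}\delta^{1-j/k}t^{j}+t^{k}\lesssim t^{k}+\delta,\]

where the last step uses \(\delta^{1-j/k}t^{j}=\delta^{(k-j)/k}(t^{k})^{j/k}\leq\max(\delta,t^{k})\).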
We say that \(R\) is _broad_ if for most pairs of functions \(f_{1},f_{2}\in F\) that are tangent to \(R\), the inequality (1.16) is almost tight, i.e. there is a matching lower bound \(|f_{1}(a+t)-f_{2}(a+t)|\gtrsim t^{k}\) for all \(t\). The precise definition of broadness involves additional quantifiers; see Definition 2.4 for details. With these (informal) definitions, we can now state an informal version of Theorem 2.6
**Theorem 2.6**, informal version.: _Let \(k,\mu\geq 1\) and let \(\delta>0\). Let \(F\) be a set of low degree polynomials, and let \(\mathcal{R}\) be a set of pairwise incomparable \((\delta;k)\) tangency rectangles, each of which are \(\mu\)-rich and broad with respect to \(F\). Provided \(\delta>0\) is sufficiently small, we have_
\[\#\mathcal{R}\leq\delta^{-\varepsilon}\Big{(}\frac{\#F}{\mu}\Big{)}^{\frac{ k+1}{k}}. \tag{1.17}\]
_Remarks_.
1. The requirement that the rectangles in \(\mathcal{R}\) are broad (or some analogous requirement) is necessary. Without this assumption, we could construct a counter-example to Theorem 2.6 as follows. Let \(F\) be a set of functions with \(\#F=\mu\), each of which is an infinitesimal perturbation of the same function \(f_{0}\); and let \(\mathcal{R}\) be a set of \(\delta^{-1/k}\) pairwise incomparable tangency rectangles arranged along the graph of \(f_{0}\).
2. When \(k=1\), the bound (1.17) follows from double-counting triples \((f_{1},f_{2},R)\), where \(f_{1},f_{2}\) are functions whose graphs transversely intersect inside \(R\). When \(k=2\) and the graphs of the functions in \(F\) are (arcs
of) circles, a bilinear variant of (1.17) was proved by Wolff [38] using techniques from computational geometry originating from [9]. This was generalized by the author in [42] for more general curves (again with \(k=2\)). Recently, Pramanik, Yang, and the author [31] proved a variant of Theorem 2.6 for \(k=2\) that works for \(C^{2}\) functions.
3. The exponent \(\frac{k+1}{k}\) follows from the numerology inherent in the polynomial method. For \(k=2\), there are at least three independent proofs of this same bound, using different techniques (see Item 2 above). However, it is not clear whether the exponent \(\frac{k+1}{k}\) in (1.17) is sharp. For \(k=2\) the current best construction comes from Szemeredi-Trotter and yields a lower bound with exponent \(\frac{4}{3}\).
### Main ideas, and a sketch of the proof
In this section, we will sketch the proofs of Theorems 1.9 and 2.6. We begin with Theorem 2.6. For simplicity during this proof sketch, we will suppose that \(\mu\) has size close to \(1\) and \(I=[0,1]\). When writing or describing inequalities, we will ignore constants that are independent of \(\delta\) and \(\#F\). We will prove the result by induction on the cardinality of \(F\). The induction step proceeds as follows. For each function \(f\in F\), we consider the \((k-1)\)-st order "jet lift"
\[\zeta_{f}=\big{\{}\big{(}t,f(t),f^{\prime}(t),\ldots,f^{(k-1)}(t)\big{)}\colon t \in[0,1]\big{\}}\subset\mathbb{R}^{k+1}.\]
For each tangency rectangle \(R\in\mathcal{R}\), we consider the corresponding "tangency prism," \(\hat{R}\subset\mathbb{R}^{k+1}\) which is a (curvilinear) prism of dimensions roughly \(\delta^{1/k}\times\delta^{k/k}\times\delta^{(k-1)/k}\times\ldots\times\delta^{ 1/k}\). If \(f\in F\) is tangent to a rectangle \(R\in\mathcal{R}\), then \(\zeta_{f}\) intersects \(\hat{R}\) in a curve of length roughly \(\delta^{1/k}\); if this happens then we say \(\zeta_{f}\) is incident to \(\hat{R}\).
We have transformed the problem of estimating the number of robustly broad tangency rectangles in the plane into a problem about incidences between curves and tangency prisms in \(\mathbb{R}^{k+1}\). To attack this latter problem, we use the Guth-Katz polynomial partitioning theorem. Let \(E\) be a large number, and let \(Q\in\mathbb{R}[t,x_{0},\ldots,x_{k-1}]\) be a polynomial of degree at most \(E\), so that \(\mathbb{R}^{k+1}\backslash\{Q=0\}\) is a union of about \(E^{k+1}\) "cells" (open connected regions), with the property that at most \((\#\mathcal{R})E^{-k-1}\) prisms are contained in each cell (if a prism intersects more than one cell, it is not counted here). Using a variant of Bezout's theorem and the assumption that \(\mathcal{F}\) is uniformly smooth, we can ensure that at most \((\#F)E^{-k}\) curves intersect a typical cell.
Since each prism \(\hat{R}\) is connected, it is either contained inside a cell, or it must intersect the partitioning hypersurface \(\{Q=0\}\). Our argument now divides into two cases: If at least half of prisms are contained inside a cell, then we are in the "cellular case." If at least half of the prisms intersect the partitioning hypersurface, then we are in the "algebraic case."
We handle the cellular case as follows. Using our induction hypothesis, we conclude that since a typical cell \(\Omega\) intersects roughly \((\#F)E^{-k}\) curves, there are at most \(\big{(}(\#F)E^{-k}\big{)}^{\frac{k+1}{k}}=(\#F)^{\frac{k+1}{k}}E^{-k-1}\) rectangles \(R\in\mathcal{R}\) with \(\hat{R}\subset\Omega\). Thus the total contribution from all of the cells is at most \(E^{k+1}\cdot(\#F)^{\frac{k+1}{k}}E^{-k-1}=(\#F)^{\frac{k+1}{k}}\). With some care (and a slight weakening of exponents, which introduces the \(\delta^{-\varepsilon}\) term in (1.17)), the induction closes. It is this argument (and the associated numerology) that determines the shape of the bound (1.17).
The ideas described above to handle the cellular case are not new; they were inspired by similar arguments in [18]. To handle the algebraic case, however, new ideas are needed. This is the main innovation in this paper. We now sketch the proof of the algebraic case. We begin with several simplifying assumptions. _Simplifying Assumption (A)_: the surface \(\{Q=0\}\) can be written as a graph \(\{x_{k-1}=L(t,x_{0},\ldots,x_{k-2})\}\), where \(L\) is \(1\)-Lipschitz. As a consequence of Assumption (A), if a tangency prism \(\hat{R}\) intersects \(\{Q=0\}\), then \(\hat{R}\) is contained in a thin neighborhood of the graph of \(L\), i.e. \(\hat{R}\subset Z^{*}\), where
\[Z^{*}=\big{\{}(t,x_{0},\ldots,x_{k-1})\in[0,1]^{k+1}\colon|x_{k-1}-L(t,x_{0}, \ldots,x_{k-2})|\leq\delta^{1/k}\big{\}}. \tag{1.18}\]
Next we make _Simplifying Assumption (B)_: each curve \(\zeta_{f}\) is contained in \(Z^{*}\). This means that \(f\) almost satisfies the ODE \(f^{(k-1)}(t)=L\big{(}t,f(t),f^{\prime}(t),\ldots,f^{(k-2)}(t)\big{)}\). More precisely, we have
\[\big{|}f^{(k-1)}(t)-L\big{(}t,f(t),f^{\prime}(t),\ldots,f^{(k-2)}(t)\big{)} \big{|}\leq\delta^{1/k},\quad t\in[0,1]. \tag{1.19}\]
If \(\zeta_{f}\) and \(\zeta_{g}\) are both incident to a common prism \(\hat{R}\), then a straightforward calculus exercise shows that there must exist some \(t_{0}\) for which the first \(k-1\) derivatives of \(f\) and \(g\) almost agree, in the sense that
\[|f^{(i)}(t_{0})-g^{(i)}(t_{0})|\leq\delta^{1/k},\quad i=0,\ldots,k-1. \tag{1.20}\]
(1.19) (and its analogue for \(g\)) say that \(f\) and \(g\) almost satisfy the same ODE, and (1.20) says that \(f\) and \(g\) almost have the same initial conditions, and hence \(f\) and \(g\) almost satisfy the same initial value problem. Since \(L\) is \(1\)-Lipschitz, we can use a quantitative version of Gronwall's inequality to conclude that \(|f(t)-g(t)|\) is small for all \(t\in[0,1]\). We conclude that all of the curves tangent to a common rectangle \(R\in\mathcal{R}\) must remain close for all time \(t\in[0,1]\); but this contradicts the requirement that the rectangles in \(\mathcal{R}\) are broad. This implies \(\mathcal{R}\) must be empty. Thus we have established Theorem 2.6, except that we have not yet justified Simplifying Assumptions (A) and (B).
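For completeness, here is a sketch of the Gronwall step (for \(k\geq 2\) and suppressing constants; when \(k=1\) the conclusion is immediate from (1.19)): set \(w_{i}=f^{(i)}-g^{(i)}\) for \(i=0,\ldots,k-2\) and \(E(t)=\sum_{i=0}^{k-2}|w_{i}(t)|\). Then \(|w_{i}^{\prime}|=|w_{i+1}|\) for \(i<k-2\), while (1.19), its analogue for \(g\), and the assumption that \(L\) is \(1\)-Lipschitz give \(|w_{k-2}^{\prime}|=|f^{(k-1)}-g^{(k-1)}|\leq E(t)+2\delta^{1/k}\). Hence \(E^{\prime}(t)\leq 2E(t)+2\delta^{1/k}\) almost everywhere, and Gronwall's inequality, combined with the initial bound \(E(t_{0})\lesssim\delta^{1/k}\) coming from (1.20), yields \(\sup_{[0,1]}|f-g|\leq\sup_{[0,1]}E\lesssim\delta^{1/k}\).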
First, we will explain how to remove Simplifying Assumption (B); this is mostly a technical matter. While the curves \(\zeta_{f}\) need not be contained in \(Z^{*}\), each curve intersects \(Z^{*}\) in a small number of curve segments, and the curve-prism incidences occur within these segments. Thus we can find a typical length \(\ell\), so that most curve-prism incidences occur within segments that have length roughly \(\ell\). After partitioning space into rectangular prisms of the appropriate dimensions and re-scaling, we reduce to the case where \(\ell=1\).
Next, we will explain how to remove Simplifying Assumption (A); this issue is more serious. In general, we may suppose that each tangency prism is contained in the \(\delta^{1/k}\) neighborhood of the variety \(\{Q=0\}\). This is a semi-algebraic set, and after restricting to \([-1,1]^{k+1}\), this set has volume roughly \(\delta^{1/k}\). We prove a new structure theorem which says that any semi-algebraic set in \([0,1]^{k+1}\) with small \((k+1)\)-dimensional volume can be decomposed into a union of pieces, each of which is the thin neighborhood of a Lipschitz graph (with controlled Lipschitz constant), plus a final piece whose projection to the first \(k\) coordinates has small \(k\)-dimensional volume. If the majority of prisms and curves are contained in one of the Lipschitz graph pieces, then (a slight weakening of) Simplifying Assumption (A) holds, and we can argue as above. If instead the majority of prisms and curves are contained in the final piece, then we project from \(\mathbb{R}^{k+1}\) to the first \(k\)-coordinates. The Tarski-Seidenberg theorem says that the image under this projection is a semi-algebraic subset of \([0,1]^{k}\), and thus we can apply the same decomposition again. After iterating this procedure at most \(k\) times, we arrive at a situation where Simplifying Assumption (A) holds, and we can apply the arguments described above.
**From Tangency Rectangles to Maximal Functions**
We now sketch the proof of Theorem 1.9. The proof is complicated by the fact that the collection of curves \(F\) can be arranged in many different ways. To begin, we will examine three specific arrangements that will give the reader a sense of the range of possibilities. For clarity when writing inequalities, we will ignore constants that are independent of \(\delta\), and we will sometimes omit terms of the form \(\delta^{-\varepsilon}\).
_Arrangement 1._ Suppose that for a typical pair of functions \(f,g\in F\) for which \(f^{\delta}\cap g^{\delta}\) is non-empty, we have that the graphs of \(f\) and \(g\) intersect transversely. This means that \(|f^{\delta}\cap g^{\delta}|\) typically has size about \(\delta^{2}\), and thus we might expect
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{2}\leq\Big{(}\sum_{f,g\in F}|f^{\delta}\cap g^{\delta}|\Big{)}^{1/2}\leq\delta(\#F). \tag{1.21}\]
On the other hand, we have
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{1}\leq\delta(\#F). \tag{1.22}\]
Interpolating (1.21) and (1.22), we obtain
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{\frac{k+1}{k}}\leq\delta(\#F). \tag{1.23}\]
Note that this is stronger than (1.13), since the ball condition (1.12) implies that \(\delta(\#F)\lesssim 1\).
_Arrangement 2._ Suppose that for a typical pair of functions \(f,g\in F\) for which \(f^{\delta}\cap g^{\delta}\) is non-empty, we have that the graphs of \(f\) and \(g\) are tangent to order \(k-1\). This means that \(f^{\delta}\cap g^{\delta}\) is a curvilinear rectangle
of dimensions roughly \(\delta\times\delta^{1/k}\). In this situation, we can find a number \(1\leq\mu\leq\#F\) and a set \(\mathcal{R}\) of \(\mu\)-rich, broad \((\delta;k)\) rectangles, so that
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{\frac{k+1}{k}}^{\frac{k+1}{k}}\leq\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{f^{\delta}}\Big{)}^{\frac{k+1}{k}}. \tag{1.24}\]
By Theorem 2.6, \(\#\mathcal{R}\leq\big{(}\frac{\#F}{\mu}\big{)}^{\frac{k+1}{k}}\), and the contribution from each \(R\in\mathcal{R}\) to the RHS of (1.24) is at most \((\mu\delta)^{\frac{k+1}{k}}\). Thus we again have the bound
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{\frac{k+1}{k}}\leq\Big{(}\big{(}\frac{\#F}{\mu}\big{)}^{\frac{k+1}{k}}\cdot(\mu\delta)^{\frac{k+1}{k}}\Big{)}^{\frac{k}{k+1}}=\delta(\#F). \tag{1.25}\]
_Arrangement 3_. Suppose that \(F=\{f\}\). Then
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{\frac{k+1}{k}}=\delta^{\frac{k}{k+1}}=\big{(}\delta\#F\big{)}^{\frac{k}{k+1}}.\]
Note that our bounds (1.23) and (1.25) for Arrangements 1 and 2 are stronger than the corresponding estimate (1.13) from Theorem 1.9. In this direction, we will first prove a variant of Theorem 1.9, where the non-concentration condition (1.12) is replaced by a (local) two-ends type non-concentration condition on the set of curves passing through each point. This is Proposition 4.4 below. Informally, the statement is as follows
**Proposition 4.4**, informal version.: _Let \(k\geq 1\) and let \(\varepsilon,\delta>0\). Let \(F\) be a set of functions that come from a uniformly smooth family of curves. Suppose that for a typical point \(x\in\mathbb{R}^{2}\), a typical pair of curves from \(F\) whose \(\delta\)-neighborhoods contain \(x\) diverge at speed at least \(t^{k}\) in a neighborhood of \(x\). Then_
\[\Big{\|}\sum_{f\in F}\chi_{f^{\delta}}\Big{\|}_{\frac{k+1}{k}}\leq\delta^{1-\varepsilon}(\#F). \tag{1.26}\]
Note that if (1.26) is established for some value of \(k\), then the analogous result immediately follows for all larger \(k\) by interpolation with the trivial \(L^{1}\) estimate (1.22). This observation will play an important role in the proof. We prove Proposition 4.4 by induction on \(k\). In the inequalities that follow, we will ignore all constants independent of \(\delta\), and all factors of the form \(\delta^{-\varepsilon}\).
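To spell out the interpolation mentioned above: if \(k^{\prime}>k\) then \(1<\frac{k^{\prime}+1}{k^{\prime}}<\frac{k+1}{k}\), so by log-convexity of \(L^{p}\) norms, the function \(h=\sum_{f\in F}\chi_{f^{\delta}}\) satisfies

\[\|h\|_{\frac{k^{\prime}+1}{k^{\prime}}}\leq\|h\|_{1}^{1-\theta}\|h\|_{\frac{k+1}{k}}^{\theta}\leq(\delta\#F)^{1-\theta}\big{(}\delta^{1-\varepsilon}\#F\big{)}^{\theta}\leq\delta^{1-\varepsilon}(\#F),\qquad\theta=\frac{k+1}{k^{\prime}+1},\]

where the middle inequality uses (1.22) and (1.26).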
The base case \(k=1\) is essentially the estimate (1.21). For the induction step, we select the smallest \(\rho\in[\delta,1]\) so that the intersection of a typical pair of curves is localized to a \(\rho\times\rho^{1/k}\) curvilinear rectangle. This allows us to find a set \(\mathcal{R}\) of \((\rho;k)\) rectangles, each of which has roughly the same richness \(\mu\), so that
\[\int\Big{(}\sum_{f\in F}\chi_{f^{\delta}}\Big{)}^{\frac{k+1}{k}}\leq\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{f^{\delta}}\Big{)}^{\frac{k+1}{k}}. \tag{1.27}\]
Furthermore, the rectangles in \(\mathcal{R}\) are broad, and hence by Theorem 2.6 we have \(\#\mathcal{R}\leq\big{(}\frac{\#F}{\mu}\big{)}^{\frac{k+1}{k}}.\) If \(\rho\) has size roughly \(\delta\), then we are in the situation of Arrangement 2 and we can immediately apply (1.25). If instead \(\rho\) is substantially larger than \(\delta\) (and hence \(\delta/\rho\) is small), then our definition of \(\rho\) has the following consequence: If we re-scale a rectangle \(R\in\mathcal{R}\) to the unit square, then the images of the functions \(\{f\in F\colon f\sim R\}\) under this re-scaling satisfy the hypothesis of (the informal version of) Proposition 4.4, with \(k-1\) in place of \(k\). Denote the image of \(f\) under this re-scaling by \(\tilde{f}\), and let \(\tilde{\delta}=\delta/\rho\). Then we have
\[\|h\|_{\frac{k+1}{k}}^{\frac{k+1}{k}}\leq\|h\|_{1}^{\frac{1}{k}}\|h\|_{\frac{k}{k-1}}\leq\Big{(}\tilde{\delta}\mu\Big{)}^{\frac{1}{k}}\Big{(}\tilde{\delta}\mu\Big{)}=\big{(}\frac{\delta}{\rho}\mu\big{)}^{\frac{k+1}{k}},\qquad h=\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{\tilde{f}^{\tilde{\delta}}}. \tag{1.28}\]
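The first inequality in (1.28) is Hölder's inequality with exponents \(k\) and \(\frac{k}{k-1}\) (recall \(k\geq 2\) at this stage, since the base case \(k=1\) was treated separately):

\[\int h^{\frac{k+1}{k}}=\int h^{\frac{1}{k}}\cdot h\leq\Big{(}\int h\Big{)}^{\frac{1}{k}}\Big{(}\int h^{\frac{k}{k-1}}\Big{)}^{\frac{k-1}{k}}=\|h\|_{1}^{\frac{1}{k}}\|h\|_{\frac{k}{k-1}}.\]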
In the above inequality, we used (1.22) to obtain a \(L^{1}\) estimate, and we used the induction hypothesis to obtain a \(L^{\frac{k}{k-1}}\) estimate. Note that the re-scaling from \(R\) to the unit square distorts volumes by a factor of \(\rho^{1+1/k}\), and thus (1.28) says that for each \(R\in\mathcal{R}\) we have
\[\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{f^{\delta}}\Big{)}^{\frac{k+1}{k}}\leq\big{(}\delta\mu\big{)}^{\frac{k+1}{k}}. \tag{1.29}\]
Inserting the estimate (1.29) into (1.27) and using our bound on the size of \(\mathcal{R}\), we obtain
\[\int\Big{(}\sum_{f\in F}\chi_{f^{\delta}}\Big{)}^{\frac{k+1}{k}}\leq\sum_{R\in\mathcal{R}}\big{(}\delta\mu\big{)}^{\frac{k+1}{k}}\leq(\delta\#F)^{\frac{k+1}{k}},\]
which is (1.26). This closes the induction. The details of this argument are discussed in Section 4.
Finally, we remark that Arrangements 1 and 2 satisfy the hypotheses of Proposition 4.4, and thus are amenable to the above argument. Arrangement 3 does not satisfy the hypotheses of Proposition 4.4, and indeed the conclusion of Proposition 4.4 is false for Arrangement 3. The final step in the proof of Theorem 1.9 is to reduce an arbitrary arrangement of curves that forbid \(k\)-th order tangency and satisfy the non-concentration condition (1.12) to a collection of (re-scaled) non-interacting sub-arrangements, each of which satisfies the hypotheses of Proposition 4.4. This is a standard "two-ends rescaling" type argument.
### Paper organization
In Sections 2 and 3, we execute the proof sketch described in Section 1.7 in order to prove Theorem 2.6. In Section 4, we will continue following the proof sketch to show how Theorem 2.6 implies Proposition 4.4. The remaining Sections 5, 6, 7, and 8 are devoted to the proofs of Theorems 1.9, 1.4, 1.7, and 1.10 + 1.12, respectively.
### Thanks
The author would like to thank Young-Heon Kim for helpful conversations and discussions about Gronwall's inequality, which helped shape Section 3.2. The author would like to thank Shaoming Guo for helpful conversations and discussions about local smoothing and its implications for maximal functions over curves, which helped shape Section 7. The author would like to thank Jonathan Hickman and Sanghyuk Lee for suggestions and corrections to an earlier version of this manuscript. The author was supported by a NSERC Discovery grant.
### Notation
We use \(A\lesssim B\) or \(A=O(B)\) or \(B=\Omega(A)\) to mean \(A\leq KB\), where \(K\) is a quantity that may depend on the parameter \(k\) from the statement of Theorem 1.9. If \(K\) is allowed to depend on an additional parameter \(\varepsilon\), then we denote this by \(A\lesssim_{\varepsilon}B\) or \(A=O_{\varepsilon}(B)\) or \(B=\Omega_{\varepsilon}(A)\).
Unless otherwise specified, all functions will be assumed to have domain \([0,1]\) and co-domain \(\mathbb{R}\). We abbreviate \(C^{k}([0,1])\) as \(C^{k}\), and \(\|f\|_{C^{k}([0,1])}\) as \(\|f\|_{C^{k}}\).
## 2 Curves and tangency rectangles
In this section we will state the precise version of Theorem 2.6 and begin the proof. We start with precise versions of the informal definitions from Section 1.6.
**Definition 2.1**.: _Let \(\delta>0\), \(k\geq 1\), and \(T\geq 1\). A \((\delta;k;T)\) tangency rectangle is the vertical \(\delta\) neighborhood of a function with \(C^{k}\) norm at most 1, above an interval of length \((T\delta)^{1/k}\). When \(T=1\), we abbreviate this to \((\delta;k)\) tangency rectangle, or \((\delta;k)\) rectangle._
**Definition 2.2**.: _If \(R\) is a \((\delta;k;T)\) tangency rectangle above an interval \(I\), and \(f\colon[0,1]\to\mathbb{R}\), we say \(f\) is tangent to \(R\) if the graph of \(f\) above \(I\) is contained in \(R\). We denote this by \(f\sim R\)._
Next, we will describe what it means for two tangency rectangles to be distinct.
**Definition 2.3**.: _We say two \((\delta;k;T)\) rectangles are comparable if there is a \((2^{k}\delta;k;T)\) rectangle that contains them both. Otherwise they are incomparable._
The factor \(2^{k}\) in the above definition was chosen to make the following true: if \(R_{1},R_{2}\) are incomparable \((\delta;k;T)\) rectangles above intervals \(I_{1}\) and \(I_{2}\) respectively, and if \(R_{1}\) and \(R_{2}\) are both tangent to a common function \(f\) with \(C^{k}\) norm at most 1, then \(I_{1}\) and \(I_{2}\) are disjoint.
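A sketch of why this is true (ignoring boundary effects near the endpoints of \([0,1]\)): if \(I_{1}\cap I_{2}\neq\emptyset\), then \(I_{1}\cup I_{2}\) is an interval of length at most \(2(T\delta)^{1/k}=(T2^{k}\delta)^{1/k}\), and since the graph of \(f\) above \(I_{i}\) is contained in \(R_{i}\), each \(R_{i}\) is contained in the vertical \(2\delta\) neighborhood of the graph of \(f\) above \(I_{i}\). Hence both \(R_{1}\) and \(R_{2}\) are contained in the vertical \(2^{k}\delta\) neighborhood of the graph of \(f\) above an interval of length \((T2^{k}\delta)^{1/k}\) containing \(I_{1}\cup I_{2}\); this is a \((2^{k}\delta;k;T)\) rectangle, contradicting incomparability.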
**Definition 2.4**.: _If \(R\) is a \((\delta;k)\) rectangle and \(F\) is a set of functions from \([0,1]\) to \(\mathbb{R}\), we say that \(R\) is \(\mu\)-rich and \(\varepsilon\)-robustly broad with error at most \(B\) if there is a set \(F(R)\subset\{f\in F\colon f\sim R\}\) with \(\#F(R)\geq\mu\) that has the following property: For every \(\rho\in[\delta,1]\), every \(T\in[1,\rho^{-1}]\), and every \((\rho;k;T)\) rectangle \(R^{\prime}\) containing \(R\), we have_
\[\#\{f\in F(R)\colon f\sim R^{\prime}\}\leq BT^{-\varepsilon}\#F(R). \tag{2.1}\]
During informal discussion, we will say that \(R\) is _robustly broad_ if we do not wish to emphasize the role of \(\mu\), \(\varepsilon\), or \(B\).
_Remark 2.5_.: When \(k=1\), a \((\delta;1)\) rectangle \(R\) is robustly broad if many of the pairs \(f_{1},f_{2}\in F(R)\) have graphs that intersect transversely. If \(k>1\) then all of the functions in \(F(R)\) will intersect (almost) tangentially, but if \(R\) is robustly broad then many pairs of functions will diverge outside of \(R\) at speed roughly \(t^{k}\)--this is the fastest possible speed of divergence that is allowed by the geometry of \(R\) and the constraint that the functions have \(C^{k}\) norm at most 1.
With these definitions, we can now precisely state our incidence bound.
**Theorem 2.6**.: _Let \(k\geq 1\) and \(\varepsilon>0\). Then there exists \(\eta,\delta_{0}>0\) so that the following holds for all \(\delta\in(0,\delta_{0}]\)._
_Let \(F\) be a set of (univariate) polynomials of degree at most \(\delta^{-\eta}\), each of which has \(C^{k}\)-norm at most 1. Let \(\mu\geq 1\) and let \(\mathcal{R}\) be a set of pairwise incomparable \((\delta,k)\) rectangles that are \(\mu\)-rich and \(\varepsilon\)-robustly broad with error at most \(\delta^{-\eta}\) with respect to \(F\). Then_
\[\#\mathcal{R}\leq\delta^{-\varepsilon}\Big{(}\frac{\#F}{\mu}\Big{)}^{\frac{k+ 1}{k}}. \tag{2.2}\]
### Initial reductions
We will begin the proof of Theorem 2.6, following the outline discussed in Section 1.7. Our first step is to reduce Theorem 2.6 to a version that is weaker in several respects. First, the hypotheses are strengthened: we only need to consider the case where \(\mu\) has size roughly 1. Second, the conclusion is weakened: the exponent \(\frac{k+1}{k}\) is weakened to \(\frac{k+1}{k}+\varepsilon\).
**Proposition 2.7**.: _Let \(k\geq 1\), \(\varepsilon>0\). Then there exist (large) constants \(B=B(k)\) and \(C=C(k,\varepsilon)\) and a small constant \(\eta=\eta(k,\varepsilon)>0\) so that the following holds. Let \(F\) be a set of polynomials of degree at most \(\delta^{-\eta}\), each of which has \(C^{k}\) norm at most 1. Let \(\mathcal{R}\) be a set of pairwise incomparable \((\delta,k)\) rectangles that are \(\varepsilon\) robustly broad with error at most \(\delta^{-\eta}\) with respect to \(F\). Then_
\[\#\mathcal{R}\leq C\delta^{-B\varepsilon}(\#F)^{\frac{k+1}{k}+\varepsilon}. \tag{2.3}\]
To reduce to the case where \(\mu\) has size roughly 1, we will refine the set \(F\) by randomly keeping each element with probability roughly \(\mu^{-1}\). To ensure that the resulting refinement satisfies the hypotheses of Proposition 2.7, we will use the following special case of Chernoff's inequality.
**Theorem 2.8** (Chernoff).: _Let \(X_{1},\ldots,X_{n}\) be independent random variables taking value 1 with probability \(p\) and value 0 with probability \(1-p\). Let \(X\) denote their sum. Let \(A\geq 2\). Then_
\[\mathbb{P}\big{(}X\leq pn/2\big{)}<e^{\frac{-pn}{8}};\qquad\mathbb{P}\big{(}X \geq Apn\big{)}<e^{\frac{-Apn}{6}}. \tag{2.4}\]
We can now explain the reduction from Proposition 2.7 to Theorem 2.6.
Proof that Proposition 2.7 implies Theorem 2.6.: Suppose that Proposition 2.7 is true. Let \(k\geq 1\), \(\varepsilon>0\), \(\delta>0\), \(\mu\geq 1\), \(F\), and \(\mathcal{R}\) satisfy the hypotheses of Theorem 2.6. Our goal is to show that if \(\eta>0\) and \(\delta_{0}>0\) are selected appropriately (depending on \(k\) and \(\varepsilon\)), then (2.2) holds.
First, we may suppose that \(\mu\leq\#F\), since otherwise \(\mathcal{R}=\emptyset\) and we are done. Second, we may suppose that \(\#F\leq\delta^{-k}\). If not, then (2.2) follows from the observation that any set of pairwise incomparable \((\delta;k)\)-rectangles has cardinality \(O(\delta^{-k-1})\).
**Step 1: Random sampling**. Let \(\varepsilon_{1}=\varepsilon_{1}(\varepsilon)>0\) be a small quantity to be chosen below. If \(\mu\leq\delta^{-2\varepsilon_{1}}\), define \(F^{\prime}=F\) and proceed to the computation (2.7) below. Otherwise, after dyadic pigeonholing the set \(\mathcal{R}\) and increasing \(\mu\) if necessary, we can suppose that for each \(R\in\mathcal{R}\), there is a set \(F(R)\subset\{f\in F\colon f\sim R\}\) that satisfies (2.1), with \(\mu\leq\#F(R)<2\mu\).
Let \(p=(\delta^{2\varepsilon_{1}}\mu)^{-1}\) (since \(\mu>\delta^{-2\varepsilon_{1}}\), we have \(0<p<1\)). Let \(F^{\prime}\subset F\) be obtained by randomly selecting each \(f\in F\) with probability \(p\). \(F^{\prime}\) has expected cardinality \(p(\#F)\geq\delta^{-2\varepsilon_{1}}\).
**Step 2: Robust broadness with respect to \(F^{\prime}\)**. We claim that with probability at least \(1/2\), the following is true
* \(\#F^{\prime}\leq 2p(\#F)=2\delta^{-2\varepsilon_{1}}\mu^{-1}(\#F)\).
* Each rectangle in \(\mathcal{R}\) is \(\frac{1}{4}p\mu\)-rich and \(\varepsilon_{1}\)-robustly broad with error at most \(O(\delta^{-\eta})\) with respect to \(F^{\prime}\).
The first item holds with probability at least \(3/4\) (in fact much higher probability!) by Theorem 2.8.
We will show that the second item also holds with probability at least \(3/4\). Fix \(R\in\mathcal{R}\) with an associated set \(F(R)\). By Theorem 2.8, we have
\[\mathbb{P}\Big{[}\#(F(R)\cap F^{\prime})\leq\frac{1}{2}p(\#F(R))\Big{]}\leq e^{\frac{-p(\#F(R))}{8}},\quad\mathbb{P}\Big{[}\#(F(R)\cap F^{\prime})\geq 2p(\#F(R))\Big{]}\leq e^{\frac{-p(\#F(R))}{3}},\]
and hence the probability that at least one of these events occurs is at most \(e^{-\delta^{-\varepsilon_{1}}}\). Suppose that neither of these events occur, and hence
\[p\mu/4\leq\#(F(R)\cap F^{\prime})\leq 4p\mu.\]
Let \(\rho\in[\delta,1]\), let \(T\in[1,1/\rho]\), and let \(R^{\prime}\supset R\) be a \((\rho;k;T)\) rectangle. We would like to show that with high probability,
\[\#\{f\in(F(R)\cap F^{\prime})\colon f\sim R^{\prime}\}=O\big{(}\delta^{-\eta} T^{-\varepsilon_{1}}\mu p\big{)}. \tag{2.5}\]
We will estimate the probability that (2.5) fails. First, we will estimate the probability that
\[\#\{f\in(F(R)\cap F^{\prime})\colon f\sim R^{\prime}\}>2\delta^{-\eta}T^{- \varepsilon_{1}}\mu p. \tag{2.6}\]
Define \(n=\#\{f\in F(R)\colon f\sim R^{\prime}\}\). By hypothesis, the rectangles in \(\mathcal{R}\) are \(\mu\) rich and \(\varepsilon\) robustly broad with error at most \(\delta^{-\eta}\) with respect to \(F\), and hence \(n\leq\delta^{-\eta}T^{-\varepsilon}\mu\leq\delta^{-\eta}T^{-\varepsilon_{1}}\mu\). Write \(2\delta^{-\eta}T^{-\varepsilon_{1}}\mu p=Apn\), i.e. \(A=\frac{2\delta^{-\eta}T^{-\varepsilon_{1}}\mu}{n}\geq 2\). Applying Theorem 2.8 with \(n\), \(p\), and \(A\) as above and using the fact that \(T\leq\rho^{-1}\leq\delta^{-1}\), we conclude that the probability that (2.6) occurs is at most
\[e^{\frac{-Apn}{\delta}}=e^{-2\delta^{-\eta}T^{-\varepsilon_{1}}(\mu p)/6}\leq e ^{-2\delta^{\varepsilon_{1}-\eta}(\delta^{-2\varepsilon_{1}})/6}\leq e^{- \delta^{-\varepsilon_{1}}}.\]
Our goal is to show that with high probability, (2.5) holds for all \(\rho\in[\delta,1]\); all \(T\in[1,1/\rho]\), and all \((\rho;k;T)\) rectangles \(R^{\prime}\). We claim that it suffices to show that with high probability, (2.6) fails when we consider rectangles with the following three properties: (i) \(\rho\) is of the form \(\delta^{2j}\) for \(j\geq 0\) an integer; (ii) \(T\) is of the form \(2^{\ell}\) for \(\ell\geq 0\) an integer; (iii) \(R^{\prime}\) is the vertical neighborhood of the graph of a function from \(F\). Indeed, by the triangle inequality, if there is a rectangle \(R^{\prime}\) for which (2.6) holds with constant \(C_{0}\), then there is a rectangle \(R^{\prime\prime}\) satisfying Properties (i), (ii), and (iii), for which (2.6) holds with constant \(C_{0}/O(1)\). Conversely, if (2.6) fails with high probability for every rectangle \(R^{\prime}\supset R\) satisfying Properties (i), (ii), and (iii), then (2.5) holds with high probability for every rectangle \(R^{\prime}\supset R\), provided the implicit constant has been chosen appropriately.
We have shown that (2.6) fails with high probability for a particular rectangle \(R^{\prime}\). Since there are \(\delta^{-O(1)}\) rectangles that satisfy Properties (i), (ii), and (iii), we use the union bound to conclude that the probability that (2.6) holds for any rectangle satisfying Properties (i), (ii), and (iii) is at most \(e^{-\delta^{-\varepsilon_{1}}}\delta^{-O(1)}\), i.e. the probability that (2.5) fails for a fixed rectangle \(R\) is at most \(e^{-\delta^{-\varepsilon_{1}}}\delta^{-O(1)}\). Since the rectangles in \(\mathcal{R}\) are incomparable, we have \(\#\mathcal{R}=O(\delta^{-k-1})\), and hence the probability that (2.5) fails for at least one rectangle in \(\mathcal{R}\) is at most \(e^{-\delta^{-\varepsilon_{1}}}\delta^{-O(1)}\). If \(\delta_{0}\) (and hence \(\delta\)) is selected sufficiently small depending on \(k\) and \(\varepsilon_{1}\) (recall that \(\varepsilon_{1}\) in turn depends on \(k\) and \(\varepsilon\)), then the probability that (2.5) holds for every rectangle in \(\mathcal{R}\) is at least \(3/4\). This completes the proof of our claim.
**Step 3: Applying Proposition 2.7**.
Next, let \(\varepsilon_{2}=\varepsilon_{2}(\varepsilon)<\varepsilon_{1}\) be a quantity to be determined below, and let \(\eta_{1}=\eta_{1}(k,\varepsilon_{2})\) be the quantity from the statement of Proposition 2.7 (with \(k\) as above and \(\varepsilon_{2}\) in place of \(\varepsilon\)). If \(\eta\leq\eta_{1}/2\) and \(\delta_{0}\) is sufficiently small, then the rectangles in \(\mathcal{R}\) are \(\varepsilon_{2}\) robustly broad with error at most \(\delta^{-\eta_{1}}\) with respect to \(F^{\prime}\). Thus we can apply Proposition 2.7 (with \(\varepsilon_{2}\) in place of \(\varepsilon\) and \(\eta_{1}\) in place of \(\eta\)) to conclude that
\[\#\mathcal{R}\leq C\delta^{-B\varepsilon_{2}}(\#F^{\prime})^{\varepsilon_{2}} (\#F^{\prime})^{\frac{k+1}{k}}\leq C\delta^{-B\varepsilon_{2}}\delta^{-k \varepsilon_{2}}\Big{(}\delta^{-2\varepsilon_{1}}\frac{\#F}{\mu}\Big{)}^{ \frac{k+1}{k}}. \tag{2.7}\]
The result now follows by selecting \(\varepsilon_{1}<\varepsilon/10\); \(\varepsilon_{2}<\varepsilon/(10(B+k))\); and \(\delta_{0}\) sufficiently small.
### Tangency Rectangles and Tangency prisms
We now turn to the proof of Proposition 2.7. We begin by analyzing the structure of tangency rectangles. Recall that a \((\delta;k)\) rectangle is the vertical \(\delta\)-neighborhood of a function \(f\) with \(C^{k}\) norm at most \(1\), above an interval \(I\) of length \(\delta^{1/k}\). For notational convenience, we will write this as \(R^{f}(I)\) or \(R(I)\). The next lemma says that the tangency rectangle \(R^{f}(I)\) is accurately modeled by the \((k-1)\)-st order Taylor expansion of \(f\).
**Lemma 2.9** (Structure of tangency rectangles).: _Let \(R=R^{f}(I)\) be a \((\delta,k)\) tangency rectangle, with \(I=[a,a+\delta^{1/k}]\). Let \(g(t)=f(a)+\sum_{j=1}^{k-1}\frac{f^{(j)}(a)}{j!}(t-a)^{j}\) be the \((k-1)\)-st order Taylor expansion of \(f\) around \(a\). Then \(R\) is contained in the vertical \(2\delta\) neighborhood of the graph of \(g\) above \(I\)._
This is a consequence of Taylor's theorem. We now define the "tangency prisms" introduced in Section 1.7.
**Definition 2.10**.: _A \((\delta;k)\) tangency prism is a set \(P\) of the form_
\[\Big{\{}(t,y_{0},\ldots,y_{k-1})\in\mathbb{R}^{k+1} \colon t\in[a,a+\delta^{1/k}], \tag{2.8}\] \[\Big{|}y_{j}-\sum_{i=j}^{k-1}\frac{(t-a)^{i-j}}{(i-j)!}b_{i}\Big{|} \leq K\delta^{1-j/k},\ j=0,\ldots,k-1\Big{\}}.\]
_In the above expression, \(a,b_{0},\ldots,b_{k-1}\in[-1,1]\) are parameters that define the tangency prism, and \(K\) is a constant depending on \(k\); the specific choice of \(K\) will be fixed in Lemma 2.14 below. We call \(I=[a,a+\delta^{1/k}]\) the interval associated to \(P\)._
**Definition 2.11**.: _Let \(P\subset\mathbb{R}^{k+1}\) be a \((\delta;k)\) tangency prism with associated interval \(I\subset[0,1]\), and let \(h\colon[0,1]\to\mathbb{R}^{k}\). We say \(h\sim P\) if the graph of \(h\) above \(I\) is contained in \(P\)._
**Definition 2.12**.: _Let \(f\in C^{k}\) and let \(0\leq j\leq k\). We define the \(j\)-th order jet lift of \(f\), denoted \(\mathcal{J}_{j}f\) to be the function \(\mathcal{J}_{j}f(t)=\big{(}f(t),f^{\prime}(t),\ldots,f^{(j)}(t)\big{)}\). When \(j=0\), we have \(\mathcal{J}_{0}f(t)=f(t)\)._
**Definition 2.13**.: _Let \(R=R^{f}(I)\) be a \((\delta,k)\) tangency rectangle. We define the tangency prism \(\hat{R}\) to be a set of the form (2.8), where \(a\) is the left endpoint of \(I\), and \(b_{i}=f^{(i)}(a)\) for \(i=0,\ldots,k-1\)._
**Lemma 2.14**.: _If the quantity \(K=O(1)\) from Definition 2.10 is chosen appropriately (depending on \(k\)), then the following is true. Let \(R\) be a \((\delta,k)\) tangency rectangle, and let \(f\) be a function with \(C^{k}\) norm at most 1, with \(f\sim R\). Then \(\mathcal{J}_{k-1}f\sim\hat{R}\)._
Proof.: Write \(R=R^{g}(I)\), with \(I=[a,a+\delta^{1/k}]\). Since \(f\sim R\), we have \(|f(t)-g(t)|\leq\delta\) on \(I\), and thus by Lemma B.3 there exists a constant \(K_{1}=K_{1}(k)\) so that \(|f^{(j)}(t)-g^{(j)}(t)|\leq K_{1}\delta^{1-j/k}\) for \(j=0,\ldots,k-1\). On the other hand, by Taylor's theorem, for each index \(j<k\) and each \(t\in I\), there exists \(t_{1}\) between \(a\) and \(t\) so that
\[g^{(j)}(t)=\sum_{i=j}^{k-1}\frac{(t-a)^{i-j}}{(i-j)!}g^{(i)}(a)+\frac{(t-a)^{k- j}}{(k-j)!}g^{(k)}(t_{1}).\]
If we define \(b_{i}=g^{(i)}(a)\) for \(i=0,\ldots,k-1\), we conclude that for each \(j=0,\ldots,k-1\) and each \(t\in I\), we have
\[\begin{split}\Big{|}f^{(j)}(t)-\sum_{i=j}^{k-1}\frac{(t-a)^{i-j}}{(i-j)!}b_{i}\Big{|}&\leq|f^{(j)}(t)-g^{(j)}(t)|+\Big{|}g^{(j)}(t)-\sum_{i=j}^{k-1}\frac{(t-a)^{i-j}}{(i-j)!}b_{i}\Big{|}\\ &\leq|f^{(j)}(t)-g^{(j)}(t)|+\Big{|}\frac{(t-a)^{k-j}}{(k-j)!}g^{(k)}(t_{1})\Big{|}\\ &\leq K_{1}\delta^{1-j/k}+\delta^{1-j/k},\end{split} \tag{2.9}\]
where the final line used the assumption that \(\|g\|_{C^{k}}\leq 1\). Thus the lemma holds with \(K=K_{1}+1\).
### Tangency and re-scaling
In this section, we will explore how re-scaling a tangency rectangle \(R\) induces a re-scaling of functions tangent to \(R\), and also induces a re-scaling of (smaller) tangency rectangles contained in \(R\).
**Definition 2.15**.: _Let \(0<\delta<\rho\leq 1\). Let \(R\) be a \((\rho;k)\) tangency rectangle, and let \(S\) be a \((\delta;k)\) tangency rectangle. We say \(R\) covers \(S\), denoted \(R\succ S\) or \(S\prec R\), if \(\hat{S}\subset\hat{R}\)._
**Definition 2.16**.: _Let \(\rho>0\) and let \(R=R^{g}(I)\) be a \((\rho;k)\) rectangle; here \(\|g\|_{C^{k}}\leq 1\) and \(I=[a,a+\rho^{1/k}]\). Let \(K=K(k)\geq 1\) be the constant from Definition 2.10, and let \(c=\frac{1}{(k+1)K}\). For \(x\in I\), define_
\[\phi^{R}(x,y)=\big{(}\rho^{-1/k}(x-a),\ c\rho^{-1}(y-g(x))\big{)}.\]
_For \(f\colon[0,1]\to\mathbb{R}\), define \(f_{R}\) to be the function whose graph is \(\phi^{R}(\operatorname{graph}f|_{I})\), and define_
\[\psi^{R}(x,y_{0},\ldots,y_{k-1}) =\Big{(}\rho^{-1/k}(x-a),\ c\rho^{-1}(y_{0}-g(x)),\ c\rho^{-1+1/k}(y_{1}-g^{\prime}(x)),\] \[c\rho^{-1+2/k}(y_{2}-g^{\prime\prime}(x)),\ldots,c\rho^{-1/k}(y_{k-1}-g^{(k-1)}(x))\Big{)}.\]
**Lemma 2.17**.: _Let \(R\) be a \((\rho;k)\) rectangle, let \(\|f\|_{C^{k}}\leq 1\), and suppose \(f\sim R\). Then \(\|f_{R}\|_{C^{k}}\leq 1\)._
Proof.: By the chain rule,
\[\operatorname{graph}\big{(}\mathcal{J}_{k-1}(f_{R})\big{)}=\psi^{R}( \operatorname{graph}\mathcal{J}_{k-1}f|_{I}). \tag{2.10}\]
As a consequence, if \(f\sim R\) and \(\|f\|_{C^{k}}\leq 1\), then by Lemma 2.14 we have \(\mathcal{J}_{k-1}f\sim\hat{R}\), and hence the set (2.10) is contained in \(\psi^{R}(\hat{R})\subset[0,1]\times[-(k+1)^{-1},(k+1)^{-1}]^{k}\). In particular, we have
\[\sup_{x\in[0,1]}|f_{R}^{(j)}(x)|\leq(k+1)^{-1},\quad j=0,\ldots,k-1. \tag{2.11}\]
If \(R=R^{g}(I)\), then we can also use the chain rule and the fact that \(\|f\|_{C^{k}}\leq 1\) and \(\|g\|_{C^{k}}\leq 1\), to compute \(\sup_{x\in[0,1]}|f_{R}^{(k)}(x)|\leq(k+1)^{-1}\). We conclude that \(\|f_{R}\|_{C^{k}}\leq 1\).
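Explicitly (a quick computation with the normalization of Definition 2.16, and assuming, as we may, that the constant \(K\) from Definition 2.10 is at least \(2\)): we have \(f_{R}(x)=c\rho^{-1}\big{(}f(a+\rho^{1/k}x)-g(a+\rho^{1/k}x)\big{)}\), so

\[f_{R}^{(k)}(x)=c\big{(}f^{(k)}-g^{(k)}\big{)}(a+\rho^{1/k}x),\qquad\text{and hence}\qquad\sup_{x\in[0,1]}|f_{R}^{(k)}(x)|\leq 2c\leq(k+1)^{-1}.\]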
Motivated by the above computation, we introduce the following definition.
**Definition 2.18**.: _Let \(R\) be a \((\rho;k)\) tangency rectangle. If \(S\prec R\) is a \((\delta;k)\) tangency rectangle, then \(\phi^{R}(S)\) is the vertical \(c\delta/\rho\) neighborhood of a function \(h\) (which has \(C^{k}\) norm at most 1) above an interval \(J\) of length \((\delta/\rho)^{1/k}\). Define \(S_{R}\) to be the \((\delta/\rho;k)\) tangency rectangle given by the vertical \(\delta/\rho\) neighborhood of \(h\) above \(J\)._
The next lemma says that our definitions of \(f_{R}\) and \(S_{R}\) preserve broadness.
**Lemma 2.19**.: _Let \(R\succ S\) be tangency rectangles. Let \(F\) be a set of functions with \(C^{k}\) norm at most 1, all of which are tangent to \(R\). Let \(F(S)\subset\{f\in F\colon f\sim S\}\) satisfy (2.1). Then the functions \(\{f_{R}\colon f\in F(S)\}\) are tangent to \(S_{R},\) and satisfy the analogue of (2.1) with \(B\) replaced by \(O(B).\)_
Proof.: Suppose there exists a \((\tau;k;T)\)-rectangle \(R^{h}(J)\supset S_{R}\) that is tangent to \(M\) functions from \(\{f_{R}\colon f\in F(S)\}\); denote this set of functions \(F_{1}\). Our goal is to show that
\[M=O(B)T^{-\varepsilon}\#F(S). \tag{2.12}\]
Fix a function \(g_{R}\in F_{1}\). By the triangle inequality, the graph of each \(f_{R}\in F_{1}\) above \(J\) is contained in the vertical \(2\tau\) neighborhood of \(g_{R}\) above \(J\); denote this latter set by \(R_{1}\) (note that \(R_{1}\supset S_{R}\) ).
We have that \((\phi^{R})^{-1}(R_{1})\) is the vertical \((2\tau)(c^{-1}\rho)\) neighborhood of \(g\) (a function of \(C^{k}\) norm at most 1), above an interval of length \((T\rho\tau)^{1/k}\), and this set contains \(S\). In summary, we have constructed a \(\left(\frac{2}{c}\tau\rho;k;\frac{cT}{2}\right)\) tangency rectangle that is tangent to at least \(M\) functions from \(F(S)\). Comparing with (2.1), we conclude that
\[M\leq B\Big{(}\frac{cT}{2}\Big{)}^{-\varepsilon}\#F(S)\leq(2B/c)T^{-\varepsilon}\#F(S).\]
Since \(c>0\) depends only on \(k\), this establishes (2.12).
### Proof of Proposition 2.7 Part 1: Space curves, partitioning, and induction
We are now ready to begin the proof of Proposition 2.7. Our basic strategy is as follows. We lift each function \(f\in F\) to its \((k-1)\)-st order jet \(\mathcal{J}_{k-1}f\), and we lift each rectangle \(R\in\mathcal{R}\) to its corresponding tangency prism \(\hat{R}\). Proposition 2.7 then becomes an incidence theorem between (polynomial) curves and prisms in \(\mathbb{R}^{k+1}\). Roughly speaking, the statement is as follows: given a set of \(n\) polynomial curves in \(\mathbb{R}^{k+1}\) that come from the jet lifts of plane curves, there can be at most \(n^{\frac{k+1}{k}+\varepsilon}\) prisms that are (broadly) incident to these curves. We prove this statement by induction on \(n\). For the induction step, we use the Guth-Katz polynomial partitioning theorem to divide \(\mathbb{R}^{k+1}\) into cells, most of which interact with only a small fraction of the (lifted) curves from \(F\). The precise statement is a consequence of the following two theorems. The first is the celebrated Guth-Katz polynomial partitioning theorem [19].
**Theorem 2.20**.: _Let \(\mathcal{P}\subset\mathbb{R}^{d}\) be a finite set of points. Then for each \(E\geq 1\), there is a nonzero polynomial \(Q\in\mathbb{R}[x_{1},\ldots,x_{d}]\) of degree at most \(E\) so that \(\mathbb{R}^{d}\backslash\{Q=0\}\) is a union of \(O_{d}(E^{d})\) open connected sets, and each of these sets intersects \(O_{d}(E^{-d}\#\mathcal{P})\) points from \(\mathcal{P}\)._
The second is a variant of Bezout's theorem for real varieties. This is a special case of the main result from [2].
**Proposition 2.21**.: _Let \(\zeta\subset\mathbb{R}^{d}\) be a one-dimensional real variety defined by polynomials of degree at most \(D\). Let \(Q\in\mathbb{R}[x_{1},\ldots,x_{d}]\) be a polynomial of degree \(E\geq D\). Then \(\zeta\) intersects \(O_{d}(D^{d-1}E)\) connected components of \(\mathbb{R}^{d}\backslash\{Q=0\}\)._
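As a quick sanity check (not needed in what follows), consider the special case where \(\zeta\) is a line in \(\mathbb{R}^{d}\) not contained in \(Z(Q)\), so that \(D=1\). Then \(\zeta\cap Z(Q)\) consists of at most \(E\) points, and each of the (at most \(E+1\)) open segments of \(\zeta\) between consecutive intersection points lies in a single connected component of \(\mathbb{R}^{d}\backslash\{Q=0\}\). Thus \(\zeta\) meets at most \(E+1\) components, which is consistent with the bound \(O_{d}(D^{d-1}E)\).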
We apply the induction hypothesis inside each cell, and sum the resulting contributions. The exponent \(\frac{k+1}{k}+\varepsilon\) was chosen so that the induction closes. There is also a contribution from the boundary of the partition. This will be described in greater detail (and dealt with) later. We now turn to the details.
Proof of Proposition 2.7.: Fix \(k\) and \(\varepsilon\). We will prove the result by induction on \(\#F\). The induction will close, provided \(B,C,\) and \(\eta\) have been chosen appropriately. When \(F=\emptyset\), there is nothing to prove.
**Step 1. Polynomial partitioning.** Suppose that \(\#F=n\), and that the result has been proved for all sets of curves \(F^{\prime}\) of cardinality less than \(n\). To each \((\delta;k)\) tangency rectangle \(R^{f}(I)\in\mathcal{R}\), associate the point \(p_{R}=(a,f(a),f^{\prime}(a),\ldots,f^{(k-1)}(a))\in\mathbb{R}^{k+1}\), where \(a\) is the left endpoint of \(I\). Observe that \(p_{R}\in\hat{R}\). It is easy to verify that distinct (and hence incomparable) rectangles in \(\mathcal{R}\) give rise to distinct (in fact \(\gtrsim\delta\) separated) points. Let \(\mathcal{P}=\{p_{R}\colon R\in\mathcal{R}\}\).
Let \(E\geq 1\) be a number to be specified below. Use Theorem 2.20 to select a polynomial \(Q\in\mathbb{R}[t,y_{0},\ldots,y_{k-1}]\) of degree at most \(E\), so that \(\mathbb{R}^{k+1}\backslash\{Q=0\}\) is a union of \(O(E^{k+1})\) open connected components, each of which contain \(O(E^{-k-1}\#\mathcal{P})\) points from \(\mathcal{P}\). Let \(\mathcal{O}\) denote the set of connected components.
Define \(Z=Z(Q)\), and define \(Z^{*}\) to be the union of all \((\delta;k)\) tangency prisms that intersect \(Z\). We claim that for each \(R\in\mathcal{R}\), at least one of the following must hold
* There is a cell \(\Omega\in\mathcal{O}\) so that \(\hat{R}\subset\Omega\).
* \(\hat{R}\subset Z^{*}\).
Indeed, if the second item does not hold then \(\hat{R}\) is disjoint from \(Z\). Since \(\hat{R}\) is connected, we must have \(\hat{R}\subset\Omega\) for some \(\Omega\in\mathcal{O}\).
For each \(\Omega\in\mathcal{O}\), define
\[\mathcal{R}_{\Omega}=\{R\in\mathcal{R}\colon\hat{R}\subset\Omega\}.\]
We have \(\#\mathcal{R}_{\Omega}\leq\#(\mathcal{P}\cap\Omega)=O_{k}(E^{-k-1}\#\mathcal{ R})\). If \(R\in\mathcal{R}_{\Omega}\) and \(f\in F\) with \(f\sim R\), then \(\operatorname{graph}(\mathcal{J}_{k-1}f)\cap\hat{R}\neq\emptyset\), and hence \(\operatorname{graph}(\mathcal{J}_{k-1}f)\cap\Omega\neq\emptyset\).
Define \(\mathcal{R}_{Z}=\mathcal{R}\backslash\bigcup_{\Omega\in\mathcal{O}}\mathcal{R}_{\Omega}\). We say we are in the _cellular case_ if \(\#\bigcup_{\Omega\in\mathcal{O}}\mathcal{R}_{\Omega}\geq\frac{1}{2}\#\mathcal{ R}\). Otherwise we are in the _algebraic case_. We remark that if \(E^{k+1}\) is substantially larger than \(\#\mathcal{R}\), then the bound \(O(E^{-k-1}\#\mathcal{P})\) from the application of Theorem 2.20 might be smaller than \(1\), i.e. each cell contains fewer than one point from \(\mathcal{P}\). If this happens, then \(\mathcal{P}\subset Z(Q)\), and we are most certainly in the algebraic case.
**Step 2. The cellular case.** Suppose we are in the cellular case. Then we may select a set \(\mathcal{O}^{\prime}\subset\mathcal{O}\) so that \(\sum_{\Omega\in\mathcal{O}^{\prime}}\#\mathcal{R}_{\Omega}\geq\frac{1}{4}\# \mathcal{R}\), and
\[\#\mathcal{R}_{\Omega}\geq c_{1}(k)E^{-k-1}\#\mathcal{R}\quad\text{for each $\Omega\in\mathcal{O}^{\prime}$}, \tag{2.13}\]
where \(c_{1}(k)>0\) is a quantity depending only on \(k\). To simplify notation, write \(\zeta_{f}\) for \(\operatorname{graph}(\mathcal{J}_{k-1}f)\). Note that if \(f\) is a polynomial of degree \(D\), then \(\zeta_{f}\) is a one-dimensional real variety defined by polynomials of degree at most \(D\).
By Proposition 2.21, since each polynomial in \(F\) has degree at most \(\delta^{-\eta}\), there are \(\leq K_{1}(k)\delta^{-k\eta}E\#F\) pairs \((\Omega,f)\in\mathcal{O}^{\prime}\times F\) with \(\zeta_{f}\cap\Omega\neq\emptyset\) (here \(K_{1}(k)\) is a constant depending only on \(k\)). Thus there is a cell \(\Omega\in\mathcal{O}^{\prime}\) with
\[\#\{f\in F\colon\zeta_{f}\cap\Omega\neq\emptyset\}\leq K_{1}(k)\delta^{-k \eta}E^{-k}\#F. \tag{2.14}\]
Denote the above set by \(F_{\Omega}\). If we choose \(E\) sufficiently large (\(E\geq K_{1}(k)\delta^{-\eta}\) will suffice), then \(\#F_{\Omega}<n\), and thus we may apply the induction hypothesis to conclude that
\[\#\mathcal{R}_{\Omega}\leq C\delta^{-B\varepsilon}(\#F_{\Omega})^{\frac{k+1} {k}+\varepsilon}. \tag{2.15}\]
Combining (2.13), (2.14), and (2.15), we conclude that
\[\#\mathcal{R} \leq\Big{(}c_{1}(k)^{-1}E^{k+1}\Big{)}\Big{(}C\delta^{-B\varepsilon }(\#F_{\Omega})^{\frac{k+1}{k}+\varepsilon}\Big{)} \tag{2.16}\] \[\leq\Big{(}c_{1}(k)^{-1}E^{k+1}\Big{)}\Big{(}C\delta^{-B\varepsilon }\big{(}K_{1}(k)\delta^{-k\eta}E^{-k}\#F\big{)}^{\frac{k+1}{k}+\varepsilon} \Big{)}\] \[\leq\Big{(}c_{1}(k)^{-1}K_{1}(k)^{\frac{k+1}{k}+\varepsilon} \delta^{-(k+1)\eta-k\varepsilon\eta}E^{-k\varepsilon}\Big{)}\Big{(}C\delta^{-B \varepsilon}(\#F)^{\frac{k+1}{k}+\varepsilon}\Big{)}.\]
If we select \(E\gtrsim_{k,\varepsilon}\delta^{-3\eta/\varepsilon}\) sufficiently large and \(B=B(k)\), \(C=C(k,\varepsilon)\) appropriately, then
\[\#\mathcal{R}\leq C\delta^{-B\varepsilon}(\#F)^{\frac{k+1}{k}+\varepsilon},\]
and the induction closes. This completes the proof of Proposition 2.7 when we are in the cellular case.
**Step 3. The algebraic case.** Next we consider the algebraic case. Observe that the tangency prisms associated to rectangles in \(\mathcal{R}_{Z}\) are contained in a thin neighborhood of the variety \(Z\). The following theorem of Wongkew [40] controls the volume of the thin neighborhood of a variety.
**Theorem 2.22**.: _Let \(Z=Z(Q)\subset\mathbb{R}^{d}\), where \(Q\) is a non-zero polynomial. Let \(B\subset\mathbb{R}^{d}\) be a ball of radius \(r\). Then there exists a constant \(C(d)\) depending only on \(d\) so that for all \(\rho>0\), we have_
\[|B\cap N_{\rho}(Z)|\leq C(d)(\deg Q)^{d}\rho^{d-1}r. \tag{2.17}\]
The set on the left-hand side of (2.17) is described by a boolean combination of polynomial (in)equalities. Sets of this form are called semi-algebraic; we give a precise definition below.
**Definition 2.23**.: _A set \(S\subset\mathbb{R}^{d}\) is called a semi-algebraic set of complexity at most \(M\) if there exists \(N\leq M\); polynomials \(P_{1},\ldots,P_{N}\), each of degree at most \(M\); and a Boolean formula \(\Phi\colon\{0,1\}^{N}\to\{0,1\}\) such that_
\[S=\big{\{}x\in\mathbb{R}^{d}\colon\Phi(P_{1}(x)\geq 0,\ldots,P_{N}(x)\geq 0)=1 \big{\}}.\]
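For instance, the closed annulus \(\{x\in\mathbb{R}^{2}\colon 1\leq x_{1}^{2}+x_{2}^{2}\leq 4\}\) is a semi-algebraic set of complexity at most \(2\): take \(N=2\); \(P_{1}(x)=x_{1}^{2}+x_{2}^{2}-1\) and \(P_{2}(x)=4-x_{1}^{2}-x_{2}^{2}\); and \(\Phi(a_{1},a_{2})=a_{1}\wedge a_{2}\). This example is included only to illustrate the definition.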
The next result controls the number of tangencies that can be contained in a semi-algebraic set of small volume.
**Proposition 2.24**.: _Let \(k\geq 1\), \(\varepsilon>0\). Then there exist positive numbers \(c=c(k)\), \(\eta=\eta(k,\varepsilon),\) and \(\delta_{0}=\delta_{0}(k,\varepsilon)\) so that the following holds for all \(\delta\in(0,\delta_{0}]\). Let \(F\) be a set of polynomials of degree at most \(\delta^{-\eta}\), each of which has \(C^{k}\) norm at most 1. Let \(\mathcal{R}\) be a set of pairwise incomparable \((\delta;k)\) rectangles. For each \(R\in\mathcal{R}\), let \(F(R)\subset\{f\in F\colon f\sim R\}\). Define the dual relation \(\mathcal{R}(f)=\{R\in\mathcal{R}\colon f\in F(R)\}\). Suppose that for each \(f\in F\), the rectangles in \(\mathcal{R}(f)\) satisfy the following "two-ends" type non-concentration condition: for each interval \(J\subset[0,1]\), we have_
\[\#\{R=R(I)\in\mathcal{R}(f)\colon I\subset J\}\leq\delta^{-\eta}|J|^{ \varepsilon}\#\mathcal{R}(f). \tag{2.18}\]
_Let \(S\subset[0,1]^{k+1}\) be a semi-algebraic set of complexity at most \(\delta^{-\eta}\) and volume \(|S|\leq\delta^{\varepsilon}\). Suppose that \(\hat{R}\subset S\) for each \(R\in\mathcal{R}\)._
_Then there exists \(R\in\mathcal{R}\), \(\tau\in[\delta,\delta^{c}]\), and a \((\tau;k+1)\) rectangle \(R_{1}\supset R\) with_
\[\#\{f\in F(R)\colon f\sim R_{1}\}\gtrsim\#F(R). \tag{2.19}\]
We defer the proof of Proposition 2.24 to the next section. Using Proposition 2.24, we will handle the algebraic case.
**Step 3.1 A two-ends reduction.** Recall that for each \(R\in\mathcal{R}\), there is a set \(F(R)\subset F\) that satisfies the non-concentration condition (2.1) from Definition 2.4. This necessarily implies that \(\#F(R)\gtrsim\delta^{\eta-\varepsilon}\). After dyadic pigeonholing, we can find a set \(\mathcal{R}_{1}\subset\mathcal{R}\) with \(\#\mathcal{R}_{1}\geq|\log\delta|^{-1}\#\mathcal{R}\) and a number \(\mu\) so that \(\mu\leq\#F(R)<2\mu\) for each \(R\in\mathcal{R}_{1}\). Define \(\mathcal{I}_{1}=\{(f,R)\colon R\in\mathcal{R}_{1},\ f\in F(R)\}\). We have
\[\mu\#\mathcal{R}_{1}\leq\#\mathcal{I}_{1}<2\mu\#\mathcal{R}_{1}. \tag{2.20}\]
For each \(f\in F\), the curve \(\zeta_{f}\) intersects \(Z^{*}\) in a union of \(O((\delta^{-\eta}E)^{O(1)})=O_{\varepsilon}(\delta^{-O(\eta/\varepsilon)})\) intervals. Let \(\varepsilon_{1}>0\) be a small quantity to be determined below. For each \(f\in F\), apply a two-ends reduction (see [35] for an introduction to this topic) with exponent \(\varepsilon_{1}\); this allows us to select an interval \(I_{f}\subset[0,1]\) so that the restriction of \(\zeta_{f}\) to the interval \(I_{f}\) is contained in \(Z^{*}\), and we have the following re-scaled analogue of (2.18) inside \(I_{f}\): For each interval \(J\subset I_{f}\), we have
\[\#\{R=R(I)\colon(f,R)\in\mathcal{I}_{1},\ I\subset J\}\leq 2(|J|/|I_{f}|)^{ \varepsilon_{1}}\#\{R\colon(f,R)\in\mathcal{I}_{1}\}. \tag{2.21}\]
Define \(\mathcal{I}_{2}\) to be those pairs \((f,R)\in\mathcal{I}_{1}\) where \(R=R(I)\) satisfies \(I\subset I_{f}\); we have \(\#\mathcal{I}_{2}\geq\delta^{\varepsilon_{1}}\#\mathcal{I}_{1}\). After further dyadic pigeonholing, we can select a set \(F_{3}\subset F\), a multiplicity \(\nu\), and a length \(\ell\), so that the following conditions hold.
* \(\ell\leq|I_{f}|<2\ell\) for each \(f\in F_{3}\).
* Each \(f\in F_{3}\) satisfies \[\nu\leq\#\{R\colon(f,R)\in\mathcal{I}_{2}\}<2\nu.\] (2.22)
Define \(\mathcal{I}_{3}=\mathcal{I}_{2}\cap(F_{3}\times\mathcal{R})\). We have the following bounds on the size of \(\mathcal{I}_{3}\)
\[|\log\delta|^{-2}\delta^{\varepsilon_{1}+O(\eta/\varepsilon)}\mu\#\mathcal{R}_{1 }\leq\#\mathcal{I}_{3}<2\mu\#\mathcal{R}_{1},\quad\text{and}\quad\nu\#F\leq\# \mathcal{I}_{3}<2\nu\#F. \tag{2.23}\]
Note that (2.22) continues to hold with \(\mathcal{I}_{3}\) in place of \(\mathcal{I}_{2}\).
**Step 3.2 Graph refinement.** At this point, the functions \(f\in F_{3}\) satisfy a re-scaled analogue of (2.18). Unfortunately, while all of the rectangles \(R\in\mathcal{R}\) satisfied the robust broadness condition (2.1) with respect to \(\mathcal{I}_{1}\), some of them might not satisfy this condition with respect to \(\mathcal{I}_{3}\). We can fix this by applying the following graph refinement lemma from [11].
**Lemma 2.25** (Graph refinement).: _Let \(G=(A\sqcup B,E)\) be a bipartite graph. Then there is a sub-graph \(G^{\prime}=(A^{\prime}\sqcup B^{\prime},E^{\prime})\) so that \(\#E^{\prime}\geq\#E/2\); each vertex in \(A^{\prime}\) has degree at least \(\frac{\#E}{4\#A}\); and each vertex in \(B^{\prime}\) has degree at least \(\frac{\#E}{4\#B}\)._
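For completeness, we sketch one standard way to prove a statement of this type (the argument given in [11] may differ in its details): repeatedly delete from \(A\) any vertex whose current degree is less than \(\frac{\#E}{4\#A}\), and from \(B\) any vertex whose current degree is less than \(\frac{\#E}{4\#B}\), together with the incident edges. The deletions from \(A\) destroy fewer than \(\#A\cdot\frac{\#E}{4\#A}=\#E/4\) edges in total, and similarly for \(B\), so at least \(\#E/2\) edges survive when the process terminates; by construction, every surviving vertex has the required degree.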
Applying Lemma 2.25 to \(\mathcal{I}_{3}\), we obtain sets \(F_{4},\mathcal{R}_{4}\) and \(\mathcal{I}_{4}\), with the following properties
* \(\#\mathcal{I}_{4}\geq\frac{1}{2}\#\mathcal{I}_{3}\), and hence (2.23) continues to hold, with the LHS weakened by a factor of \(1/2\).
* Each \(f\in F_{4}\) satisfies an analogue of (2.22) with \(\mathcal{I}_{4}\) in place of \(\mathcal{I}_{2}\), except the LHS is weakened to \(\nu/8\).
* Each \(R\in\mathcal{R}_{4}\) is incident (under the incidence relation \(\mathcal{I}_{4}\)) to at least \((\#\mathcal{I}_{3})/(4\#\mathcal{R}_{1})\geq(2|\log\delta|)^{-2}\delta^{\varepsilon_{1}+O(\eta/\varepsilon)}\mu\) functions \(f\in F_{4}\). Since each \(R\in\mathcal{R}_{4}\) is incident to at most \(2\mu\) functions, we have \[\#\mathcal{R}_{4}\geq\#\mathcal{I}_{4}/(2\mu)\gtrsim|\log\delta|^{-2}\delta^ {\varepsilon_{1}+O(\eta/\varepsilon)}\#\mathcal{R}_{1}\gtrsim|\log\delta|^{-3 }\delta^{\varepsilon_{1}+O(\eta/\varepsilon)}\#\mathcal{R}.\] (2.24)
**Step 3.3 Rescaling.** If \(\ell\leq\delta^{1/k-\varepsilon}\), then for each \(f\in F_{4}\) there are at most \(\delta^{-\varepsilon}\) rectangles \(R\in\mathcal{R}\) with \((f,R)\in\mathcal{I}_{4}\) (see the comment after Definition 2.3). We conclude that
\[\#\mathcal{R}\lesssim|\log\delta|^{3}\delta^{-\varepsilon_{1}-O(\eta/\varepsilon)}\#\mathcal{I}_{4}\leq|\log\delta|^{3}\delta^{-\varepsilon-\varepsilon_{1}-O(\eta/\varepsilon)}\#F,\]
and hence (2.3) holds and we are done (provided we select \(\varepsilon_{1}\leq\varepsilon\), \(B\geq 3\), and \(C\) sufficiently large).
Next, suppose that
\[\ell\geq\delta^{1/k-\varepsilon}. \tag{2.25}\]
Our goal is to obtain a contradiction, and thereby finish the proof.
Let \(\rho=\ell^{k}\geq\delta^{1-k\varepsilon}\), and let \(\mathcal{S}\) be a maximal set of pairwise non-close \((\rho;k)\) tangency rectangles. For each \(f\in F_{4}\), the restriction of \(f\) to the interval \(I_{f}\) is tangent to at least one, and at most \(O(1)\) of these rectangles (recall that \(I_{f}\) is an interval of length roughly \(\ell\), and each curvilinear rectangle in \(\mathcal{S}\) has dimensions \(\rho\times\rho^{1/k}=\rho\times\ell\)). Each rectangle \(R\in\mathcal{R}_{4}\) is covered (in the sense of Definition 2.15) by at least one, and at most \(O(1)\) of these rectangles. Furthermore, if \((f,R)\in\mathcal{I}_{4}\), then there is a \((\rho;k)\) rectangle \(S\) so that the restriction of \(f\) to the interval \(I_{f}\) is tangent to \(S\), and \(S\succ R\).
This induces a decomposition of the incidence arrangement \((\mathcal{I}_{4},F_{4},\mathcal{R}_{4})\) into sub-arrangements, which we will denote by \(\{(\mathcal{I}_{S},F_{S},\mathcal{R}_{S})\}_{S\in\mathcal{S}}\), with the following properties:
* Each \(f\in F_{4}\) is contained in \(O(1)\) sets \(\{F_{S}\}_{S\in\mathcal{S}}\). If \(f\in F_{S}\) then the restriction of \(f\) to the interval \(I_{f}\) is tangent to \(S\).
* Each \(R\in\mathcal{R}_{4}\) is contained in \(O(1)\) sets \(\{\mathcal{R}_{S}\}_{S\in\mathcal{S}}\). If \(R\in\mathcal{R}_{S}\) then \(R\prec S\).
* \(\mathcal{I}_{4}=\bigcup_{S\in\mathcal{S}}\mathcal{I}_{S}\).
Fix a tangency rectangle \(S\in\mathcal{S}\) for which \(\mathcal{I}_{S}\) (and hence \(\mathcal{R}_{S}\) and \(F_{S}\)) is non-empty. After applying the re-scaling \(f\mapsto f_{S}\) and \(R\mapsto R_{S}\) from Definitions 2.16 and 2.18, we have sets \(\tilde{F}_{S}\) and \(\tilde{\mathcal{R}}_{S}\), and an incidence relation \(\tilde{\mathcal{I}}_{S}\).
If we define \(\tilde{F}_{S}(\tilde{R})=\{\tilde{f}\in\tilde{F}_{S}\colon(\tilde{f},\tilde{R})\in\tilde{\mathcal{I}}_{S}\}\), then the sets \(\tilde{F}_{S}\) and \(\tilde{\mathcal{R}}_{S}\), and the sets \(\{\tilde{F}_{S}(\tilde{R})\}\) obey the two-ends non-concentration condition (2.18) from Proposition 2.24 at scale \(\tau=\delta/\rho\), with \(\varepsilon_{1}\) in place of \(\varepsilon\) and a number \(\Omega(1)\) in place of \(\delta^{-\eta}\). Before we can apply the proposition, however, we must show that the prisms \(\{\hat{\tilde{R}}\colon\tilde{R}\in\tilde{\mathcal{R}}_{S}\}\) are contained in a semi-algebraic set \(S\) of controlled complexity and small volume.
First, observe that every such prism \(\hat{\tilde{R}}\) is contained in \(\psi^{S}(\hat{S}\cap Z^{*})\) (recall that \(\psi^{S}\) is defined in Definition 2.16), which in turn is contained in the union of all \((\tau;k)\) tangency prisms that intersect \(\big{(}[0,1]\times[-1,1]^{k}\big{)}\cap\psi^{S}(Z)\). This in turn is contained in the set
\[S=\big{(}[0,1]\times[-1,1]^{k}\big{)}\cap N_{\tau^{1/k}}(\psi^{S}(Z)).\]
The set \(\psi^{S}(Z)\) is an algebraic variety of degree at most \(\deg Q\leq E\), so by Theorem 2.22 we have
\[|S|\lesssim E^{k+1}(\tau^{1/k})^{k}=E^{k+1}(\tau)\lesssim_{\varepsilon}\delta ^{-O(\eta/\varepsilon)}\tau\leq\tau^{1-O(\eta/\varepsilon^{2})},\]
where we used the bound \(\rho\geq\delta^{1-k\varepsilon}\) (and thus \(\tau\leq\delta^{k\varepsilon}\)) to replace \(\delta^{-O(\eta/\varepsilon)}\) with \(\tau^{-O(\eta/\varepsilon^{2})}\).
It is straightforward to show that \(S\) has complexity at most \(E^{O(1)}\lesssim_{\varepsilon}\delta^{-O(\eta/\varepsilon)}\lesssim\tau^{-O(\eta/\varepsilon^{2})}\). We wish to apply Proposition 2.24 with \(\tau\) in place of \(\delta\), \(\varepsilon_{1}\) in place of \(\varepsilon\), and \(B=O(1)\). Let \(c=c(k)>0\), \(\eta_{1}\), and \(\delta_{0}\) be the corresponding quantities from Proposition 2.24. If \(\eta>0\) is selected sufficiently small depending on \(\eta_{1},k\), and \(\varepsilon\) (recall that \(\eta_{1}\) depends on \(k\) and \(\varepsilon_{1}\), and \(\varepsilon_{1}\) in turn depends on \(k\) and \(\varepsilon\)), then the hypotheses of Proposition 2.24 are satisfied. We conclude that there is a rectangle \(\tilde{R}\in\tilde{\mathcal{R}}_{S}\); a scale \(\tau_{1}\in[\tau,\tau^{c}]\); and a \((\tau_{1};k+1)\) rectangle \(\tilde{R}_{1}\supset\tilde{R}\) with
\[\#\{\tilde{f}\in\tilde{F}_{S}(\tilde{R})\colon\tilde{f}\sim\tilde{R}_{1}\}\gtrsim\#\tilde{F}_{S}(\tilde{R})\gtrsim|\log\delta|^{-2}\delta^{\varepsilon_{1}+O(\eta/\varepsilon)}\mu. \tag{2.26}\]
Undoing the re-scaling, we have a curvilinear rectangle of dimensions \(\tau_{1}\rho\times\tau_{1}^{1/(k+1)}\rho^{1/k}=\tau_{1}\rho\times\tau_{1}^{\frac{-1}{k(k+1)}}(\tau_{1}\rho)^{1/k}\); i.e. we have a \((\tau_{1}\rho;k;\tau_{1}^{\frac{-1}{k(k+1)}})\) tangency rectangle \(R_{1}\supset R\), with
\[\#\{f\in F_{S}(R)\colon f\sim R_{1}\}\gtrsim|\log\delta|^{-2}\delta^{\varepsilon_{1}+O(\eta/\varepsilon)}\mu. \tag{2.27}\]
Finally, define \(\rho_{1}=\tau_{1}\rho\) and define \(T=\tau_{1}^{\frac{-1}{k(k+1)}}\geq\tau^{\frac{-c}{k(k+1)}}\geq\delta^{\frac{-c\varepsilon}{k+1}}\). Since the rectangles in \(\mathcal{R}\) are \(\mu\)-rich and \(\varepsilon\)-robustly broad with error \(\delta^{-\eta}\), by (2.1) we have
\[\#\{f\in F_{S}(R)\colon f\sim R_{1}\}\leq\delta^{-\eta}T^{-\varepsilon}\#F(R)\lesssim\delta^{-\eta+\frac{c\varepsilon^{2}}{k+1}}\mu. \tag{2.28}\]
Comparing (2.27) and (2.28), we obtain a contradiction provided we select \(\varepsilon_{1}\) sufficiently small depending on \(\varepsilon\) and \(c\) (recall that \(c\) in turn depends on \(k\)), and provided \(\delta>0\) is sufficiently small.
This contradiction shows that (2.25) cannot hold. This completes the proof of Proposition 2.7, except that we still need to prove Proposition 2.24. This will be done in the next section.
## 3 Tangencies inside a semi-algebraic set of small volume
In this section we will prove Proposition 2.24. We begin by establishing a decomposition theorem for semi-algebraic sets with small volume.
### Covering semi-algebraic sets with thin neighborhoods of Lipschitz graphs
In this section, we will show that a semi-algebraic set \(W\subset[0,1]^{n+1}\) with small volume can be covered by a small number of thin neighborhoods of Lipschitz graphs, plus a set that has small projection to \([0,1]^{n}\). The precise statement is as follows. Throughout this section, all implicit constants may depend on the dimension \(n\). We write \(A=\operatorname{Poly}(B)\) to mean \(A\leq CB^{C}\), where the constant \(C\) may depend on the ambient dimension \(n\).
**Proposition 3.1**.: _Let \(S\subset[0,1]^{n+1}\) be a semi-algebraic set of complexity at most \(W\). Let \(0<u\leq 1\) and \(L\geq 1\). Then we can cover \(S\) by a collection of sets,_
\[S\subset\bigcup_{i=0}^{N}S_{i}, \tag{3.1}\]
_with the following properties:_
* \(N=\operatorname{Poly}(W)\).
* \(S_{0}=T_{0}\times[0,1]\)_, where_ \(T_{0}\subset[0,1]^{n}\) _is semi-algebraic with complexity_ \(\operatorname{Poly}(W)\)_, and_ \[|T_{0}|\leq\operatorname{Poly}(W)\big{(}L^{-1}+|S|/u\big{)}.\] (3.2)
* _For each index_ \(i\geq 1\)_,_ \(S_{i}\) _is of the form_ \[S_{i}=\{(\underline{x},x_{n+1})\colon\underline{x}\in T_{i},\ f_{i}( \underline{x})<x_{n+1}<f_{i}(\underline{x})+u\},\] _where_ \(T_{i}\subset[0,1]^{n}\) _is semi-algebraic with complexity_ \(\operatorname{Poly}(W)\)_, and_ \(f_{i}\colon[0,1]^{n}\to\mathbb{R}\) _is_ \(L\)_-Lipschitz._
One of the main tools we will use is the cylindrical algebraic decomposition. This is a technique from real algebraic geometry that was originally developed in the context of quantifier elimination. The cylindrical algebraic decomposition decomposes an arbitrary semi-algebraic set into simpler sets, which are called cells\({}^{1}\). See Chapter 5 from [3] for an introduction to the topic. We will require a version of this result where both the number of cells and their complexity are controlled by the complexity of the input.
Footnote 1: these are not to be confused with the connected components of \(\mathbb{R}^{k+1}\backslash Z(Q)\) from Section 2.4, which are also called cells.
**Theorem 3.2** (Effective cylindrical algebraic decomposition).: _Let \(S\subset\mathbb{R}^{n+1}\) be a semi-algebraic set of complexity at most \(W\) (see Definition 2.23). Then there exists a decomposition \(S=\bigsqcup_{i=0}^{N}S_{i}\) with \(N=\operatorname{Poly}(W)\), where the sets \(S_{i}\) have the following properties._
* _Each_ \(S_{i}\) _is semi-algebraic of complexity_ \(\operatorname{Poly}(W)\)_._
* _The projection of_ \(S_{0}\) _to the first_ \(n\) _coordinates is a semi-algebraic set of measure 0 and complexity_ \(\operatorname{Poly}(W)\)_._
* _For each_ \(i=1,\ldots,N\)_, the set_ \(S_{i}\) _is of one of the following two forms:_ \[S_{i}=\big{\{}(\underline{x},x_{n+1})\colon\underline{x}\in T_{i},\ f_{i}( \underline{x})<x_{n+1}<g_{i}(\underline{x})\big{\}},\] (3.3) _or_ \[S_{i}=\big{\{}(\underline{x},x_{n+1})\colon\underline{x}\in T_{i},\ x_{n+1}=f_{i}( \underline{x})\big{\}}.\] (3.4) _In the above,_ \(T_{i}\subset\mathbb{R}^{n}\) _is a semi-algebraic set of complexity_ \(\operatorname{Poly}(W)\)_;_ \(f_{i}\colon T_{i}\to\mathbb{R}\) _is smooth; and there is a nonzero polynomial_ \(F_{i}\colon\mathbb{R}^{n+1}\to\mathbb{R}\) _of degree_ \(\operatorname{Poly}(W)\) _so that_ \[F_{i}(\underline{x},f_{i}(\underline{x}))=0\quad\text{and}\quad\partial_{x_{n+ 1}}F_{i}(\underline{x},f_{i}(\underline{x}))\neq 0\qquad\text{for all }\underline{x}\in T_{i}.\] _The function_ \(g_{i}\colon T_{i}\to\mathbb{R}\) _satisfies the analogous conditions._
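As a toy illustration of Theorem 3.2 (it plays no role in the argument), let \(S=\{(x,y)\in\mathbb{R}^{2}\colon x^{2}+y^{2}<1\}\) be the open unit disk, so \(n=1\). Then \(S\) admits a decomposition consisting of the single cell of the form (3.3)
\[S_{1}=\{(x,y)\colon x\in T_{1},\ f_{1}(x)<y<g_{1}(x)\},\qquad T_{1}=(-1,1),\quad f_{1}(x)=-\sqrt{1-x^{2}},\quad g_{1}(x)=\sqrt{1-x^{2}},\]
together with \(S_{0}=\emptyset\). Here we may take \(F_{1}(x,y)=G_{1}(x,y)=x^{2}+y^{2}-1\), and \(\partial_{y}F_{1}=2y\neq 0\) on the graphs of \(f_{1}\) and \(g_{1}\) above \(T_{1}\).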
We now begin the process of proving Proposition 3.1. To start, we will study structural properties of the cells arising from the cylindrical algebraic decomposition.
**Lemma 3.3**.: _Let \(L>0\) and let \(S\subset[0,1]^{n+1}\) be a set of the form_
\[\{(\underline{x},x_{n+1})\in[0,1]^{n+1}\colon\underline{x}\in T,\ f( \underline{x})<x_{n+1}<g(\underline{x})\},\]
_where_
* \(T\subset[0,1]^{n}\) _is semi-algebraic of complexity at most_ \(W\)_._
* \(f\colon T\to[0,1]\) _is differentiable._
* _There is a nonzero polynomial_ \(F\) _of degree at most_ \(W\) _so that_ \(F(\underline{x},f(\underline{x}))=0\) _and_ \(\partial_{x_{n+1}}F(\underline{x},f(\underline{x}))\neq 0\) _for all_ \(\underline{x}\in T\)_._
* \(|\nabla f(\underline{x})|\geq L\) _for all_ \(\underline{x}\in T\)
_Then_
\[|T|\lesssim n^{2}W^{2}/L. \tag{3.5}\]
Proof.: Write \(T=\bigcup_{i=1}^{n}T_{i}\), where \(\big{|}\frac{d}{dx_{i}}f\big{|}\geq L/n\) on \(T_{i}\). Each set \(T_{i}\) has complexity at most \(2W,\) since by the implicit function theorem we have
\[T_{i}=\Big{\{}\underline{x}\colon\underline{x}\in T,\ \Big{|}\frac{\partial_{x_{i}}F( \underline{x})}{\partial_{x_{n+1}}F(\underline{x})}\Big{|}\geq L/n\Big{\}}= \Big{\{}\underline{x}\colon\underline{x}\in T,\ \big{(}\partial_{x_{i}}F( \underline{x})\big{)}^{2}\geq\frac{L^{2}}{n^{2}}\big{(}\partial_{x_{n+1}}F( \underline{x})\big{)}^{2}\Big{\}}.\]
Fix an index \(i\), and let \(\ell\subset\mathbb{R}^{n}\) be a line pointing in the \(x_{i}\) direction, with \(|\ell\cap T_{i}|\geq|T_{i}|\) (here we use the fact that \(T_{i}\subset[0,1]^{n}\); the \(|\cdot|\) on the LHS denotes one-dimensional Lebesgue measure, while the \(|\cdot|\) on the RHS denotes \(n\)-dimensional Lebesgue measure). Since \(T_{i}\) has complexity at most \(2W\), \(\ell\cap T_{i}\) contains at most \(4W^{2}\) connected components. Let \(\ell^{\prime}\subset\ell\cap T_{i}\) be an interval of length \(\geq|T_{i}|/(4W^{2})\). But since \(|\frac{d}{dx_{i}}f|\geq L/n\) on \(T_{i}\), we have \(|f(a)-f(b)|\geq L|T_{i}|/(4nW^{2})\), where \(a\) and \(b\) are the endpoints of \(\ell^{\prime}\). On the other hand, \(f(a),f(b)\in[0,1]\). We conclude that
\[\frac{L|T_{i}|}{4nW^{2}}\leq|f(a)-f(b)|\leq 1.\]
Re-arranging we have \(|T_{i}|\leq 4nW^{2}/L\). Summing over \(i\) we obtain (3.5).
**Lemma 3.4**.: _Let \(S\subset[0,1]^{n+1}\) be a set of the form_
\[\{(\underline{x},x_{n+1})\in[0,1]^{n+1}\colon\underline{x}\in T,\ f( \underline{x})<x_{n+1}<g(\underline{x})\},\]
_where_
* \(T\subset[0,1]^{n}\) _is semi-algebraic of complexity at most_ \(W\)_._
* \(f\colon T\to[0,1]\) _is smooth._
* _there is a polynomial_ \(F\) _of degree at most_ \(W\) _so that_ \(F(\underline{x},f(\underline{x}))=0\) _and_ \(\partial_{x_{n+1}}F(\underline{x},f(\underline{x}))\neq 0\) _for all_ \(\underline{x}\in T\)_._
_Let \(L>0\). Then we can write \(T=T^{\prime}\cup T^{\prime\prime}\), where_
* \(T^{\prime}\) _and_ \(T^{\prime\prime}\) _are semi-algebraic of complexity_ \(O_{n}(W)\)_._
* \(|T^{\prime}|=O_{n}(W^{2}/L)\)_._
* \(f\) _is differentiable on_ \(T^{\prime\prime}\)_, and_ \(|\nabla f|\leq L\) _on_ \(T^{\prime\prime}\)_._
Proof.: Let
\[T^{\prime} =\Big{\{}\underline{x}\in T\colon\sum_{i=1}^{n}\Big{(}\frac{ \partial_{x_{i}}F}{\partial_{x_{n+1}}F}\Big{)}^{2}\geq L^{2}\Big{\}},\] \[T^{\prime\prime} =T\backslash T^{\prime}.\]
By construction, \(f\) is differentiable and satisfies \(|\nabla f|\leq L\) on \(T^{\prime\prime}\). By Lemma 3.3 we have \(|T^{\prime}|=O_{n}(W^{2}/L)\).
We are now ready to prove Proposition 3.1.
Proof of Proposition 3.1.: Apply Theorem 3.2 to \(S\), and let \(A_{1},\dots,A_{N}\) be the corresponding cells. For each cell \(A_{i}\) of the form
\[A_{i}=\{(\underline{x},x_{n+1})\colon\underline{x}\in B_{i},f_{i}(\underline{ x})<x_{n+1}<g_{i}(\underline{x})\},\]
apply Lemma 3.4 to decompose \(B_{i}=B_{i}^{\prime}\cup B_{i}^{\prime\prime}\), with \(L\) as above; we have that \(|B_{i}^{\prime}|\leq L^{-1}\operatorname{Poly}(W)\), and hence
\[\Big{|}\bigcup_{i=1}^{N}B_{i}^{\prime}\Big{|}\leq L^{-1}\operatorname{Poly}(W). \tag{3.6}\]
For each index \(i\), write \(B^{\prime\prime}_{i}=T_{i}\sqcup C_{i}\), where
\[T_{i}=\{\underline{x}\in B^{\prime\prime}_{i}\colon|g_{i}(\underline{x})-f_{i}(\underline{x})|\leq u\}.\]
We have \(u|C_{i}|\leq|A_{i}|\leq|S|\), and hence
\[\sum|C_{i}|\leq\mathrm{Poly}(W)|S|/u.\]
Thus if we define
\[T_{0}=\bigcup C_{i}\ \cup\ \bigcup B^{\prime}_{i},\]
then \(T_{0}\) satisfies (3.2). Finally, we have
\[\{(\underline{x},x_{n+1})\colon\underline{x}\in T_{i},\ f_{i}(\underline{x}) <x_{n+1}<g_{i}(\underline{x})\}\subset\{(\underline{x},x_{n+1})\colon \underline{x}\in T_{i},\ f_{i}(\underline{x})<x_{n+1}<f_{i}(\underline{x})+u\},\]
and hence (3.1) holds. We have now proved Proposition 3.1, except that our functions \(f_{i}\) are only defined on \(T_{i}\), rather than \([0,1]^{n}\). Since \(|\nabla f_{i}|<L\) on \(T_{i}\), each \(f_{i}\) is \(L\)-Lipschitz on \(T_{i}\). By the Kirszbraun-Valentine Lipschitz extension theorem, we can extend each \(f_{i}\) to a \(L\)-Lipschitz function on \([0,1]^{n}\).
### Jet lifts in thin neighborhoods of Lipschitz graphs
Our goal in this section is to prove the following result. In what follows, recall that the jet lift \(\mathcal{J}_{j}f\) was given in Definition 2.12.
**Proposition 3.5**.: _For each \(d\geq 0,\ \varepsilon>0\), there are constants \(A=A(d)\) and \(B=B(d,\varepsilon)\) so that the following holds. Let \(D,W\geq 1\) and let \(\rho>0\). Let \(S\subset[0,1]^{d+2}\) be a semi-algebraic set of complexity at most \(W\) and volume at most \(\rho\). Then for each polynomial \(f\) of degree at most \(D\), there is a "bad" set \(B_{f}\subset[0,1]\), which is a union of at most \(A(DW)^{A}\) intervals and has measure at most \(A(DW)^{A}\rho^{1/B}\), so that the following holds._
_Let \(f,g\) be polynomials of degree at most \(D\). Suppose there is a point \(t_{0}\in[0,1]\backslash(B_{f}\cup B_{g})\) that satisfies_
\[(t_{0},\mathcal{J}_{d}f(t_{0}))\in S,\quad(t_{0},\mathcal{J}_{d }g(t_{0}))\in S, \tag{3.7}\] \[|\mathcal{J}_{d}f(t_{0})-\mathcal{J}_{d}g(t_{0})|\leq\rho.\]
_Then there is a number \(\tau\in[\rho,\rho^{1/B}]\) so that_
\[|f(t)-g(t)|\leq A\tau,\qquad t\in[t_{0}-\tau^{\varepsilon},\ t_{0}+\tau^{ \varepsilon}]. \tag{3.8}\]
_Furthermore, the value of \(\tau\) can be selected from a set \(X\subset[\rho,\rho^{1/B}]\) (the set \(X\) depends only on \(d,\varepsilon,\) and \(\rho\)) that has cardinality \(d+1\)._
Gronwall's inequality will play an important role in the proof of Proposition 3.5. We will use the following formulation. See e.g. [22] for a discussion and proof of this version.
**Theorem 3.6** (Gronwall's inequality).: _Let \(I\) be an interval, let \(F,G\colon I\times\mathbb{R}^{d}\to\mathbb{R}\), let \(t_{0}\in I\), let \(\underline{\tilde{x}},\underline{\tilde{y}}\in\mathbb{R}^{d}\), and let \(f,g\colon I\to\mathbb{R}\) satisfy the initial value problems_
\[f^{(d)}(t) =F\big{(}t,\mathcal{J}_{d-1}f(t)\big{)},\quad\mathcal{J}_{d-1}f(t _{0})=\underline{\tilde{x}}, \tag{3.9}\] \[g^{(d)}(t) =G\big{(}t,\mathcal{J}_{d-1}g(t)\big{)},\quad\mathcal{J}_{d-1}g(t _{0})=\underline{\tilde{y}}.\]
_Suppose that for \(t\) fixed, \(F\) is \(L\)-Lipschitz in \(\underline{x}\), i.e._
\[|F(t,\underline{x})-F(t,\underline{x}^{\prime})|\leq L|\underline{x}-\underline{ x}^{\prime}|,\qquad t\in I,\ \underline{x},\underline{x}^{\prime}\in\mathbb{R}^{d}.\]
_Let \(\rho>0\). Suppose that \(|\underline{\tilde{x}}-\underline{\tilde{y}}|\leq\rho\), and_
\[|F(t,\underline{x})-G(t,\underline{x})|\leq\rho,\qquad t\in I,\ \underline{x}\in\mathbb{R}^{d}. \tag{3.10}\]
_Then_
\[|f(t)-g(t)|\lesssim_{d}e^{L|I|}\rho,\qquad t\in I.\]
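To see that the factor \(e^{L|I|}\) in Theorem 3.6 cannot in general be removed, consider the illustrative case \(d=1\), \(F(t,x)=G(t,x)=Lx\) (so that (3.10) holds for every \(\rho\geq 0\)), with \(t_{0}\) the left endpoint of \(I\) and initial values satisfying \(\tilde{\underline{x}}-\tilde{\underline{y}}=\rho\). Then \(f(t)-g(t)=\rho e^{L(t-t_{0})}\), and hence \(\sup_{t\in I}|f(t)-g(t)|=\rho e^{L|I|}\).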
The following result is a variant of Theorem 3.6. Instead of requiring that \(f\) and \(g\) satisfy "nearby" initial value problems, in the sense of (3.10), we require that \(f\) satisfies the initial value problem \(f^{(d)}=F(t,\mathcal{J}_{d-1}f)\), and \(g\) almost satisfies this same initial value problem, in the sense that \(|g^{(d)}-F(t,\mathcal{J}_{d-1}g)|\) is small. The precise statement is as follows.
**Lemma 3.7**.:
* _Let_ \(d\geq 1\)_, let_ \(I\) _be an interval, and let_ \(g\in C^{d}(I)\)_._
* _Let_ \(F\colon I\times\mathbb{R}^{d}\to\mathbb{R}\) _be_ \(L\)_-Lipschitz, let_ \(\rho>0\)_, and suppose that_ \[\big{|}g^{(d)}(t)-F\big{(}t,\mathcal{J}_{d-1}g(t)\big{)}\big{|}\leq\rho,\qquad t \in I.\] (3.11)
* _Let_ \(t_{0}\in I\)_, let_ \(\tilde{\underline{y}}=\mathcal{J}_{d-1}g(t_{0})\)_, and let_ \(\tilde{\underline{x}}\in\mathbb{R}^{d}\)_, with_ \(|\tilde{\underline{x}}-\tilde{\underline{y}}|\leq\rho\)_._
* _Let_ \(f\colon I\to\mathbb{R}\) _be a solution to the initial value problem_ \[f^{(d)}(t)=F\big{(}t,\mathcal{J}_{d-1}f(t)\big{)},\qquad\mathcal{J}_{d-1}f(t_ {0})=\tilde{\underline{x}}.\] (3.12)
_Then_
\[|g(t)-f(t)|\lesssim_{d}e^{L|I|}\rho,\quad t\in I. \tag{3.13}\]
Proof.: Define
\[G(t,\underline{x}) =F(t,\underline{x})+e(t),\] \[e(t) =g^{(d)}(t)-F\big{(}t,\mathcal{J}_{d-1}g(t)\big{)}.\]
The quantity \(e(t)\) is intended to measure the error between the initial value problems \(F\) and \(G\). Inequality (3.11) says that \(|e(t)|\leq\rho\) for \(t\in I\), and thus
\[\big{|}G(t,\underline{x})-F(t,\underline{x})\big{|}\leq\rho,\qquad t\in I,\ \underline{x}\in\mathbb{R}^{d}. \tag{3.14}\]
But \(g\colon I\to\mathbb{R}\) is the solution to the initial value problem
\[g^{(d)}(t)=G\big{(}t,\mathcal{J}_{d-1}g(t)\big{)},\qquad\mathcal{J}_{d-1}g(t_ {0})=\tilde{\underline{y}}. \tag{3.15}\]
Thus by Theorem 3.6, we have
\[|f(t)-g(t)|\lesssim_{d}e^{L|I|}\rho\quad\text{for all $t\in I$}. \tag{3.16}\]
**Lemma 3.8**.:
* _Let_ \(d\geq 0\)_,_ \(L\geq 1\)_, let_ \(I\) _be an interval of length at most_ \(L^{-1}\)_, and let_ \(f,g\in C^{d}(I)\)_._
* _Let_ \(F\colon[0,1]\times\mathbb{R}^{d}\to\mathbb{R}\) _be_ \(L\)_-Lipschitz, let_ \(\rho>0\)_, and suppose that_ \[\begin{split}&\big{|}f^{(d)}(t)-F\big{(}t,\mathcal{J}_{d-1}f(t) \big{)}\big{|}\leq\rho,\qquad t\in I,\\ &\big{|}g^{(d)}(t)-F\big{(}t,\mathcal{J}_{d-1}g(t)\big{)}\big{|} \leq\rho,\qquad t\in I.\end{split}\] (3.17)
* _Suppose there is_ \(t_{0}\in I\) _so that_ \[|\mathcal{J}_{d}f(t_{0})-\mathcal{J}_{d}g(t_{0})|\leq\rho.\] (3.18)
_Then_
\[|f(t)-g(t)|\lesssim_{d}\rho,\qquad t\in I. \tag{3.19}\]
Proof.: If \(d=0\) then (3.19) follows from (3.17) and the triangle inequality.
Suppose instead that \(d\geq 1\). Define
\[\underline{z}=\frac{1}{2}\Big{[}\mathcal{J}_{d-1}f(t_{0})+\mathcal{J}_{d-1}g(t_{ 0})\Big{]}.\]
By (3.18), we have
\[\big{|}\underline{z}-\mathcal{J}_{d-1}f(t_{0})\big{|}\leq\rho/2.\]
We now apply Lemma 3.7. Let \(h\colon I\to\mathbb{R}\) be the solution to the initial value problem
\[h^{(d)}(t)=F\big{(}t,\mathcal{J}_{d-1}h(t)\big{)},\qquad\mathcal{J}_{d-1}h(t_{ 0})=\underline{z}.\]
By Lemma 3.7, we have \(|h(t)-f(t)|\lesssim_{d}\rho\) for \(t\in I\). But note that the construction of \(h\) is symmetric in the functions \(f\) and \(g\), and thus we also have \(|h(t)-g(t)|\lesssim_{d}\rho\) for \(t\in I\). The conclusion (3.19) now follows from the triangle inequality.
With these tools, we are now ready to prove the main result in this section.
Proof of Proposition 3.5.: Without loss of generality, we may suppose that \(\varepsilon\leq 1\); otherwise we can replace \(\varepsilon\) by \(1\) and the conclusion remains valid. Let \(L_{0}=\rho^{-\varepsilon/2}\), and for each \(i=1,\ldots,d\), define \(L_{i}=L_{i-1}^{\varepsilon/2}\). For each index \(i\), define \(\rho_{i}=L_{i}^{-1/\varepsilon}\). We will select the quantity \(B(d,\varepsilon)\) sufficiently large so that \(\rho_{d}\leq\rho^{1/B}\). With \(B\) selected in this way, we have
\[\rho_{i}\in[\rho,\rho^{1/B}],\quad 0\leq i\leq d. \tag{3.20}\]
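For concreteness, unwinding the recursion gives
\[L_{i}=\rho^{-(\varepsilon/2)^{i+1}},\qquad\rho_{i}=L_{i}^{-1/\varepsilon}=\rho^{\varepsilon^{i}/2^{i+1}},\qquad 0\leq i\leq d.\]
Since \(\varepsilon\leq 1\), each exponent \(\varepsilon^{i}/2^{i+1}\) lies in \([1/B,1]\) once \(B\geq(2/\varepsilon)^{d+1}\), which gives (3.20); the same choice also guarantees \(L_{d}^{-1}=\rho^{(\varepsilon/2)^{d+1}}\leq\rho^{1/B}\), which will be used below when we estimate the measure of the bad sets \(B_{f}\). Any \(B=B(d,\varepsilon)\geq(2/\varepsilon)^{d+1}\) is thus an admissible choice.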
We will define an iterative decomposition of \(S\), which will take \(d\) steps; "\(L_{i}\)" will be the allowable Lipschitz constant, and \(\rho_{i}\) will be the allowable thickness at stage \(i\) of the construction.
For the first step, apply Proposition 3.1 to \(S\) with \(L_{0}\) in place of \(L\), and \(u=\rho_{0}\); we obtain sets \(S^{0}_{1},\ldots,S^{0}_{N_{0}}\); and a set \(S^{0}_{0}\); each set \(S^{0}_{i}\) is contained in the \(\rho_{0}\) neighborhood of a \(L_{0}\)-Lipschitz graph, and \(S^{0}_{0}\subset T^{0}_{0}\times[0,1]\), with \(|T^{0}_{0}|\lesssim L_{0}^{-1}+|S|\rho_{0}^{-1}\). Since \(|S|\leq\rho\) and \(\varepsilon\leq 1\), our choice of \(L_{0}\) and \(\rho_{0}\) ensures that \(|S|\rho_{0}^{-1}\leq L_{0}^{-1}\), and hence \(|T^{0}_{0}|\lesssim L_{0}^{-1}\).
For the \(i\)-th step of our decomposition, we apply Proposition 3.1 to \(T^{i-1}_{0}\) with \(L_{i}\) in place of \(L\), and \(u=\rho_{i}\); we obtain sets \(S^{i}_{1},\ldots,S^{i}_{N_{i}}\); and a set \(S^{i}_{0}\); each set \(S^{i}_{j}\) is contained in the \(\rho_{i}\) neighborhood of a \(L_{i}\)-Lipschitz graph, and \(S^{i}_{0}\subset T^{i}_{0}\times[0,1]\), with
\[|T^{i}_{0}|\lesssim L_{i}^{-1}+|T^{i-1}_{0}|\rho_{i}^{-1}\lesssim L_{i}^{-1}+ L_{i-1}^{-1}\rho_{i}^{-1}=L_{i}^{-1}+L_{i}^{-2/\varepsilon}L_{i}^{1/ \varepsilon}\leq 2L_{i}^{-1}, \tag{3.21}\]
where the first inequality is the conclusion of Proposition 3.1; the second inequality used (3.21) with \(i-1\) in place of \(i\); and the third equality used the definition of \(L_{i-1}\). Note that the (iterated) use of implicit constants in these inequalities is harmless, since the iteration only occurs \(d\) times.
After this process is complete, we have a covering of \(S\) of the form
\[S\subset\bigcup_{i=0}^{d}\bigcup_{j=1}^{N_{i}}\big{(}S^{i}_{j}\times[0,1]^{i} \big{)}, \tag{3.22}\]
where \(S^{i}_{j}\subset[0,1]^{d+2-i}\) is the vertical \(\rho_{i}\) neighborhood of a \(L_{i}\)-Lipschitz graph (we denote the associated Lipschitz function by \(G^{i}_{j}\)) above a set \(T^{i}_{j}\subset[0,1]^{d+1-i}\).
We would like to claim that if \(f\) and \(g\) are two polynomials that satisfy (3.7) at some point \(t_{0}\in[0,1]\), then the corresponding points \(\big{(}t_{0},\mathcal{J}_{d}f(t_{0})\big{)}\) and \(\big{(}t_{0},\mathcal{J}_{d}g(t_{0})\big{)}\) must be contained in a common set of the form \(S^{i}_{j}\times[0,1]^{i}\) from the decomposition (3.22). Unfortunately this need not be true, since even though (3.7) guarantees that the points \(\mathcal{J}_{d}f(t_{0})\) and \(\mathcal{J}_{d}g(t_{0})\) are nearby, they might nonetheless be contained in different sets from the decomposition (3.22).
To handle this annoyance, we will expand each set \(S^{i}_{j}\) slightly: define \((S^{i}_{j})^{*}\) to be the vertical \(\rho_{i}+\rho\) neighborhood of the Lipschitz graph \(G^{i}_{j}\) above the \(\rho\)-neighborhood of \(T^{i}_{j}\). Then, if \(f\) and \(g\) satisfy (3.7) at \(t_{0}\), and if \(\big{(}t_{0},\mathcal{J}_{d}f(t_{0})\big{)}\in S^{i}_{j}\times[0,1]^{i}\), then \(\big{(}t_{0},\mathcal{J}_{d}g(t_{0})\big{)}\in(S^{i}_{j})^{*}\times[0,1]^{i}\). We will see below why this is useful.
Our next task is to define the "bad" set \(B_{f}\) from the statement of Proposition 3.5. Let \(f\) be a polynomial of degree \(\leq D\). Let
\[J_{f}=\big{\{}t\in[0,1]\colon\big{(}t,\mathcal{J}_{d}f(t)\big{)}\in S\big{\}}.\]
\(J_{f}\) is semi-algebraic of complexity \((DW)^{O(1)}\), and hence is a union of \(O(DW)^{O(1)}\) intervals. We will further sub-divide these intervals into a collection of intervals \(\mathcal{I}_{f}\) with the following properties. First, \(J_{f}=\bigcup_{J\in\mathcal{I}_{f}}J\). Second, for each \(J\in\mathcal{I}_{f}\) and each set of the form \(X=S_{j}^{i}\times[0,1]^{i}\) from the decomposition (3.22), we either have \(\big{(}t,\mathcal{J}_{d}f(t)\big{)}\in X\) for all \(t\in J\), or \(\big{(}t,\mathcal{J}_{d}f(t)\big{)}\not\in X\) for all \(t\in J\). Third, the same holds true for each set of the form \(X=(S_{j}^{i})^{*}\times[0,1]^{i}\), where \((S_{j}^{i})^{*}\) is the expansion of the set \(S_{j}^{i}\) described in the previous paragraph.
The set \(\mathcal{I}_{f}\) has cardinality \(O(DW)^{O(1)}\). For each closed interval \(J=[a,b]\in\mathcal{I}_{f}\), if \(|J|\leq L_{d}^{-1}\) then define \(\mathrm{Ends}(J)=J\). If \(|J|>L_{d}^{-1}\), then define \(\mathrm{Ends}(J)=[a,a+L_{d}^{-1}]\cup[b-L_{d}^{-1},b]\). Define \(\mathrm{Ends}(J)\) analogously for intervals of the form \((a,b]\), \([a,b)\) and \((a,b)\).
Define
\[B_{f}=\bigcup_{J\in\mathcal{I}_{f}}\mathrm{Ends}(J).\]
If we define the quantity \(A=A(d)\) appropriately, then \(B_{f}\) is a union of at most \(A(DW)^{A}\) intervals, and has measure at most \(A(DW)^{A}L_{d}^{-1}\leq A(DW)^{A}\rho^{1/B}\).
Our final task is to show that \(B_{f}\) satisfies the conclusion of Proposition 3.5. Let \(f,g\) be polynomials of degree at most \(D\), and suppose there is \(t_{0}\in[0,1]\backslash(B_{f}\cup B_{g})\) that satisfies (3.7). By (3.22), there is an index \(i\) and a set \(S_{j}^{i}\times[0,1]^{i}\) that contains \(\big{(}t_{0},\mathcal{J}_{d}f(t_{0})\big{)}\). But (3.7) implies that the expanded set \((S_{j}^{i})^{*}\times[0,1]^{i}\) must contain both \(\big{(}t_{0},\mathcal{J}_{d}f(t_{0})\big{)}\) and \(\big{(}t_{0},\mathcal{J}_{d}g(t_{0})\big{)}\). Furthermore, since \(t_{0}\not\in B_{f}\cup B_{g}\), if we define \(I=[t_{0}-L_{i}^{-1},t_{0}+L_{i}^{-1}]\subset[t_{0}-L_{d}^{-1},t_{0}+L_{d}^{-1}]\), then
\[\big{(}t,\mathcal{J}_{d}f(t)\big{)}\in(S_{j}^{i})^{*}\times[0,1]^{i}\quad \text{and}\quad\big{(}t,\mathcal{J}_{d}g(t)\big{)}\in(S_{j}^{i})^{*}\times[0, 1]^{i},\quad t\in I. \tag{3.23}\]
Since \((S_{j}^{i})^{*}\) is contained in the vertical \(\rho_{i}+\rho\leq 2\rho_{i}\) (recall (3.20)) neighborhood of the \(L_{i}\)-Lipschitz function \(G_{j}^{i}\), (3.23) implies that
\[\begin{split}\big{|}f^{(d-i)}(t)-G\big{(}t,\mathcal{J}_{d-i-1}f(t)\big{)}\big{|}&\leq 2\rho_{i},\quad t\in I,\\ \big{|}g^{(d-i)}(t)-G\big{(}t,\mathcal{J}_{d-i-1}g(t)\big{)}\big{|}&\leq 2\rho_{i},\quad t\in I,\end{split} \tag{3.24}\]
where we abbreviate \(G=G_{j}^{i}\).
Apply Lemma 3.8 with \(d-i\) in place of \(d\), \(2\rho_{i}\) in place of \(\rho\), and \(L_{i}\) in place of \(L\): the function \(G\) from (3.24) has Lipschitz constant at most \(L_{i}\), and the interval \(I\) has length \(2L_{i}^{-1}\). The conclusion (3.19) of Lemma 3.8 says that
\[|f(t)-g(t)|\lesssim_{d}\rho_{i},\quad t\in I.\]
This is exactly conclusion (3.8), provided we select \(A=A(d)\) sufficiently large.
### Proof of Proposition 2.24
Let \(F=\bigcup_{R\in\mathcal{R}}F(R)\). Apply Proposition 3.5 to \(S\), with \(d=k-1\); \(\varepsilon=1/(k+1)\); and \(\rho=\max(\delta^{\varepsilon},C_{0}\delta^{1/k})\). If we choose the constant \(C_{0}=C_{0}(k)\) appropriately, then by Lemma 2.14 we can ensure that whenever \(f,g\in F\) are tangent to a common rectangle \(R=R(I)\) from \(\mathcal{R}\), there is a point \(t_{0}\in I\) with \(|\mathcal{J}_{k-1}f(t_{0})-\mathcal{J}_{k-1}g(t_{0})|\leq\rho\).
For each \(f\in F\), we obtain a bad set \(B_{f}\), which is a union of at most \(A(DW)^{A}\) intervals and has measure at most \(A(DW)^{A}\rho^{1/B}\leq A(DW)^{A}\delta^{\varepsilon/B}\). Note that the quantities \(A\) and \(B\) only depend on \(k\) (\(B\) depends on the quantity "\(\varepsilon\)" from the statement of Proposition 3.5, but we have selected \(\varepsilon=1/(k+1)\)). Thus if \(\eta=\eta(\varepsilon,k)\) is selected sufficiently small, then by (2.18) we have
\[\#\{R=R(I)\in\mathcal{R}(f)\colon I\cap B_{f}\neq\emptyset\}\leq\frac{1}{2}\# \mathcal{R}(f). \tag{3.25}\]
Thus by pigeonholing, there exists a rectangle \(R=R(I)\in\mathcal{R}\) so that
\[\#\{f\in F(R)\colon I\cap B_{f}=\emptyset\}\geq\frac{1}{2}\#F(R). \tag{3.26}\]
Fix such a rectangle \(R\), and let \(F^{\prime}(R)\) denote the set on the LHS of (3.26). For each \(f,g\in F^{\prime}(R)\), there is a point \(t_{0}\in I\) and a scale \(\tau=\tau(f,g)\in X\) (recall that \(X\subset[\rho,\rho^{1/B}]\) has cardinality at most \(d+1=k\)) so that
\[|f(t)-g(t)|\leq A\tau,\quad t\in[t_{0}-\tau^{1/(k+1)},t_{0}+\tau^{1/(k+1)}] \subset I_{\tau}, \tag{3.27}\]
where \(I_{\tau}\) is the interval of length \(\tau^{1/(k+1)}\) with the same midpoint as \(I\).
By pigeonholing, we can select a choice of \(\tau\in X\); a choice of \(f\in F^{\prime}(R)\); and a set \(F^{\prime\prime}(R)\subset F^{\prime}(R)\) with \(\#F^{\prime\prime}(R)\geq k^{-1}(\#F^{\prime}(R))\), so that (3.27) holds for this choice of \(\tau\), for all \(g\in F^{\prime\prime}(R)\).
To finish the proof, define \(R_{1}=R^{f}(I_{\tau})\); then \(R_{1}\supset R\) is a \((\tau;k+1)\) rectangle; \(g\sim R_{1}\) for all \(g\in F^{\prime\prime}(R)\); and \(\#F^{\prime\prime}(R)\gtrsim\#F(R)\).
## 4 From rectangle tangencies to maximal functions
In this section we will use Theorem 2.6 to prove Proposition 4.4. Our proof will follow the outline sketched in Section 1.7. The next result helps us find the scale "\(\rho\)" discussed in the proof sketch (in what follows, this quantity will be called \(\delta^{\prime}\)). Our proofs will involve repeated use of dyadic pigeonholing, which will induce refinements by factors of \((\log 1/\delta)^{O(1)}\) (recall that \(O(1)\) denotes a quantity that may depend on \(k\)). To simplify notation, we will write \(A\lessapprox B\) if \(A\lesssim(\log 1/\delta)^{O(1)}B\).
**Proposition 4.1**.: _Let \(k\geq 2\) and let \(\varepsilon,\eta>0\). Then there exists \(\delta_{0}>0\) such that the following is true for all \(\delta\in(0,\delta_{0}]\). Let \(F\) be a set of functions, each of which has \(C^{k}\) norm at most \(1\). For each \(f\in F\), let \(Y(f)\subset f^{\delta}\) be a shading of \(f\). Suppose that for every \(x\in[0,1]^{2}\), every \(\rho\geq\delta\), every \(T\in[1,1/\rho]\) and every \((\rho;k;T)\) rectangle \(R\) containing \(x\), we have_
\[\#\{f\in F\colon x\in Y(f),\ f\sim R\}\leq\delta^{-\eta}T^{-\varepsilon}\#\{f \in F\colon x\in Y(f)\}. \tag{4.1}\]
_Then there exists a sub-shading \(Y^{\prime}(f)\subset Y(f)\) for each \(f\in F\); a scale \(\delta^{\prime}\in[\delta,1]\); a set \(\mathcal{R}\) of pairwise incomparable \((\delta^{\prime};k)\) rectangles; a number \(\mu\); and for each \(R\in\mathcal{R}\), a set \(F(R)\subset\{f\in F\colon f\sim R\}\) of size \(\mu\), such that the following holds_
(A) \[\int_{[0,1]^{2}}\Big{(}\sum_{f\in F}\chi_{Y(f)}\Big{)}^{\frac{k+1}{k}}\leq \delta^{-O(\eta)}\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{f\in F(R)}\chi_{Y^ {\prime}(f)}\Big{)}^{\frac{k+1}{k}}.\] (4.2)
(B) _Either_ \(\#\mathcal{R}=1\), or for every \(R\in\mathcal{R}\)_, every_ \(\rho\in[\delta^{\prime},1]\)_, every_ \(T\in[1,1/\rho]\)_, and every_ \((\rho;k;T)\) _rectangle_ \(R^{\prime}\supset R\)_, we have_ \[\#\{f\in F(R)\colon f\sim R^{\prime}\}\leq(\delta^{\prime})^{-2\eta}T^{- \varepsilon}\mu.\] (4.3)
(C) _For every_ \(R=R(I)\in\mathcal{R}\)_, let_ \(\tilde{F}(R)=\{f_{R}\colon f\in F(R)\}\) _and let_ \(\tilde{Y}^{\prime}(\tilde{f})=\phi^{R}(Y^{\prime}(f)\cap R)\) _(recall Definition_ 2.16_). Then for every point_ \(x\)_, every_ \(\rho\geq\delta/\delta^{\prime}\)_, every_ \(T\in[1,1/\rho],\) _and every_ \((\rho;k-1;T)\) _rectangle_ \(R^{\prime}\) _containing_ \(x\)_, we have_ \[\#\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(\tilde{f}),\ \tilde{f}\sim R^{\prime}\}\lessapprox_{\varepsilon}T^{-\eta/2}\#\{\tilde{f}\in \tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(\tilde{f})\}.\] (4.4)
In brief, Item (A) says that the \(L^{\frac{k+1}{k}}\) norm of \(\sum\chi_{Y(f)}\) can be broken into pieces localized to the rectangles in \(\mathcal{R}\). Item (B) says that the rectangles in \(\mathcal{R}\) are \(\mu\)-rich and \(\varepsilon\)-robustly broad with error at most \((\delta^{\prime})^{-2\eta}\). Item (C) says that for each \(R\in\mathcal{R}\), the functions in \(F(R)\) satisfy a (re-scaled) version of the hypotheses (4.1) from Proposition 4.1, except \(\varepsilon\) has been replaced by \(\eta/2\), and \(\delta^{-\eta}\) has been replaced by \(O_{\varepsilon}(\log 1/\delta)^{O(1)}\).
Proof.: **Step 1: A two-ends reduction.** Let \(A=O(1)\) be a constant to be specified below (the reason for introducing the constant \(A\) will be explained at the beginning of Step 2). For each \(x\in[0,1]^{2}\), let \(t(x)\) be the infimum of all numbers \(t\geq\delta\) such that there exists a \((t;k;A)\) rectangle \(R\) containing \(x\) with
\[\#\{f\colon x\in Y(f),f\sim R\}\geq t^{\eta/2}\#\{f\colon x\in Y(f)\}. \tag{4.5}\]
This set of numbers is non-empty, since it contains \(t=1\), and it is bounded below by \(\delta\). For each \(x\in[0,1]^{2}\) there exists a \((t(x);k;A)\) rectangle \(R\) containing \(x\) that satisfies a variant of (4.5) where the RHS has been weakened by a factor of \(2\). Denote this rectangle by \(R(x)\).
After dyadic pigeonholing, we can select a number \(t\in[\delta,1]\), an integer \(\nu\geq 1\), and a sub-shading \(Y_{1}(f)\subset Y(f)\) for each \(f\in F\) such that the following holds
* For each \(x\in\bigcup_{f}Y_{1}(f)\), we have \(\nu\leq\#\{f\colon x\in Y_{1}(f)\}<2\nu\); \(\#\{f\colon x\in Y(f)\}\leq\delta^{-\eta/2}\nu\); and \(t\leq t(x)<2t\).
* If \(f\in F\) and \(x\in Y_{1}(f)\), then \(f\sim R(x)\).
* \[\int_{[0,1]^{2}}\Big{(}\sum_{f\in F}\chi_{Y(f)}\Big{)}^{\frac{k+1}{k}}\lessapprox\delta^{-\frac{\eta}{2}(\frac{k+1}{k})}\int_{[0,1]^{2}}\Big{(}\sum_{f\in F}\chi_{Y_{1}(f)}\Big{)}^{\frac{k+1}{k}}.\] (4.6)
* For each \(x\in[0,1]^{2}\), each \(\rho\in[\delta,t]\), and each \((\rho;k)\) rectangle \(R\) containing \(x\), we have \[\#\{f\in F\colon x\in Y_{1}(f),\ f\sim R\}\leq 2\big{(}\frac{\rho}{t}\big{)}^{ \eta/2}\#\{f\in F\colon x\in Y_{1}(f)\}.\] (4.7)
By (4.1), for every \(x\in\bigcup_{f}Y_{1}(f)\), every \(\rho\geq\delta\), every \(T\in[1,1/\rho]\), and every \((\rho;k;T)\) rectangle \(R\) containing \(x\), we have
\[\#\{f\in F\colon x\in Y_{1}(f),f\sim R\} \leq\#\{f\in F\colon\ x\in Y(f),\ f\sim R\} \tag{4.8}\] \[\leq\delta^{-\eta}T^{-\varepsilon}\#\{f\in F\colon x\in Y(f)\} \leq 2\delta^{-\frac{3}{2}\eta}T^{-\varepsilon}\nu.\]
**Step 2: Clustering into rectangles.** Our goal in this step is to find a set of rectangles \(\mathcal{R}_{0}\) so that Item (A) is satisfied. The idea is as follows: We choose \(\delta^{\prime}\sim t\) and select a (maximal) set of pairwise incomparable \((\delta^{\prime};k)\) rectangles \(\mathcal{R}_{0}\). For each point \(x\in\mathbb{R}^{2}\), the rectangle \(R(x)\) from Step 1 will be comparable to some rectangle \(R\in\mathcal{R}_{0}\). If \(x\in Y_{1}(f)\), then \(f\sim R(x)\) and thus (one might hope!) \(f\sim R\). Thus we would have the pointwise inequality
\[\Big{(}\sum_{f\in F}\chi_{Y_{1}(f)}(x)\Big{)}^{p}\lesssim\Big{(}\sum_{ \begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{1}(f)}(x)\Big{)}^{p}, \tag{4.9}\]
and (4.2) would follow. The only problem with the above argument is that if \(f\sim R(x)\) and \(R(x)\sim R\), it is almost true that \(f\sim R\), but not quite--we only have that \(f\) is tangent to a slight thickening of \(R\). It is to deal with this technical annoyance that we introduced the number \(A=O(1)\) above.
Let \(\delta^{\prime}=(At)^{1/k}\), i.e. a \((\delta^{\prime};k)\) rectangle can be thought of as a \((t;k;A)\) rectangle that has been thickened by a (multiplicative) factor of \(A^{1/k}\) in the vertical direction. If the constant \(A=O(1)\) is selected appropriately, then we can find a set \(\mathcal{R}_{0}\) of \((\delta^{\prime};k)\) rectangles with the following properties: (i) for each \(R\in\mathcal{R}_{0}\), at most \(O(1)\) rectangles from \(\mathcal{R}_{0}\) are comparable to \(R\). (ii) Let \(x\in\mathbb{R}^{2}\), and let \(R(x)\) be a \((t;k;A)\) rectangle from Step 1. Write \(R(x)=R^{g}(I)\), and let \(R^{\prime}(x)\supset R(x)\) be the \((\delta^{\prime};k)\) rectangle obtained by taking the vertical \(\delta^{\prime}\) neighborhood of \(g\) above \(I\) (i.e. \(R^{\prime}(x)\) is the rectangle obtained by thickening \(R(x)\) in the vertical direction). Let \(f\in F\) with \(x\in Y_{1}(f)\), and hence \(f\sim R(x)\). Then there exists a rectangle \(R_{0}\in\mathcal{R}_{0}\) that is comparable to \(R^{\prime}(x)\), and satisfies \(f\sim R_{0}\).
Item (i) implies that for each \(f\in F\), the sets \(\{f^{\delta}\cap R\colon R\in\mathcal{R}_{0},\ f\sim R\}\) are pairwise \(O(1)\)-overlapping. Item (ii) says that (4.9) holds if we replace the term on the RHS by a sum over the \(O(1)\) rectangles in \(\mathcal{R}_{0}\) that are comparable to \(R^{\prime}(x)\). Thus we have
\[\int_{[0,1]^{2}}\Big{(}\sum_{f\in F}\chi_{Y_{1}(f)}\Big{)}^{\frac{k+1}{k}} \lesssim\sum_{R\in\mathcal{R}_{0}}\int_{R}\Big{(}\sum_{\begin{subarray}{c}f \in F\\ f\sim R\end{subarray}}\chi_{Y_{1}(f)}\Big{)}^{\frac{k+1}{k}}\sim\sum_{R \in\mathcal{R}_{0}}\nu^{\frac{k+1}{k}}\Big{|}R\cap\bigcup_{\begin{subarray}{c} f\in F\\ f\sim R\end{subarray}}Y_{1}(f)\Big{|}. \tag{4.10}\]
After refining \(\mathcal{R}_{0}\) by a \(O(1)\) factor, we can ensure that for each \(f\in F\), the sets \(\{f^{\delta}\cap R\colon R\in\mathcal{R}_{0},\ f\sim R\}\) are disjoint, and (4.10) remains true (with a new implicit constant) for this refined collection \(\mathcal{R}_{0}\).
After dyadic pigeonholing, we can select numbers \(\lambda>0\) and \(\mu\geq 1\), and a set \(\mathcal{R}_{1}\subset\mathcal{R}_{0}\) so that if we define
\[Y_{2}(f)=Y_{1}(f)\cap\bigcup_{\begin{subarray}{c}R\in\mathcal{R}_{0}\\ \lambda\leq|R\cap Y_{1}(f)|<2\lambda\end{subarray}}R, \tag{4.11}\]
then the following three items are true.
* For each \(R\in{\cal R}_{1}\) we have \[\int_{R}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{1}(f)}\lessapprox\int_{R}\sum_{\begin{subarray}{ c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{2}(f)},\] (4.12) and since \[\nu\leq\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{1}(f)}(x)<2\nu\quad\text{for}\quad x\in\bigcup_{ \begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}Y_{1}(f),\] by (4.12) and Holder we have \[\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{1}(f)}\Big{)}^{\frac{k+1}{k}}\lessapprox\int_{ R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{2}(f)}\Big{)}^{\frac{k+1}{k}}.\] (4.13)
* For each \(R\in{\cal R}_{1}\) we have \[\mu\leq\#\{f\in F\colon f\sim R,\ Y_{2}(f)\cap R\neq\emptyset\}<2\mu.\] (4.14)
* \[\sum_{R\in{\cal R}_{0}}\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{1}(f)}\Big{)}^{\frac{k+1}{k}}\lessapprox\sum_{ R\in{\cal R}_{1}}\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{2}(f)}\Big{)}^{\frac{k+1}{k}}.\] (4.15)
For each \(R\in{\cal R}_{1}\), let
\[F(R)\subset\{f\in F\colon f\sim R,\ Y_{2}(f)\cap R\neq\emptyset\}\]
be a set of size \(\mu\). We now (briefly) divide into cases.
Case 1: If \(\delta^{\prime}\leq\delta^{\eta}\) then define \({\cal R}={\cal R}_{1}\). This is the main case.
Case 2: If \(\delta^{\prime}>\delta^{\eta}\), then since \(\#{\cal R}_{1}\leq\#{\cal R}_{0}\lesssim(\delta^{\prime})^{-O(1)}\lesssim\delta ^{-O(\eta)}\), we can select \(R_{1}\in{\cal R}_{1}\) with
\[\sum_{R\in{\cal R}_{1}}\int_{R}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R\end{subarray}}\chi_{Y_{2}(f)}\Big{)}^{\frac{k+1}{k}}\leq\delta^{-O( \eta)}\int_{R_{1}}\Big{(}\sum_{\begin{subarray}{c}f\in F\\ f\sim R_{1}\end{subarray}}\chi_{Y_{2}(f)}\Big{)}^{\frac{k+1}{k}}. \tag{4.16}\]
Define \({\cal R}=\{R_{1}\}\).
We will show that in both Case 1 and Case 2, Item (B) is satisfied. In Case 2, we have \(\#{\cal R}=1\) and Item (B) is satisfied. Suppose instead we are in Case 1, i.e. \(\delta^{\prime}\in[\delta,\delta^{\eta}]\); we will establish (4.3). Let \(\rho\in[\delta^{\prime},1]\), \(T\in[1,1/\rho]\), let \(R\in{\cal R}\), and let \(R^{\prime}\supset R\) be a \((\rho;k;T)\) rectangle. Suppose
\[\#\{f\in F(R)\colon f\sim R^{\prime}\}=\omega\mu,\]
for some \(\omega>0\). To show that Item (B) is satisfied, we need to prove that
\[\omega\leq(\delta^{\prime})^{-2\eta}T^{-\varepsilon}. \tag{4.17}\]
We have
\[\int_{R}\sum_{\begin{subarray}{c}f\in F(R)\\ f\sim R^{\prime}\end{subarray}}\chi_{Y_{2}(f)}\geq\omega\lambda\mu\geq\frac{ 1}{2}\omega\int_{R}\sum_{f\in F(R)}\chi_{Y_{2}(f)}\gtrapprox\omega\int_{R}\sum _{f\in F(R)}\chi_{Y_{1}(f)}\gtrapprox\omega\Big{|}R\cap\bigcup_{f\in F(R)}Y_{ 1}(f)\Big{|}, \tag{4.18}\]
where the first two inequalities used the definition of \(\lambda\) and the shading \(Y_{2}\) from (4.11), and the third inequality used (4.12). The integral on the LHS of (4.18) is supported on the set
\[W=R\ \cap\bigcup_{\begin{subarray}{c}f\in F(R)\\ f\sim R^{\prime}\end{subarray}}Y_{2}(f)\ \ \subset\ \ R\ \cap\bigcup_{f\in F(R)}Y_{1}(f). \tag{4.19}\]
Thus comparing the left and right sides of (4.18), we conclude that there exists a point \(x\in W\) with
\[\sum_{\begin{subarray}{c}f\in F(R)\\ f\sim R^{\prime}\end{subarray}}\chi_{Y_{2}(f)}(x)\gtrapprox\omega\nu.\]
For this point \(x\), we have
\[\#\{f\in F\colon x\in Y_{1}(f),\ f\sim R^{\prime}\}\geq\#\{f\in F\colon x\in Y _{2}(f),\ f\sim R^{\prime}\}\gtrapprox\omega\nu. \tag{4.20}\]
Comparing (4.8) and (4.20), we obtain (4.17), provided \(\delta_{0}=\delta_{0}(\varepsilon,\eta)\) is selected sufficiently small--here we use the assumption that \(\delta^{\prime}\leq\delta^{\eta}\) to dominate the implicit constant \((\log 1/\delta)^{O(1)}\) in (4.20) by \((\delta^{\prime})^{-\eta/4}\). At this point, we have established Item (B).
**Step 3: Refining the shading.** Our next task is to establish Item (C). After dyadic pigeonholing, there exists a number \(\nu_{1}\leq\nu\) so that if we define
\[Y^{\prime}(f)=\bigcup_{R\colon\,f\in F(R)}\Big{\{}x\in Y_{2}(f)\cap R\colon \nu_{1}\leq\sum_{g\in F(R)}\chi_{Y_{2}(g)}(x)<2\nu_{1}\Big{\}},\]
then
\[\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{f\in F(R)}\chi_{Y_{2}(f)}\Big{)}^ {\frac{k+1}{k}}\lessapprox\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{f\in F( R)}\chi_{Y^{\prime}(f)}\Big{)}^{\frac{k+1}{k}}. \tag{4.21}\]
We have \(\nu_{1}\gtrapprox\nu\), and thus by (4.7), for each \(x\in[0,1]^{2}\), each \(\rho\in[\delta/\delta^{\prime},1]\), and each \((\rho\delta^{\prime};k)\) rectangle \(R^{\prime}\) containing \(x\), we have
\[\#\{f\in F(R)\colon x\in Y^{\prime}(f),\ f\sim R^{\prime}\}\leq\rho^{\eta/2} \nu\lessapprox\rho^{\eta/2}\mu\{f\in F(R)\colon x\in Y^{\prime}(f)\}. \tag{4.22}\]
We will see how this establishes Item (C). Fix a rectangle \(R=R(I)\in\mathcal{R}\), and let \(\tilde{F}(R)\) and \(\{\tilde{Y}^{\prime}(\tilde{f})\colon\tilde{f}\in\tilde{F}(R)\}\) be the sets defined in Part (C) of the statement of Proposition 4.1. Under this re-scaling, the inequality (4.22) becomes the following: For each \(x\in[0,1]^{2}\) each \(\rho\in[\delta/\delta^{\prime},1]\), and each \((\rho;k)\) rectangle \(R^{\prime}\) containing \(x\), we have
\[\#\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(f),\ \tilde{f}\sim R^{ \prime}\}\lessapprox\rho^{\eta/2}\#\{\tilde{f}\in\tilde{F}(R)\colon x\in \tilde{Y}^{\prime}(\tilde{f})\}. \tag{4.23}\]
To show that Item (C) is satisfied, let \(x\in[0,1]^{2}\), \(\rho\in[\delta/\delta^{\prime},1]\), \(T\in[1,1/\rho]\), and let \(R^{\prime}\) be a \((\rho;k-1;T)\) rectangle containing \(x\). By Lemma B.4, the functions \(\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(\tilde{f}),\ \tilde{f}\sim R^{\prime}\}\) are all tangent to a curvilinear rectangle of dimensions \(A\tau\times\tau^{1/k}\), where \(A=O(1)\) and \(\tau=\min(\rho,T^{-k})\). Thus by Lemma B.5, at least a \(\gtrsim 1\) fraction of these functions are tangent to a common \((\tau;k)\) rectangle \(R^{\prime\prime}\), which contains \(x\), i.e.
\[\#\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(f),\ \tilde{f}\sim R^{ \prime}\}\lesssim\#\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(f), \ \tilde{f}\sim R^{\prime\prime}\}.\]
The size of the latter set is controlled by (4.23). Thus we have
\[\#\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^{\prime}(f),\ \tilde{f}\sim R^{ \prime}\} \lessapprox\tau^{\eta/2}\#\{\tilde{f}\in\tilde{F}(R)\colon x\in \tilde{Y}^{\prime}(\tilde{f})\}\] \[\lesssim\max(\rho^{-\eta/2},T^{-k\eta/2})\#\{\tilde{f}\in\tilde{ F}(R)\colon x\in\tilde{Y}^{\prime}(\tilde{f})\}\] \[\leq T^{-\eta/2}\#\{\tilde{f}\in\tilde{F}(R)\colon x\in\tilde{Y}^ {\prime}(\tilde{f})\},\]
where the second inequality used the fact that \(T\leq\rho^{-1}\) and \(k\geq 1\). This establishes Item (C).
Finally, by chaining inequalities (4.6), (4.10), (4.15), (4.16), and (4.21), we see that Item (A) is satisfied.
Next, we will show how Proposition 4.1 and Theorem 2.6 can be combined to prove Proposition 4.4, which is a variant of Theorem 1.9 where the non-concentration condition on \(F\) is replaced by a (local) two-ends type non-concentration condition on the set of curves passing through each point. Before stating the result, we recall the following definition from [24].
**Definition 4.2**.: _Let \((M,d)\) be a metric space. Let \(\alpha,\delta,C>0\). A set \(E\subset M\) is called a \((\delta,\alpha;C)\)-set if for all \(r\geq\delta\) and all metric balls \(B\) of radius \(r\), we have_
\[\mathcal{E}_{\delta}(E\cap B)\leq C(r/\delta)^{\alpha},\]
_where \(\mathcal{E}_{\delta}(X)\) denotes the \(\delta\)-covering number of the set \(X\). In informal settings, we will sometimes abbreviate this to \((\delta,\alpha)\)-set._
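As a concrete illustration (our example, not taken from [24]): for \(0<\alpha<1\), the arithmetic progression \(E=\{j\delta^{\alpha}\colon 0\leq j\leq\delta^{-\alpha}\}\subset[0,1]\) is a \((\delta,\alpha;C)\)-set with \(C=O(1)\), since for every ball \(B\) of radius \(r\geq\delta\) we have

\[\mathcal{E}_{\delta}(E\cap B)\lesssim\min\big{(}r\delta^{-\alpha}+1,\ \delta^{-\alpha}\big{)}\lesssim(r/\delta)^{\alpha}.\]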
Our proof below involves an anisotropic rescaling, which sends a \((\rho;k)\) rectangle to the unit square. Such a rescaling distorts \((\delta,\alpha)\)-sets into slightly more complicated objects. The next definition describes a class of sets that is preserved (as a class) under this type of rescaling.
**Definition 4.3**.: _Let \(\delta,\tau,\alpha>0\) and \(C\geq 1\). Let \(f\colon[0,1]\to\mathbb{R}\). We say a set \(Y(f)\subset f^{\delta}\) is a \(\delta\)-thick shading striped by a \((\tau;\alpha;C)\)-set if \(Y(f)\) is contained in a set of the form \(f^{\delta}\cap(E\times\mathbb{R})\), where \(E\subset[0,1]\) is a \((\tau;\alpha;C)\)-set._
The shadings \(Y(f)\) from the statement of Theorem 1.9 are \(\delta\)-thick shadings striped by \((\delta,\alpha;\delta^{-\eta})\)-sets, i.e. \(\tau=\delta\). However, we gain some additional flexibility by allowing \(\delta\) and \(\tau\) to differ. We exploit this flexibility as follows. Suppose \(Y(f)\subset f^{\delta}\) is a \(\delta\)-thick shading striped by a \((\tau;\alpha;C)\)-set. Suppose furthermore that \(R=R(I)\) is a \((\delta^{\prime};k)\) rectangle, and \(f\sim R\). Let \(\tilde{f}=f_{R}\) in the sense of Definition 2.16, and let \(\tilde{Y}(\tilde{f})=\phi^{R}(Y(f)\cap R)\). Since this rescaling is anisotropic, it affects \(\delta\) and \(\tau\) differently--\(\tilde{Y}(\tilde{f})\) is a \(\delta/\delta^{\prime}\)-thick shading striped by a \((\tau/(\delta^{\prime})^{1/k},\alpha;C)\)-set. This observation will play an important role in the proof below.
**Proposition 4.4**.: _Let \(k\geq 1\), \(0\leq\alpha\leq 1\), and let \(\varepsilon>0\). Then there exists \(\eta>0\) such that the following is true for all \(\delta,\tau>0\). Let \(F\) be a set of polynomials of degree at most \(\delta^{-\eta}\), each of which has \(C^{k}\) norm at most 1. For each \(f\in F\), let \(Y(f)\) be a \(\delta\)-thick shading striped by a \((\tau,\alpha;\delta^{-\eta})\)-set. Suppose that for all \(x\in[0,1]^{2}\), all \(\rho\in[\delta,1]\), all \(T\in[1,1/\rho]\), and all \((\rho;k;T)\) rectangles \(R\) containing \(x\), we have_
\[\#\{f\in F\colon x\in Y(f),\ f\sim R\}\leq\delta^{-\eta}T^{-\varepsilon}\#\{f \in F\colon x\in Y(f)\}. \tag{4.24}\]
_Then_
\[\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}\lesssim_{\alpha,\varepsilon}\delta^{-\varepsilon}\delta^{\frac{k+\alpha}{k+1}}\tau^{\frac{k(1-\alpha)}{k+1}}(\#F). \tag{4.25}\]
Proof.: We prove the result by induction on \(k\).
The base case. We begin with the base case \(k=1\). After dyadic pigeonholing and replacing each shading \(Y(f)\) by a refinement \(Y_{1}(f)\), we may suppose that there is a number \(\mu\) so that
\[\mu\leq\sum_{f\in F}\chi_{Y_{1}(f)}(x)<2\mu\quad\text{for all }x\in X,\quad X=\bigcup_{f\in F}Y_{1}(f).\]
(4.24) remains true with \(Y_{1}\) in place of \(Y\), and it suffices to establish (4.25) (with \(\varepsilon/2\) in place of \(\varepsilon\)) for the shading \(Y_{1}\). By (4.24) with \(\rho=2\delta\) and \(T=2^{1/\varepsilon}\delta^{-\eta/\varepsilon}\), we have that for each \(x\in X\), there are \(\gtrsim\mu^{2}\) pairs \(f,g\in F\) with the following two properties: (i) \(x\in Y_{1}(f)\cap Y_{1}(g)\), and (ii) the connected component of \(f^{\delta}\cap g^{\delta}\) containing \(x\) projects to an interval of length at most \(2^{1/\varepsilon}\delta^{1-\eta/\varepsilon}\) on the \(x_{1}\)-axis; denote this interval by \(I(f,g)\). Let \(\mathcal{T}\) be the set of triples \((x,f,g)\in X\times F^{2}\), where the pair \(f,g\) satisfies items (i) and (ii). Then \(|\mathcal{T}|\sim\mu^{2}|X|\), where \(|\cdot|\) denotes the product of two-dimensional Lebesgue measure on \(X\) and counting measure on \(F^{2}\).
For each \(f,g\in F\), we have
\[|\{x\in X\colon(x,f,g)\in\mathcal{T}\}|\leq|Y_{1}(f)\cap Y_{1}(g)|\leq|Y_{1}(f )\cap(\operatorname{graph}f|_{I(f,g)})^{\delta}|,\]
where \((\operatorname{graph}f|_{I(f,g)})^{\delta}\) denotes the \(\delta\) neighborhood of the graph of \(f\), restricted to the interval \(I(f,g)\); recall that this interval has length \(O_{\varepsilon}(\delta^{1-\eta/\varepsilon})\). Since \(Y_{1}(f)\) is a \(\delta\)-thick shading striped by a \((\tau,\alpha;\delta^{-\eta})\)-set, we have
\[|Y_{1}(f)\cap(\operatorname{graph}f|_{I(f,g)})^{\delta}|\lesssim_{\varepsilon} \left\{\begin{array}{ll}(\delta\tau)\big{(}\delta^{-\eta}(\delta^{1-\eta/ \varepsilon}/\tau)^{\alpha}\big{)},&\delta^{1-\eta/\varepsilon}\geq\tau\\ \delta^{2-\eta/\varepsilon},&\delta^{1-\eta/\varepsilon}\leq\tau\end{array} \right\}\leq\delta^{1+\alpha-\eta/\varepsilon}\tau^{1-\alpha}.\]
Since \(f\) and \(g\) are polynomials of degree at most \(\delta^{-\eta}\), the intersection \(f^{\delta}\cap g^{\delta}\) is a union of \(O(\delta^{-\eta})\) connected components, and thus each pair \(f,g\in F\) can contribute to \(\mathcal{T}\) over at most \(O(\delta^{-\eta})\) intervals \(I(f,g)\). Hence
\[|\mathcal{T}|\lesssim_{\varepsilon}\delta^{1+\alpha-2\eta/\varepsilon}\tau^{1- \alpha}(\#F)^{2}.\]
On the other hand, we have \(|X|\leq\mu^{-2}|\mathcal{T}|\), and thus
\[|X|\lesssim\delta^{1+\alpha-2\eta/\varepsilon}\tau^{1-\alpha}\Big{(}\frac{\# F}{\mu}\Big{)}^{2}. \tag{4.26}\]
If we select \(\eta\leq\varepsilon^{2}/4\), then (4.26) implies (4.25) (with \(\varepsilon/2\) in place of \(\varepsilon\), as required above).
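To spell out this last implication: since \(\sum_{f\in F}\chi_{Y_{1}(f)}<2\mu\) on \(X\) and vanishes off \(X\), the bound (4.26) gives

\[\Big{\|}\sum_{f\in F}\chi_{Y_{1}(f)}\Big{\|}_{2}^{2}\leq 4\mu^{2}|X|\lesssim\delta^{1+\alpha-2\eta/\varepsilon}\tau^{1-\alpha}(\#F)^{2},\]

and taking square roots yields \(\Big{\|}\sum_{f\in F}\chi_{Y_{1}(f)}\Big{\|}_{2}\lesssim\delta^{\frac{1+\alpha}{2}-\eta/\varepsilon}\tau^{\frac{1-\alpha}{2}}(\#F)\), which is (4.25) with \(k=1\) (and \(\varepsilon/2\) in place of \(\varepsilon\)) once \(\eta\leq\varepsilon^{2}/4\).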
The induction step. Suppose that \(k\geq 2\) and that the result has been established for \(k-1\). Fix \(0\leq\alpha\leq 1\) and \(\varepsilon>0\). Let \(\eta>0\) be a quantity to be specified below, let \(\delta,\tau>0\), and let \(F\) and \(Y(f)\) satisfy the hypotheses of Proposition 4.4 with this value of \(\eta\). First, let \(\delta_{0}>0\) be a small quantity to be chosen below, which depends on \(k\) and \(\varepsilon\). We may suppose that \(\delta\leq\delta_{0}\), since otherwise (4.25) is trivial, provided we choose the implicit constant sufficiently large.
Let \(\varepsilon_{1}=\varepsilon/2\). Let \(\eta_{1}=\eta_{1}(k,\varepsilon_{1})\) be the output from Theorem 2.6, with \(\varepsilon_{1}\) in place of \(\varepsilon\). We will select \(\eta>0\) sufficiently small so that \(\eta\leq\eta_{1}\). Thus the shadings \(\{Y(f)\colon f\in F\}\) satisfy Hypothesis (4.1) of Proposition 4.1 with \(\varepsilon_{1}\) in place of \(\varepsilon\) and \(\eta_{1}\) in place of \(\eta\). Applying Proposition 4.1, we get a sub-shading \(Y^{\prime}(f)\subset Y(f)\); a scale \(\delta^{\prime}\in[\delta,1]\); a set \(\mathcal{R}\) of \((\delta^{\prime},k)\) rectangles; sets \(F(R)\), \(R\in\mathcal{R}\); and a multiplicity \(\mu\leq\#F\).
By Item (B), either \(\#\mathcal{R}=1\), or (provided we choose \(\delta_{0}\) sufficiently small depending on \(k\) and \(\varepsilon_{1}\)) we can apply Theorem 2.6 (recall that we selected \(\eta_{1}\) sufficiently small to ensure that Theorem 2.6 can be applied) to conclude that
\[\#\mathcal{R}\leq\delta^{-\varepsilon_{1}}\Big{(}\frac{\#F}{\mu}\Big{)}^{ \frac{k+1}{k}}. \tag{4.27}\]
We next explore the consequences of Item (C) from Proposition 4.1. We first consider the case where \(\delta^{\prime}\leq\delta^{1-\varepsilon_{1}}\). By Item (A) from Proposition 4.1, we have
\[\begin{split}\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}^{\frac{k+1}{k}}&\leq\delta^{-O(\eta_{1})}\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{f\in F(R)}\chi_{Y^{\prime}(f)}\Big{)}^{\frac{k+1}{k}}\\ &\lesssim\left\{\begin{array}{ll}\delta^{-O(\eta_{1})}(\#\mathcal{R})\mu^{\frac{k+1}{k}}(\delta\tau)\Big{(}\delta^{-\eta}(\delta^{1/k}/\tau)^{\alpha}\Big{)},&\tau\leq\delta^{1/k}\\ \delta^{-O(\eta_{1})}(\#\mathcal{R})\mu^{\frac{k+1}{k}}\delta^{\frac{k+1}{k}},&\tau>\delta^{1/k}\end{array}\right.\\ &\lesssim\delta^{-\varepsilon/2-O(\eta_{1})}\delta^{\frac{k+\alpha}{k}}\tau^{1-\alpha}(\#F)^{\frac{k+1}{k}},\end{split} \tag{4.28}\]
and we have established (4.25) and completed the proof, provided we select \(\eta_{1}\) sufficiently small depending on \(\varepsilon\) and \(k\), and provided we select the implicit constant in (4.25) sufficiently large depending on \(\varepsilon\) and \(k\).
For the remainder of the proof we will consider the case where \(\delta^{\prime}>\delta^{1-\varepsilon_{1}}\), so in particular the implicit constant \(O_{\varepsilon_{1}}(\log 1/\delta)^{O(1)}\) from (4.4) is bounded by \(O_{\varepsilon_{1}}(\log\frac{1}{\delta/\delta^{\prime}})^{O(1)}\). Recall that by Item (C), the sets \(F(R)\) and \(Y^{\prime}(f)\), \(f\in F(R)\) satisfy (4.4). For each \(R\in\mathcal{R}\), let \(\tilde{F}(R)\) and \(\tilde{Y}^{\prime}(f)\) be as defined in Item (C) of Proposition 4.1.
If \(\delta_{0}\) (and thus \(\delta/\delta^{\prime}\)) is sufficiently small, then \(\tilde{F}(R)\) and \(\tilde{Y}^{\prime}(f)\) will satisfy the induction hypothesis (4.24) with the parameters changed as follows:
* \(k\) is replaced by \(k-1\).
* \(\delta\) is replaced by \(\tilde{\delta}=\delta/\delta^{\prime}\).
* \(\tau\) is replaced by \(\tilde{\tau}=\tau/(\delta^{\prime})^{1/k}\)
* The functions \(f\in F\) are polynomials of degree at most \(\delta^{-\eta}\leq\tilde{\delta}^{-\eta/\varepsilon_{1}}\), each of which has \(C^{k}\) norm at most \(1\).
* The shadings \(\tilde{Y}^{\prime}(f)\) are \(\tilde{\delta}\)-thick shadings striped by a \((\tilde{\tau};\alpha;\tilde{\delta}^{-\eta/\varepsilon_{1}})\)-set.
* The shadings \(\tilde{Y}^{\prime}(f)\) satisfy (4.24), with \(T^{-\eta_{1}/2}\) in place of \(T^{-\varepsilon}\), and \(O_{\varepsilon_{1}}(\log 1/\tilde{\delta})^{O(1)}\) in place of \(\delta^{-\eta}\).
It is now time to apply the induction hypothesis: we apply Proposition 4.4 with \(k-1\) in place of \(k\); \(\alpha\) unchanged; and \(\eta_{1}/2\) in place of \(\varepsilon\). Let \(\eta_{2}\) be the output from this proposition. If \(\eta>0\) is selected sufficiently small, then \(\eta/\varepsilon_{1}\leq\eta_{2}\). This means that the functions \(f\in F\) are polynomials of degree at most \(\tilde{\delta}^{-\eta_{2}}\), and the shadings \(\tilde{Y}^{\prime}(f)\) are striped by \((\tilde{\tau};\alpha;\tilde{\delta}^{-\eta_{2}})\)-sets. If \(\delta_{0}\) and thus \(\tilde{\delta}\) are sufficiently small, then the quantity \(O_{\varepsilon_{1}}(\log 1/\tilde{\delta})^{O(1)}\) from the final item above is at most \(\tilde{\delta}^{-\eta_{2}}\). Thus we can use the induction hypotheses to conclude that
\[\Big{\|}\sum_{f\in\tilde{F}(R)}\chi_{\tilde{Y}^{\prime}(f)}\Big{\|}_{\frac{k} {k-1}}\lesssim\delta^{-\eta_{1}/4}\tilde{\delta}^{\frac{k-1+\alpha}{k}}\tilde{ \tau}^{\frac{(1-\alpha)(k-1)}{k}}(\#F(R)). \tag{4.29}\]
We also have the \(L^{1}\) bound
\[\Big{\|}\sum_{f\in\tilde{F}(R)}\chi_{\tilde{Y}^{\prime}(f)}\Big{\|}_{1}\leq \delta^{-\eta}\tilde{\delta}\tilde{\tau}^{1-\alpha}(\#F(R)). \tag{4.30}\]
Interpolating (4.29) and (4.30) and recalling the definition of \(\tilde{\delta}\) and \(\tilde{\tau}\) (and the fact that \(\eta\leq\eta_{1}\)), we have
\[\Big{\|}\sum_{f\in\tilde{F}(R)}\chi_{\tilde{Y}^{\prime}(f)}\Big{\|}_{\frac{k+1}{k}}^{\frac{k+1}{k}}\leq\Big{\|}\sum_{f\in\tilde{F}(R)}\chi_{\tilde{Y}^{\prime}(f)}\Big{\|}_{1}^{\frac{1}{k}}\Big{\|}\sum_{f\in\tilde{F}(R)}\chi_{\tilde{Y}^{\prime}(f)}\Big{\|}_{\frac{k}{k-1}}\leq\delta^{-\eta_{1}}(\delta^{\prime})^{-\frac{k+1}{k}}\delta^{\frac{k+\alpha}{k}}\tau^{1-\alpha}(\#F(R))^{\frac{k+1}{k}}.\]
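The first inequality above is the standard interpolation bound obtained from Hölder's inequality with exponents \(k\) and \(k/(k-1)\): writing \(g=\sum_{f\in\tilde{F}(R)}\chi_{\tilde{Y}^{\prime}(f)}\),

\[\int g^{\frac{k+1}{k}}=\int g^{\frac{1}{k}}\cdot g\leq\Big{(}\int g\Big{)}^{\frac{1}{k}}\Big{(}\int g^{\frac{k}{k-1}}\Big{)}^{\frac{k-1}{k}}=\|g\|_{1}^{\frac{1}{k}}\|g\|_{\frac{k}{k-1}}.\]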
Undoing the scaling that mapped \(R\) to the unit square (this scaling distorted volumes by a factor of \((\delta^{\prime})^{\frac{k+1}{k}}\)), and recalling that \(\#F(R)=\mu\) for each \(R\in\mathcal{R}\), we conclude that
\[\int_{R}\Big{(}\sum_{f\in F(R)}\chi_{Y^{\prime}(f)}\Big{)}^{\frac{k+1}{k}} \lesssim\delta^{-2\eta_{1}}\delta^{\frac{k+\alpha}{k}}\tau^{1-\alpha}\mu^{ \frac{k+1}{k}}. \tag{4.31}\]
Combining Item (A) from Proposition 4.1 and (4.31), we conclude that
\[\begin{split}\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}^{\frac{k+1}{k}}&\lesssim_{\varepsilon}\delta^{-O(\eta)}\sum_{R\in\mathcal{R}}\int_{R}\Big{(}\sum_{f\in F(R)}\chi_{Y^{\prime}(f)}\Big{)}^{\frac{k+1}{k}}\\ &\lesssim\delta^{-O(\eta)-2\eta_{1}}(\#\mathcal{R})\delta^{\frac{k+\alpha}{k}}\tau^{1-\alpha}\mu^{\frac{k+1}{k}}\\ &\leq\delta^{-O(\eta)-2\eta_{1}-\varepsilon/2}\delta^{\frac{k+\alpha}{k}}\tau^{1-\alpha}(\#F)^{\frac{k+1}{k}}.\end{split} \tag{4.32}\]
If we select \(\eta_{1}\) and \(\eta\) sufficiently small (depending on \(\varepsilon\) and \(k\)), then the term \(\delta^{-O(\eta)-2\eta_{1}-\varepsilon/2}\) has size at most \(\delta^{-\varepsilon}\). This establishes (4.25) and closes the induction.
## 5 Proof of Theorem 1.9
In this section, we will prove the following slightly more technical variant of Theorem 1.9.
**Theorem 1.9\({}^{\prime}\).**_Let \(k\geq 1\), let \(I\) be a compact interval, and let \(\mathcal{F}\subset C^{\infty}(I)\) be uniformly smooth and forbid \(k\)-th order tangency. Let \(0<\beta\leq\alpha\leq 1\), and let \(\varepsilon>0\). Then there exists \(\eta>0\) and \(\delta_{0}>0\) so that the following is true for all \(\delta\in(0,\delta_{0}]\). Let \(F\subset\mathcal{F}\) be a \((\delta,\beta;\delta^{-\eta})\)-set (here \(\mathcal{F}\) is given the usual metric on \(C^{k}(I)\)). For each \(f\in F\), let \(Y(f)\subset f^{\delta}\) be a \((\delta,\alpha;\delta^{-\eta})\)-set (here we use the usual Euclidean metric on \(\mathbb{R}^{2}\)). Then_
\[\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}\leq\delta^{- \varepsilon}\big{(}\delta^{2-\alpha}\#F\big{)}^{\frac{k}{k+1}}. \tag{5.1}\]
_If instead \(0\leq\alpha\leq 1\) and \(\beta>\alpha\) then the result remains true, except the bound becomes_
\[\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}\leq\delta^{- \varepsilon}\big{(}\delta^{2-\alpha-\frac{\beta-\alpha}{k}}\#F\big{)}^{\frac{k }{k+1}}.\]
Theorem 1.9 is the special case where \(\alpha=\beta=1\) and \(Y(f)=f^{\delta}\) for each \(f\in F\).
Before proving Theorem 1.9\({}^{\prime}\), let us examine how it differs from Proposition 4.4. First, the functions in Proposition 4.4 are polynomials, while those in Theorem 1.9\({}^{\prime}\) are uniformly smooth; moving between these
two conditions will not pose any difficulties. More care, however, is needed to move between the different non-concentration hypotheses imposed by Theorem 1.9\({}^{\prime}\) versus Proposition 4.4. In brief, if \(F\) is a family of curves that violates the non-concentration hypothesis (4.24) from Proposition 4.4, then for a typical point \(x\in[0,1]^{2}\), the curves whose \(\delta\)-neighborhoods contain \(x\) will be concentrated inside a small ball (in the metric space \(C^{k}(I)\)) in \(\mathcal{F}\). Thus the curves in \(F\) can be partitioned into non-interacting pieces, each of which is localized to a small ball in \(\mathcal{F}\). Since \(F\) is a \((\delta,\beta;\delta^{-\eta})\)-set, each of these pieces only contains a small fraction of the total collection of curves. Each of these pieces can then be re-scaled to create an arrangement of curves that satisfies the non-concentration hypothesis (4.24). We now turn to the details.
Proof of Theorem 1.9\({}^{\prime}\).: **Step 1: Polynomial approximation.** First, after a harmless rescaling we may suppose that \(I=[0,1]\) and \(\sup_{f\in\mathcal{F}}\|f\|_{C^{k+1}}\leq 1/2\). Let \(\eta\) be a quantity to be chosen below. By Jackson's approximation theorem (see e.g. [1]), for each \(f\in F\) there exists a polynomial \(P_{f}\) of degree at most \(K\delta^{-\eta/2}\), so that \(\|f-P_{f}\|_{C^{k+1}}\leq\delta/4\). The quantity \(K\) depends only on the numbers \(\sup_{f\in\mathcal{F}}\|f^{(i)}\|_{\infty}\) for \(i=0,\ldots,i_{0}\), with \(i_{0}\sim 1/\eta\). Crucially, \(K\) is independent of \(\delta\) and the specific choice of \(F\subset\mathcal{F}\). In particular, if \(\delta_{0}>0\) and hence \(\delta\) is sufficiently small depending on \(\eta\) and \(\mathcal{F}\), then the degree of each polynomial \(P_{f}\) is at most \(\delta^{-\eta}\). Define \(F_{1}=\{P_{f}\colon f\in F\}\). For each \(P_{f}\in F_{1}\), define the shading \(Y(P_{f})=P_{f}^{2\delta}\cap N_{\delta}(Y(f))\). Abusing notation slightly, we will replace \(\delta\) by \(2\delta\), so \(Y(P_{f})\subset P_{f}^{\delta}\). It suffices to prove Theorem 1.9 with \(F_{1}\) in place of \(F\), i.e. we must show that
\[\Big{\|}\sum_{f\in F_{1}}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}\leq\delta^{- \varepsilon}\big{(}\delta^{2-\alpha-\max\big{(}0,\frac{\beta-\alpha}{k}\big{)} }\#F\big{)}^{\frac{k}{k+1}}. \tag{5.2}\]
Note that since the set \(F\) is a \((\delta,\beta;\delta^{-\eta})\)-set, we may suppose after a (harmless) refinement by a factor of \(\delta^{\eta}\) that the points in \(F\) are \(\delta\)-separated. Hence the set \(F_{1}\) is \(\frac{3}{4}\delta\)-separated, and is a \((\delta,\beta;2\delta^{-\eta})\)-set. The set \(F_{1}\) also satisfies the "forbidding \(k\)-th order tangencies" condition (1.11), with \(\omega\) replaced by \(\omega/2\).
**Step 2: A two-ends reduction.** Let \(\varepsilon_{1}>0\) be a small quantity to be specified below. For each \(x\in\mathbb{R}^{2}\), let \(t(x)\) be the infimum of all \(t\geq\delta\) for which there exists a ball \(B\subset C^{k}([0,1])\) of radius \(t\) satisfying
\[\#\{f\in F_{1}\cap B\colon x\in Y(f)\}\geq t^{\varepsilon_{1}}\#\{f\in F_{1} \colon x\in Y(f)\}.\]
After dyadic pigeonholing, we can select a radius \(t\) and a shading \(Y_{1}(f)\subset Y(f)\), \(f\in F_{1}\) with the following properties.
* \(t/2\leq t(x)<t\) for each \(x\in\bigcup_{f\in F_{1}}Y_{1}(f)\).
* For each \(x\in\bigcup_{f\in F_{1}}Y_{1}(f)\), there is a ball \(B(x)\subset C^{k}([0,1])\) of radius \(t\) that contains every \(f\in F_{1}\) with \(x\in Y_{1}(f)\).
* \[\Big{\|}\sum_{f\in F_{1}}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}\lessapprox\delta^{-\varepsilon_{1}}\Big{\|}\sum_{f\in F_{1}}\chi_{Y_{1}(f)}\Big{\|}_{\frac{k+1}{k}}.\] (5.3)
* For each \(x\in\bigcup_{f\in F_{1}}Y_{1}(f)\), each \(r\in[\delta,t]\), and each ball \(B\subset C^{k}([0,1])\) of radius \(r\), we have \[\#\{f\in F_{1}\cap B\colon x\in Y_{1}(f)\}\leq(r/t)^{\varepsilon_{1}}\#\{f \in F_{1}\colon x\in Y_{1}(f)\}.\] (5.4)
Let \(\mathcal{B}_{0}\) be a maximal set of pairwise non-overlapping balls in \(C^{k}([0,1])\) of radius \(t\) that intersect \(\mathcal{F}\). For each \(B\in\mathcal{B}_{0}\), let \(4B\) denote the ball with the same center and radius \(4t\); denote this new set of balls by \(\mathcal{B}_{1}\). Then for every \(x\in\bigcup_{f\in F_{1}}Y_{1}(f)\), the ball \(B(x)\) is contained in at least one of the balls from \(\mathcal{B}_{1}\), and hence we have the pointwise bound
\[\int\Big{(}\sum_{f\in F_{1}}\chi_{Y_{1}(f)}\Big{)}^{\frac{k+1}{k}}\leq\sum_{B \in\mathcal{B}_{1}}\int\Big{(}\sum_{f\in F_{1}\cap B}\chi_{Y_{1}(f)}\Big{)}^{ \frac{k+1}{k}}. \tag{5.5}\]
We claim that each \(f\in\mathcal{F}\) is contained in \(O(c^{-O(1)})\) balls from \(\mathcal{B}_{1}\), where \(c>0\) is the quantity from (1.11) associated to the family \(\mathcal{F}\). From this it follows that
\[\sum_{B\in\mathcal{B}_{1}}\#(F_{1}\cap B)\lesssim c^{-O(1)}\#F.\]
To verify the above claim, suppose that \(f\in F\) is contained in \(\ell\) distinct balls with centers \(g_{1},\ldots,g_{\ell}\). Since the points \(g_{1},\ldots,g_{\ell}\) are \(t\)-separated in \(C^{k}([0,1])\), by (1.11) we have that the vectors \(v_{j}=\big{(}g_{j}(0),g_{j}^{\prime}(0),\ldots,g_{j}^{(k)}(0)\big{)}\), \(j=1,\ldots,\ell\) are at least \(ct/2\)-separated in \(\mathbb{R}^{k+1}\) with the \(L^{1}\) metric. But since \(\|f-g_{j}\|_{C^{k}}\leq 4t\) for each index \(j\), by the triangle inequality the vectors \(\{v_{j}\}_{j=1}^{\ell}\) are contained in a ball of radius \(8t\). We conclude that \(\ell\lesssim c^{-O(1)}\), as desired.
**Step 3: Rescaling and Applying Proposition 4.4.** For each \(B\in\mathcal{B}_{1}\) with center \(g_{B}\) and each \(f\in F_{1}\cap B\), define \(\tilde{f}_{B}(x)=(4t)^{-1}(f(x)-g_{B}(x))\). Then \(\|\tilde{f}_{B}\|_{C^{k}}\leq 1\) for each \(f\in F_{1}\cap B\). Define \(\tilde{\delta}=\delta/(2t)\) and let \(\tilde{Y}_{1}(\tilde{f}_{B})\) be the image of \(Y_{1}(f)\) under the map \(\phi_{B}(x,y)=(x,(4t)^{-1}(y-g_{B}(x)))\). Then \(\tilde{Y}_{1}(\tilde{f}_{B})\subset\tilde{f}_{B}^{\tilde{\delta}}\). The shading \(\tilde{Y}_{1}(\tilde{f}_{B})\) now satisfies the hypotheses of Proposition 4.4, with \(\tilde{\delta}\) in place of \(\delta\) and \(\tau=\delta\). Define \(\tilde{F}_{B}=\{\tilde{f}_{B}\colon f\in F_{1}\cap B\}\).
The non-concentration estimate (5.4) now has the following consequence. For each \(r\in[\tilde{\delta},1]\) and each ball \(B^{\prime}\subset C^{k}([0,1])\) of radius \(r\), we have
\[\#\{\tilde{f}_{B}\in\tilde{F}_{B}\cap B^{\prime}\colon x\in\tilde{Y}_{1}( \tilde{f}_{B})\}\leq 4r^{\varepsilon_{1}}\#\{\tilde{f}_{B}\in\tilde{F}_{B} \colon x\in\tilde{Y}_{1}(\tilde{f}_{B})\}. \tag{5.6}\]
The consequence of (5.6) is the following: for each \(x\in\bigcup_{\tilde{f}_{B}\in\tilde{F}_{B}}\tilde{Y}_{1}(\tilde{f}_{B})\), each \(T\geq 1\), each \(\rho\geq\tilde{\delta}\), and each \((\rho;k;T)\)-rectangle \(R\) containing \(x\), we have
\[\#\{\tilde{f}_{B}\in\tilde{F}_{B}\colon x\in\tilde{Y}_{1}(\tilde{f}_{B}),\ \tilde{f}_{B}\sim R\}\lesssim T^{-\varepsilon_{1}}\#\{\tilde{f}_{B}\in\tilde{F}_{B}\colon x\in\tilde{Y}_{1}(\tilde{f}_{B})\}. \tag{5.7}\]
Indeed, by Lemma B.3, the functions \(\tilde{f}_{B}\) appearing in the set on the LHS of (5.7) are localized to a ball \(B^{\prime}\subset C^{k}([0,1])\) of diameter \(O(T^{-1})\) (recall that a \((\rho;k;T)\)-rectangle has length \((T\rho)^{1/k}\)). Comparing with (5.6), we obtain (5.7).
Applying Proposition 4.4 with \(\varepsilon_{1}\) in place of \(\varepsilon\); \(\tilde{\delta}\) in place of \(\delta\); and \(\tau=\delta\), we conclude that if \(\eta>0\) is sufficiently small, then for each ball \(B\in\mathcal{B}_{1}\) we have (provided \(\tilde{\delta}\) is sufficiently small)
\[\int\Big{(}\sum_{\tilde{f}_{B}\in\tilde{F}_{B}}\chi_{\tilde{Y}_{1}(\tilde{f}_{ B})}\Big{)}^{\frac{k+1}{k}}\lesssim_{\alpha,\varepsilon}\tilde{\delta}^{- \varepsilon_{1}}\tilde{\delta}^{1+\alpha/k}\tau^{1-\alpha}(\#\tilde{F}_{B})^{ \frac{k+1}{k}}\leq\delta^{-\varepsilon_{1}}t^{-1-\alpha/k}\delta^{2-\alpha+ \alpha/k}(\#\tilde{F}_{B})^{\frac{k+1}{k}}.\]
Undoing the scaling \(\phi_{B}\) (which distorted volumes by a factor of \(4t\)) and using the fact that \(\#F_{B}\lesssim\delta^{-\eta}(t/\delta)^{\beta}\), where \(F_{B}=F_{1}\cap B\) (this follows since \(F\) is a \((\delta,\beta;\delta^{-\eta})\)-set), we have
\[\begin{split}\int\Big{(}\sum_{f\in F_{B}}\chi_{Y_{1}(f)}\Big{)}^{ \frac{k+1}{k}}\lesssim_{\alpha,\varepsilon}\delta^{-\varepsilon_{1}}t^{-\alpha/ k}\delta^{2-\alpha+\alpha/k}(\#F_{B})^{\frac{k+1}{k}}\\ \lesssim\delta^{-\varepsilon_{1}-\eta}t^{\frac{\beta-\alpha}{k}} \delta^{2-\alpha+\frac{\alpha-\beta}{k}}(\#F_{B})\\ =\delta^{-\varepsilon_{1}-\eta}(\delta/t)^{\frac{\alpha-\beta}{k} }\delta^{2-\alpha}(\#F_{B}).\end{split} \tag{5.8}\]
Combining (5.3), (5.5), and (5.8), we conclude that
\[\Big{\|}\sum_{f\in F_{1}}\chi_{Y(f)}\Big{\|}_{\frac{k+1}{k}}^{\frac{k+1}{k}}\lessapprox_{\varepsilon,\alpha}\ \delta^{-2\varepsilon_{1}-\eta}\sum_{B\in\mathcal{B}_{1}}(\delta/t)^{\frac{\alpha-\beta}{k}}\delta^{2-\alpha}(\#F_{B})\lesssim\delta^{-2\varepsilon_{1}-\eta}(\delta/t)^{\frac{\alpha-\beta}{k}}\delta^{2-\alpha}(\#F). \tag{5.9}\]
If \(\alpha\geq\beta\), then the worst case occurs when \(t=\delta\). This is unsurprising, in light of the behavior of Arrangements 1, 2, and 3 from Section 1.7. If instead \(\beta>\alpha\), then the worst case occurs when \(t=1\). Regardless, we obtain (5.2), provided we select \(\eta,\varepsilon_{1}\leq\varepsilon/3\), and choose \(\delta_{0}>0\) sufficiently small so that the implicit constant \(O_{\varepsilon,\alpha}\big{(}(\log 1/\delta)^{O(1)}\big{)}\) in inequality (5.9) is at most \(\delta^{-\varepsilon/3}\).
## 6 From Theorem 1.9\({}^{\prime}\) to Theorem 1.4
In this section we will prove Theorem 1.4. We begin with the case \(s=m-1\). Let \(h\colon\mathcal{C}\times I\to\mathbb{R}\), \(\Phi\colon\mathcal{C}\to\mathbb{R}^{m-s}\), \(\mathcal{C}_{0}\subset\mathcal{C}\), and \(I_{0}\subset I\) be as in the statement of Theorem 1.4. Since \(\mathcal{C}_{0}\) and \(I_{0}\) are compact and \(h,\Phi\) are smooth, it suffices to consider the case where \(\mathcal{C}=N(u_{0},r)\) is a small neighborhood of a point \(u_{0}\), and \(I_{0}\) is a short
interval. Since \(h\) parameterizes an \(m\)-dimensional family of cinematic curves, if the neighborhood \(\mathcal{C}\) and the interval \(I_{0}\) are chosen sufficiently small, then there exists \(c>0\) so that
\[\sum_{j=0}^{m-1}|\partial_{t}^{j}h(u;t)-\partial_{t}^{j}h(u^{\prime};t)|\geq c|u- u^{\prime}|,\quad u,u^{\prime}\in\mathcal{C},\ t\in I_{0},\]
i.e. the family \(\mathcal{F}=\{h(u;\cdot)\colon u\in\mathcal{C}_{0}\}\) is uniformly smooth and forbids \((m-1)\)-st order tangency.
The deduction of Theorem 1.4 from Theorem 1.9 now proceeds by a standard \(L^{p}\) duality argument. We will briefly sketch the proof, and refer the reader to Lemma 10.4 from [39] for further details. Let \(\{v_{j}\}\) be a maximal \(\delta\)-separated subset of \(\Phi(\mathcal{C}_{0})\). If \(|v-v^{\prime}|<\delta\), then \(M_{\delta}f(v)\leq AM_{\delta}f(v^{\prime})\), where the constant \(A\) depends on \(h,\Phi\), \(\mathcal{C}\) and \(I\). Thus
\[\|M_{\delta}f\|_{p}\lesssim\Big{(}\delta\sum_{j}|M_{\delta}f(v_{j})|^{p}\Big{)} ^{1/p}.\]
By the duality of \(\ell^{p}\) and \(\ell^{p^{\prime}}\), there exists a sequence \(\{y_{j}\}\) with \(\delta\sum_{j}y_{j}^{p^{\prime}}=1\), so that
\[\Big{(}\delta\sum_{j}|M_{\delta}f(v_{j})|^{p}\Big{)}^{1/p}=\delta\sum_{j}y_{j} |M_{\delta}f(v_{j})|,\]
and thus
\[\|M_{\delta}f\|_{p}\lesssim\delta\sum_{j}y_{j}\frac{1}{\delta}\int_{g_{j}^{ \delta}}|f|=\int\Big{(}\sum_{j}y_{j}\chi_{g_{j}^{\delta}}\Big{)}|f|,\]
where \(g_{j}\in\mathcal{F}\) is a function that comes within a factor of \(1/2\) of achieving the supremum \(M_{\delta}f(v_{j})\). We now use Hölder's inequality to bound
\[\int\Big{(}\sum_{j}y_{j}\chi_{g_{j}^{\delta}}\Big{)}|f|\leq\Big{\|}\sum_{j}y_ {j}\chi_{g_{j}^{\delta}}\Big{\|}_{p^{\prime}}\|f\|_{p}.\]
We would like to apply Theorem 1.9, but we must first deal with the weights \(\{y_{j}\}\). Since we do not care about factors of \(\log(1/\delta)\), this can be handled using dyadic pigeonholing. We divide \(\Big{\|}\sum_{j}y_{j}\chi_{g_{j}^{\delta}}\Big{\|}_{p^{\prime}}\) into \(\log(1/\delta)\) pieces based on the dyadic value of \(y_{j}\) (there are only \(O(\log(1/\delta))\) dyadic ranges for \(y_{j}\), since each \(y_{j}\) has size at most \(1\), and values smaller than \(\delta^{100m}\) can be ignored, since the total contribution from such weights is at most \(O(\delta^{100})\)), and apply Theorem 1.9 with \(p^{\prime}=\frac{m}{m-1}\) to each piece. Summing the resulting contributions, we obtain the estimate \(\Big{\|}\sum_{j}y_{j}\chi_{g_{j}^{\delta}}\Big{\|}_{p^{\prime}}\leq\delta^{-\varepsilon}\), provided \(\delta>0\) is selected sufficiently small.
_Remark 6.1_.: The conclusion (1.8) of Theorem 1.4 holds for all \(\delta>0\) sufficiently small. More precisely, the conclusion holds for all \(\delta\in(0,\delta_{0}]\), where \(\delta_{0}\) depends on the following quantities:
* \(m\) and \(s\) (so far, we have only considered the case \(m=s+1\)).
* \(\varepsilon\).
* The infimum of \(|\det DF_{t}(u)|\) from Definition 1.1, for \((u,t)\in\mathcal{C}_{0}\times I_{0}\); this quantifies the property that \(h\) parameterizes an \(m\)-dimensional family of cinematic curves.
* The infimum of \(|\det D\Phi|_{V_{u,t}}|\) from Definition 1.2, for \((u,t)\in\mathcal{C}_{0}\times I_{0}\); this quantifies the property that \(\Phi\) is transverse to \(h\).
* \(\sup|\nabla\Phi|\); in order for \(F\) to be a \((\delta,\beta;\delta^{-\eta})\)-set, we need this supremum to be at most \(\delta^{-\eta}\).
* The \(C^{N}\)-norm of \(h\), where \(N=N(\varepsilon)\) is a large integer depending on \(\varepsilon\). More precisely, we can cover \(\mathcal{C}_{0}\subset\mathcal{C}\) by a finite (independent of \(\delta\)) set of coordinate charts, and our choice of \(\delta_{0}\) will depend on the maximum of the \(C^{N}\)-norm of \(h\) in these coordinate charts.
Next, we consider the case \(s<m-1\). The reduction from \(s<m-1\) to \(s=m-1\) is a "slicing" argument. Again, since \(\mathcal{C}_{0}\) and \(I_{0}\) are compact and \(h,\Phi\) are smooth, it suffices to consider the case where \(\mathcal{C}=N(u_{0},r)\) is a small neighborhood of a point \(u_{0}\), and \(I\) is a short interval. In particular, we can suppose that there is a unit vector \(e\in\mathbb{R}^{m-s}\) so that for each \(u\in\mathcal{C}\) and each \(t\in I\), if we consider the manifold \(V_{u;t}\) given by (1.5), then \(\Phi(V_{u;t})\) is a codimension-1 manifold (i.e. dimension \((m-s-1)\)) in \(\mathbb{R}^{m-s}\), and at each point \(p\in\Phi(V_{u;t})\), the tangent plane \(T_{p}\Phi(V_{u;t})\) has normal vector that makes angle \(\leq 1/100\) with \(e\).
After a harmless rotation, we may suppose that \(e=e_{1}\) is the first standard basis vector. After further restricting \(\mathcal{C}\) and translating, we may suppose that \(\Phi(\mathcal{C})\) is the cube \(Q=[0,r]^{m-s}\) for some small \(r>0\). Writing \(v=(\underline{v},v_{m-s})\in\mathbb{R}^{m-s-1}\times\mathbb{R}\) and \(Q=\underline{Q}\times[0,r]\), we have
\[\begin{split}\left\|M_{\delta}f\right\|_{L^{s+1}(Q)}&=\Big{(}\int_{\underline{Q}}\int_{0}^{r}(M_{\delta}f(v))^{s+1}\,dv_{m-s}\,d\underline{v}\Big{)}^{\frac{1}{s+1}}\leq\Big{(}\sup_{\underline{v}\in\underline{Q}}\int_{0}^{r}(M_{\delta}f(\underline{v},v_{m-s}))^{s+1}\,dv_{m-s}\Big{)}^{\frac{1}{s+1}}\\ &=\sup_{\underline{v}\in\underline{Q}}\left\|M_{\delta}f(\underline{v},\cdot)\right\|_{L^{s+1}([0,r])}=\sup_{\underline{v}\in\underline{Q}}\left\|M_{\delta}^{\underline{v}}f\right\|_{L^{s+1}([0,r])},\end{split} \tag{6.1}\]
where
\[M_{\delta}^{\underline{v}}f(v_{m-s})=\frac{1}{\delta}\sup_{u\in\Phi^{-1}( \underline{v},v_{m-s})}\int_{\gamma_{u}}f.\]
The purpose of the above computation is that \(M_{\delta}^{\underline{v}}\) is a maximal operator in the sense of Definition 1.3, with \(s+1\) in place of \(m\). Thus we can apply Theorem 1.4 with \(s+1\) in place of \(m\) to conclude (see Remark 6.1) that there exists a choice of \(\delta_{0}>0\) (which is uniform in our choice of \(\underline{v}\)) so that \(\|M_{\delta}^{\underline{v}}\|_{L^{s+1}\to L^{s+1}}\leq\delta^{-\varepsilon}\) for all \(\underline{v}\) and all \(\delta\in(0,\delta_{0}]\). This means that for \(\delta\in(0,\delta_{0}]\), we have
\[\sup_{\underline{v}\in\underline{Q}}\left\|M_{\delta}^{\underline{v}}f\right\| _{L^{s+1}([0,r])}\leq\delta^{-\varepsilon}\|f\|_{s+1}. \tag{6.2}\]
Combining (6.1) and (6.2), we obtain (1.8).
## 7 From Theorem 1.4 to Theorem 1.7
In this section we will prove Theorem 1.7. The main new input is a local smoothing estimate by Chen, Guo, and Yang [8]. As noted in the introduction, Chen, Guo, and Yang prove sharp \(L^{p}\to L^{p}\) bounds for the axis-parallel elliptic maximal function by combining their local smoothing theorem with an estimate similar to (1.8). We will follow a similar strategy. We begin by recalling the setup from [8].
### Local smoothing: The Chen-Guo-Yang framework
Let \(s\geq 2\), \(w=(w_{1},\ldots,w_{s})\). Let \(\zeta(w;t)\colon\mathbb{R}^{s}\times\mathbb{R}\to\mathbb{R}\) be smooth, and let \(\phi(w,t)\) be a smooth bump function supported near the origin. Define
\[A_{\zeta,\phi}f(x,y;w)=\int_{\mathbb{R}}f\big{(}x-t,y-\zeta(w;t)\big{)}\phi(w,t)\,dt, \tag{7.1}\]
and define
\[G_{\zeta,\phi}f(x,y)=\sup_{w\in\mathbb{R}^{s}}|A_{\zeta,\phi}f(x,y;w)|. \tag{7.2}\]
Next, we define an analogue of Sogge's cinematic curvature condition from [33] in this setting. Let
\[T^{\zeta}(w;t)=\big{(}\partial_{t}\zeta(w;t),\ \partial_{t}^{2}\zeta(w;t), \ldots,\partial_{t}^{s+1}\zeta(w;t)\big{)}^{T}.\]
**Definition 7.1**.: _We say that \(G_{\zeta,\phi}\) satisfies the \(s\) parameter curvature condition at the origin if_
\[\det\big{[}\partial_{t}T^{\zeta},\ \partial_{w_{1}}T^{\zeta},\ldots,\partial_{w_{s }}T^{\zeta}\big{]}\Big{|}_{(w,t)=(0,0)}\neq 0. \tag{7.3}\]
By continuity, if (7.3) is satisfied, then the determinant continues to be nonzero for \((w,t)\) in a small neighborhood of the origin. The bump function \(\phi\) will be selected so that this determinant will be uniformly bounded away from \(0\) on the support of \(\phi\).
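As a simple illustration of condition (7.3) (our example, not taken from [8], and with \(s=1\) for readability): for circles of radius close to \(1\), take \(\zeta(w;t)=\sqrt{(1+w)^{2}-t^{2}}\); then at \((w,t)=(0,0)\),

\[\big{[}\partial_{t}T^{\zeta},\ \partial_{w}T^{\zeta}\big{]}\Big{|}_{(0,0)}=\begin{pmatrix}\partial_{t}^{2}\zeta&\partial_{w}\partial_{t}\zeta\\ \partial_{t}^{3}\zeta&\partial_{w}\partial_{t}^{2}\zeta\end{pmatrix}\Big{|}_{(0,0)}=\begin{pmatrix}-1&0\\ 0&1\end{pmatrix},\]

whose determinant is \(-1\), so the \(1\)-parameter curvature condition holds at the origin for this family.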
Now we can state Proposition 3.2 from [8]. In what follows, \(P_{k}f\) denotes the Littlewood-Paley projection to the frequency annulus of magnitude \(\sim 2^{k}\).
**Proposition 7.2** (Chen, Guo, Yang).: _Let \(\zeta(w,t)\colon\mathbb{R}^{s}\times\mathbb{R}\to\mathbb{R}\) satisfy the \(s\) parameter curvature condition at the origin. Then there exists \(p_{s}=p(s)<\infty\) so that for all \(\varepsilon>0\) and all smooth bump functions \(\phi(w,t)\) whose support is contained in a sufficiently small neighborhood of the origin, there is a constant \(C=C(\varepsilon,\zeta,\phi)\) so that_
\[\|A_{\zeta,\phi}(P_{k}f)\|_{L^{p}(\mathbb{R}^{2}\times\mathbb{R}^{s})}\leq C2^{-(\frac{s+1}{p}-\varepsilon)k}\|f\|_{L^{p}(\mathbb{R}^{2})}. \tag{7.4}\]
_Remark 7.3_.: In [8], Proposition 3.2 is stated with the additional hypothesis that \(\zeta(w,t)\) is a "normal form" at the origin (this is defined in Definition 3.1 from [8]). However, the argument immediately following Proposition 4.2 shows how an arbitrary \(\zeta(w,t)\) can be reduced to the case where \(\zeta\) is a normal form. We also remark that the analogue of (7.4) from [8] has the expression \(A_{\zeta,\phi}f\) rather than \(A_{\zeta,\phi}(P_{k}f)\), but the latter is what is intended.
Note that
\[\|G_{\zeta,\phi}(P_{k}f)(x,y)\|_{L^{p}_{xy}}=\|A_{\zeta,\phi}(P_{k}f)(x,y;w)\|_{L^{p}_{xy}(L^{\infty}_{w})},\]
where \(L^{p}_{xy}\) denotes \(L^{p}(\mathbb{R}^{2})\) in the variables \((x,y)\) and \(L^{\infty}_{w}\) denotes \(L^{\infty}(\mathbb{R}^{s})\) in the variable \(w\). Thus by Sobolev embedding, (7.4) implies
\[\|G_{\zeta,\phi}(P_{k}f)\|_{L^{p}(\mathbb{R}^{2})}\leq C2^{-(\frac{1}{p}-\varepsilon)k}\|f\|_{L^{p}(\mathbb{R}^{2})}, \tag{7.5}\]
with \(p=p(s)\) as above, and for a (possibly different) constant \(C=C(\varepsilon,\zeta,\phi)\). I.e. the sublinear operator \(G_{\zeta,\phi}\) has high frequency decay, in the sense of Definition 1.5.
### From local smoothing to maximal averages over curves
Our next task is to relate the maximal operator \(G_{\zeta,\phi}f\) from (7.2) to the operator \(M\) from Definition 1.3. By compactness, it suffices to consider the case where \(\mathcal{C}\) is a small neighborhood of a point \(u_{0}\), and \(I\) is a small interval. Since we restrict to the case where \(M\) is translation invariant, we may choose local coordinates of the form \(u=(x,y,w_{1},\ldots,w_{s})\) so that the parameterization and projection functions \(h\colon\mathcal{C}\times I\to\mathbb{R}\) and \(\Phi\colon\mathcal{C}\to\mathbb{R}^{2}\) can be expressed in the form \(h(u;t)=\zeta(w_{1},\ldots,w_{s};t-x)+y\) and \(\Phi(u)=(x,y)\); we can choose these coordinates so that \(u_{0}=0\) and \(I\) is an interval centered at \(0\). Let \(G=G_{\zeta,\phi}\), where \(\phi\) is a bump function chosen so that Proposition (7.2) holds. We will further restrict \(\mathcal{C}\) and \(I\) so that \(\phi\) is identically \(1\) on \(\mathcal{C}\times I\). With these restrictions, we have
\[Mf(x,y)=\sup_{w\colon(x,y,w)\in\mathcal{C}_{0}}\int_{\gamma_{w}}f \leq \sup_{w\in\mathbb{R}^{s}}A_{\zeta,\phi}f(x,y;w)\leq G_{\zeta,\phi} f(x,y), \tag{7.6}\]
for every non-negative function \(f\colon\mathbb{R}^{2}\to\mathbb{R}\).
Let us suppose for the moment that \(\zeta(w,t)\colon\mathbb{R}^{s}\times\mathbb{R}\to\mathbb{R}\) satisfies the \(s\) parameter curvature condition at the origin. Theorem 1.4 says that for each \(\varepsilon>0\), there exists a constant \(C_{\varepsilon}\) so that
\[\|G_{\zeta,\phi}P_{k}f\|_{L^{s+1}(\mathbb{R}^{2})}\leq C_{\varepsilon}2^{ \varepsilon k}\|f\|_{L^{s+1}(\mathbb{R}^{2})}. \tag{7.7}\]
Indeed, the quantity \(G_{\zeta,\phi}(P_{k}f)(x,y)\) is comparable to \(M_{\delta}f(x,y)\) for \(\delta=2^{-k}\), where \(M\) is the maximal operator from (1.6) associated to \(h\). The conclusion of Theorem 1.4 holds for all \(\delta>0\) sufficiently small (depending on \(\varepsilon,\mathcal{C},h\), and \(\Phi\)), but this may be extended to all \(\delta>0\) by selecting a sufficiently large constant \(C_{\varepsilon}\).
Let \(p>s+1\). If we select \(\varepsilon>0\) sufficiently small depending on \(p\) and the Lebesgue exponent \(p(s)\) from (7.5), then by interpolating (7.5) and (7.7), we conclude that there exist constants \(\eta>0\) (small) and \(C\) (large) so that
\[\|G_{\zeta,\phi}P_{k}f\|_{L^{p}(\mathbb{R}^{2})}\leq C2^{-\eta k}\|f\|_{L^{p}( \mathbb{R}^{2})},\]
and hence there is a constant \(C_{p}\) so that
\[\|G_{\zeta,\phi}f\|_{L^{p}(\mathbb{R}^{2})}\leq C_{p}\|f\|_{L^{p}(\mathbb{R}^{2})}. \tag{7.8}\]
Since it suffices to prove Theorem 1.7 for non-negative functions, the theorem now follows from (7.6).
It remains to verify that \(\zeta(w,t)\colon\mathbb{R}^{s}\times\mathbb{R}\to\mathbb{R}\) satisfies the \(s\) parameter curvature condition at the origin. By hypothesis, \(h\) parameterizes an \((s+2)\)-dimensional family of cinematic curves, in the sense of Definition 1.1. To slightly simplify notation below, we will write coordinates \(u=(y,x,w_{1},\ldots,w_{s})\) rather than \((x,y,w_{1},\ldots,w_{s})\). We have
\[DF_{0}^{h}(0)=\left(\begin{array}{ccccc}1&\partial_{t}h&\partial_{w_{1}}h& \cdots&\partial_{w_{s}}h\\ 0&\partial_{t}\partial_{t}h&\partial_{t}\partial_{w_{1}}h&\cdots&\partial_{t }\partial_{w_{s}}h\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&\partial_{t}^{s+1}\partial_{t}h&\partial_{t}^{s+1}\partial_{w_{1}}h&\cdots &\partial_{t}^{s+1}\partial_{w_{s}}h\end{array}\right) \tag{7.9}\]
But the bottom-right \((s+1)\times(s+1)\) minor of the above matrix is precisely the matrix \(\big{[}\partial_{t}T^{\zeta},\ \partial_{w_{1}}T^{\zeta},\ldots,\partial_{w_{s}}T^{\zeta}\big{]}\) from (7.3), and hence these matrices have the same determinant. Since \(h\) parameterizes an \((s+2)\)-dimensional family of cinematic curves, this determinant is non-vanishing at \((u;t)=(0;0)\). We conclude that \(\zeta(w,t)\colon\mathbb{R}^{s}\times\mathbb{R}\to\mathbb{R}\) satisfies the \(s\) parameter curvature condition at the origin.
## 8 From Theorem 1.9\({}^{\prime}\) to Theorems 1.10 and 1.12
In this section we will briefly discuss the reduction from Theorem 1.9\({}^{\prime}\) to Theorems 1.10 and 1.12. Reductions of this type are already present in the literature, so we will just provide a brief sketch and refer the reader to the appropriate sources for further details.
### Restricted Projections
The connection between exceptional set estimates for restricted families of projections and maximal function estimates for curves was first explored by Käenmäki, Orponen, and Venieri in [23]. We will follow the framework from Section 2 of [31]. We will only briefly sketch the numerology of the problem, and refer the reader to [31] for details.
Let \(\gamma\colon[0,1]\to\mathbb{R}^{n}\) and \(E\subset\mathbb{R}^{n}\) be as in the statement of Theorem 1.10; after re-scaling and replacing \(E\) by a subset, we may suppose that \(E\subset[-1,1]^{n}\) and \(\dim E\leq 1\). Suppose for contradiction that there exists some \(0\leq q<\dim E\) so that the set
\[S=\{t\in[0,1]\colon\dim\left(E\cdot\gamma(t)\right)<q\}\]
satisfies \(\dim S>q\). After possibly replacing \(S\) and \(E\) by subsets, we may suppose that \(q<\dim S=\dim E\). Let \(\alpha=\dim S\). Let \(\mathcal{F}=\{t\mapsto z\cdot\gamma(t)\colon z\in[-1,1]^{n}\}\). Since \(\gamma\) is smooth, the set \(\mathcal{F}\) is uniformly smooth, and the nondegeneracy condition (1.14) implies that \(\mathcal{F}\) forbids \((n-1)\)-st order tangency. Define \(\mathcal{F}_{E}=\{t\mapsto z\cdot\gamma(t)\colon z\in E\}\).
Let \(\eta,\delta_{0}>0\). In Section 2 of [31], the authors explain how to extract a \((\delta,\alpha;\delta^{-\eta})\)-set \(F\subset\mathcal{F}_{E}\), for some \(\delta\in(0,\delta_{0}]\), and how to construct a shading \(Y(f)\subset f^{\delta}\) for each \(f\in F\), where each set \(Y(f)\) is a \((\delta,\alpha;\delta^{-\eta})\)-set (in the metric space \(\mathbb{R}^{2}\)), with the property that the union \(\bigcup_{f\in F}Y(f)\) is contained in a \((\delta,\alpha;\delta^{-\eta})\times(\delta,q;\delta^{-\eta})\) quasi-product, i.e. a set \(X\subset\mathbb{R}^{2}\) whose projection to the \(x\)-axis is a \((\delta,\alpha;\delta^{-\eta})\)-set, and every fiber above this projection is a \((\delta,q;\delta^{-\eta})\)-set. In particular, such a quasi-product has measure at most \(\delta^{2-\alpha-q-2\eta}\). Since \(\sum_{f\in F}|Y(f)|\gtrsim\delta^{2-2\alpha+2\eta}\), by Hölder's inequality we have
\[\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{n}{n-1}}^{\frac{n}{n-1}}\geq\delta^{2-2\alpha+\frac{q-\alpha}{n-1}+O(\eta)}. \tag{8.1}\]
On the other hand, by Theorem 1.9\({}^{\prime}\) with \(k=n-1\), for each \(\varepsilon>0\) we have
\[\Big{\|}\sum_{f\in F}\chi_{Y(f)}\Big{\|}_{\frac{n}{n-1}}^{\frac{n}{n-1}}\leq\delta^{2-2\alpha-\varepsilon}, \tag{8.2}\]
provided \(\delta\) is sufficiently small. Since \(q<\alpha\), we obtain a contradiction provided \(\varepsilon,\eta\), and \(\delta_{0}\) are chosen sufficiently small. We refer the reader to Section 2 of [31] for details.
### Furstenberg sets of curves
In this section we will briefly discuss the proof of Theorem 1.12. In [21], Héra, Shmerkin, and Yavicoli obtained new bounds for the dimension of \((\alpha,2\alpha)\) Furstenberg sets. They did this by first introducing the notion of a discretized \((\alpha,\beta)\) Furstenberg set, and then showing that covering number bounds on the size of such discretized Furstenberg sets imply Hausdorff dimension bounds for the corresponding \((\alpha,\beta)\) Furstenberg sets. An identical strategy will work here. The corresponding notion of a discretized Furstenberg set of curves is as follows.
**Definition 8.1**.: _Let \(\mathcal{F}\subset C^{k}([0,1])\). For \(\alpha,\beta,\delta,C>0\), we say a set \(E\subset[0,1]^{2}\) is a discretized \((\delta,\alpha,\beta)\) Furstenberg set of curves (with error \(C\)) from the family \(\mathcal{F}\), if \(E=\bigcup_{f\in F}A_{f}\), where_
* _The set_ \(F\subset\mathcal{F}\) _is a_ \((\delta,\beta;C)\)_-set (in the metric space_ \(C^{k}([0,1])\)_), with_ \(\#F\geq C^{-1}\delta^{-\beta}\)_._
* _For each_ \(f\in F\)_, the set_ \(A_{f}\) _is a_ \((\delta,\alpha;C)\)_-set (in the metric space_ \(\mathbb{R}^{2}\)_), with_ \(|A_{f}|\geq C^{-1}\delta^{2-\alpha}\)_, which is contained in_ \(f^{2\delta}\)_._
Definition 8.1 is modeled off of Definition 3.2 from [21]. The definitions are very similar, with the following two differences: in [21], the authors consider lines in \(\mathbb{R}^{n}\) rather than a family \(\mathcal{F}\) of curves, and the authors use the notation "\(\lessapprox\)" to suppress the role of the constant \(C\).
In Lemma 3.3 from [21], the authors prove the following: Let \(\alpha,\beta,s\geq 0\). Suppose that for every \(\varepsilon>0\), there exists \(\eta>0\) so that every \((\delta,\alpha,\beta)\) Furstenberg set of lines (with error \(\delta^{-\eta}\)) has measure at least \(\delta^{2-s+\varepsilon}\). Then every \((\alpha,\beta)\) Furstenberg set has Hausdorff dimension at least \(s\).
An identical proof yields the analogous result for Furstenberg sets of curves: Fix a family \(\mathcal{F}\subset C^{k}([0,1])\), and fix \(\alpha,\beta,s\geq 0\). Suppose that for every \(\varepsilon>0\), there exists \(\eta>0\) so that every \((\delta,\alpha,\beta)\) Furstenberg set of curves (with error \(\delta^{-\eta}\)) from the family \(\mathcal{F}\) has measure at least \(\delta^{2-s+\varepsilon}\). Then every \((\alpha,\beta)\) Furstenberg set of curves from \(\mathcal{F}\) has Hausdorff dimension at least \(s\). Thus in order to prove Theorem 1.12, it suffices to obtain the corresponding bound on the volume of discretized \((\delta,\alpha,\beta)\) Furstenberg sets of curves from \(\mathcal{F}\).
To this end, fix \(k\geq 1\) and \(0\leq\beta\leq\alpha\leq 1\), and fix a family \(\mathcal{F}\) of uniformly smooth curves that forbid \(k\)-th order tangency. Fix \(\varepsilon>0\), and let \(\eta>0\) be a small quantity to be specified below. Let \(E\subset[0,1]^{2}\) be a discretized \((\delta,\alpha,\beta)\) Furstenberg set of curves (with error \(\delta^{-\eta}\)) from the family \(\mathcal{F}\), and let \(F\subset\mathcal{F}\) and \(\{Y(f)\colon f\in F\}\) (with \(Y(f)=A_{f}\)) be as in Definition 8.1. Then if \(\eta\) is sufficiently small, we can use Theorem 1.9\({}^{\prime}\) and Hölder's inequality to compute
\[\delta^{2-\alpha-\beta}=\left\|\chi_{E}\sum_{f\in F}\chi_{Y(f)}\right\|_{1}\leq\left\|\chi_{E}\right\|_{k+1}\left\|\sum_{f\in F}\chi_{Y(f)}\right\|_{\frac{k+1}{k}}\leq|E|^{\frac{1}{k+1}}\delta^{-\frac{\varepsilon}{k+1}}(\delta^{2-\alpha-\beta})^{\frac{k}{k+1}}.\]
Re-arranging, we conclude that \(|E|\geq\delta^{2-\alpha-\beta+\varepsilon}\), as desired.
## Appendix A Examples
In this section, we will show that the maximal functions discussed in the introduction can be expressed in the framework described in Section 1.1. The Kakeya maximal function is straightforward: select \(\mathcal{C}_{0}=[0,1]^{2}\), \(I_{0}=[0,1]\), \(\mathcal{C}\) a neighborhood of \(\mathcal{C}_{0}\), and \(I\) a neighborhood of \(I_{0}\). Let \(h(m,b;t)=mt+b\), and let \(\Phi(m,b)=m\). Then \(F_{t}(m,b)=\left(mt+b,m\right)\), and \(DF_{t}=\begin{pmatrix}t&1\\ 1&0\end{pmatrix}\), which is invertible. Since \(s=m-1\), \(\Phi\) is automatically transverse to \(h\).
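For concreteness, the invertibility claim amounts to the computation

\[\det DF_{t}=\det\begin{pmatrix}t&1\\ 1&0\end{pmatrix}=-1\neq 0\qquad\text{for every }t.\]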
For the Wolff and Bourgain circular maximal functions, we can use translation and rotation symmetry to reduce to the case where \(r\) takes values in a neighborhood of \(1\) and the centers \((x,y)\) take values in a neighborhood of \((0,0)\). Finally, we may restrict the integral (1.2) (resp. (1.3)) to the upper arc of \(C(x,y,r)\) above the interval \([-\rho,\rho]\), for \(\rho>0\) a small (fixed) quantity. With these reductions, define \(\mathcal{C}\) to be a neighborhood of \((0,0,1)\); \(I\) a neighborhood of \(0\); and \(h(x,y,r;t)=y+\sqrt{r^{2}-(t-x)^{2}}\). Then it suffices to verify that \(DF_{0}^{h}\) has full rank at \((x,y,r)=(0,0,1)\); this is a straightforward computation.
For the Wolff circular maximal function, define \(\Phi(x,y,r)=r\). We have \(m=s+1\), and hence \(\Phi\) is automatically transverse to \(h\). For the Bourgain circular maximal function we have \(\Phi(x,y,r)=(x,y)\), and thus we must verify that \(D\Phi\) restricted to the manifold
\[V_{(0,0,1;0)}=\{(x^{\prime},y^{\prime},r^{\prime})\in\mathcal{C}\colon h(x^{\prime},y^{\prime},r^{\prime};0)=1,\ \partial_{t}h(x^{\prime},y^{\prime},r^{\prime};t)|_{t=0}=0\}\] \[=\{(x^{\prime},y^{\prime},r^{\prime})\in\mathcal{C}\colon y^{\prime}=0,\ r^{\prime}=1-x^{\prime}\}\]
has rank \(1\) at \((0,0,1)\). But this is evidently the case, since we can write this manifold as \(\{(t,0,1-t)\}\) for \(t\) in a neighborhood of \(0\).
Finally, we discuss the Erdogan elliptic maximal function. Given an ellipse with semi-major axis \(a\), semi-minor axis \(b\), center \((x,y)\), and rotation angle \(\theta\), define
\[\begin{array}{ll}A=a^{2}\sin^{2}\theta+b^{2}\cos^{2}\theta&B=2(b^{2}-a^{2}) \sin\theta\cos\theta&C=a^{2}\cos^{2}\theta+b^{2}\sin^{2}\theta\\ D=-2Ax-By&E=-Bx-2Cy&F=Ax^{2}+Bxy+Cy^{2}-a^{2}b^{2}.\end{array}\] (A.1)
Then the corresponding ellipse is the locus of points \((X,Y)\) satisfying
\[AX^{2}+BXY+CY^{2}+DX+EY+F=0.\]
In light of the above, define
\[h(a,b,x,y,\theta,t)=\frac{-(Bt+E)+\sqrt{(Bt+E)^{2}-4C(At^{2}+Dt+F)}}{2C}.\]
Again, after translation, rotation, and anisotropic re-scaling, it suffices to consider the case where \(\mathcal{C}\) is a neighborhood of \((1,1,0,0,0)\), i.e. the semi-major and semi-minor axes have lengths close to \(1\), the center is close to \((0,0)\), and the rotation is close to \(0\). With \(A,\ldots,F\) as given by (A.1), \(h\) is a function from \(\mathcal{C}\times I\) to \(\mathbb{R}\). The graph of \(t\mapsto h(a,b,x,y,\theta;t)\) is the upper half of the ellipse with semi-major axis \(a\), semi-minor axis \(b\), center \((x,y)\), and rotation \(\theta\). A direct computation shows that \(DF_{0}^{h}(1,1,0,0,0)\) has non-zero determinant.
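As a quick sanity check on (A.1) (our computation): at the base point \((a,b,x,y,\theta)=(1,1,0,0,0)\) one has \(A=C=1\), \(B=D=E=0\), and \(F=-1\), so the formula above reduces to

\[h(1,1,0,0,0;t)=\frac{\sqrt{-4(t^{2}-1)}}{2}=\sqrt{1-t^{2}},\]

the upper half of the unit circle, as expected.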
Next, we have \(\Phi(a,b,x,y,\theta)=(x,y)\). We must verify that \(D\Phi\) restricted to the manifold
\[V_{(1,1,0,0,0;0)} =\{(a^{\prime},b^{\prime},x^{\prime},y^{\prime},\theta^{\prime}) \in\mathcal{C}\colon h(a^{\prime},b^{\prime},x^{\prime},y^{\prime},\theta^{ \prime};t)=1/4,\ \partial_{t}h(a^{\prime},b^{\prime},x^{\prime},y^{\prime},\theta^{ \prime};t)|_{t=0}=0,\] \[\partial_{t}^{2}h(a^{\prime},b^{\prime},x^{\prime},y^{\prime}, \theta^{\prime};t)|_{t=0}=-\sqrt{2},\ \partial_{t}^{3}h(a^{\prime},b^{\prime},x^{\prime},y^{\prime},\theta^{ \prime};t)|_{t=0}=0\}\]
has rank \(1\) at \((1,1,0,0,0).\) But in a neighborhood of \((1,1,0,0,0)\), this manifold can be written as \((1+a_{1}(t),1+a_{2}(t),0,b(t),a_{3}(t))\), where \(a_{1},a_{2},a_{3}\) are smooth and satisfy \(a_{i}(0)=0\), and \(\partial_{t}b(t)|_{t=0}\sim 1\). Since \(\Phi(a,b,x,y,\theta)=(x,y)\), we conclude that \(D\Phi\) restricted to \(V_{(1,1,0,0,0;0)}\) has rank \(1\), as desired.
### The range of \(p\) in Theorem 1.7 is sharp
In this section we will give an example showing that the range of \(p\) in Theorem 1.7 is sharp. Define
\[h(x,y,w_{1},\ldots,w_{s};t)=y+w_{1}(t-x)^{2}+w_{2}(t-x)^{3}+w_{3}(t-x)^{4}+\ldots+w_{s}(t-x)^{s+1}.\]
It is straightforward to show that every polynomial (in \(t\)) of degree \(\leq s+1\) can be uniquely expressed as \(h(x,y,w_{1},\ldots,w_{s};t)\) for an appropriate choice of \(x,y,w_{1},\ldots,w_{s}\).
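For instance, in the simplest case \(s=1\), a quadratic \(p(t)=c_{0}+c_{1}t+c_{2}t^{2}\) with \(c_{2}\neq 0\) is recovered by completing the square:

\[p(t)=y+w_{1}(t-x)^{2}\qquad\text{with}\qquad w_{1}=c_{2},\quad x=-\frac{c_{1}}{2c_{2}},\quad y=c_{0}-\frac{c_{1}^{2}}{4c_{2}}.\]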
For \(\rho>0\) small, define
\[f(x,y)=(y+\rho)^{-1/(s+1)}\chi_{[-1,1]\times[0,1]}.\]
Then \(\|f\|_{s+1}\sim(\log 1/\rho)^{1/(s+1)}\). On the other hand, for \((x,y)\) in a neighborhood of \((0,1)\), we can select \(w\) so that the curve \(t\mapsto h(x,y,w_{1},\ldots,w_{s};t)\) is tangent to the \(x\)-axis to order \(s\), and hence
\[\int_{\gamma_{u}}f\sim\log(1/\rho),\]
where \(\gamma_{u}\) is the graph of \(t\mapsto h(x,y,w_{1},\ldots,w_{s};t)\) over \([-1,1]\). Letting \(\rho\searrow 0\), we conclude that the operator \(M\) from (1.7) cannot be bounded from \(L^{p}\to L^{p}\) for \(p=s+1\). To show that no \(L^{p}\to L^{p}\) bound is possible for \(p<s+1\) is straightforward: let \(h\) be as above, and let \(f\) be the characteristic function of a \(1\times\rho\) rectangle.
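The logarithmic size of the integral above comes from the following model computation (our sketch, assuming the curve's height above the \(x\)-axis is comparable to \(|t|^{s+1}\) near the point of tangency):

\[\int_{-1}^{1}\big{(}|t|^{s+1}+\rho\big{)}^{-\frac{1}{s+1}}\,dt\sim\rho^{-\frac{1}{s+1}}\cdot\rho^{\frac{1}{s+1}}+\int_{\rho^{1/(s+1)}\leq|t|\leq 1}\frac{dt}{|t|}\sim\log(1/\rho).\]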
## Appendix B Geometric lemmas
In this section we will record several computations that explore some of the consequences of curve-rectangle tangency. Our main tool will be Taylor's theorem with remainder. In a typical argument in this section, we will approximate a function \(f\) by its \(k\)-th order Taylor polynomial, which we denote by \(f_{k}\). To show that the function \(f\) cannot be small on a large set, we will need the analogous result for \(f_{k}\). The following inequalities will be useful for this purpose.
**Theorem B.1** (Remez inequality).: _Let \(I\subset\mathbb{R}\) be a finite interval, let \(E\subset I\) be measurable, and let \(P\) be a polynomial of degree at most \(D\). Then_
\[\sup_{x\in I}|P(x)|\leq\Big{(}\frac{4|I|}{|E|}\Big{)}^{D}\sup_{x\in E}|P(x)|.\]
**Theorem B.2** (Polya inequality).: _Let \(P\) be a polynomial of degree at most \(D\), with leading coefficient \(a\in\mathbb{R}\). Then for \(\lambda>0\), we have_
\[|\{x\in\mathbb{R}\colon|P(x)|\leq\lambda\}|\leq 4\Big{(}\frac{\lambda}{2|a|} \Big{)}^{1/D}.\]
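For instance, for the monomial \(P(x)=x^{D}\) (so \(a=1\)) the sub-level set is an interval, and the bound is sharp up to constants:

\[|\{x\in\mathbb{R}\colon|x^{D}|\leq\lambda\}|=2\lambda^{1/D}\leq 4\Big{(}\frac{\lambda}{2}\Big{)}^{1/D}.\]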
The next inequality says that if \(f\) is small on a long interval, then the derivatives of \(f\) must also be small on this interval.
**Lemma B.3**.: _Let \(k\geq 1\), \(\delta>0\), and let \(f\in C^{k}\) with \(\|f\|_{C^{k}}\leq 1\). Let \(I\subset[0,1]\) be a closed interval of length at most \(\delta^{1/k}\), and suppose that \(|f(x)|\leq\delta\) for \(x\in I\)._
_Then_
\[\sup_{x\in I}|f^{(i)}(x)|\lesssim\delta|I|^{-i},\quad i=0,\dots,k.\] (B.1)
Proof.: First, we may suppose that \(\delta^{1/(k-1)}<|I|\leq\delta^{1/k}\), since otherwise (B.1) for \(i=k\) follows from the assumption \(\|f\|_{C^{k}}\leq 1\), and we may replace \(k\) by \(k-1\).
Let \(K=2\cdot 8^{k^{2}}k\). We will prove that (B.1) holds with implicit constant \(K\). Suppose not; then there exists an index \(0\leq i\leq k\) so that
\[\sup_{x\in I}|f^{(i)}(x)|>K8^{-ik}\delta|I|^{-i}.\] (B.2)
Let \(j\) be the largest index for which (B.2) holds. Since \(\sup_{x\in I}|f(x)|\leq\delta\), \(\|f\|_{C^{k}}\leq 1\), and \(|I|\leq\delta^{1/k}\), we must have \(1\leq j\leq k-1\).
Select \(x_{0}\in I\) with \(|f^{(j)}(x_{0})|=\sup|f^{(j)}|\). Define
\[Q(x)=f(x_{0})+\sum_{i=1}^{j}\frac{(x-x_{0})^{i}}{i!}f^{(i)}(x_{0}).\]
By Polya's sub-level set inequality (Theorem B.2) with \(\lambda=K8^{-(k+1)j}\delta/j!\), we have
\[|\{x\in I\colon|Q(x)|\leq\lambda\}|\leq 4\Big{(}\frac{\lambda}{2|f^{(j)}(x_{0})|/ j!}\Big{)}^{1/j}\leq 4\Big{(}\frac{K8^{-(k+1)j}\delta/j!}{2\cdot K8^{-jk} \delta|I|^{-j}/j!}\Big{)}^{1/j}\leq\frac{1}{2}|I|.\] (B.3)
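The middle expression simplifies because the factors \(K\), \(\delta\), and \(j!\) cancel:

\[4\Big{(}\frac{K8^{-(k+1)j}\delta/j!}{2\cdot K8^{-jk}\delta|I|^{-j}/j!}\Big{)}^{1/j}=4\Big{(}\frac{8^{-j}|I|^{j}}{2}\Big{)}^{1/j}=\frac{1}{2}\cdot 2^{-1/j}|I|\leq\frac{1}{2}|I|.\]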
In particular, there exists a point \(x\in I\) with \(|Q(x)|\geq K8^{-(k+1)j}\delta/j!\). On the other hand, by Taylor's theorem there is a point \(x_{1}\) between \(x_{0}\) and \(x\) so that
\[f(x)=Q(x)+\frac{(x-x_{0})^{j+1}}{(j+1)!}f^{(j+1)}(x_{1}),\]
and hence
\[\begin{split}|f(x)|&\geq|Q(x)|-\frac{|x-x_{0}|^{j+1}}{(j+ 1)!}|f^{(j+1)}(x_{1})|\\ &\geq\frac{K8^{-(k+1)j\delta}}{j!}-\frac{|I|^{j+1}}{(j+1)!}\big{(} K8^{-(j+1)k}\delta|I|^{-j-1}\big{)}\\ &\geq K8^{-jk}\delta\Big{(}\frac{1}{8^{j}j!}-\frac{1}{8^{k}(j+1)! }\big{)}\\ &\geq\frac{K}{8^{k^{2}}k}\delta\\ &>\delta.\end{split}\] (B.4)
This contradicts the assumption that \(|f(x)|\leq\delta\) on \(I\). We conclude that (B.1) holds.
The next result says that if \(f\) is tangent to a \((\delta;k-1;T)\) rectangle \(R\), then there is a corresponding value of \(\rho\geq\delta\) (which depends on \(\delta,k,\) and \(T\)) so that \(f\) is tangent to a \((\rho;k)\) rectangle associated to \(R\).
**Lemma B.4**.: _Let \(k\geq 1\), \(\delta>0\), and let \(f\in C^{k}\) with \(\|f\|_{C^{k}}\leq 1\). Let \(I=[a,a+(T\delta)^{\frac{1}{k-1}}]\subset[0,1]\), with \(T\geq 1\). Suppose that \(|f(x)|\leq\delta\) for \(x\in I\)._
_Let \(\rho=\max(\delta,T^{-k})\). Then \(|f(x)|\lesssim\rho\) for \(x\in[a,a+\rho^{1/k}]\)._
Proof.: If \(T\geq\delta^{-1/k}\) then \((T\delta)^{\frac{1}{k-1}}\geq\delta^{1/k}\), so \(\rho=\delta\) and the conclusion is immediate.
Suppose instead that \(T<\delta^{-1/k}\), so that \(\rho=T^{-k}\). Since \(\|f\|_{C^{k}}\leq 1\) and \(|I|\leq\delta^{1/k}\), we can apply Lemma B.3 to conclude that
\[\sup_{x\in I}|f^{(i)}(x)|\lesssim\delta|I|^{-i}=\delta^{1-\frac{i}{k-1}}\rho^{\frac{i}{k(k-1)}},\quad i=0,\ldots,k-1.\] (B.5)
Since \(\|f\|_{C^{k}}\leq 1\), we can apply Taylor's theorem to conclude that for each \(x\in[a,a+\rho^{1/k}]\), there is a point \(x_{1}\) between \(a\) and \(x\) so that
\[f(x)=\sum_{i=0}^{k-1}\frac{f^{(i)}(a)}{i!}(x-a)^{i}+\frac{f^{(k)}(x_{1})}{k!}(x-a)^{k},\]
and hence by (B.5) (and noting that \(\rho\geq\delta\)),
\[|f(x)|\lesssim\sum_{i=0}^{k-1}\delta^{1-\frac{i}{k-1}}\rho^{\frac{i}{k(k-1)} }\rho^{\frac{i}{k}}+\rho^{k/k}=\sum_{i=0}^{k-1}\delta(\rho/\delta)^{\frac{i}{ k-1}}+\rho\lesssim\rho.\qed\]
The next result says that if a set of functions are all tangent to a common curvilinear rectangle of dimensions \(A\delta\times\delta^{1/k}\), then a large fraction of these functions must be tangent to a common curvilinear rectangle of dimensions \(\delta\times\delta^{1/k}\). The proof is an application of pigeonholing, and is omitted.
**Lemma B.5**.: _Let \(k\geq 1,A\geq 1\). Then there exists \(\varepsilon>0\) so that the following holds. Let \(F\) be a set of functions with \(C^{k}\) norm at most 1. Suppose that there is an interval \(I\subset[0,1]\) of length \(\delta^{1/k}\) so that \(\sup_{x\in I}|f(x)-g(x)|\leq A\delta\) for all \(f,g\in F\). Then there is a set \(F^{\prime}\subset F\) of cardinality at least \(\varepsilon(\#F)\) so that \(\sup_{x\in I}|f(x)-g(x)|\leq\delta\) for all \(f,g\in F^{\prime}\)._
The next result records a useful property of families of curves that forbid \(k\)-th order tangency: if two functions from this family are both tangent to a common \((\delta;k;T)\) rectangle for \(T\geq 1\), then these functions must be close in \(C^{k}\) norm. The proof is similar to that of Lemma B.3, and is omitted.
**Lemma B.6**.: _Let \(k\geq 2\), \(\delta>0\), \(K\geq 1.\) Let \(I\) be a compact interval and let \(f\in C^{k}(I)\) with \(\|f\|_{C^{k}(I)}\leq 1\). Let \(J\subset I\) be a closed interval of length at most 1. Suppose that_
\[\sup_{x\in J}|f(x)|\leq\delta,\] (B.6)
\[\|f\|_{C^{k}(I)}\leq K\inf_{x\in I}\sum_{j=0}^{k}|f^{(j)}(x)|.\] (B.7)
_Then_
\[\|f\|_{C^{k}(I)}\lesssim(K/|J|)^{k}\delta.\] (B.8)
|
2305.11040 | Simulation of a Variational Quantum Perceptron using Grover's Algorithm | The quantum perceptron, the variational circuit, and the Grover algorithm
have been proposed as promising components for quantum machine learning. This
paper presents a new quantum perceptron that combines the quantum variational
circuit and the Grover algorithm. However, this does not guarantee that this
quantum variational perceptron with Grover's algorithm (QVPG) will have any
advantage over its quantum variational (QVP) and classical counterparts. Here,
we examine the performance of QVP and QVP-G by computing their loss function
and analyzing their accuracy on the classification task, then comparing these
two quantum models to the classical perceptron (CP). The results show that our
two quantum models are more efficient than CP, and our novel suggested model
QVP-G outperforms the QVP, demonstrating that the Grover can be applied to the
classification task and even makes the model more accurate, besides the
unstructured search problems. | Nouhaila Innan, Mohamed Bennai | 2023-05-18T15:34:14Z | http://arxiv.org/abs/2305.11040v1 | # Simulation of a Variational Quantum Perceptron using Grover's Algorithm
###### Abstract
The quantum perceptron, the variational circuit, and the Grover algorithm have been proposed as promising components for quantum machine learning. This paper presents a new quantum perceptron that combines the quantum variational circuit and the Grover algorithm. However, this does not guarantee that this quantum variational perceptron with Grover's algorithm (QVPG) will have any advantage over its quantum variational (QVP) and classical counterparts.
Here, we examine the performance of QVP and QVP-G by computing their loss function and analyzing their accuracy on the classification task, then comparing these two quantum models to the classical perceptron (CP). The results show that our two quantum models are more efficient than CP, and our novel suggested model QVP-G outperforms the QVP, demonstrating that Grover's algorithm can be applied to the classification task, beyond unstructured search problems, and even makes the model more accurate.
Quantum Machine Learning, Quantum Perceptron, Grover Algorithm, Variational Quantum Algorithm.
## I Introduction
Recently, there has been an increasing number of studies combining the disciplines of quantum information and machine learning, and a variety of theories to merge these fields have consistently been put forward: machine learning is under pressure from the growing amount of data in the world and the lack of processing power to handle it, while quantum computing offers such computational capabilities.
The combination of these two fields has led to massive interest in innovative information processing mechanisms that open up a new and improved range of solutions for various domains of application. The first concept was research on quantum models of neural networks; it was essentially biologically inspired, in the hope of finding explanations for brain function within the framework of quantum theory [1]. In 2013, this combination was given the name quantum machine learning by Lloyd et al. [2], defining an area of research that explores the combination of quantum information and ML principles.
The development of quantum machine learning algorithms has since made steady progress; several well-known classical ML algorithms already have quantum analogs, such as the quantum support vector machine (QSVM), quantum k-means clustering, the quantum Boltzmann machine (QBM), and the quantum perceptron (QP), for which several papers have surveyed methods and algorithms.
Zhou et al. [3] developed a quantum perceptron approach based on the quantum phase, capable of computing the XOR function using only one neuron. Siomau et al. [4] then introduced an autonomous quantum perceptron based on calculating a set of positive operator-valued measures (POVMs), after which Sagheer and Zidane [5] proposed a quantum perceptron based on Siomau's method, capable of constructing its own set of activation operators to be applied widely in both quantum and classical applications and to overcome the linearity limitation of the classical perceptron.
In 2018, a multidimensional input quantum perceptron (MDIQP) was proposed by Yamamoto et al. [6]; their model had an arbitrary number of inputs with different synaptic weights, being able to form large quantum artificial neural networks (QANNs). And after that, Torrontegui and Ripoll suggested a unitary quantum perceptron as an efficient universal approximator using the sigmoid function with the possibility to apply it to different applications
like quantum sensing [7].
Wiebe et al. [8] introduced two quantum perceptron models based on Grover's search algorithm to minimize the error of QP, and based on that, we got inspired to think about Grover's algorithm as a way to develop our model. However, recently several studies showed that variational algorithms are so suitable for quantum machine learning models, especially the quantum perceptron [9; 10], so in this work, we would like to provide a different way to implement this model by increasing its accuracy using the variational circuit and Grover's algorithm. The rest of this paper is organized as follows: in section II, we review some basic knowledge of Grover's algorithm for unstructured search, and we describe the principal features of the classical perceptron by showing the mechanics of the algorithm; in Section III, we study the concept of a quantum variational perceptron by describing the state preparation, the model representation for the associated quantum circuit, and the measurement component. In Section IV, we describe the quantum variational perceptron model with Grover's algorithm, and in Section V, we examine how well our model performs. Finally, in the last section VI, this paper concludes with final remarks and future work.
## II Background
### Grover's Algorithm
Grover's algorithm is a quantum algorithm for searching an unsorted database with \(N\) items in a short amount of time. Classically, this would take \(O(N)\) time, since we would have to search through all the entries to find the right one; Grover's algorithm needs only \(O(\sqrt{N})\) queries [11]. Even though it is simply a quadratic speed-up, it is substantial when \(N\) is large. Unlike many previous quantum algorithms, which address a "black box" problem, Grover's search algorithm solves a searching problem in which the objective is to reach a particular state with a measurement among many possible states [12]. In its simplest form, given a function (the output of the database), the algorithm allows us to find the input (an index into the database) that produces it, as shown in Figure 1.
The steps of the Grover algorithm are as follows; first, we have to define input states. Without loss of generality, we will assume that the input states are integers between \(0\) and \(2^{n}-1\), where \(n\) is an integer. The \(2^{n}\) integer states will be encoded using the states \(|0\rangle\) and \(|1\rangle\) of \(n\) qubits.
Second, for our case, we must define an Oracle function \(f(x)\), which is a function that returns zero for all possible input states except one input state, and it should be encoded in an operator \(O\) that acts as \(O|x\rangle=(-1)^{f(x)}\,|x\rangle\)[13], which means that the Oracle negates the probability amplitude of the input state \(|x\rangle\) if and only if \(f(x)=1\), and for better understanding, algorithm 1 explains these steps:
Figure 1: An Overview of Grover’s Algorithm Steps
```
\(\bullet\) Step 1: Initialization of the qubits in the \(|0\rangle\) state and creation of a uniform superposition of all basis inputs.
\(\bullet\) Step 2: Execution of the Oracle.
\(\bullet\) Step 3: Application of Grover's diffusion operator (inversion about the mean).
\(\bullet\) Step 4: Repetition of steps 2 and 3.
\(\bullet\) Step 5: Final measurement.
```
**Algorithm 1** Grover's Algorithm
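To make these steps concrete, the following is a minimal statevector simulation of Grover's algorithm in Python using only NumPy. It is an illustrative sketch, not the circuit used later in the paper; the function name, the number of qubits, and the marked item are our own choices for the example.

```python
import numpy as np

def grover_search(n_qubits, marked, n_iters=None):
    """Statevector simulation of Grover's algorithm for a single marked item."""
    N = 2 ** n_qubits
    if n_iters is None:
        # close-to-optimal number of iterations, about (pi/4) * sqrt(N)
        n_iters = int(np.floor(np.pi / 4 * np.sqrt(N)))

    # Step 1: uniform superposition over all basis states
    state = np.full(N, 1.0 / np.sqrt(N))
    for _ in range(n_iters):
        # Step 2: the oracle flips the sign of the marked state's amplitude
        state[marked] *= -1.0
        # Step 3: diffusion operator, i.e. inversion about the mean amplitude
        state = 2.0 * state.mean() - state
    # Step 5: measurement probabilities
    return np.abs(state) ** 2

probs = grover_search(n_qubits=3, marked=5)
print("probability of measuring the marked state:", probs[5])
```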
### Classical Perceptron
Perceptron is an artificial neuron, and so it is a neural network unit. It performs calculations to detect features or patterns in the input data. It is an algorithm for supervised learning of binary classifiers. This algorithm allows artificial neurons to learn and process features in a data set [14]. Its modeling function is given by:
\[f(x;w)=\phi(\langle w,x\rangle) \tag{1}\]
Where \(x\) is the input, \(w\) the weight vector, and the output \(y=f(x;w)\) is a real or binary number. Sometimes the mathematical structure makes it convenient to choose \(\{-1,1\}\) rather than \(\{0,1\}\), and \(\phi\) is the activation function referred to as the sign function or Heaviside function [15].
The perceptron plays an essential role in machine learning projects. It is frequently used to classify data or simplify or supervise binary classifier learning capabilities. Recall that supervised learning consists of teaching an algorithm to make predictions, which can be achieved by feeding the algorithm with already correctly labeled data.
And to better understand how it works, figure 2 shows a perceptron in the typical neural network graph representation, where the inputs and outputs are considered units with certain values, updated by the units that influence them, and the connections between them are associated with a weight.
#### ii.2.1 Perceptron Algorithm
The conventional perceptron algorithm is designed for binary classification; consequently, we represent the training data used in this section as \(S=\left\{\left(x_{i},y_{i}\right)\right\}_{i=1}^{n}\) with \(x_{i}\in R\) and \(y_{i}\in\{-1,1\}\); the outputs \(y_{i}\), as mentioned before, can only take two values, \(1\) or \(-1\), hence the name binary classification [16].
A binary perceptron is a linear classifier. Given the data S, the objective is to learn the weight vector \(w\in R\) such that \(\left\langle w,x_{i}\right\rangle>0\) for any \(x\) belonging to class \(1\) and \(\left\langle w,x_{i}\right\rangle\leq 0\) for any \(x\) belonging to class \(-1\).
The idea of the perceptron algorithm is to initialize \(w\) arbitrarily, iterate several times (a number set a priori, or until convergence) over the training data, and adjust the weight \(w\) each time a data element is misclassified, which we can formulate as follows (Algorithm 2):
Figure 2: Graphical representation of perceptron
```
Initialize the weights \(w\) and the bias \(b\) randomly.
for epoch = 1 to \(n\) do
    for each example \((x_{i},y_{i})\) do
        Compute the prediction \(\hat{y}_{i}=sign\left(\left\langle w,x_{i}\right\rangle\right)\)
        if \(\hat{y}_{i}\neq y_{i}\) then
            if \(y_{i}\) is positive then
                Adjust \(w:w=w+x_{i}\)
            else if \(y_{i}\) is negative then
                Adjust \(w:w=w-x_{i}\)
            end if
        end if
    end for
end for
```
**Algorithm 2** Perceptron Algorithm
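For concreteness, a minimal NumPy implementation of Algorithm 2 is sketched below. It is only an illustration: the function name and the randomly generated toy data are ours, and the bias update is an assumption on our part, since Algorithm 2 only initializes \(b\).

```python
import numpy as np

def train_perceptron(X, y, n_epochs=20):
    """Classical perceptron (Algorithm 2) for labels y in {-1, +1}."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(n_epochs):
        for x_i, y_i in zip(X, y):
            y_hat = 1.0 if (w @ x_i + b) >= 0 else -1.0
            if y_hat != y_i:
                # the two cases of Algorithm 2 (add x_i if y_i > 0, subtract otherwise)
                # are combined into a single signed update
                w += y_i * x_i
                b += y_i  # our addition: Algorithm 2 only initializes the bias
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # linearly separable toy labels
w, b = train_perceptron(X, y)
accuracy = np.mean(np.where(X @ w + b >= 0, 1.0, -1.0) == y)
print("training accuracy:", accuracy)
```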
## III Quantum variational perceptron
The Quantum Variational Perceptron (QVP) is a machine learning model that combines quantum computing and artificial neural networks. It is based on the variational quantum circuit (VQC) model [17; 18], in which quantum circuits represent the parameters of a neural network. The QVP uses quantum circuits to perform calculations that are hard to do classically, and a classical optimizer adjusts the parameters of these quantum circuits.
One of the main advantages of the QVP is that it can handle high-dimensional and non-linear data, which is difficult to process with classical neural networks. The QVP can also do quantum-enhanced feature extraction to improve the model's accuracy, which is essential for natural language processing, drug discovery, and classification.
The QVP also has potential advantages over classical neural networks in terms of robustness to noise and generalization, which we show in the results section V. Quantum computing devices are sensitive to noise, but the QVP can use quantum error correction techniques to mitigate this issue. This robustness to noise makes the QVP well-suited for applications where the data is uncertain or noisy.
So our goal is to use quantum computing to compute \(f(x;\theta)\) and then use a classical optimizer to optimize the loss function, so we represent our problem with a quantum circuit [19; 20; 21]. We build this circuit in three steps: state preparation, model circuit, and measurement.
### State Preparation:
State preparation is an essential step in the quantum variational circuit [22]. It involves preparing the initial quantum state as an input to the rest of our circuit, and we can determine this state by a set of parameters that we vary to optimize the circuit's output [23]. There are many different approaches for state preparation, but one popular method is to use a series of unitary transformations to transform a known initial state into the desired variational state. These unitary transformations can be implemented using a variety of quantum gates, such as rotation gates, controlled-not gates, and phase gates. It is essential to carefully choose the initial state and the unitary transformations to maximize the chances of obtaining the desired output from the quantum circuit.
So our first step is to encode the classical data \(x\) into quantum data; as we mention, there are different approaches to accomplish it, like basis encoding, angle encoding, higher-order encoding, and amplitude encoding [24].
Each methodology possesses a unique application. For instance, in the basis encoding method, the quantum state is represented on a specific basis, including the computational or Fourier bases. Conversely, in the angle encoding method, the quantum state is expressed in terms of the angles of rotation applied to the state. Additionally, in higher-order encoding, the quantum state is represented utilizing higher-order properties, such as entanglement or quantum coherence.
In our case, we chose the amplitude encoding method, which means that our data are directly associated with the amplitude of the quantum state; the idea is to represent these quantum states as complex-valued amplitudes to
describe the probability of measuring a particular outcome when the quantum state is measured.
In this method, we represent quantum states as vectors within a complex vector space and quantum operations as matrices acting upon these vectors. This representation offers a compact and intuitive framework for describing quantum states and operations, making it particularly useful within the context of quantum variational circuits. The advantage of this method is that encoding a data set of \(M\) inputs with \(N\) features needs only \(n=\log_{2}(N\cdot M)\) qubits; it is based on creating an operator \(\phi(x)\) which results in the state \(\phi(x)|0_{p}\rangle\), with \(p\) the number of qubits.
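A minimal sketch of the classical side of amplitude encoding is shown below, assuming the feature vector is simply padded to the nearest power of two and normalized; the actual state-preparation circuit that realizes these amplitudes is not shown, and the function name is ours.

```python
import numpy as np

def amplitude_encode(x):
    """Normalize and pad a real feature vector into n-qubit state amplitudes."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))  # qubits needed for len(x) features
    padded = np.zeros(2 ** n_qubits)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm, n_qubits  # squared amplitudes sum to 1

amplitudes, n = amplitude_encode([0.5, 1.2, -0.3])
print(n, np.sum(amplitudes ** 2))  # 2 qubits, total probability 1.0
```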
### Model Circuit:
The model circuit step is a crucial component in the optimization process. In this step, a parametrized quantum circuit called an ansatz, in the form of a unitary operator, is built to model our quantum state. The parameters of the circuit are adjusted iteratively to minimize the difference between the model state and the target state. The model circuit step is essential for achieving accurate and efficient quantum state preparation and algorithm implementation. Also, the performance and success of our circuit depend on how well we choose the ansatz structure and the optimization algorithm.
So we have built \(U(\theta)\), a unitary operator acting on the quantum state \(|\phi(x)\rangle\), which represents the vector \(x\) in the quantum circuit, such that \(U(\theta)|\phi(x)\rangle\) is a measurable quantum state, with \(\theta\) the trainable parameters.
After that, we decomposed \(U(\theta)=U_{1}...U_{L}\) as a product of two qubits parameterized gates, then we used a combination of generic unitary gates and CNOT gates, and as a result, we have the expression of \(U\) as follows:
\[U(\theta,\phi,\lambda)=\begin{bmatrix}cos(\frac{\theta}{2})&-e^{i\lambda}sin( \frac{\theta}{2})\\ e^{i\phi}sin(\frac{\theta}{2})&e^{i\lambda+i\phi}cos(\frac{\theta}{2})\end{bmatrix} \tag{2}\]
### Measurement:
The measurement step allows the extraction of information from the quantum state, it involves the projection of the quantum state onto a measurement basis, and the collection of the resulting measurement outcomes is used to calculate the loss function; we use this function to adjust the parameters of our quantum circuit in this current step.
So we start by generating the first qubit of the quantum state using the circuit in figure 3, then we measured it, and it gave us this expression:
\[f(x;\theta)=\mathbb{P}(y=1|x;\theta)=\mathbb{P}(q_{0}=1|x;\theta)=\sum_{k=1}^ {n}|(U(\theta)\phi(x))_{k}|^{2} \tag{3}\]
We have simplified the expression 3 using \(\pi(x;\theta)\) as \(\mathbb{P}(q_{0}=1|x;\theta)\), now after we have set up all the parameters required, our model is trained to minimize a loss function which can be defined as follows:
\[\mathcal{L}(\theta)=\frac{1}{N}\sum_{i=1}^{N}l(\pi(x_{i};\theta),y_{i}) \tag{4}\]
Figure 3: Quantum Variational Perceptron Circuit
The parameters are updated via batch stochastic gradient descent; the difficulty was accurately calculating the gradient. Fortunately, it is possible to evaluate the gradient via quantum circuits, and as a result, we have equation 5:
\[\nabla_{\theta}\mathcal{L}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta}(l( \pi(x_{i};\theta),y_{i}))=\frac{1}{N}\sum_{i=1}^{N}\nabla_{\theta}\pi(x_{i}; \theta)\partial_{1}l(\pi(x_{i};\theta),y_{i}) \tag{5}\]
Where \(\partial_{1}l\) is the partial derivative of \(l\) regarding the first variable, the expression of gradient loss became:
\[\nabla_{\theta}\pi(x_{i};\theta)=\frac{-\nabla_{\theta}\mathbb{E}(\sigma_{z}) }{2}=-\frac{1}{2}\nabla_{\theta}\langle\phi(x)U|\sigma_{z}|U\phi(x)\rangle \tag{6}\]
with \(\sigma_{z}\otimes\mathbb{I}\otimes...\otimes\mathbb{I}\) is abbreviated to \(\sigma_{z}\). After considering \(\nu\) as an element of the vector \(\theta\), we can represent the \(\partial_{\nu}\pi(x;\theta)\) as:
\[\partial_{\nu}\pi(x;\theta)=-\frac{1}{2}\langle\phi(x)\partial_{\nu}U|\sigma _{z}|U\phi(x)\rangle-\frac{1}{2}\langle\phi(x)U|\sigma_{z}|\partial_{\nu}U \phi(x)\rangle \tag{7}\]
And using the rules of derivation, equation 7 became equal to the expression below (8).
\[\partial_{\nu}\pi(x;\theta)=-\frac{1}{2}(\langle\phi(x)\partial_{\nu}U|\sigma _{z}|U\phi(x)\rangle+\langle\phi(x)\partial_{\nu}U|\sigma_{z}|U\phi(x)\rangle^ {*})=-Re\{\langle\phi(x)\partial_{\nu}U|\sigma_{z}|U\phi(x)\rangle\} \tag{8}\]
Since the \(\partial_{\nu}U\) is not a unitary operator, we cannot use it in our quantum circuit; however, we achieve it due to our circuit because the derivative of \(U\) only concerns the derivative of the elementary gate, which means that \(\partial_{\nu}U=U_{1}...\partial_{\nu}(U_{i})...U_{L}\) where \(U_{i}\) is the gate to which \(\nu\) belongs. For an elementary unitary gate, as described above, we have the following identities:
\[\partial_{\theta}U=\frac{1}{2}U(\theta+\pi,\phi,\lambda) \tag{9}\]
\[\partial_{\phi}U=\frac{i}{2}(U(\theta,\phi,\lambda)-U(\theta,\phi+\pi,\lambda)) \tag{10}\]
\[\partial_{\lambda}U=\frac{i}{2}(U(\theta,\phi,\lambda)-U(\theta,\phi,\lambda+\pi)) \tag{11}\]
Therefore \(\partial_{\nu}\pi(x;\theta)\) can be computed in the form:
\[\sum_{k=1}^{K}a_{k}Re\{\langle\phi(x)U(\theta^{[k]})|\sigma_{z}|U(\theta)\phi (x)\rangle\}+\sum_{l=1}^{L}b_{l}Im\{\langle\phi(x)U(\theta^{[l]})|\sigma_{z}| U(\theta)\phi(x)\rangle\} \tag{12}\]
where \(\theta^{[k]}\) and \(\theta^{[l]}\) are the modified vector of parameters from the above identities, the imaginary part comes from the \(i\) in the same identities for \(\phi\) and \(\lambda\) with the result that we can calculate our loss function using 12.
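As a sanity check (ours, not part of the original experiments), the following NumPy snippet verifies identities (9) and (10) numerically for the single-qubit gate of equation (2), by comparing them against a central finite difference; the function names and parameter values are arbitrary.

```python
import numpy as np

def u3(theta, phi, lam):
    """The parameterized single-qubit gate of equation (2)."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (lam + phi)) * np.cos(theta / 2)],
    ])

def central_difference(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

theta, phi, lam = 0.7, 0.3, -1.1

# Identity (9): d/dtheta U = (1/2) U(theta + pi, phi, lambda)
print(np.allclose(central_difference(lambda t: u3(t, phi, lam), theta),
                  0.5 * u3(theta + np.pi, phi, lam)))  # True

# Identity (10): d/dphi U = (i/2) (U(theta, phi, lambda) - U(theta, phi + pi, lambda))
print(np.allclose(central_difference(lambda p: u3(theta, p, lam), phi),
                  0.5j * (u3(theta, phi, lam) - u3(theta, phi + np.pi, lam))))  # True
```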
## IV Quantum variational perceptron with Grover algorithm
According to the work of Khanal et al. [25], in order to use Grover's algorithm in the classification task, they reformulated the problem of classification as a search problem, replacing Grover's oracle with the variational algorithm and adding quantum gates (AND, XOR, and OR). In order to introduce the notion of classification in Grover's algorithm, they used the Kernel method, which is a method that is more adapted to complex problems [26]; in addition, other research works try to achieve the classification task by applying the Grover algorithm in different methods [27; 28].
This paper suggests a new method based on adding the Grover circuit after the variational algorithm. As known, we can use the Grover circuit for amplitude amplification, so in our suggested method, we use it to speed up the training process by amplifying the amplitude of the target state. Thus making it easier to identify during the measurement step, besides amplifying the amplitude of the correct output state in the superposition of all possible output states and updating the weights to get a more accurate model.
The circuit is almost the same as the variational circuit. However, we obtained the classification by measuring the probability of the qubits in a particular configuration, and we executed the same circuit above several times to obtain the loss function values. We counted how many times the configuration appeared in the classified data set. It is two-dimensional because we use two qubits; we assign values to the parameters of the circuit, then we rescale these input values to fit the angles.
Figure 4 shows the quantum circuit applied in this method, using the same steps as in Section IV, then adding a Grover circuit using Hadamard gates, CNOT gates, controlled \(Z\) gates, and Pauli X gates [29; 30].
## V Results and Discussion
To test our model implementation, we loaded the Iris data set to evaluate its performance. We have calculated the accuracy for each model's iteration: the quantum variational perceptron with Grover and the variational perceptron using the _ibmq_qasm_simulator_, which is one of the IBM quantum simulators [31]. This QASM simulator allows us to generate quantum circuits both ideally and subject to noise modeling with a maximum of 32 qubits. The results verify if our suggested model fits the training and validation sets by using the result of the variational perceptron as a reference and comparing both quantum models to the classical model to show the quantum advantage.
Table 1 shows how we summarized the results of the three different models, and comparing the accuracy of a model on the training set versus the validation set can give insight into the model's ability to generalize to new data. In general, we expect the model's accuracy on the training set to be higher than its accuracy on the validation set, because the model has seen the training data during training and has learned to classify it correctly, and, as expected, Table 1 confirms this.
Figure 4: Quantum Variational Perceptron-Grover Circuit
Accordingly, we can assume that none of the three models is overfitting. The difference is that, generally, when a model reaches high validation accuracy in fewer iterations, it might be considered more efficient because it needs fewer data points to make accurate predictions. In this work, our quantum variational perceptron model and the quantum variational perceptron with Grover have a high validation accuracy of 99% and reach that accuracy after a few iterations, indicating that they might be more efficient than the classical perceptron model.
The classical perceptron model has the lowest validation accuracy at 96%, and it reached this accuracy only after the most iterations, which shows that both quantum models perform better; nevertheless, the classical model may still be robust and generalize to new data.
It is essential to mention that the number of iterations alone is insufficient to decide which model is better. We can explain the improvement by the use of quantum computers, which, thanks to the laws of quantum computing, offer more powerful and efficient ways to predict the output. Furthermore, the Grover circuit puts our suggested model in a configuration where the probability measurement converges more smoothly and quickly, leading to the expected values of the probabilities and weights, and hence to good accuracy, in fewer steps.
Sometimes, it is crucial to stop the training when the model starts to overfit; this can be done by monitoring the loss or accuracy over time, which is why the next step is to compare the loss function.
We use the loss function to optimize a quantum machine learning model or a classical one, measuring the difference between the model's predicted output and the actual output. We typically quantify this difference as an error, exactly the sum of errors made for each example in the data set; that is why we train the model to minimize this error.
The loss value indicates how poorly or well a model behaves after each iteration of optimization, so we plotted the loss function of the three methods: the classical perceptron, the quantum variational perceptron, and the quantum variational perceptron with Grover. Figure 5 shows that the loss function of our approach converges better over the 100 iterations than those of the other models (CP & QVP); as known, a lower loss function value generally indicates better performance, as it means that the model's predictions are closer to the actual values. The loss function of QVP-G (\(\approx\) 0.4) is thus the best performing, so we can assume that our method improves the model's performance.
\begin{table}
\begin{tabular}{||c c c c c c c||} \hline & \multicolumn{2}{c|}{CP} & \multicolumn{2}{c|}{QVP} & \multicolumn{2}{c|}{QVP-Grover} \\ \hline Iter & Acc train & Acc valid & Acc train & Acc valid & Acc train & Acc valid \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 1: Summary of classification accuracy on the training and validation sets for the Classical Perceptron (CP), Quantum Variational Perceptron (QVP), and Quantum Variational Perceptron with Grover (QVP-G).
## VI Conclusion
In this study, we proposed a novel model of a quantum perceptron that can achieve high accuracy to solve the classification task. First, we built our model using the variational circuit and Grover's algorithm. Then we demonstrated that our proposed model performs better by comparing its results to the variational quantum and classical perceptrons.
Furthermore, our model's loss was minimal compared to the other models (CP & QVP). These results show that our model is accurate and fits the data well. Hence, the use of Grover's algorithm in quantum machine learning, specifically in the perceptron model, is promising, since this approach can reduce the number of iterations needed to achieve the best accuracy.
In the future, we intend to develop other quantum perceptron models, evaluate their performance with the classification tasks, and use the proposed perceptron in real-world applications. Another exciting research path is to explore and implement other quantum models.
|
2306.08041 | Data Poisoning to Fake a Nash Equilibrium in Markov Games | We characterize offline data poisoning attacks on Multi-Agent Reinforcement
Learning (MARL), where an attacker may change a data set in an attempt to
install a (potentially fictitious) unique Markov-perfect Nash equilibrium for a
two-player zero-sum Markov game. We propose the unique Nash set, namely the set
of games, specified by their Q functions, with a specific joint policy being
the unique Nash equilibrium. The unique Nash set is central to poisoning
attacks because the attack is successful if and only if data poisoning pushes
all plausible games inside the set. The unique Nash set generalizes the reward
polytope commonly used in inverse reinforcement learning to MARL. For zero-sum
Markov games, both the inverse Nash set and the set of plausible games induced
by data are polytopes in the Q function space. We exhibit a linear program to
efficiently compute the optimal poisoning attack. Our work sheds light on the
structure of data poisoning attacks on offline MARL, a necessary step before
one can design more robust MARL algorithms. | Young Wu, Jeremy McMahan, Xiaojin Zhu, Qiaomin Xie | 2023-06-13T18:01:18Z | http://arxiv.org/abs/2306.08041v2 | # On Faking a Nash Equilibrium
###### Abstract
We characterize offline data poisoning attacks on Multi-Agent Reinforcement Learning (MARL), where an attacker may change a data set in an attempt to install a (potentially fictitious) unique Markov-perfect Nash equilibrium. We propose the unique Nash set, namely the set of games, specified by their Q functions, with a specific joint policy being the unique Nash equilibrium. The unique Nash set is central to poisoning attacks because the attack is successful if and only if data poisoning pushes all plausible games inside it. The unique Nash set generalizes the reward polytope commonly used in inverse reinforcement learning to MARL. For zero-sum Markov games, both the inverse Nash set and the set of plausible games induced by data are polytopes in the Q function space. We exhibit a linear program to efficiently compute the optimal poisoning attack. Our work sheds light on the structure of data poisoning attacks on offline MARL, a necessary step before one can design more robust MARL algorithms.
## 1 Introduction
Data poisoning attacks are well-known in supervised learning (intentionally forcing the learner to train a wrong classifier) and reinforcement learning (wrong policy) [1; 5; 6; 10; 11; 9; 13; 17; 8; 12; 15; 16]. Can data poisoning attacks be a threat to Markov Games, too? This paper answers this question in the affirmative: Under mild conditions, an attacker can force two game-playing agents to adopt any fictitious Nash Equilibrium (NE), which does not need to be a true NE of the original Markov Game. Furthermore, the attacker can achieve this goal while minimizing its attack cost, which we define below. Obviously, such power poses a threat to the security of Multi-Agent Reinforcement Learning (MARL).
Formally, we study two-player zero-sum offline MARL. Let \(D\) be a dataset \(\{(s^{(k)},\mathbf{a}^{(k)},r^{(k)})\}_{k=1}^{K}\) with \(K\) tuples of state \(s\), joint action \(\mathbf{a}=(a_{1},a_{2})\), rewards \((r,-r)\). The attacker's target NE is an arbitrary pure strategy pair \(\pi^{\dagger}:=(\pi_{1}^{\dagger},\pi_{2}^{\dagger})\). The attacker can poison \(D\) into another dataset \(D^{\dagger}\) by paying cost \(C(D,D^{\dagger})\). Two MARL agents then receive \(D^{\dagger}\) instead of \(D\). The attacker wants to ensure that the agents learn the target NE \(\pi^{\dagger}\) while minimizing \(C\).
This problem is not well studied in the literature. Naive approaches - such as modifying all the actions in the dataset to those specified by the target policy \((\pi_{1}^{\dagger},\pi_{2}^{\dagger})\) - might not achieve the goal for MARL learners who assign penalties due to the lack of data coverage. Modifying all the rewards in the dataset that coincides with the target policy to the reward upper bound might be feasible, but would not be optimal in terms of attack cost \(C\). Results on data poisoning against single-agent reinforcement learning also cannot be directly applied to the multi-agent case. In particular, there
are no optimal policies in MARL, and equilibrium policies are computed instead. There could be multiple equilibria that are significantly different, and as a result, installing a target policy as the unique equilibrium is difficult.
Adversarial attacks on MARL have been studied in [7; 3; 4], but we are only aware of one previous work [14] on offline reward poisoning against MARL. Nonetheless, they made the strong assumption that the learners compute the Dominant Strategy Markov Perfect Equilibrium (DSMPE). In contrast, we assume a weaker solution concept, Markov Perfect Equilibrium (MPE). Our general attack framework also accommodates other forms of data poisoning.
Our framework can be summarized by the mnemonic "ToM moves to the UN". (i) UN stands for the Unique Nash set, which is the set of Q functions that make the target \(\pi^{\dagger}\) the unique NE. Uniqueness is crucial for the attacker to ensure that MARL agents choose the target NE with certainty, and not breaking ties arbitrarily among multiple NEs. (ii) ToM stands for the attacker's Theory of Mind of the MARL agents, namely the plausible set of Q functions that the attacker believes the agents will entertain upon receiving the poisoned dataset \(D^{\dagger}\). (iii) The attack is successful if, by controlling \(D^{\dagger}\), the attacker can move the Tom set inside the UN set. A successful attack with the smallest cost \(C(D,D^{\dagger})\) is optimal.
Summary of Contributions:
* We show that the set of zero-sum Markov games for which a deterministic policy is the unique MPE is equivalent to the set of games for which the policy is a strict MPE, and can be characterized by a polytope in the Q function space.
* We describe a class of MARL learners that compute equilibrium policies based on games within confidence regions around a point estimate of the Q function of the Markov game. With appropriate parameters, an attack on these learners would work on most of the model-based and model-free offline MARL learners proposed in the literature.
* We convert a version of the reward poisoning problem to a linear program that can be solved efficiently, and we provide an attack that is always feasible as long as the sizes of the attacker's confidence regions are sufficiently small.
* We provide a unified framework for offline data poisoning attacks on MARL agents. Our results highlight a security threat to multi-agent reinforcement learning agents, a necessary step before one can design novel MARL algorithms robust to adversarial attacks.
## 2 Faking a Nash Equilibrium
### The Unique Nash Set (UN) of a Normal-form Game
We present the main components of our approach with a normal-form game, in particular, a two-player zero-sum game is a tuple \(\left(\mathcal{A},R\right)\), where \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\) is the joint action space and \(R:\mathcal{A}\rightarrow\left[-b,b\right]\) is the mean reward function. We use \(b=\infty\) in the case of unbounded rewards. Given \(\mathcal{A}\), we denote the set of reward functions by \(\mathcal{R}=\left\{R:\mathcal{A}\rightarrow\mathbb{R}\right\}\).
A pure strategy profile \(\pi=\left(\pi_{1},\pi_{2}\right)\) is a pair of actions, where \(\pi_{i}\in\mathcal{A}_{i}\) specifies the action for agent \(i\in\left\{1,2\right\}\). We focus on pure strategies, but we allow mixed strategies in which case we use the notation \(\pi_{i}\left(a_{i}\right)\) to represent the probability of \(i\) using the action \(a_{i}\in\mathcal{A}_{i}\), and \(R\) computes the expected reward \(R\left(\pi\right)\coloneqq\sum_{a_{1}\in\mathcal{A}_{1},a_{2}\in\mathcal{A}_{ 2}}\pi_{1}\left(a_{1}\right)\pi_{2}\left(a_{2}\right)R\left(\left(a_{1},a_{2} \right)\right)\).
**Definition 1** (Nash Equilibrium).: A Nash equilibrium (NE) of a normal-form game \(\left(\mathcal{A},R\right)\) is a mixed strategy profile \(\pi\) that satisfies,
\[R\left(\left(\pi_{1},a_{2}\right)\right) =R\left(\pi\right)=R\left(\left(a_{1},\pi_{2}\right)\right), \forall\ a_{1}:\pi_{1}\left(a_{1}\right)>0,a_{2}:\pi_{2}\left(a_{2}\right)>0,\] \[R\left(\left(\pi_{1},a_{2}\right)\right) \leq R\left(\pi\right)\leq R\left(\left(a_{1},\pi_{2}\right) \right),\forall\ a_{1}:\pi_{1}\left(a_{1}\right)=0,a_{2}:\pi_{2}\left(a_{2} \right)=0,\]
in particular, for a pure strategy profile \(\pi\), it is a Nash equilibrium if,
\[R\left(\left(\pi_{1},a_{2}\right)\right)\leq R\left(\pi\right)\leq R\left( \left(a_{1},\pi_{2}\right)\right),\forall\ a_{1}\neq\pi_{1},a_{2}\neq\pi_{2}. \tag{1}\]
We define \(\mathcal{N}\left(R\right)\coloneqq\left\{\pi:\pi\text{ is an NE of }\left(\mathcal{A},R\right)\right\}\) to be the set of all Nash equilibria of a normal-form game \(\left(\mathcal{A},R\right)\).
Now, we define the inverse image of \(\mathcal{N}\) from a single pure strategy profile \(\pi\) back to the space of reward functions to be the unique Nash set.
**Definition 2** (Unique Nash).: The unique Nash set of a pure strategy profile \(\pi\) is the set of reward functions \(R\) such that \(\left(\mathcal{A},R\right)\) has a unique Nash equilibrium \(\pi\),
\[\mathcal{U}\left(\pi\right)\coloneqq\mathcal{N}^{-1}\left(\left\{\pi\right\} \right)=\left\{R\in\mathcal{R}:\mathcal{N}\left(R\right)=\left\{\pi\right\} \right\}. \tag{2}\]
To characterize \(\mathcal{U}\left(\pi\right)\), we note that for normal-form games, a pure strategy profile \(\pi\) is the unique Nash equilibrium of a game if and only if it is a strict Nash equilibrium, which is defined as a policy \(\pi\) that satisfies (1) with strict inequalities.
**Proposition 1** (Unique Nash Polytope).: _For any pure strategy profile \(\pi\),_
\[\mathcal{U}\left(\pi\right) =\left\{R\in\mathcal{R}:\pi\text{ is a strict NE of }\left(\mathcal{A},R\right)\right\}\] \[=\left\{R\in\mathcal{R}:R\left(\left(\pi_{1},a_{2}\right)\right)< R\left(\pi\right)<R\left(\left(a_{1},\pi_{2}\right)\right),\forall\ a_{1}\neq\pi_{1},a_{2}\neq\pi_{2} \right\}. \tag{3}\]
Here, the uniqueness is among all Nash equilibria including mixed-strategy Nash equilibria. The proof of the equivalence between (2) and (3) is in the appendix. We restrict our attention to pure-strategy equilibria and defer the discussion of mixed strategy profiles to the last section.
To avoid working with strict inequalities, we define a closed subset of \(\mathcal{U}\left(\pi\right)\) of reward functions that lead to strict Nash equilibria with an \(\iota\) reward gap, which means all strict inequalities in (3) are satisfied with a gap of at least \(\iota\), for some \(\iota>0\).
**Definition 3** (Iota Strict Unique Nash).: For \(\iota>0\), the \(\iota\) strict unique Nash set of a pure strategy profile \(\pi\) is,
\[\underline{\mathcal{U}}\left(\pi;\iota\right)\coloneqq\left\{R\in\mathcal{R}:R \left(\left(\pi_{1},a_{2}\right)\right)+\iota\leq R\left(\pi\right)\leq R \left(\left(a_{1},\pi_{2}\right)\right)-\iota,\forall\ a_{1}\neq\pi_{1},a_{2} \neq\pi_{2}\right\}. \tag{4}\]
For every pure strategy profile \(\pi\) and \(\iota>0\), we have \(\underline{\mathcal{U}}\left(\pi;\iota\right)\subset\mathcal{U}\left(\pi\right)\), and the set is a polytope in \(\mathcal{R}\).
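As an illustration (ours, with an arbitrary \(2\times 2\) payoff matrix), membership in the \(\iota\) strict unique Nash polytope of Definition 3 reduces to a handful of inequality checks, written here with the same inequality orientation as (4):

```python
import numpy as np

def in_iota_strict_unique_nash(R, pi, iota):
    """Check membership of R in the iota strict unique Nash set of Eq. (4)
    for the pure strategy profile pi = (a1, a2), with the row index as
    player 1's action and the column index as player 2's action."""
    a1, a2 = pi
    value = R[a1, a2]
    deviations_2 = all(R[a1, b2] + iota <= value
                       for b2 in range(R.shape[1]) if b2 != a2)
    deviations_1 = all(value <= R[b1, a2] - iota
                       for b1 in range(R.shape[0]) if b1 != a1)
    return deviations_2 and deviations_1

R = np.array([[1.0, 0.9],
              [0.2, 0.5]])
print(in_iota_strict_unique_nash(R, (1, 1), iota=0.1))  # True for this toy matrix
```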
### The Attacker's Theory of Mind (ToM) for Offline Normal-form Game Learners
We provide a model of the attacker's theory of mind of the victim. We assume that the victims compute the Nash equilibria based on the reward functions estimated from a dataset \(D\in\mathcal{D}\), where \(\mathcal{D}\) is the set of possible datasets with \(K\) episodes in the form \(\left\{\left(\mathbf{a}^{\left(k\right)},r^{\left(k\right)}\right)\right\}_{k =1}^{K}\), with \(\mathbf{a}^{\left(k\right)}\in\mathcal{A}\) and \(r^{\left(k\right)}\in\left[-b,b\right]\) for every \(k\in\left[K\right]\).
**Definition 4** (Theory of Mind).: Given a dataset \(D\in\mathcal{D}\), the theory-of-mind set \(\mathcal{T}\left(D\right)\subseteq\mathcal{R}\) is the set of plausible reward functions that the victims estimate based on \(D\) to compute their equilibria. In particular, if the victims learn an action profile \(\pi\), then \(\pi\in\bigcup_{R\in\mathcal{T}\left(D\right)}\mathcal{N}\left(R\right)\).
The theory-of-mind sets can be arbitrary and could be difficult to work with. We therefore define an outer approximation of the set that is a hypercube in \(\mathcal{R}\).
**Definition 5** (Outer Approximation of Theory of Mind).: An outer approximation of \(\mathcal{T}\left(D\right)\) is a set denoted by \(\overline{\mathcal{T}}\left(D\right)\) that satisfies \(\mathcal{T}\left(D\right)\subseteq\overline{\mathcal{T}}\left(D\right)\) for every \(D\in\mathcal{D}\), and can be written in the form,
\[\overline{\mathcal{T}}\left(D\right)=\left\{R\in\mathcal{R}:\left|R\left( \mathbf{a}\right)-\hat{R}\left(\mathbf{a}\right)\right|\leq\rho^{\left(R \right)}\left(\mathbf{a}\right),\forall\ \mathbf{a}\in\mathcal{A}\right\}, \tag{5}\]
for some point estimate \(\hat{R}\) and radius \(\rho^{\left(R\right)}\).
We call \(\overline{\mathcal{T}}\left(D\right)\) a linear outer approximation if \(\hat{R}\) is linear in \(\left\{r^{\left(k\right)}\right\}_{k=1}^{K}\).
We present a few examples of the theory-of-mind sets as follows.
**Example 1** (Theory of Mind for Maximum Likelihood Victims).: Given a dataset \(D\in\mathcal{D}\), if the attacker believes the victims are maximum likelihood learners, then \(\mathcal{T}\left(D\right)\) is a singleton \(R^{\text{ MLE }}\), where, for every \(\mathbf{a}\in\mathcal{A}\),
\[R^{\text{ MLE }}\left(\mathbf{a}\right)\coloneqq\begin{cases} \frac{1}{N\left(\mathbf{a}\right)}\sum_{k=1}^{K}r^{\left(k\right)}\mathbb{1}_{ \left\{\mathbf{a}^{\left(k\right)}=\mathbf{a}\right\}}&\text{ if }N\left(\mathbf{a}\right)>0\\ 0&\text{ if }N\left(\mathbf{a}\right)=0\end{cases};N\left(\mathbf{a} \right)\coloneqq\sum_{k=1}^{K}\mathbb{1}_{\left\{\mathbf{a}^{\left(k\right)}= \mathbf{a}\right\}}. \tag{6}\]
The smallest outer approximation \(\overline{\mathcal{T}}\left(D\right)\) can be specified using \(\hat{R}=R^{\text{ MLE}}\) and \(\rho^{\left(R\right)}=0\), and \(\overline{\mathcal{T}}\) is linear since (6) is linear in \(\left\{r^{\left(k\right)}\right\}_{k=1}^{K}\).
**Example 2** (Theory of Mind for Pessimistic Optimistic Victims).: Given a dataset \(D\in\mathcal{D}\), if the attacker believes the victims are learners that use pessimism and optimism by adding and subtracting bonus terms and estimating one or two games, as in [2], then \(\mathcal{T}\left(D\right)\) may contain two reward functions \(\underline{R}\) and \(\overline{R}\), where for every \(\mathbf{a}\in\mathcal{A}\),
\[\underline{R}\left(\mathbf{a}\right)\coloneqq R^{\text{ MLE}}\left(\mathbf{a} \right)-\beta\left(\mathbf{a}\right);\overline{R}\left(\mathbf{a}\right) \coloneqq R^{\text{ MLE}}\left(\mathbf{a}\right)+\beta\left(\mathbf{a}\right), \tag{7}\]
with \(\beta\left(\mathbf{a}\right)=\dfrac{c}{\sqrt{N\left(\mathbf{a}\right)}}\) being the bonus term, for some constant \(c\).
The smallest outer approximation \(\overline{\mathcal{T}}\left(D\right)\) can be specified using \(\hat{R}=R^{\text{ MLE}}\) and \(\rho^{\left(R\right)}\left(\mathbf{a}\right)=\beta\left(\mathbf{a}\right)\) for every \(\mathbf{a}\in\mathcal{A}\), and \(\overline{\mathcal{T}}\) is linear since (6) and (7) are both linear in \(\left\{r^{\left(k\right)}\right\}_{k=1}^{K}.\)
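A small sketch (ours) of how the outer approximation of Examples 1 and 2 could be computed from a dataset of (joint action, reward) pairs; the bonus constant \(c\) and the toy data are arbitrary illustrative values, and unobserved action profiles are given an infinite radius.

```python
import numpy as np

def tom_outer_approximation(data, n_actions_1, n_actions_2, c=1.0):
    """Return (R_mle, rho) defining the hypercube of Eq. (5) for the
    pessimistic/optimistic victims of Example 2."""
    sums = np.zeros((n_actions_1, n_actions_2))
    counts = np.zeros((n_actions_1, n_actions_2))
    for (a1, a2), r in data:
        sums[a1, a2] += r
        counts[a1, a2] += 1
    # maximum likelihood estimate of Eq. (6), zero for unobserved profiles
    R_mle = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
    # bonus beta(a) = c / sqrt(N(a)); unobserved profiles get an infinite radius
    rho = np.where(counts > 0, c / np.sqrt(np.maximum(counts, 1)), np.inf)
    return R_mle, rho

data = [((0, 0), 1.0), ((0, 0), 0.5), ((1, 1), -0.2)]
R_mle, rho = tom_outer_approximation(data, n_actions_1=2, n_actions_2=2)
print(R_mle)
print(rho)
```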
**Example 3** (Theory of Mind for Data Splitting Victims).: Given a dataset \(D\in\mathcal{D}\), if the attacker believes the victims use maximum likelihood estimates on a subsample of the \(D\), similar to the data-splitting procedure in [2], then \(\overline{\mathcal{T}}\left(D\right)\) could be viewed as a high-probability set of rewards that the victims are estimating and \(\rho^{\left(R\right)}\) would be half of the confidence interval width for the mean of the subsample around the mean of the complete dataset \(R^{\text{ MLE}}\).
### The Cheapest Way to Move ToM into UN for Normal-form Games
The goal of the attacker is to install a specific action profile as the unique Nash equilibrium of the game learned by the victim while minimally modifying the training data. We consider a general attacker's cost as a function \(C:\mathcal{D}\times\mathcal{D}\rightarrow\mathbb{R}^{+}\) where \(C\left(D,D^{\dagger}\right)\) is the cost of modifying the dataset from \(D\) to \(D^{\dagger}\). Given the original data set \(D\in\mathcal{D}\), the attacker's attack modality \(\mathcal{D}\left(D\right)\) is the set of datasets the attacker is allowed to modify the original dataset to. For the reward poisoning problem, where \(\mathcal{D}^{\left(R\right)}\left(D\right)\) is all possible datasets in which only rewards are modified from \(r^{\left(k\right)}\) to \(r^{\dagger,\left(k\right)}\), we consider the following cost function.
**Example 4** (\(L_{1}\) Cost Function).: For reward poisoning problems, we define the \(L_{1}\) cost of modifying the dataset from \(D=\left\{\left(\mathbf{a}^{\left(k\right)},r^{\left(k\right)}\right)\right\}_ {k=1}^{K}\) to \(D^{\dagger}=\left\{\left(\mathbf{a}^{\left(k\right)},r^{\dagger,\left(k\right)} \right)\right\}_{k=1}^{K}\) by \(C^{\left(1\right)}\left(D,D^{\dagger}\right)\coloneqq\sum_{k=1}^{K}\left|r^{ \left(k\right)}-r^{\dagger,\left(k\right)}\right|\).
Now, given the original dataset \(D\) and the attacker's target action profile \(\pi^{\dagger}\), we formally state the attacker's problem as finding the cheapest way to move \(\mathcal{T}\left(D\right)\) into \(\mathcal{U}\left(\pi^{\dagger}\right)\).
**Definition 6** (Attacker's Problem).: The attacker's problem with the target action profile \(\pi^{\dagger}\) is,
\[\inf_{D^{\dagger}\in\mathcal{D}\left(D\right)} C\left(D,D^{\dagger}\right) \tag{8}\] \[s.t.\ \mathcal{T}\left(D^{\dagger}\right)\subseteq\mathcal{U} \left(\pi^{\dagger}\right).\]
In general, (8) cannot be solved efficiently, but for reward poisoning problems with \(L_{1}\) cost objective, we can relax the attacker's problem using \(\iota\) strict unique Nash sets, which is a polytope described by (4), and a linear outer approximation of the theory-of-mind set, a hypercube described by (5), which can be converted into a linear program and solved efficiently. We state this observation as the following proposition and depict the relationship between the sets in Figure 1.
**Proposition 2** (Reward Poisoning Linear Program).: _Given \(\iota>0\) and a linear \(\overline{\mathcal{T}}\), the following problem is a relaxation of the attacker's reward poisoning problem and can be converted into a linear program,_
\[\min_{D^{\dagger}\in\mathcal{D}^{\left(R\right)}\left(D\right)} C^{\left(1\right)}\left(D,D^{\dagger}\right) \tag{9}\] \[s.t.\ \overline{\mathcal{T}}\left(D^{\dagger}\right)\subseteq \underline{\mathcal{U}}\left(\pi^{\dagger};\iota\right).\]
In Figure 1, given a dataset \(D\), the general attacker's problem (8) of moving \(\mathcal{T}\left(D\right)\) (light green) to \(\mathcal{T}\left(D^{\dagger}\right)\) (light red) such that it is inside \(\mathcal{U}\left(\pi^{\dagger}\right)\) (light blue) while minimizing the distance from
to \(D^{\dagger}\) is often intractable. We construct a relaxed problem (9) of moving \(\overline{\mathcal{T}}\left(D\right)\) (green) to \(\overline{\mathcal{T}}\left(D^{\dagger}\right)\) (red) such that it is inside \(\underline{\mathcal{U}}\left(\pi^{\dagger}\right)\) (blue), in which all sets are polytopes and thus can be converted to a linear program for linear costs and linear theory-of-mind mappings.
In the appendix, we provide the complete linear program and show that the solution of (9) is feasible for (8). The optimality of the linear program solution depends on how close the outer approximation of the theory-of-mind set is, and in the case when the theory-of-mind set is already a hypercube, the infimum in (8) can be achieved by taking the limit as \(\iota\to 0\). The following is an example illustrating the conversion of (9) into a linear program.
**Example 5** (Maximum Likelihood Centered Linear Program).: In the case \(\hat{R}=R^{\text{ MLE }}\) in the theory-of-mind set, (9) is given by,
\[\min_{r^{\dagger}\in[-b,b]^{K}} \sum_{k=1}^{K}\left|r^{(k)}-r^{\dagger,(k)}\right| \tag{10}\] \[s.t. R^{\text{ MLE }}\text{ is a linear function of }r^{\dagger}\text{ satisfying \ \eqref{eq:m_1}}\] \[\overline{R}\text{ and }\underline{R}\text{ are upper and lower bounds of }\overline{\mathcal{T}}\left(r^{\dagger};R^{\text{ MLE }}\right)\text{ satisfying \ \eqref{eq:m_1}}\] \[\left(\overline{R},\underline{R}\right)\text{ is in } \underline{\mathcal{U}}\left(\pi^{\dagger}\right)\text{ satisfying \ \eqref{eq:m_1}}\]
Since \(\overline{\mathcal{T}}\left(r^{\dagger};R^{\text{ MLE }}\right)\) is a hypercube and \(\underline{\mathcal{U}}\left(\pi^{\dagger}\right)\) is a polytope, the fact that the corners of the hypercube are inside the unique Nash set if and only if every element in the hypercube is in the unique Nash set implies that the constraint in (9) is satisfied. Technically, we only require one corner of the hypercube to be inside the unique Nash polytope, as shown in Figure 1, and we leave the details to the proof of Proposition 2 in the appendix. Then, because the objective and all of the constraints in (10) are linear in \(r^{\dagger},\overline{R},\underline{R}\) and \(R^{\text{ MLE }}\), this problem is a linear program.
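To make Example 5 concrete, the following sketch (ours) sets up the relaxed reward-poisoning problem (10) for maximum-likelihood victims (\(\rho^{(R)}=0\)) as a linear program with SciPy. The decision variables are the poisoned rewards together with slack variables for the \(L_{1}\) objective; for simplicity the sketch assumes every relevant action profile appears at least once in the data, and the toy dataset and target profile are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

def poison_rewards_mle(actions, rewards, target, iota=0.01, b=5.0):
    """Relaxed reward-poisoning LP of Example 5 for maximum-likelihood victims.
    actions[k] is the joint action of episode k, rewards[k] its original reward,
    and target the profile the attacker wants to install as the unique strict NE."""
    K = len(rewards)
    rewards = np.asarray(rewards, dtype=float)
    profiles = sorted(set(actions))
    idx = {a: [k for k in range(K) if actions[k] == a] for a in profiles}

    # decision variables x = (r_dagger (K entries), t (K entries)); minimize sum(t)
    c = np.concatenate([np.zeros(K), np.ones(K)])

    def mle_row(a):  # row vector such that row @ x = R_MLE(a)
        row = np.zeros(2 * K)
        row[idx[a]] = 1.0 / len(idx[a])
        return row

    A_ub, b_ub = [], []
    a1_star, a2_star = target
    for (a1, a2) in profiles:
        if a1 == a1_star and a2 != a2_star:  # R_MLE(a1*, a2) + iota <= R_MLE(target)
            A_ub.append(mle_row((a1, a2)) - mle_row(target)); b_ub.append(-iota)
        if a2 == a2_star and a1 != a1_star:  # R_MLE(target) <= R_MLE(a1, a2*) - iota
            A_ub.append(mle_row(target) - mle_row((a1, a2))); b_ub.append(-iota)

    for k in range(K):  # |r_k - r_dagger_k| <= t_k, split into two linear constraints
        row = np.zeros(2 * K); row[k], row[K + k] = 1.0, -1.0
        A_ub.append(row); b_ub.append(rewards[k])
        row = np.zeros(2 * K); row[k], row[K + k] = -1.0, -1.0
        A_ub.append(row); b_ub.append(-rewards[k])

    bounds = [(-b, b)] * K + [(0, None)] * K
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:K], res.fun

actions = [(0, 0), (0, 1), (1, 0), (1, 1)]
rewards = [0.0, 1.0, -1.0, 0.5]
poisoned, cost = poison_rewards_mle(actions, rewards, target=(0, 0))
print(poisoned, cost)
```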
## 3 Faking a Markov Perfect Equilibrium
### The Unique Nash Set (UN) of a Markov Game
We now consider the attacker's problem for Markov games. A finite-horizon two-player zero-sum Markov game \(G\) is a tuple \(\left(\mathcal{S},\mathcal{A},P,R,H\right)\), where \(\mathcal{S}\) is the finite state space; \(\mathcal{A}=\mathcal{A}_{1}\times\mathcal{A}_{2}\) is the joint action space; \(P=\left\{P_{h}:\mathcal{S}\times\mathcal{A}\rightarrow\Delta\mathcal{S} \right\}_{h=1}^{H}\) is the transition function with the initial state distribution \(P_{0}\in\Delta\mathcal{S}\); and \(R=\left\{R_{h}:\mathcal{S}\times\mathcal{A}\rightarrow[-b,b]\right\}_{h=1}^{H}\) is the mean reward function; and \(H\) is the finite time horizon.
A deterministic Markovian policy \(\pi=\left(\pi_{1},\pi_{2}\right)\) is a pair of policies, where \(\pi_{i}=\left\{\pi_{i,h}:\mathcal{S}\rightarrow\mathcal{A}_{i}\right\}_{h=1}^{H}\) for \(i\in\left\{1,2\right\}\), and \(\pi_{i,h}\left(s\right)\) specifies the action used in period \(h\) and state \(s\). Again, we focus on deterministic policies, but we allow stochastic policies in which case we use
Figure 1: Attacker’s Problem
the notation \(\pi_{i}=\left\{\pi_{i,h}:\mathcal{S}\rightarrow\Delta\mathcal{A}_{i}\right\}_{h=1}^{H}\) for \(i\in\left\{1,2\right\}\), and \(\pi_{i,h}\left(s\right)\left(a_{i}\right)\) represents the probability of \(i\) using the action \(a_{i}\in\mathcal{A}_{i}\) in period \(h\) state \(s\).
The \(\mathsf{Q}\) function is defined as, for every \(h\in\left[H\right],s\in\mathcal{S},\mathbf{a}\in\mathcal{A}\),
\[Q_{h}\left(s,\mathbf{a}\right)=R_{h}\left(s,\mathbf{a}\right)+\sum_{s^{\prime }\in\mathcal{S}}P_{h}\left(s^{\prime}|s,\mathbf{a}\right)\max_{\pi_{1}\in \Delta\mathcal{A}_{1}}\min_{\pi_{2}\in\Delta\mathcal{A}_{2}}Q_{h+1}\left(s^{ \prime},\pi\right), \tag{11}\]
with the convention \(Q_{H+1}\left(s,\mathbf{a}\right)=0\), and in the case \(\pi\) is stochastic, we write,
\[Q_{h}\left(s,\pi_{h}\left(s\right)\right)=\sum_{a_{1}\in\mathcal{A}_{1}}\sum_{ a_{2}\in\mathcal{A}_{2}}\pi_{1,h}\left(s\right)\left(a_{1}\right)\pi_{2,h} \left(s\right)\left(a_{2}\right)Q_{h}\left(s,\left(a_{1},a_{2}\right)\right).\]
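For illustration (ours, with arbitrary toy parameters), the backward recursion (11) can be computed for a small tabular zero-sum Markov game by solving the stage-game value \(\max_{\pi_{1}}\min_{\pi_{2}}Q_{h+1}\) with the standard matrix-game linear program at each state:

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(Q):
    """Value max_{pi1} min_{pi2} pi1^T Q pi2 of a zero-sum matrix game via an LP."""
    n1, n2 = Q.shape
    c = np.zeros(n1 + 1); c[-1] = -1.0                      # maximize v
    A_ub = np.hstack([-Q.T, np.ones((n2, 1))])              # v <= sum_a1 pi1[a1] Q[a1, a2]
    b_ub = np.zeros(n2)
    A_eq = np.hstack([np.ones((1, n1)), np.zeros((1, 1))])  # pi1 is a distribution
    b_eq = np.ones(1)
    bounds = [(0, None)] * n1 + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def minimax_q(R, P, H):
    """Backward recursion of Eq. (11). R[h][s] is an |A1| x |A2| reward matrix,
    P[h][s, a1, a2] a distribution over next states; player 1 maximizes."""
    S = R[0].shape[0]
    Q = [None] * H
    V_next = np.zeros(S)
    for h in reversed(range(H)):
        Q[h] = R[h] + np.einsum('sabt,t->sab', P[h], V_next)
        V_next = np.array([matrix_game_value(Q[h][s]) for s in range(S)])
    return Q

# toy game: 2 states, 2x2 actions, horizon 2, random rewards and transitions
rng = np.random.default_rng(0)
H, S, A1, A2 = 2, 2, 2, 2
R = [rng.uniform(-1, 1, size=(S, A1, A2)) for _ in range(H)]
P = [rng.dirichlet(np.ones(S), size=(S, A1, A2)) for _ in range(H)]
Q = minimax_q(R, P, H)
print(Q[0][0])  # stage-game payoff matrix at h=0, s=0
```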
Given \(\mathcal{S},\mathcal{A},H\), we denote the set of \(\mathsf{Q}\) functions by \(\mathcal{Q}=\left\{\left\{Q_{h}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{ R}\right\}_{h=1}^{H}\right\}\). Technically, \(\mathcal{Q}\) is not the set of proper \(\mathsf{Q}\) functions of Markov games since both the reward functions and the transition functions do not have to be proper, and given \(Q\in\mathcal{Q}\), we may not be able to construct a Markov game that induces \(Q\). This choice is made to accommodate both model-based and model-free victims who may or may not estimate the rewards and transitions explicitly from the dataset.
A stage game of a Markov game \(G\) in period \(h\in\left[H\right]\), state \(s\in\mathcal{S}\) under policy \(\pi\) is a normal form game \(\left(\mathcal{A},Q_{h}\left(s\right)\right)\), where \(\mathcal{A}\) is the joint action space of \(G\); and \(Q_{h}\left(s\right)\) is the mean reward function, meaning the reward from action profile \(\mathbf{a}\in\mathcal{A}\) is \(Q_{h}\left(s,\mathbf{a}\right)\). We define Markov perfect equilibria as policies in which the action profile used in every stage game is a Nash equilibrium.
**Definition 7** (Markov Perfect Equilibrium).: A Markov perfect equilibrium (MPE) policy \(\pi\) is a policy such that \(\pi_{h}\left(s\right)\) is a Nash equilibrium in the stage game \(\left(\mathcal{A},Q_{h}\left(s\right)\right).\)
We define the set of all Markov perfect equilibria policies of a Markov game that induces \(Q\in\mathcal{Q}\) by \(\mathcal{M}\left(Q\right)=\left\{\pi:\pi\text{ is an MPE of a Markov game with }\mathsf{Q}\text{ function }Q\right\}.\)
We note that Nash equilibria for Markov games can also be defined by converting the Markov game into a single normal-form game, but we only consider Markov perfect equilibria since Nash equilibria that are not Markov perfect require coordination and commitment to policies in stage games that are not visited along equilibrium paths, which is not realistic in the multi-agent reinforcement learning setting.
We define the unique Nash set for Markov games as follows.
**Definition 8** (Unique Nash).: The unique Nash set of a deterministic Markovian policy \(\pi\) for a Markov game \(G\) is the set of \(\mathsf{Q}\) functions such that \(\pi\) is the unique Markov perfect equilibrium under policy \(\pi\),
\[\mathcal{U}\left(\pi\right)\coloneqq\mathcal{M}^{-1}\left(\left\{\pi\right\} \right)=\left\{Q\in\mathcal{Q}:\mathcal{M}\left(Q\right)=\left\{\pi\right\} \right\}. \tag{12}\]
Next, we extend the characterization of the unique Nash set for normal-form games to the Markov game setting.
**Theorem 1** (Unique Nash Polytope).: _For any deterministic policy \(\pi\),_
\[\mathcal{U}\left(\pi\right) =\left\{Q\in\mathcal{Q}:\pi_{h}\left(s\right)\text{ is a strict NE of }\left(\mathcal{A},Q_{h}\left(s\right)\right),\forall\ h\in\left[H\right],s\in \mathcal{S}\right\}\] \[=\left\{Q\in\mathcal{Q}:\left\{\begin{aligned} Q_{h} \left(s,\left(\pi_{1,h}\left(s\right),a_{2}\right)\right)<Q_{h}\left(s,\pi \left(s\right)\right)<Q_{h}\left(s,\left(a_{1},\pi_{2,h}\left(s\right)\right) \right),\\ \forall\ a_{1}\neq\pi_{1,h}\left(s\right),a_{2}\neq\pi_{2,h}\left(s \right),h\in\left[H\right],s\in\mathcal{S}\end{aligned}\right\}, \tag{13}\]
We show the equivalence between (12) and (13) in the proof of Theorem 1 in the appendix. To avoid working with strict inequalities in (13), we again define the \(\iota\) strict version of the unique Nash polytope.
**Definition 9** (Iota Strict Unique Nash).: For \(\iota>0\), the \(\iota\) strict unique Nash set of a deterministic policy \(\pi\) is,
\[\underline{\mathcal{U}}\left(\pi;\iota\right)\coloneqq\left\{Q\in\mathcal{Q}: \begin{cases}Q_{h}\left(s,\left(\pi_{1,h}\left(s\right),a_{2}\right)\right)+ \iota\leq Q_{h}\left(s,\pi\left(s\right)\right),\\ Q_{h}\left(s,\pi\left(s\right)\right)\leq Q_{h}\left(s,\left(a_{1},\pi_{2,h} \left(s\right)\right)\right)-\iota,\\ \forall\ a_{1}\neq\pi_{1,h}\left(s\right),a_{2}\neq\pi_{2,h}\left(s\right),h \in\left[H\right],s\in\mathcal{S}\end{cases}\right\}. \tag{14}\]
For every deterministic policy \(\pi\) and \(\iota>0\), we have \(\underline{\mathcal{U}}\left(\pi;\iota\right)\subset\mathcal{U}\left(\pi\right)\), and the set is a polytope in \(\mathcal{Q}\).
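Since (14) is just a finite list of linear inequalities per stage game, membership is cheap to check. The following minimal Python sketch (illustrative only; the function name and the array-based stage-game encoding are our own assumptions, not part of the construction above) tests whether a single stage-game payoff matrix places a deterministic action profile in the \(\iota\) strict set.

```python
import numpy as np

def in_iota_strict_unique_nash(Q_stage, target, iota):
    """Check the stage-game inequalities of (14).

    Q_stage: (|A_1| x |A_2|) array with entries Q_h(s, (a_1, a_2)).
    target:  deterministic target profile (i, j) = pi_h(s).
    iota:    strictness margin > 0.
    """
    i, j = target
    v = Q_stage[i, j]
    # Deviations in the second coordinate: Q(s, (pi_1, a_2)) + iota <= Q(s, pi).
    second_ok = all(Q_stage[i, a2] + iota <= v
                    for a2 in range(Q_stage.shape[1]) if a2 != j)
    # Deviations in the first coordinate: Q(s, pi) <= Q(s, (a_1, pi_2)) - iota.
    first_ok = all(v <= Q_stage[a1, j] - iota
                   for a1 in range(Q_stage.shape[0]) if a1 != i)
    return second_ok and first_ok

# Toy usage: (0, 0) satisfies both families of inequalities with margin 1.
Q = np.array([[0.0, -2.0, -3.0],
              [2.0,  1.0,  0.5],
              [3.0, -1.0,  0.0]])
print(in_iota_strict_unique_nash(Q, (0, 0), iota=1.0))  # True
```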
### The Attacker's Theory of Mind (ToM) for Offline Multi-Agent Reinforcement Learners
Similar to the theory-of-mind set for normal-form game learners, we define the set for Markov game learners in the \(\mathcal{Q}\) space. Here, \(\mathcal{D}\) is the set of datasets with \(K\) episodes in the form \(\left\{\left\{\left(s_{h}^{\left(k\right)},\mathbf{a}_{h}^{\left(k\right)},r_{h }^{\left(k\right)}\right)\right\}_{h=1}^{H}\right\}_{k=1}^{K}\) with \(s_{h}^{\left(k\right)}\in\mathcal{S},\mathbf{a}_{h}^{\left(k\right)}\in \mathcal{A}\) and \(r_{h}^{\left(k\right)}\in\left[-b,b\right]\) for every \(k\in\left[K\right]\), and the victims compute the Markov perfect equilibria based on the Q functions estimated from such datasets.
**Definition 10** (Theory of Mind).: Given a dataset \(D\in\mathcal{D}\), the theory-of-mind set \(\mathcal{T}\left(D\right)\subseteq\mathcal{Q}\) is the set of Q functions that the victims estimate based on \(D\) to compute their equilibria. In particular, if the victims learn a policy \(\pi\), then \(\pi\in\bigcup_{Q\in\mathcal{T}\left(D\right)}\mathcal{M}\left(Q\right).\)
**Example 6** (Theory of Mind for Maximum Likelihood Victims).: To extend Example 1 in the Markov game setting, we define \(R^{\text{ MLE}}\) the same way and \(P^{\text{ MLE}}\) as follows,
\[R_{h}^{\text{ MLE}}\ \left(s,\mathbf{a}\right) \coloneqq\begin{cases}\dfrac{1}{N_{h}\left(s,\mathbf{a}\right)}\sum_{k=1}^{K}r_{h}^{\left(k\right)}\mathbbm{1}_{\left\{s_{h}^{\left(k\right)}=s,\mathbf{a}_{h}^{\left(k\right)}=\mathbf{a}\right\}}&\text{ if }N_{h}\left(s,\mathbf{a}\right)>0\\ 0&\text{ if }N_{h}\left(s,\mathbf{a}\right)=0\end{cases}, \tag{15}\] \[N_{h}\left(s,\mathbf{a}\right) \coloneqq\sum_{k=1}^{K}\mathbbm{1}_{\left\{s_{h}^{\left(k\right)}=s,\mathbf{a}_{h}^{\left(k\right)}=\mathbf{a}\right\}},\] \[P_{h}^{\text{ MLE}}\ \left(s^{\prime}|s,\mathbf{a}\right) \coloneqq\begin{cases}\dfrac{\sum_{k=1}^{K}\mathbbm{1}_{\left\{s_{h+1}^{\left(k\right)}=s^{\prime},s_{h}^{\left(k\right)}=s,\mathbf{a}_{h}^{\left(k\right)}=\mathbf{a}\right\}}}{N_{h}\left(s,\mathbf{a}\right)}&\text{ if }N_{h}\left(s,\mathbf{a}\right)>0\\ \dfrac{1}{|\mathcal{S}|}&\text{ if }N_{h}\left(s,\mathbf{a}\right)=0\end{cases}, \tag{16}\] \[P_{0}^{\text{ MLE}}\ \left(s\right) \coloneqq\dfrac{1}{K}\sum_{k=1}^{K}\mathbbm{1}_{\left\{s_{1}^{\left(k\right)}=s\right\}}.\]
We can construct \(Q^{\text{ MLE}}\) based on \(R^{\text{ MLE}}\) and \(P^{\text{ MLE}}\) according to (11), and since all Nash equilibria have the same value for zero-sum games, \(Q^{\text{ MLE}}\) is unique for every Markov perfect equilibrium of the Markov game with rewards \(R^{\text{ MLE}}\) and transitions \(P^{\text{ MLE}}\). Then we have that \(\mathcal{T}\left(D\right)\) is a singleton \(Q^{\text{ MLE}}\).
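The estimates in (15)-(16) are plain empirical averages over the dataset. A minimal sketch of how a victim might compute them (dictionary-based; the function name and episode encoding are illustrative assumptions, not taken from the text):

```python
from collections import defaultdict

def mle_estimates(episodes, S, H):
    """Empirical estimates of (15)-(16) from episodes of the form
    [(s_1, a_1, r_1), ..., (s_H, a_H, r_H)], where each a_h is a joint
    action profile (tuple).  Unvisited (h, s, a) follow the stated
    defaults: reward 0 and the uniform transition over S."""
    N = defaultdict(int)          # visit counts N_h(s, a)
    r_sum = defaultdict(float)    # cumulative rewards per (h, s, a)
    trans = defaultdict(int)      # transition counts to s'
    P0 = defaultdict(float)       # empirical initial distribution

    K = len(episodes)
    for ep in episodes:
        P0[ep[0][0]] += 1.0 / K
        for h, (s, a, r) in enumerate(ep, start=1):
            N[h, s, a] += 1
            r_sum[h, s, a] += r
            if h < H:
                s_next = ep[h][0]          # state observed at period h + 1
                trans[h, s, a, s_next] += 1

    def R(h, s, a):
        return r_sum[h, s, a] / N[h, s, a] if N[h, s, a] > 0 else 0.0

    def P(h, s, a, s_next):
        if N[h, s, a] > 0:
            return trans[h, s, a, s_next] / N[h, s, a]
        return 1.0 / len(S)                # uniform default when unvisited
    return R, P, P0
```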
**Example 7** (Theory of Mind for Confidence Bound Victims).: Given a dataset \(D\in\mathcal{D}\), if the attacker believes the victims estimate the Markov game by estimating the rewards and transitions within some confidence region around some point estimates such as the maximum likelihood estimates, as described in [14], then \(\mathcal{T}\left(D\right)\) would be a polytope with Q functions induced by the Markov games \(\left(\mathcal{S},\mathcal{A},P,R,H\right)\) with \(P\) and \(R\) satisfying, for every \(h\in\left[H\right],s\in\mathcal{S},\mathbf{a}\in\mathcal{A}\),
\[R_{h}\left(s,\mathbf{a}\right) \in\mathcal{C}_{h}^{\left(R\right)}\left(s,\mathbf{a}\right) \coloneqq\left\{R\in\mathbb{R}:\left|R-\hat{R}_{h}\left(s,\mathbf{ a}\right)\right|\leq\rho_{h}^{\left(R\right)}\left(s,\mathbf{a}\right)\right\}, \tag{17}\] \[P_{h}\left(s,\mathbf{a}\right) \in\mathcal{C}_{h}^{\left(P\right)}\left(s,\mathbf{a}\right) \coloneqq\left\{P\in\Delta\mathcal{S}:\left\|P-\hat{P}_{h}\left(s, \mathbf{a}\right)\right\|_{1}\leq\rho_{h}^{\left(P\right)}\left(s,\mathbf{a} \right)\right\}, \tag{18}\]
for some point estimates \(\hat{P},\hat{R}\), and radii \(\rho^{\left(R\right)}\) and \(\rho^{\left(P\right)}\). We note that \(\mathcal{T}\left(D\right)\) is a polytope in \(\mathcal{Q}\), but it has an exponential number of vertices. We can construct a tight hypercube around this polytope and call it the outer approximation of \(\mathcal{T}\left(D\right)\). It contains all the Q functions in the following set, for every \(h\in\left[H\right],s\in\mathcal{S},\mathbf{a}\in\mathcal{A}\),
\[Q_{h}\left(s,\mathbf{a}\right) \in\left[\underline{Q}_{h}\left(s,\mathbf{a}\right),\overline{Q}_ {h}\left(s,\mathbf{a}\right)\right], \tag{19}\] \[\underline{Q}_{h}\left(s,\mathbf{a}\right) \coloneqq\min_{R\in\mathcal{C}_{h}^{\left(R\right)}\left(s, \mathbf{a}\right)}R+\min_{P\in\mathcal{C}_{h}^{\left(P\right)}\left(s,\mathbf{a }\right)}\sum_{s^{\prime}\in\mathcal{S}}P\left(s^{\prime}\right)\max_{\pi_{1} \in\Delta\mathcal{A}_{1}}\min_{\pi_{2}\in\Delta\mathcal{A}_{2}}\underline{Q}_{h +1}\left(s^{\prime},\pi\right),\] \[\overline{Q}_{h}\left(s,\mathbf{a}\right) \coloneqq\max_{R\in\mathcal{C}_{h}^{\left(R\right)}\left(s, \mathbf{a}\right)}R+\max_{P\in\mathcal{C}_{h}^{\left(P\right)}\left(s,\mathbf{a }\right)}\sum_{s^{\prime}\in\mathcal{S}}P\left(s^{\prime}\right)\max_{\pi_{1} \in\Delta\mathcal{A}_{1}}\min_{\pi_{2}\in\Delta\mathcal{A}_{2}}\overline{Q}_{h +1}\left(s^{\prime},\pi\right).\]
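The inner \(\max_{\pi_{1}}\min_{\pi_{2}}\) step of this recursion is the value of a zero-sum matrix game, which is itself a small linear program. A hedged sketch of that single step, assuming SciPy's `linprog` (the function name is ours, not from the text):

```python
import numpy as np
from scipy.optimize import linprog

def game_value(M):
    """Value max_{x in simplex} min_{y in simplex} x^T M y of the zero-sum
    matrix game with payoff matrix M to the row player."""
    m, n = M.shape
    # Variables: (x_1, ..., x_m, v); maximize v  <=>  minimize -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # For every pure column j:  v - sum_i x_i M[i, j] <= 0.
    A_ub = np.hstack([-M.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum_i x_i = 1
    b_eq = np.ones(1)
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

# A 2x2 example: [[1, -1], [-1, 1]] has value 0 (both players mix uniformly).
print(round(game_value(np.array([[1.0, -1.0], [-1.0, 1.0]])), 6))  # 0.0
```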
We omit Example 2 and Example 3 for Markov games since the constructions are identical, except that they are carried out for every stage game. As described in Example 7, we formally define the outer approximation of the theory-of-mind set for Markov games as follows.
**Definition 11** (Outer Approximation of Theory of Mind).: An outer approximation of \(\mathcal{T}\left(D\right)\) is a set denoted by \(\overline{\mathcal{T}}\left(D\right)\) that satisfies \(\mathcal{T}\left(D\right)\subseteq\overline{\mathcal{T}}\left(D\right)\) for every \(D\in\mathcal{D}\), and can be written in the form,
\[\overline{\mathcal{T}}\left(D\right)=\left\{Q\in\mathcal{Q}:\left|Q_{h}\left( s,\mathbf{a}\right)-\hat{Q}_{h}\left(s,\mathbf{a}\right)\right|\leq\rho_{h}^{ \left(Q\right)}\left(s,\mathbf{a}\right),\forall\ \mathbf{a}\in\mathcal{A},h\in\left[H\right],s\in\mathcal{S}\right\}, \tag{20}\]
for some point estimate \(\hat{Q}\) and radius \(\rho^{\left(Q\right)}\).
We call \(\overline{\mathcal{T}}\left(D\right)\) a linear outer approximation if \(\hat{Q}\) is linear in \(\left\{\left\{r_{h}^{\left(k\right)}\right\}_{h=1}^{H}\right\}_{k=1}^{K}.\)
### The Cheapest Way to Move ToM into UN for Markov Games
In this subsection, we restate the attacker's problem for multi-agent reinforcement learners.
**Definition 12** (Attacker's Problem).: The attacker's problem with target policy \(\pi^{\dagger}\) is,
\[\inf_{D^{\dagger}\in\mathcal{D}\left(D\right)} C\left(D,D^{\dagger}\right) \tag{21}\] \[s.t.\ \mathcal{T}\left(D^{\dagger}\right)\subseteq\mathcal{U} \left(\pi^{\dagger}\right).\]
For reward poisoning problems, we consider the following \(L_{1}\) cost.
**Example 8** (\(L_{1}\) Cost Function).: For reward poisoning problem, where \(\mathcal{D}^{\left(R\right)}\left(D\right)\) is all possible datasets in the form \(D^{\dagger}=\left\{\left\{\left(s_{h}^{\left(k\right)},\mathbf{a}_{h}^{\left( k\right)},r_{h}^{\dagger,\left(k\right)}\right)\right\}_{h=1}^{H}\right\}_{k=1}^{K}\) that are modified from \(D=\left\{\left\{\left(s_{h}^{\left(k\right)},\mathbf{a}_{h}^{\left(k\right)}, r_{h}^{\left(k\right)}\right)\right\}_{h=1}^{H}\right\}_{k=1}^{K}\), we define the \(L_{1}\) cost by \(C^{\left(1\right)}\left(D,D^{\dagger}\right)=\sum_{k=1}^{K}\sum_{h=1}^{H} \left|r_{h}^{\left(k\right)}-r_{h}^{\dagger,\left(k\right)}\right|.\)
We use the same \(\iota\) strictness relaxation of the unique Nash set and the linear outer approximation of the theory-of-mind set to convert (21) into a linear program, which can be solved efficiently. We state this observation as the following theorem.
**Theorem 2** (Reward Poisoning Linear Program).: _Given \(\iota>0\) and a linear \(\overline{\mathcal{T}}\), the following problem is a relaxation of the attacker's reward poisoning problem and can be converted into a linear program,_
\[\min_{D^{\dagger}\in\mathcal{D}^{\left(R\right)}\left(D\right)} C^{\left(1\right)}\left(D,D^{\dagger}\right) \tag{22}\] \[s.t.\ \overline{\mathcal{T}}\left(D^{\dagger}\right)\subseteq \underline{\mathcal{U}}\left(\pi^{\dagger};\iota\right).\]
**Example 9** (Maximum Likelihood Centered Linear Program).: In the case where \(\hat{R}=R^{\text{ MLE}}\), \(\hat{P}=P^{\text{ MLE}}\), and \(\overline{\mathcal{T}}\left(D\right)\) is constructed as described in Example 7, (22) can be converted into a linear program even without explicitly constructing the \(\overline{\mathcal{T}}\left(D\right)\) set. We provide an intuition here and the formal construction in the proof of Theorem 2,
\[\min_{r^{\dagger}\in\left[-b,b\right]^{K}} \sum_{k=1}^{K}\sum_{h=1}^{H}\left|r_{h}^{\left(k\right)}-r_{h}^{\dagger,\left(k\right)}\right| \tag{23}\] \[s.t.\ \ R^{\text{ MLE}}\ \text{is a linear function of }r^{\dagger}\text{ satisfying (15)}\] \[P^{\text{ MLE}}\ \text{is independent of }r^{\dagger}\text{ satisfying (16)}\] \[Q^{\text{ MLE}}\ \text{is a linear function of }R^{\text{ MLE}}\ \text{, thus of }r^{\dagger}\text{, satisfying (11)}\] \[\overline{Q}\text{ and }\underline{Q}\text{ are upper and lower bounds of }\overline{\mathcal{T}}\left(r^{\dagger};Q^{\text{ MLE}}\,\right)\ \text{satisfying (19)}\] \[\left(\overline{Q},\underline{Q}\right)\ \text{is in }\underline{\mathcal{U}}\left(\pi^{\dagger};\iota\right)\ \text{satisfying (14)}\]
Similar to Example 5, we move the hypercube \(\overline{\mathcal{T}}\left(r^{\dagger};Q^{\text{\,{MLE}}}\,\right)\) into the polytope \(\underline{\mathcal{U}}\left(\pi^{\dagger}\right)\) by moving one of the corners into the polytope. Note that if \(\overline{Q}\) and \(\underline{Q}\) are not constructed directly as linear
functions of \(r^{\dagger}\), and are computed by (19), then these constraints are not linear in \(r^{\dagger}\). We avoid this problem by using the dual linear program of (19). We present the details in the appendix in the proof of Theorem 2. All other constraints are linear in \(r^{\dagger}\), and as a result, (23) is a linear program.
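For intuition, the following sketch writes down the \(H=1\) (single stage game) instance of this program with the maximum-likelihood point estimate and a per-entry radius \(\rho\), moving the worst corner of the hypercube into the \(\iota\) strict polytope as described above. It is a sketch under assumptions: it uses the `cvxpy` modeling package, assumes every profile sharing the target's row or column appears in the data (cf. Theorem 3), and all names are illustrative.

```python
import numpy as np
import cvxpy as cp

def poison_rewards_one_stage(actions, rewards, b, rho, target, iota):
    """Sketch of (22) for H = 1 with the MLE point estimate.

    actions: list of joint profiles (a_1, a_2) played in the K data points.
    rewards: list of the K observed rewards, each in [-b, b].
    rho:     dict mapping a profile to its radius rho^(Q) from (20).
    target:  profile (i, j) to install as the unique strict equilibrium.
    """
    K = len(rewards)
    profiles = sorted(set(actions))
    idx = {a: [k for k in range(K) if actions[k] == a] for a in profiles}

    r_dag = cp.Variable(K)
    # The MLE entry for each observed profile is linear in the poisoned rewards.
    Q_hat = {a: sum(r_dag[k] for k in idx[a]) / len(idx[a]) for a in profiles}

    i, j = target
    cons = [r_dag >= -b, r_dag <= b]
    for (a1, a2) in profiles:
        if a1 == i and a2 != j:
            # Worst corners of the hypercube on both sides of (14).
            cons.append(Q_hat[(a1, a2)] + rho[(a1, a2)] + iota
                        <= Q_hat[(i, j)] - rho[(i, j)])
        if a2 == j and a1 != i:
            cons.append(Q_hat[(i, j)] + rho[(i, j)]
                        <= Q_hat[(a1, a2)] - rho[(a1, a2)] - iota)
    prob = cp.Problem(cp.Minimize(cp.norm(r_dag - np.asarray(rewards), 1)), cons)
    prob.solve()
    return r_dag.value, prob.value
```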
Finally, we present a sufficient but not necessary condition for the feasibility of (22) and (21). This condition applies directly to normal-form games with \(H=1\).
**Theorem 3** (Reward Poisoning Linear Program Feasibility).: _For \(\iota>0\), \(\mathcal{T}\left(D\right)\) with \(\hat{Q}=Q^{\text{ MLE}}\), and \(N_{h}(s,\boldsymbol{a})>0\) for every \(h\in\left[H\right],s\in\mathcal{S},\boldsymbol{a}\in\mathcal{A}\) where either \(a_{1}=\pi_{1,h}^{\dagger}\left(s\right)\) or \(a_{2}=\pi_{2,h}^{\dagger}\left(s\right)\), the attacker's reward poisoning problem is feasible if for every \(h\in\left[H\right],s\in\mathcal{S},\boldsymbol{a}\in\mathcal{A}\),_
\[\rho_{h}^{\left(Q\right)}\left(s,\boldsymbol{a}\right)\leq\frac{b-\iota}{4H}. \tag{24}\]
To construct a feasible attack under (24), we use the poisoned rewards in (25). An example where each agent has three actions and the target action profile being action \(\left(1,1\right)\) is shown in Table 1. With this \(r^{\dagger}\), the maximum likelihood estimate of the game has a unique Nash equilibrium \(\pi_{h}^{\dagger}\left(s\right)\) with a value of \(0\) in every stage \(\left(h,s\right)\). Furthermore, if either the radius of rewards or the radius of Q functions for the theory-of-mind set is less than \(\frac{b-\iota}{4H}\), we can show inductively that \(\pi_{h}^{\dagger}\left(s\right)\) remains the unique Nash equilibrium in every stage \(\left(h,s\right)\), thus showing that every Q function in the theory-of-mind set is also in the unique Nash set, which means the attack is feasible. The complete proof is in the appendix.
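A literal rendering of the Table 1 construction, applied per data point for one stage \((h,s)\) (the function name and the list-based encoding are our own, for illustration only):

```python
def feasible_attack(actions, rewards, target, b):
    """Poisoned rewards of Table 1: entries in the target's row (other
    columns) go to -b, entries in the target's column (other rows) go to
    +b, the target profile itself goes to 0, and everything else is left
    unchanged."""
    i, j = target
    poisoned = []
    for (a1, a2), r in zip(actions, rewards):
        if (a1, a2) == (i, j):
            poisoned.append(0.0)
        elif a1 == i:            # target row, deviating column
            poisoned.append(-b)
        elif a2 == j:            # target column, deviating row
            poisoned.append(b)
        else:
            poisoned.append(r)   # "no change"
    return poisoned
```

Since every data point with the same profile receives the same value, the maximum-likelihood estimate of each affected entry equals exactly \(0\), \(-b\) or \(b\), as in Table 1.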
## 4 Discussions
We discuss a few extensions.
* Faking a Unique Mixed Strategy Nash Equilibrium: due to the sensitivity of mixing probabilities from small perturbations of the reward function, as long as the theory-of-mind set has non-zero volume, it is impossible to install a mixed strategy profile (or stochastic policy for Markov games) as the unique equilibrium in general. However, this could be possible when the theory-of-mind set is a singleton. To characterize the unique Nash set for a mixed strategy profile, we need to extend Proposition 1 to include an additional invertibility condition on the reward function, but it is difficult to convert this condition into a linear constraint. We leave the technical details for future work.
* Faking an Optimal Policy for Single-Agent Reinforcement Learners: to attack a single-agent Markov decision process, we observe that a policy \(\pi\) is the unique optimal policy if and only if \(\pi\) is deterministic and is the strict optimal policy. As a result, the unique optimal policy set is also a polytope and can be viewed as a special case of the unique Nash set for a one-player game. In the case of reward poisoning, the attacker's problem can be formulated as a linear program similar to (22).
* Faking a Unique Coarse Correlated Equilibrium in Every Stage Game: for two-player zero-sum Markov games, \(\pi\) is the unique Markov Perfect Coarse Correlated Equilibrium if and only if \(\pi\) is the unique Markov Perfect Equilibrium. Therefore, the results in the previous section apply directly.
* Faking a Unique Markov Perfect Dominant Strategy Equilibrium for General-Sum Games: for \(n\)-player general-sum Markov games, if \(\pi\) is a deterministic policy and it is a Markov Perfect Strict Dominant Strategy Equilibrium, then \(\pi\) is the unique Markov Perfect Equilibrium. The attacker's formulation in [14] can be viewed as a special case of our results when Nash equilibria are replaced by dominant strategy equilibria.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \(\mathcal{A}_{1}\setminus\mathcal{A}_{2}\) & \(1^{\dagger}\) & \(2\) & \(3\) \\ \hline \(1^{\dagger}\) & \(0\) & \(-b\) & \(-b\) \\ \hline \(2\) & \(b\) & no change & no change \\ \hline \(3\) & \(b\) & no change & no change \\ \hline \end{tabular}
\end{table}
Table 1: A Feasible Attack |
2304.11369 | Uniqueness of Markov random fields with higher-order dependencies | Markov random fields on a countable set $\sf V$ are studied. They are
canonically set by a specification $\gamma$, for which the dependence structure
is defined by a pre-modification $(h_e)_{e\in {\sf E}}$ -- a consistent family
of functions $h_e : S^e\to [0,+\infty)$, where $S$ is a standard Borel space
and $\sf E$ is an infinite collection of finite $e\subset {\sf V}$. Different
$e$ may contain distinct number of elements, which, in particular, means that
the dependence graph ${\sf H}=({\sf V}, {\sf E})$ is a hypergraph. Given $e\in
{\sf E}$, let $\delta (e)$ be the logarithmic oscillation of $h_e$. The result
of this work is the assertion that the set of all fields $\mathcal{G}(\gamma)$
is a singleton whenever $\delta(e)$ satisfies a condition, a particular version
of which can be $\delta(e) \leq \varkappa g(n_{\sf L}(e))$, holding for all $e$
and some $\sf H$-specific $\varkappa\in (0,1)$. Here $g$ is an increasing
function, e.g., $g(n) = a+\log n$, and $n_{\sf L}(e)$ is the degree of $e$ in
the line-graph ${\sf L}({\sf H})$, which may grow ad infinitum. This uniqueness
condition is essentially less restrictive than those based on classical
Dobrushin's methods, according to which either of $|e|$, $n_{\sf L}(e)$ and
$\delta(e)$ should be globally bounded. We also prove that its fulfilment
implies that the unique element of $\mathcal{G}(\gamma)$ is globally Markov. | Dorota Kepa-Maksymowicz, Yuri Kozitsky | 2023-04-22T10:46:40Z | http://arxiv.org/abs/2304.11369v3 | # Uniqueness of Markov random fields with higher-order dependencies
###### Abstract.
Markov random fields on a countable set \(\mathsf{V}\) are studied. They are canonically set by a specification \(\gamma\), for which the dependence structure is defined by a pre-modification \((h_{e})_{e\in\mathsf{E}}\) - a consistent family of functions \(h_{e}:S^{e}\to[0,+\infty)\), where \(S\) is a standard Borel space and \(\mathsf{E}\) is an infinite collection of finite \(e\subset\mathsf{V}\). Different \(e\) may contain distinct numbers of elements, which, in particular, means that the dependence graph \(\mathsf{H}=(\mathsf{V},\mathsf{E})\) is a hypergraph. Given \(e\in\mathsf{E}\), let \(\delta(e)\) be the logarithmic oscillation of \(h_{e}\). The result of this work is the assertion that the set of all fields \(\mathcal{G}(\gamma)\) is a singleton whenever \(\delta(e)\) satisfies a condition, a particular version of which can be \(\delta(e)\leq\varkappa g(n_{\mathsf{L}}(e))\), holding for all \(e\) and some \(\mathsf{H}\)-specific \(\varkappa\in(0,1)\). Here \(g\) is an increasing function, e.g., \(g(n)=a+\log n\), and \(n_{\mathsf{L}}(e)\) is the degree of \(e\) in the line-graph \(\mathsf{L}(\mathsf{H})\), which may grow ad infinitum. This uniqueness condition is essentially less restrictive than those based on the classical Dobrushin uniqueness theorem, according to which each of \(|e|\), \(n_{\mathsf{L}}(e)\) and \(\delta(e)\) should be globally bounded. We also prove that its fulfilment implies that the unique element of \(\mathcal{G}(\gamma)\) is globally Markov.
Key words and phrases: Specification; hypergraph; uniqueness; Dobrushin condition; graph animal; tempered degree growth. 2020 Mathematics Subject Classification: 60G60; 60C05; 60K35; 82B20.
## 1. Setup
There is a persistent interest in Markov and similar random fields based on discrete structures, which can be explained by their numerous applications in mathematical physics, spatial statistics, image analysis, and many other sciences, see, e.g., [1, 15, 17, 25, 35, 38, 39, 40, 41]. A systematic presentation of the theory of such fields may be found in [24, 39, 40]. In this work, we mostly use Georgii's monograph [24] as the source for notions, facts and notations on this subject.
Let \(\mathsf{V}\) be a countable set and \((S,\mathcal{S})\) a standard Borel space. The latter means that it is measurably isomorphic to a complete and separable metric space. A random field on \(\mathsf{V}\) is a collection of random variables \((\sigma_{x})_{x\in\mathsf{V}}\) (also called _spins_) defined on some probability space that take values in \(S\) (_single-spin space_). In the canonical realization, a random field is a probability measure on \((\Sigma,\mathcal{F})\), where \(\Sigma=S^{\mathsf{V}}\) and \(\mathcal{F}=\mathcal{S}^{\mathsf{V}}\). Typically, the dependence type of a random field is _specified_ by a family \(\gamma=(\gamma_{\Lambda})_{\Lambda\in\mathcal{V}}\) of probability kernels, where \(\mathcal{V}\) is the collection of all nonempty finite subsets \(\Lambda\subset\mathsf{V}\). For \(\Delta\subset\mathsf{V}\), let \(\mathcal{F}_{\Delta}\) be the sub-\(\sigma\)-algebra of \(\mathcal{F}\) generated by the maps \(\Sigma\mapsto\sigma_{\Delta}=(\sigma_{x})_{x\in\Delta}\in S^{\Delta}\). Then for \(\Lambda\in\mathcal{V}\), the external \(\sigma\)-algebra of events outside \(\Lambda\) is \(\mathcal{F}_{\Lambda^{c}}\), \(\Lambda^{c}:=\mathsf{V}\setminus\Lambda\). A probability measure on \((\Sigma,\mathcal{F})\) is said to be specified by \(\gamma=(\gamma_{\Lambda})_{\Lambda\in\mathcal{V}}\) if it satisfies the condition
\[\mu(A|\mathcal{F}_{\Lambda^{c}})=\gamma_{\Lambda}(A|\cdot)\qquad\mu-\text{ almost surely}, \tag{1.1}\]
holding for all \(A\in\mathcal{F}\) and \(\Lambda\in\mathcal{V}\). The set of all \(\mu\) satisfying (1.1) is denoted by \(\mathcal{G}(\gamma)\). It is known, see [24, Proposition 1.24, page 17], that a given \(\mu\) belongs to \(\mathcal{G}(\gamma)\) if and only if it solves
\[\mu(A)=(\mu\gamma_{\Lambda})(A):=\int_{\Sigma}\gamma_{\Lambda}(A|\sigma)\mu(d \sigma),\qquad A\in\mathcal{F},\quad\Lambda\in\mathcal{V}, \tag{1.2}\]
known as the Dobrushin-Lanford-Ruelle equation. Usually, the existence of measures satisfying (1.1), (1.2) follows by rather standard arguments, whereas proving uniqueness is a much more nontrivial achievement.
Let \(\chi\) be a probability measure on \((S,\mathcal{S})\). For \(\Lambda\in\mathcal{V}\), by \(\chi^{\Lambda}\) we denote the corresponding product measure on \((S^{\Lambda},\mathcal{S}^{\Lambda})\). Then the _independent_ specification consists of the kernels
\[\gamma^{\chi}_{\Lambda}(\cdot|\sigma)=\chi_{\Lambda}(\cdot|\sigma):=(\chi^{ \Lambda}\times\delta_{\sigma_{\Lambda^{c}}})(\cdot), \tag{1.3}\]
where \(\delta_{\sigma_{\Lambda^{c}}}\) is the Dirac measure on \((S^{\Lambda^{c}},\mathcal{S}^{\Lambda^{c}})\), and the unique element of \(\mathcal{G}(\gamma^{\chi})\) is the product measure \(\chi^{\mathsf{V}}\). Thus, one may expect that uniqueness persists if the dependence encoded in \(\gamma\) is in a sense weak. The typical procedure of each work on the uniqueness of this kind, including the present one, is to realize this idea in a given context.
Let \(\rho=(\rho_{\Lambda})_{\Lambda\in\mathcal{V}}\) be a family of measurable functions \(\rho_{\Lambda}:\Sigma\to\mathds{R}_{+}:=[0,+\infty)\) such that \(\rho\chi=(\rho_{\Lambda}\chi_{\Lambda})_{\Lambda\in\mathcal{V}}\) is a specification. Here \(\chi_{\Lambda}\) is as in (1.3). Then \(\rho\) is called a \(\chi\)-modification [24, page 18]. Furthermore, let \((h_{\Lambda})_{\Lambda\in\mathcal{V}}\) be a family of measurable functions \(h_{\Lambda}:\Sigma\to\mathds{R}_{+}:=[0,+\infty)\) which enjoy the following consistency property
\[h_{\Lambda}(\sigma)h_{\Lambda^{\prime}}(\sigma^{\prime})=h_{\Lambda}(\sigma^{ \prime})h_{\Lambda^{\prime}}(\sigma), \tag{1.4}\]
holding for all \(\varnothing\neq\Lambda\subset\Lambda^{\prime}\), \(\Lambda^{\prime}\in\mathcal{V}\) and \(\sigma,\sigma^{\prime}\in\Sigma\) such that \(\sigma_{\Lambda^{c}}=\sigma^{\prime}_{\Lambda^{c}}\). Assume also that
\[0<\chi_{\Lambda}(h_{\Lambda})<\infty,\qquad\chi_{\Lambda}(h_{\Lambda}):=\int _{\Sigma}h_{\Lambda}d\chi_{\Lambda}, \tag{1.5}\]
holding for all \(\Lambda\in\mathcal{V}\). Then \(h\) is called a _pre-modification_. In this case, \(\rho=(h_{\Lambda}/\chi_{\Lambda}(h_{\Lambda}))_{\Lambda\in\mathcal{V}}\) is a \(\chi\)-modification, see [24, Remark 1.32, page 22].
If the pre-modification \(h\) is such that
\[h_{\Lambda}(\sigma)=\exp\left(\Phi_{\Lambda}(\sigma)\right), \tag{1.6}\]
for a certain family of functions \(\{\Phi_{\Lambda}\}_{\Lambda\in\mathcal{V}}\) (such functions are called _interaction potentials_), the corresponding random field is a _Gibbs field_. In the setting of this work, conditions under which the elements of a given pre-modification \(h\) can be written as in (1.6) were first obtained in [34]. Mostly, see e.g., [25, page 9] or [29], interaction potentials have the form
\[\Phi_{\Lambda}(\sigma)=-\sum_{\{x,y\}\in\mathcal{V}^{0}_{\Lambda}}\varphi_{x,y }(\sigma_{x},\sigma_{y}), \tag{1.7}\]
for suitable symmetric measurable functions \(\varphi_{x,y}:S^{2}\to\mathds{R}\), called _binary interaction potentials_. Here \(\mathcal{V}^{0}\) is a collection of pairs \(\{x,y\}\) and \(\mathcal{V}^{0}_{\Lambda}=\{\{x,y\}\in\mathcal{V}^{0}:\{x,y\}\cap\Lambda\neq\varnothing\}\). If one sets \(x\sim y\) whenever \(\{x,y\}\in\mathcal{V}^{0}\), then the collection \(\mathcal{V}^{0}\) determines a simple graph - the _dependence graph_ for \(h\) (and hence for the corresponding specification) - with vertex set \(\mathsf{V}\) and the adjacency relation as just mentioned. This dependence graph determines the properties of the corresponding random fields. For instance, Markov random fields with trees as dependence graphs have many specific properties related to this particular feature, see [24, Chapter 12]. The aforementioned possible 'weakness' should correspond to particular properties of the dependence graph and to sufficiently small values of the logarithmic oscillations of \(h_{\Lambda}\), which in the case of (1.6) corresponds to small oscillations of the interaction potentials. For the Ising spin model (\(\sigma_{x}=\pm 1\) and binary interactions \(\varphi_{x,y}(\sigma_{x},\sigma_{y})=-a\sigma_{x}\sigma_{y}\)) on a particular rooted tree with vertices the degrees of which rapidly grow with distance to the root, the corresponding Gibbs fields are multiple for any \(a>0\), see [31]. For a number of models with binary interactions as in (1.7) and _finite_ single-spin spaces \(S\), a comprehensive analysis of the relationship between uniqueness/nonuniqueness and the graph structure can be found in [25, 29].
In several applications, modeling dependence by binary interactions as in (1.7) proved insufficient and using higher-order interactions is being suggested, see e.g., [5, 12, 22, 23, 36, 41]. In view of this, in the present work we turn to the case where the dependence graph has edges consisting of more than two elements of \(\mathsf{V}\), i.e., is a _hypergraph_. Suppose that \(\mathsf{V}\) and \(\mathcal{V}\) are as above, and there is given an infinite collection \(\mathsf{E}\subset\mathcal{V}\) of distinct subsets \(e\subset\mathsf{V}\), none of which is a singleton.
**Assumption 1.1**.: _There is given a collection \((h_{e})_{e\in\mathsf{E}}\) of measurable functions \(h_{e}:\Sigma\to\mathds{R}_{+}\) such that each \(h_{e}\) is \(\mathcal{F}_{e}\)-measurable and the following holds_
\[\forall e\in\mathsf{E}\ \ \forall\sigma\in\Sigma\qquad m_{e}\leq h_{e}(\sigma) \leq M_{e}, \tag{1.8}\]
_for some \(m_{e},M_{e}\) such that \(m_{e}>0\)._
For \(\Lambda\in\mathcal{V}\), set \(\mathsf{E}_{\Lambda}=\{e\in\mathsf{E}:e\cap\Lambda\neq\varnothing\}\). Then define
\[h_{\Lambda}(\sigma)=\prod_{e\in\mathsf{E}_{\Lambda}}h_{e}(\sigma). \tag{1.9}\]
It is clear that the family \((h_{\Lambda})_{\Lambda\in\mathcal{V}}\) has the consistency property as in (1.4) and hence is a pre-modification. Thus, for \(\chi\) as above, we have that \(\chi_{\Lambda}(h_{\Lambda})>0\), see (1.5), and hence \(\rho=(h_{\Lambda}/\chi_{\Lambda}(h_{\Lambda}))_{\Lambda\in\mathcal{V}}\) is a \(\chi\)-modification. Thereby, \(\gamma=\rho\chi=(\rho_{\Lambda}\chi_{\Lambda})_{\Lambda\in\mathcal{V}}\) is a specification. Our aim is to establish a sufficient condition imposed on the collections \(\mathsf{E}\) and \((m_{e})_{e\in\mathsf{E}}\), \((M_{e})_{e\in\mathsf{E}}\), under which \(\mathcal{G}(\gamma)\) is a singleton. The fact that \(\mathcal{G}(\gamma)\neq\varnothing\) follows by our assumption that \((S,\mathcal{S})\) is a standard Borel space, see [24, Theorem 8.7, page 142]. For \(\Lambda\in\mathcal{V}\), we set
\[\mathsf{E}_{\Lambda}^{o}=\{e\in\mathsf{E}:e\subset\Lambda\},\qquad\partial \mathsf{E}_{\Lambda}=\{e\in\mathsf{E}_{\Lambda}:e\cap\Lambda^{c}\neq\varnothing\},\]
and also
\[\partial\Lambda=\bigcup_{e\in\partial\mathsf{E}_{\Lambda}}\left(e\cap\Lambda^ {c}\right). \tag{1.10}\]
Since all \(e\in\mathsf{E}\) are finite, \(\mathsf{E}_{\Lambda}^{o}\neq\varnothing\) for sufficiently big \(\Lambda\)'s. By (1.9) it directly follows that the elements of \(\mathcal{G}(\gamma)\) are _locally_ Markov random fields in the following sense
\[\mu(A|\mathcal{F}_{\Lambda^{c}})=\mu(A|\mathcal{F}_{\partial\Lambda}),\qquad A \in\mathcal{F} \tag{1.11}\]
holding for all \(\Lambda\in\mathcal{V}\). Following [1, 2, 20, 30], we say that \(\mu\in\mathcal{G}(\gamma)\) is _globally_ Markov, if (1.11) holds for all \(\Lambda\subset\mathsf{V}\). The significance of this property is discussed, e.g., in [1, 3, 27, 30]. For finite single-spin spaces \(S\), the existence of the global Markov property may be related to other properties of the corresponding random fields, see, e.g., [13, 27] and also [30] where its absence is shown. One of our aims is to relate this property to the uniqueness condition which we are going to derive.
Among the best-known tools for proving uniqueness is the celebrated Dobrushin condition, see [24, Chapter 8]. In the present context, this condition is satisfied if the following holds, see [24, Proposition 8.8, page 143],
\[\sup_{x\in\mathsf{V}}\sum_{e\in\mathsf{E}_{x}}\left(|e|-1\right)\delta(e)<2, \tag{1.12}\]
where
\[\mathsf{E}_{x}:=\{e\in\mathsf{E}:x\in e\},\qquad\delta(e):=\log M_{e}-\log m _{e}, \tag{1.13}\]
and \(|e|\) stands for the cardinality of \(e\subset\mathsf{V}\). If one assumes uniform boundedness of \(\delta(e)\), then the condition in (1.12) can be satisfied only in the rather trivial case of uniform boundedness of both \(|e|\) and \(|\mathsf{E}_{x}|\). In the present work, instead of that in (1.12) we obtain (in Theorem 2.6) a condition (see (2.12) below) which works also for unbounded \(|e|\), \(|\mathsf{E}_{x}|\) and \(\delta(e)\). In [1, 2, 20], it was shown that Dobrushin's uniqueness implies that the unique \(\mu\in\mathcal{G}(\gamma)\) is globally Markov. We prove that a similar statement is true also in our case: the fulfillment of the uniqueness condition (2.12) implies the global Markov property of the unique \(\mu\in\mathcal{G}(\gamma)\).
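For concreteness, the left-hand side of (1.12) is straightforward to evaluate on a finite piece of \(\mathsf{E}\); a small illustrative Python sketch (not part of the original argument, names are ours):

```python
from collections import defaultdict

def dobrushin_sum(hyperedges, delta):
    """Evaluate sup_x sum_{e in E_x} (|e| - 1) * delta(e) from (1.12);
    `hyperedges` is an iterable of frozensets of vertices and `delta`
    maps each hyperedge to its logarithmic oscillation."""
    per_vertex = defaultdict(float)
    for e in hyperedges:
        for x in e:
            per_vertex[x] += (len(e) - 1) * delta[e]
    return max(per_vertex.values())

# Toy check on three overlapping triples: the condition asks for a value < 2.
E = [frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({5, 6, 1})]
d = {e: 0.3 for e in E}
print(dobrushin_sum(E, d))  # 1.2, attained at the shared vertices 1, 3 and 5
```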
The rest of the paper has the following structure. In Section 2, we introduce all mathematical necessities and then formulate our main result in Theorem 2.6. Thereafter, we provide a number of comments aimed at indicating the significance of the present result and its role in the context of the theory of Markov fields of this kind. The same aim is pursued in Section 3 where we provide two examples. In the first one, we introduce a model, for a version of which the classical Dobrushin condition is not applicable as \(|e|\) are unbounded. At the same time, Theorem 2.6 does work for this model, yielding a result obtained therein. In the second example, we deal with random interactions and obtain a result - Theorem 3.1 - that can be considered as an extension of our previous results of [33] to the case of higher-order dependencies. Finally, Section 4 contains the proof of both mentioned theorems.
## 2. The Result
As mentioned above, the structure of the dependence graph corresponding to a given specification predetermines the properties of \(\mathcal{G}(\gamma)\). Thus, we begin by introducing a special kind of graphs and some related facts and terminology based on our previous works [32, 33]. Recall that, for a finite set \(\Lambda\), by \(|\Lambda|\) we denote its cardinality. For a subset, e.g., \(\Delta\subset\mathsf{V}\), by \(\Delta^{c}\) we denote the complement, i.e., \(\mathsf{V}\setminus\Delta\).
### Graphs of tempered degree growth
Let \(G=(V,W)\) be a countably infinite simple graph, i.e., it does not have loops and multiple edges. We shall also assume that \(G\) does not have isolated vertices. Each edge \(w\in W\) is identified with a pair \(x,y\in V\), and we write \(x\sim y\) if there exists \(w\in W\) such that \(w=\{x,y\}\). A path \(\vartheta(x,y)\) in \(G\) is a sequence \(x_{0},x_{1},\ldots,x_{n}\) such that \(x_{0}=x\), \(x_{n}=y\) and \(x_{l}\sim x_{l+1}\) for all \(l=0,\ldots,n-1\). Its length \(|\vartheta(x,y)|\) is set to be \(n\). A path is called _simple_ if all \(x_{l}\), \(l=0,\ldots,n\) are distinct. Then \(d_{G}(x,y)=\min_{\vartheta(x,y)}|\vartheta(x,y)|\) is a metric on \(V\) if one sets \(d_{G}(x,y)=+\infty\) whenever there is no \(\vartheta\) connecting \(x\) and \(y\). A subgraph, \(G^{\prime}\subset G\), is a graph whose vertex set, \(V^{\prime}\), and edge set, \(W^{\prime}\), are subsets of \(V\) and \(W\), correspondingly. We say that a subset \(V^{\prime}\subset V\) generates a subgraph \(G^{\prime}\) if the edge set of the latter consists of all those \(w\in W\) which satisfy \(w\subset V^{\prime}\). For a subgraph, \(G^{\prime}\), by \(d_{G^{\prime}}\) we mean the metric defined as above with the use of vertices and edges of \(G^{\prime}\) only. It is clear that \(d_{G}(x,y)\leq d_{G^{\prime}}(x,y)\) in that case. A graph \(G\) (resp. subgraph \(G^{\prime}\)) is said to be connected if \(d_{G}(x,y)<\infty\) (resp. \(d_{G^{\prime}}(x,y)<\infty\)) for all its vertices \(x\) and \(y\). For \(x\in V\), we set \(W_{x}=\{w\in W:x\in w\}\). Then the degree of \(x\), denoted \(n_{G}(x)\), is set to be the cardinality of \(W_{x}\). Recall that we assume \(n_{G}(x)>0\) for each \(x\). Additionally, we assume that \(n_{G}(x)<\infty\) for all \(x\in V\), i.e. the graph \(G\) is _locally finite_. If
\[\sup_{x\in V}n_{G}(x)=:\bar{n}_{G}<\infty, \tag{2.1}\]
the graph \(G\) is said to be of bounded degree. In general, we do not assume (2.1) to hold for our graphs.
Let \(A\) be a finite nonempty subset of \(V\), and let \(A\) denote also the subgraph generated by \(A\). We call \(A\) an _animal_ if it is connected. Let \(\vartheta\) be a simple path; by \(A_{\vartheta}\) we denote the graph generated by the vertices of \(\vartheta\). Clearly, \(A_{\vartheta}\) is an animal. Let \(g:\mathds{N}\to\mathds{R}_{+}\) be a strictly increasing function. For an animal, \(A\), set
\[\mathfrak{G}(A;g)=\frac{1}{|A|}\sum_{x\in A}g(n_{G}(x)). \tag{2.2}\]
If the graph is of bounded degree, then \(\mathfrak{G}(A;g)\leq g(\bar{n}_{G})\) for all subsets \(A\subset V\). On the other hand, if the graph fails to satisfy (2.1), then \(\mathfrak{G}(A;g)\) can be made arbitrarily big by taking \(x\) with big enough \(n_{G}(x)\) (a hub) and small enough \(A\). These observations point to the possibility of controlling the increase of \(n_{G}(x)\) by means of the averages as in (2.2). That is, one can say that \(G\) has a tempered degree growth if \(\mathfrak{G}(A;g)\) is globally bounded for specially selected animals of arbitrarily big cardinalities. This, in particular, means that hubs ought to be sparse in such a graph, see [32] for further explanations. To make this idea more precise, we take \(B_{r}(x)=\{y\in V:d_{G}(x,y)\leq r\}\), \(r>0\), \(x\in V\), and then define
\[\mathcal{A}_{r}(x)=\{A\subset B_{r}(x):x\in A,\ |A|\geq r+1\ \text{and}\ A\ \text{is an animal}\}. \tag{2.3}\]
**Definition 2.1**.: _For given \(g\) and \(\bar{a}>0\), let \(\mathfrak{G}(A,g)\) be as in (2.2). The graph \(G\) is said to be \((g,\bar{a})\)-tempered if for each \(x\in V\), there exists a strictly increasing sequence \(\{N_{k}\}_{k\in\mathds{N}}\subset\mathds{N}\) such that the following holds_
\[\sup_{x\in V}\sup_{k\in\mathds{N}}\max_{A\in\mathcal{A}_{N_{k}}(x)}\mathfrak{G }(A;g)=\bar{a}.\]
The property just introduced is closely related to the properties of simple paths in the corresponding graph. In view of this, we introduce the following families of them.
**Definition 2.2**.: _By \(\Theta_{r}(x)\) we denote the family of all simple paths \(\vartheta\) originated at \(x\in V\) such that \(A_{\vartheta}\in\mathcal{A}_{r}(x)\). That is, \(A_{\vartheta}\subset B_{r}(x)\) and \(|\vartheta|\geq r\). Furthermore, for \(N\geq r+1\), we set_
\[\Theta_{r}^{N}(x)=\{\vartheta\in\Theta_{r}(x):|\vartheta|=N-1\}. \tag{2.4}\]
_Note that \(|A_{\vartheta}|=N\) whenever \(\vartheta\in\Theta_{r}^{N}(x)\)._
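The average (2.2) is elementary to compute; the following toy sketch (plain adjacency dictionaries, hypothetical names) shows how a single hub inflates it for small animals and is diluted in larger ones, which is exactly what Definition 2.1 controls.

```python
import math

def degree_average(adjacency, animal, g=math.log):
    """The average (2.2): (1/|A|) * sum_{x in A} g(n_G(x)) for a vertex
    subset `animal` of the graph given by an adjacency dictionary."""
    return sum(g(len(adjacency[x])) for x in animal) / len(animal)

# Toy usage on a star with 4 leaves: the hub dominates small animals.
star = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0}}
print(degree_average(star, {0, 1}))           # (log 4 + log 1) / 2 ~ 0.69
print(degree_average(star, {0, 1, 2, 3, 4}))  # ~ 0.28, diluted by the leaves
```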
The advantage of dealing with tempered graphs can be seen from the following fact, proved in our previous work.
**Proposition 2.3**.: _[_33_, Proposition 4]_ _Let \(G\) be \((g,\bar{a})\)-tempered for \(g(n)=\log n\) and some \(\bar{a}\). For a given \(x\in V\), let \(\{N_{k}\}\) be as in Definition 2.1. Then_
\[\forall k\ \forall N\geq N_{k}\qquad|\Theta_{N_{k}}^{N}(x)|\leq\exp(\bar{a}N). \tag{2.5}\]
The following family of graphs has the property as in Definition 2.1. For \(x,y\in V\), set \(m_{-}(x,y)=\min\{n_{G}(x);n_{G}(y)\}\). Let \(\phi:\mathds{N}_{0}\to\mathds{R}_{+}\) be an increasing function. A graph \(G\) is said to belong to the collection \(\mathds{G}_{-}(\phi)\) if it satisfies
\[d_{G}(x,y)\geq\phi(m_{-}(x,y)), \tag{2.6}\]
whenever \(m_{-}(x,y)\geq n_{*}\) for a \(G\)-specific \(n_{*}>0\). By virtue of (2.6), in such graphs hubs should be sparse. In a slightly different form, objects of this kind appeared in [4] as a tool of proving uniqueness for fields with random binary interactions, i.e., where \(h_{\Lambda}\) are as in (1.7) with random \(\varphi_{x,y}\), see [31, 33] for more on this subject. Their temperedness is established by the following statement.
**Proposition 2.4**.: _[_33_, Proposition 1]_ _Assume that there exists a strictly increasing sequence \(\{t_{k}\}_{k\in\mathds{N}}\subset\mathds{N}\), \(t_{k}\to+\infty\), such that \(g\) and \(\phi\) satisfy_
\[\sum_{k=1}^{\infty}\frac{g(t_{k+1})}{\phi(t_{k})}=:\bar{b}<\infty. \tag{2.7}\]
_Then each \(G\in\mathds{G}_{-}(\phi)\) is \((g,2\bar{b})\)-tempered._
Note that the rooted tree used in [31], for which Gibbs states of an Ising model are multiple for all nonzero interactions, is not tempered as the degree of a given \(x\) is \(l!\), where \(l\) is its distance to the root. Then the neighbor of this \(x\) located at distance \(l+1\) has degree \((l+1)!\), i.e., is an even bigger hub than \(x\).
**Remark 2.5**.: _Let \(G\) be \((g,\bar{a})\)-tempered with some \(\bar{a}>0\) and \(g(n)\geq g_{*}(n):=\log n\). As follows from the proof of [33, Proposition 4], the cardinality of the set defined in (2.4) satisfies_
\[|\Theta_{N_{k}}^{N}(x)|\leq\max_{\vartheta\in\Theta_{N_{k}}^{N}(x)}\exp\left( N\mathfrak{G}(A_{\vartheta};g_{*})\right)\leq\max_{\vartheta\in\Theta_{N_{k}}^{N}(x)} \exp\left(N\mathfrak{G}(A_{\vartheta};g)\right)\]
\[\leq\max_{A\in\mathcal{A}_{N_{k}}^{N}(x)}\exp\left(N\mathfrak{G}(A;g)\right) \leq\exp\left(\bar{a}N\right).\]
_That is, \(g_{*}\) is the smallest known \(g\) such that the \(g\)-tempered graphs admit estimates as in (2.5)._
### Formulating the result
As already mentioned, the specification \(\gamma\) we are going to deal with is defined by a family \((h_{e})_{e\in\mathsf{E}}\) satisfying Assumption 1.1. Let us turn now to the hypergraph structure of the underlying set \(\mathsf{V}\) defined by \(\mathsf{E}\). Here we mostly employ the terminology of [11]. In this context, each \(e\in\mathsf{E}\) serves as a hyperedge, 'connecting' its elements \(x,y,z,\cdots\in e\). We denote this dependence hypergraph by \(\mathsf{H}\); its vertex and edge sets are \(\mathsf{V}\) and \(\mathsf{E}\), respectively. For \(\Delta\subset\mathsf{V}\), we then set
\[\partial\Delta=\{y\in\Delta^{c}:\exists x\in\Delta\ \exists e\in\mathsf{E}\ \{x,y\}\subset e\}, \tag{2.8}\]
which is just an extension of (1.10) to infinite subsets. For \(x\in\mathsf{V}\), we set
\[n_{\mathsf{H}}(x)=|\mathsf{E}_{x}|, \tag{2.9}\]
where \(\mathsf{E}_{x}\) is as in (1.13). Note that \(\mathsf{E}_{x}\) and \(n_{\mathsf{H}}(x)\) are the edge neighborhood and the edge degree of \(x\), respectively. We will assume that \(\mathsf{H}\) is locally finite, i.e.,
\[\forall x\in\mathsf{V}\qquad n_{\mathsf{H}}(x)<\infty.\]
A convenient way of describing hypergraphs is to use their line-graphs, see [11, Sect. 2.1]. For a given hypergraph \(\mathsf{H}\), its line-graph \(\mathsf{L}(\mathsf{H})\) has \(\mathsf{E}\) as the vertex set, and \(e,e^{\prime}\in\mathsf{E}\) are declared incident if \(e\cap e^{\prime}\neq\varnothing\). For a given \(e\in\mathsf{E}\), we then set
\[\mathcal{E}_{e}=\{e^{\prime}\in\mathsf{E}:e^{\prime}\sim e\},\qquad n_{ \mathsf{L}}(e)=|\mathcal{E}_{e}|, \tag{2.10}\]
which are the vertex neighborhood and the vertex degree of \(e\) in \(\mathsf{L}(\mathsf{H})\), respectively. Let \(\vartheta\) be a simple path in \(\mathsf{L}(\mathsf{H})\) and \(\mathsf{A}_{\vartheta}\) the corresponding animal. Set, cf. (2.2),
\[\mathfrak{D}(\vartheta)=\frac{1}{|\mathsf{A}_{\vartheta}|}\sum_{e\in\mathsf{A }_{\vartheta}}\log\left(e^{2\delta(e)}-1\right), \tag{2.11}\]
where \(\delta(e)\) is the logarithmic oscillation of \(h_{e}\) defined in (1.13). For \(e\in\mathsf{E}\), by \(\mathcal{A}_{r}(e)\) and \(\Theta_{r}(e)\) we denote the families of animals and paths in \(\mathsf{L}(\mathsf{H})\), respectively, defined according to (2.3) and Definition 2.2. Then the temperedness of \(\mathsf{L}(\mathsf{H})\) is set according to Definition 2.1 with the use of \(\mathcal{A}_{r}(e)\). Now we are in a position to state our main result.
**Theorem 2.6**.: _Let the line-graph \(\mathsf{L}(\mathsf{H})\) be \((g,\bar{a})\)-tempered with \(g(n)\geq g_{*}(n)=\log n\) and a certain \(\bar{a}>0\). Then \(\mathcal{G}(\gamma)\) is a singleton whenever,_
\[\sup_{e\in\mathsf{E}}\sup_{k\in\mathds{N}}\max_{\vartheta\in\Theta_{N_{k}}(e)}\mathfrak{D}(\vartheta)\leq-(\bar{a}+\epsilon), \tag{2.12}\]
_holding for some \(\epsilon>0\). In this case, the unique \(\mu\in\mathcal{G}(\gamma)\) is globally Markov, i.e., such that \(\mu(\cdot|\mathcal{F}_{\Delta^{c}})=\mu(\cdot|\mathcal{F}_{\partial\Delta})\)\(\mu\)-almost surely, holding for all \(\Delta\subset\mathsf{V}\), see (2.8)._
The strength of the result just stated is illustrated in Section 3 below where we obtain uniqueness for a model to which Dobrushin's method based on (1.12) is not applicable. The proof of Theorem 2.6 follows in Section 4. Let us now turn to discussing it. The essential aspects of this statement are:
1. According to the uniqueness condition in (2.12), the dependencies encoded in \(\gamma\) should be weak 'in average'. This makes it applicable in cases where the dependencies are 'irregular' or random.
2. The just mentioned irregularity may characterize both the dependence graph and the interaction strength. That is, neither \(n_{\mathsf{L}}(e)\) nor \(|e|\) are supposed to be bounded, i.e., satisfying the corresponding versions of (2.1). We allow \(\delta(e)\) to be unbounded as well.
3. The condition in (2.12) is imposed on families \(\Theta_{r}(e)\) of simple paths in \(\mathsf{L}(\mathsf{H})\) of lengths satisfying a certain \(e\)-dependent condition - not for all \(r\). According to it, each \(\vartheta\in\Theta_{r}(e)\) may have \(e^{\prime}\) with an arbitrarily big \(\delta(e^{\prime})\), 'diluted' by \(e^{\prime\prime}\), for which \(e^{2\delta(e^{\prime\prime})}-1<1\).
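To make point (3) concrete, here is a small numerical sketch of (2.11) (illustrative values only, not from the paper): a single strongly coupled hyperedge is averaged down by weakly coupled ones along the path, which is how a threshold of the form \(-(\bar{a}+\epsilon)\) in (2.12) can still be met.

```python
import math

def path_dilution(deltas):
    """The average (2.11) along a simple path in the line-graph:
    (1/|A_theta|) * sum_e log(exp(2*delta(e)) - 1) over the hyperedges
    visited by the path, listed here through their oscillations."""
    return sum(math.log(math.exp(2.0 * d) - 1.0) for d in deltas) / len(deltas)

# One strong hyperedge (delta = 1.5) 'diluted' by weak ones (delta = 0.05,
# for which exp(2*delta) - 1 < 1).
print(path_dilution([1.5]))               # ~ 2.95 : far too big on its own
print(path_dilution([1.5] + [0.05] * 9))  # ~ -1.73 : averaged down along the path
```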
The Dobrushin uniqueness condition was formulated in [14] as a condition of weak dependence of \(\gamma_{\{x\}}(\cdot|\sigma)\) on \(\sigma_{y}\), \(y\sim x\). In [15], this condition was called \(C_{x}\) condition, and the approach was extended by formulating analogous conditions \(C_{\Lambda}\) for finite \(\Lambda\) covering the underlying set. Soon after, Dobrushin's approach became a cornerstone in the theory of Gibbs random fields, see, e.g., the bibliography in [15]. In particular, under the condition of a complete analyticity of the interaction potentials, see [16, 17], uniqueness was shown to hold.
Let us recall that in our consideration, the single-spin space \(S\) is just a standard Borel space. At the same time, in most of the models, including [15], \(S\) is finite, which substantially simplifies the theory. For such models, a more efficient technique of proving uniqueness than that based on Dobrushin's condition has been elaborated, see [6, 7] and also [25, Chapter 7]. In this technique, uniqueness of a Gibbs random field on a given graph is implied by the absence of the bond Bernoulli (disagreement) percolation on the same graph, see [7, Theorem 1] or [25, Theorem 7.2, page 76]. In our case, the key aspect of the theory is the estimate of the number of paths given in Proposition 2.3, see also Remark 2.5. According to (2.5), the percolation threshold \(p_{c}\) for the Bernoulli site percolation is just \(e^{-\bar{a}}\), which can readily be obtained as
in (4.20) below. That is, the main aspect of our approach is that we work directly with the underlying graph, the properties of which give rise to both non-percolation and uniqueness of Markov random fields.
There exist situations where the original Dobrushin condition is sharp, see [24, Theorem 16.27]. Here the interactions are binary and long ranged, which means that \(\partial\Lambda=\Lambda^{c}\), see (1.10), and hence (1.11) becomes trivial. We hope that, for binary interactions with finite range, our approach can yield more refined uniqueness conditions if one takes \(h_{e}\) built from such binary potentials, cf. (1.7), with \(e\) selected in an 'optimal' way. For instance, one may take \(e\) with properly increasing cardinalities in such a way that the corresponding \(\delta(e)\) are bounded by \(\varkappa g(n_{\mathsf{L}}(e))\) with \(g\geq g_{*}\), e.g., \(g(n)=n^{2}\), whereas the corresponding line-graph is \(g\)-tempered with this \(g\), see Remark 2.5, as well as Proposition 2.7 and the first example in Section 3 below. Another direction where an appropriate modification of our approach can be of use is studying random fields corresponding to hierarchically arranged sets \(\mathsf{E}\), see, e.g., [9]. We plan to turn to these issues in a forthcoming work.
### Further comments
The following statement provides a more explicit version of the uniqueness condition in (2.12).
**Proposition 2.7**.: _Let \(\mathsf{H}\), \(g\) and \(\bar{a}\) be as in Theorem 2.6. Assume that, for some \(\epsilon\in(0,1)\), there exists \(\varkappa\in(0,1)\), dependent on \(\bar{a}\) and \(\epsilon\) only, such that the following holds_
\[\forall e\in\mathsf{E}\qquad\delta(e)\leq\varkappa[\bar{a}+g(n_{\mathsf{L}}(e ))]. \tag{2.13}\]
_Then (2.12) is satisfied with this \(\epsilon\)._
Proof.: By (2.13) we have
\[e^{2\delta(e)}-1\leq 2\delta(e)e^{2\delta(e)}\leq\exp\bigg{(}\log\varkappa+( 2\varkappa+1)[\bar{a}+g(n_{\mathsf{L}}(e))]\bigg{)} \tag{2.14}\]
\[\leq\exp\bigg{(}\log\varkappa+3[\bar{a}+g(n_{\mathsf{L}}(e))]\bigg{)}.\]
Using (2.14) in (2.11), together with the \((g,\bar{a})\)-temperedness of \(\mathsf{L}(\mathsf{H})\), we arrive at
\[\sup_{e\in\mathsf{E}}\sup_{k\in\mathds{N}}\max_{\vartheta\in\Theta_{N_{k}}(e)}\mathfrak{D}(\vartheta)\leq\log\varkappa+6\bar{a},\]
which means that (2.12) holds for \(\varkappa=e^{-7\bar{a}-\epsilon}\), which yields the proof.
To clarify the interconnections between the properties of \(\mathsf{H}\), \(\mathsf{L}(\mathsf{H})\) and \((h_{e})_{e\in\mathsf{E}}\), let us impose some further restrictions. In particular, we assume that
\[\bigcap_{e\in\mathsf{E}_{x}}e=\{x\}. \tag{2.15}\]
This separability assumption implies the following.
* (a) Each \(e\) can contain at most one \(x\) such that \(n_{\mathsf{H}}(x)=1\), i.e., such that it is contained only in this \(e\).
* (b) If distinct \(x\) and \(y\) are contained in a given \(e\), and if they are contained in a certain \(e^{\prime}\sim e\), then there exist other \(e_{1},e_{2}\in\mathsf{E}\) such that \(x\in e_{1}\), \(y\in e_{2}\), and also \(x\in e_{2}^{c}\) and \(y\in e_{1}^{c}\).
These observations further yield
\[|e|-1\leq n_{\mathsf{L}}(e)\leq\sum_{x\in e}(n_{\mathsf{H}}(x)-1)\leq|e|\max_ {x\in e}(n_{\mathsf{H}}(x)-1). \tag{2.16}\]
The proof of (a) is obvious. To prove (b) one observes that the assumption that \(y\) is out of any \(e_{1}\) which contains \(x\) contradicts (2.15). The lower bound for \(n_{\mathsf{L}}(e)\) in (2.16) follows first by (a) - as each but possibly one of \(y\in e\) should have \(e^{\prime}\in\mathcal{E}_{e}\), see (2.10) - and then by (b), which yields that each such \(y\) should have its own \(e^{\prime}\). The upper bound follows by (b), where subtracting \(1\) is to take into account that \(x\) belongs to \(e\). Clearly (2.15) is essential for the properties of \(\mathsf{H}\). However, from the point of view of the application to the random fields considered here it is not
too restrictive for the following reason. First of all, one has to mention that the construction can be modified to include the possibility of taking the single-state (spin) space \(S\) dependent on \(x\), i.e., to take \(\prod_{x\in\mathsf{V}}S_{x}\) instead of \(S^{\mathsf{V}}\). Then one repeats the whole construction after imposing suitable conditions on \(S_{x}\), uniform in \(x\). Now if (2.15) fails to hold, one introduces an equivalence relation and passes to equivalence classes (each is finite) consisting of those \(x,y,...\) that belong to the intersections. Then the factor-hypergraph obtained thereby enjoys the separability in question. Thereafter, one sets strings \(\sigma_{[x]}=(\sigma_{x},\sigma_{y},\dots)\) as new spins lying in the corresponding product spaces and described by the corresponding products of \(\chi\). Similar arguments were used in [4] where \(x\) and \(y\) were declared similar if \(\varphi_{x,y}\) is big, which allowed for including such terms in the new reference measure in place of \(\chi^{\mathsf{V}}\).
The bounds obtained in (2.16) show that \(n_{\mathsf{L}}(e)\) can be big (and hence \(e\) can be a hub) because: (a) \(|e|\) is big; (b) many of \(x\in e\) have many neighbors in \(\mathsf{H}\). The bounds as in (2.16) can be used as follows. Assume that, cf. (2.9),
\[\sup_{x\in\mathsf{V}}n_{\mathsf{H}}(x)=:\bar{n}_{\mathsf{H}}<\infty.\]
Then \(n_{\mathsf{L}}(e)\) satisfies the following two-sided estimate
\[|e|-1\leq n_{\mathsf{L}}(e)\leq|e|(\bar{n}_{\mathsf{H}}-1),\]
and hence is controlled by \(|e|\). This may allow one to use \(|e|\) in definitions like (2.2) and (2.6) instead of \(n_{\mathsf{L}}(e)\). Also, if \(d_{\mathsf{L}}(e,e^{\prime})\) satisfies
\[d_{\mathsf{L}}(e,e^{\prime})\geq\phi(\min\{|e|;|e^{\prime}|\}),\]
for \(\phi\) and \(g=g_{*}\) satisfying (2.7), see Remark 2.5, then \(\mathsf{L}(\mathsf{H})\) satisfies the conditions of Theorem 2.6. This might yield a more direct and thus convenient way of controlling the graph \(\mathsf{H}\) than that based on the degrees \(n_{\mathsf{L}}\).
## 3. Examples
To illustrate the strength of the uniqueness condition formulated in Theorem 2.6, we provide two examples. We begin by introducing a class of models where the underlying line-graphs are tempered.
### The overlapping cliques model
Let \(\{n_{l}\}_{l\in\mathds{N}}\subset\mathds{N}\) be given. With the help of this sequence, we furnish \(\mathsf{V}\) with the following structure. One starts with a subset \(e_{1}\subset\mathsf{V}\) containing \(n_{1}=|e_{1}|\) elements. Then one takes subsets \(e_{2,1},\dots,e_{2,n_{1}}\) containing \(n_{2}\) elements each, in such a way that: (a) \(e_{2,i}\cap e_{2,j}=\varnothing\), \(i\neq j\); (b) \(e_{1}\cap e_{2,i}\) is a singleton, the unique element of which is denoted \(x_{i}\). In other words, each \(x_{i}\in e_{1}\) is an element of the corresponding \(e_{2,i}\). Next, one takes \(e_{3,i,k}\), \(i=1,\dots n_{1}\), \(k=1,\dots,n_{2}-1\). All \(e_{3,i,k}\) are pairwise disjoint, each containing \(n_{3}\) elements and \(e_{3,i,k}\cap e_{2,i}=\{x_{i,k}\}\). That is, each \(x_{i,k}\in e_{2,i}\) is an element of \(e_{3,i,k}\). This procedure is then continued ad infinitum yielding
\[\mathsf{E}:=\{e_{m,i_{1},\dots,i_{m-1}}:m\in\mathds{N},i_{1}\leq n_{1},i_{2} \leq n_{2}-1,\dots i_{m-1}\leq n_{m-1}-1\},\]
which exhausts \(\mathsf{V}\). Thereby, one gets the hypergraph \(\mathsf{H}=(\mathsf{V},\mathsf{E})\). Its line-graph \(\mathsf{L}(\mathsf{H})\) is a rooted tree of which \(\{n_{l}\}\) is the degree sequence. If one sets an appropriate graph structure on each \(e_{m,i_{1},\dots,i_{m-1}}\), then the resulting graph becomes a Husimi tree, see [36, page 259], [37, page 220], or [19]. In particular, one can turn each \(e_{m,i_{1},\dots,i_{m-1}}\) into a clique - a complete graph \(K_{n_{m}}\), see [19]. Now we define, cf. (1.8),
\[h_{e}(\sigma)=\exp\left(K\varphi_{e}(\sigma_{e})\right),\qquad 0<a_{e}\leq \varphi_{e}(\sigma_{e})\leq b_{e},\quad K>0, \tag{3.1}\]
which yields
\[\delta(e)=K(b_{e}-a_{e})=:Kc_{e}. \tag{3.2}\]
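A small constructive sketch of this hypergraph for constant \(n_{l}=n\) and finite depth (integer vertex labels and all names are our own illustration) confirms the line-graph structure used below: every interior clique meets exactly \(n\) other cliques.

```python
from itertools import count

def overlapping_cliques(n, depth):
    """Hyperedge set of the overlapping cliques model with constant n_l = n:
    the root clique has n vertices; every vertex of a clique except the one
    shared with its parent gets its own child clique of size n."""
    fresh = count()
    root = frozenset(next(fresh) for _ in range(n))
    edges, frontier = [root], [(root, None)]
    for _ in range(depth):
        new_frontier = []
        for e, shared in frontier:
            for x in e:
                if x == shared:
                    continue
                child = frozenset([x] + [next(fresh) for _ in range(n - 1)])
                edges.append(child)
                new_frontier.append((child, x))
        frontier = new_frontier
    return edges

# Line-graph degrees: the root meets its n children, every other interior
# clique meets one parent and n - 1 children, cf. the n-tree noted after (3.4).
E = overlapping_cliques(n=3, depth=2)
deg = {e: sum(1 for f in E if f is not e and e & f) for e in E}
interior = [e for e in E if deg[e] > 1]
print(len(E), {deg[e] for e in interior})  # 10 hyperedges, interior degrees {3}
```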
Recall that each \(e_{m,i_{1},\dots,i_{m-1}}\) with the same \(m\) contains the same number \(n_{m}\) of elements. Assume that they are isomorphic if furnished with a graph structure. This, in particular, means that all \(\varphi_{e}\), and hence \(c_{e}\), are the same for such \(e\). In this case, we also write \(e_{m}\) meaning one of these sets. An example can be the model as in [36], where each \(n_{l}=4\) and
all \(e\) are \(C_{4}\) with the corresponding \(\varphi_{e}\), see also [37, page 220], where \(n=3\). By construction, each \({\sf E}_{x}\), see (1.13), contains two elements: \(e_{m}\) and \(e_{m+1}\) for \(x\)-dependent \(m\). For instance, each \(x\in e_{1}\) is contained also in some \(e_{2}\). Each but one element of \(e_{2,i}\) is contained also in \(e_{3,i,k}\) for some \(k\). In this case, the Dobrushin uniqueness condition (1.12) takes the form
\[K\sup_{m\geq 1}\big{[}c_{e_{m}}(n_{m}-1)+c_{e_{m+1}}(n_{m+1}-1)\big{]}<2. \tag{3.3}\]
Thus, to verify (3.3) both sequences \(\{c_{e_{m}}\}\) and \(\{n_{m}\}\) ought to be bounded. If they all are constant, i.e., \(c_{e_{m}}=c\) and \(n_{m}=n\), as it is in [36, 37], then (3.3) turns into
\[K<1/c(n-1). \tag{3.4}\]
In this case, the line graph \({\sf L}({\sf H})\) is an \(n\)-tree and we have that \(n_{\sf L}=n\); hence, \(\bar{a}=\log n\), see Definition 2.1. And also \(\mathfrak{D}(\vartheta)=\mathfrak{D}(e)=\log(e^{2Kc}-1)\), which means that the condition in (2.12) takes the form
\[K<\frac{1}{2c}\log\frac{n+1}{n},\]
that is qualitatively comparable with (3.4) and has a similar asymptotic as \(n\to+\infty\). For unbounded sequences \(\{c_{e_{m}}\}\) and \(\{n_{m}\}\) - where (1.12), hence (3.3), does not work - we have the following result. Assume that there is given an increasing sequence \(\{l_{s}\}\subset{\mathds{N}}\), which we use to impose the following conditions on the sequence \(\{n_{l}\}\): \(n_{1+s}\leq n_{1}\), for \(1\leq s\leq l_{1}-1\); \(n_{1+l_{1}+s}\leq n_{1+l_{1}}\) for \(1\leq s\leq l_{2}-1\), and so on. In other words, the increase of \(n_{m}\) is allowed only when \(m\) takes values \(m_{s}\), \(s\geq 1\); where \(m_{1}=1\) and \(m_{s}:=1+l_{1}+\cdots+l_{s-1}\) for \(s\geq 2\). Moreover, the elements of these sequences are supposed to satisfy
\[l_{s}\geq\phi(n_{m_{s}}):=\left[\log n_{m_{s}}\right]^{2}. \tag{3.5}\]
That is, the tree - the line-graph of the hypergraph defined above, satisfies (2.6) with this \(\phi\). Now we take \(t_{k}=\exp(a^{k})\), \(k\in{\mathds{N}}\) for some \(a>1\). For this sequence and \(g(t)=\log t\), we then have
\[\bar{b}=\sum_{k=1}^{\infty}\frac{g(t_{k+1})}{\phi(t_{k})}=a\sum_{k=1}^{\infty}a^{-k}=\frac{a}{a-1}. \tag{3.6}\]
By Proposition 2.4 it follows that the tree \({\sf L}({\sf H})\) is \((g,\bar{a})\)-tempered with such \(g\) and \(\bar{a}=2a/(a-1)\). By Proposition 2.7 we then conclude that the set of the corresponding Markov fields is a singleton if
\[Kc_{e_{m}}<e^{-14}(2+\log n_{m}), \tag{3.7}\]
by which \(c_{e_{m}}\) can also be unbounded. Assume now that \(\varphi_{e}(\sigma_{e})\) has the following form
\[\varphi_{e}(\sigma_{e})=\frac{1}{|e|}\sum_{\{i,j\}\subset e}\sigma_{i}\sigma_ {j},\qquad\sigma_{i}=\pm 1, \tag{3.8}\]
which means that each \(e\) is turned into a clique, and the interaction within each clique is of Curie-Weiss type. That is, we are dealing with an Ising spin model on a Husimi tree of this kind. Simple calculations show that
\[c_{e}=\frac{|e|-1}{2}+\frac{1}{|e|}\left[\frac{|e|}{2}\right]\leq|e|, \tag{3.9}\]
where \([\alpha]\) stands for the integer part of \(\alpha>0\). That is, \(c_{e}\) as in (3.9) fails to satisfy (3.7) for big \(|e|\) if the growth of the sequence \(\{n_{l}\}\) is governed by (3.5). In view of this, we impose a more restrictive condition on the growth of this sequence. Namely, instead of (3.5) we set
\[l_{s}\geq\phi(n_{m_{s}})=[n_{m_{s}}]^{2}.\]
In this case, for \(t_{k}=a^{k}\), \(a>1\), we have that (3.6) holds true for \(g(t)=t\) and \(\phi(t)=t^{2}\). Thus, \({\sf L}({\sf H})\) is \((g,\bar{a})\)-tempered with this \(g\) and \(\bar{a}=2a/(a-1)\). Then the conditions of Theorem 2.6 are satisfied if, cf. (3.7) and (3.9),
\[K<2e^{-14},\]
where we have taken into account that \(n_{m}\geq 2\).
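A throwaway brute-force check of the oscillation formula (3.9), not part of the argument and intended for small \(n\) only:

```python
from itertools import product

def oscillation_curie_weiss(n):
    """Brute-force oscillation c_e = max - min of the clique interaction
    (3.8) over all +/-1 spin configurations of a clique with n vertices."""
    vals = []
    for spins in product((-1, 1), repeat=n):
        vals.append(sum(spins[i] * spins[j]
                        for i in range(n) for j in range(i + 1, n)) / n)
    return max(vals) - min(vals)

for n in range(2, 7):
    closed = (n - 1) / 2 + (n // 2) / n     # the closed form (3.9)
    print(n, oscillation_curie_weiss(n), closed)  # the last two columns agree
```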
### Random interactions
In this subsection, we obtain a generalization to higher-order dependencies (many-body interactions) of our previous result in [33], which in turn is a refinement and a generalization of the classical Bassalygo-Dobrushin work [4]. Assume that the functions \(h_{e}\) - which determine the specification \(\gamma\), and hence the set \(\mathcal{G}(\gamma)\) - are random. Namely, each \(h_{e}\) is as in (3.1) with a random function \(\varphi_{e}\), for which the bounds \(a_{e}\) and \(b_{e}\) are also random. In particular, \(\varphi_{e}\) may be as in (3.8) times a random numerical factor \(q_{e}\). In this case, the set \(\mathcal{G}(\gamma)\) also becomes random.
**Theorem 3.1**.: _Let the line-graph \(\mathsf{L}(\mathsf{H})\) be as in Theorem 2.6 and \(h_{e}\) be as in (3.1) with independent and identically distributed \(\varphi_{e}\) such that the bounds \(c_{e}\), see (3.2), satisfy_
\[\forall K>0\ \forall e\qquad\mathsf{E}\,e^{Kc_{e}}<\infty. \tag{3.10}\]
_Then there exists \(K_{*}>0\) such that the set \(\mathcal{G}(\gamma)\) is almost surely a singleton whenever \(K<K_{*}\). The unique \(\mu\in\mathcal{G}(\gamma)\) has the global Markov property._
The proof of this theorem is given after the proof of Theorem 2.6 performed in the next section. Here we provide some related comments. Markov random fields with random interactions naturally appear in statistical physics as states of thermodynamic equilibrium (Gibbs states) of disordered physical systems, e.g., disordered magnets. A systematic presentation of the mathematical theory of disordered systems can be found in [10]. According to the classification adopted therein, in Theorem 3.1 we deal with quenched Gibbs states. The first works dealing with such objects appeared in the 1980s [4, 21]. Then they were continued in [8, 18, 26], see also [33] and the articles quoted in this work. In the seminal work by Bassalygo and Dobrushin [4], the question of uniqueness of corresponding Gibbs states on regular lattices was studied. As a result, a uniqueness condition was obtained by a technique based on _gluing out_ vertices of the original lattice with subsequent passing to coarse-grained graphs of a special structure, in which vertices of high degree (hubs) were sparse. Despite the title of that paper, the technique used there is too complicated and sometimes unclear. In [33] we obtained a similar result valid also for graphs with unbounded vertex degrees, in which _unboundedness_ is controlled in a certain way, i.e., is tempered. To the best of our knowledge, this is the first result of this kind obtained for Gibbs fields with binary random interactions on unbounded degree graphs. In Theorem 3.1 we extend this result to the case of higher-order interactions. Additionally, we obtain here the global Markov property for the unique \(\mu\in\mathcal{G}(\gamma)\). A further extension can be dropping the condition of the identical distribution of \(c_{e}\), and then imposing 'averaged weakness' conditions like the one in (2.12). We plan to turn to this issue in a separate work.
## 4. The Proofs
In view of Remark 2.5, it is enough to prove Theorem 2.6 for \(g_{*}\). Hence, below we set \(g(n)=g_{*}(n)=\log n\).
### Preparatory part
We begin by making more precise our notations. By \(\mathsf{C},\mathsf{D}\subset\mathsf{E}\), etc, we denote nonempty finite sets of vertices of the line-graph \(\mathsf{L}(\mathsf{H})\), and also its subgraphs generated by these sets of vertices. By \(\mathsf{A}\subset\mathsf{E}\) we always mean such a subset, for which the generated graph is connected. Define
\[\mathsf{B}_{r}(e)=\{e^{\prime}\in\mathsf{E}:d_{\mathsf{L}}(e,e^{\prime})\leq r \},\qquad\mathsf{S}_{r}(e)=\{e^{\prime}\in\mathsf{E}:d_{\mathsf{L}}(e,e^{ \prime})=r\}.\]
Next, for a subset, \(\mathsf{C}\subset\mathsf{E}\), we set
\[\langle\mathsf{C}\rangle=\{x\in\mathsf{V}:\exists e\in\mathsf{C}\ x\in e\},\]
that is, \(\langle\mathsf{C}\rangle\) is the collection of all \(x\in\mathsf{V}\) contained in at least one \(e\in\mathsf{C}\).
For \(r>0\) and a given \(x\in\mathsf{V}\), we fix some \(e_{x}\in\mathsf{E}_{x}\), and then set
\[\Lambda_{x,r}=\langle\mathsf{B}_{r}(e_{x})\rangle. \tag{4.1}\]
For each \(e\in\mathsf{S}_{r+1}(e_{x})\), it follows that \(e\cap\Lambda_{x,r}\neq\varnothing\) as this \(e\) has neighbors in \(\mathsf{B}_{r}(e_{x})\). At the same time, if \(e\in\mathsf{S}_{q}(e_{x})\) with \(q>r+1\), then \(e\cap\Lambda_{x,r}=\varnothing\), which means that, see (1.10),
\[\partial\Lambda_{x,r}=\langle\mathsf{S}_{r+1}(e_{x})\rangle\setminus\Lambda_{x,r}=\bigcup_{e\in\mathsf{S}_{r+1}(e_{x})}\left(e\cap\Lambda_{x,r}^{c}\right). \tag{4.2}\]
Following [20] we prove Theorem 2.6 by establishing so called _strong uniqueness_, which implies both the uniqueness in question and the global Markov property. To this end, we fix some nonempty \(\Delta\subset\mathsf{V}\), and also \(\omega\in\Sigma\). Then set
\[\mathcal{V}_{\Delta}=\{\Lambda\in\mathcal{V}:\Lambda\subset\Delta\}. \tag{4.3}\]
For \(\sigma\in\Sigma\), by \(\sigma_{\Delta}\times\omega_{\Delta^{c}}\) we mean the element of \(\Sigma\) such that \((\sigma_{\Delta}\times\omega_{\Delta^{c}})_{x}=\sigma_{x}\) for \(x\in\Delta\), and \((\sigma_{\Delta}\times\omega_{\Delta^{c}})_{x}=\omega_{x}\) for \(x\in\Delta^{c}\). For such fixed \(\Delta\) and \(\omega\), we then define
\[\gamma_{\Lambda}^{\Delta,\omega}(A|\sigma_{\Delta})=\gamma_{\Lambda}(A|\sigma _{\Delta}\times\omega_{\Delta^{c}}),\qquad A\in\mathcal{F}_{\Delta},\quad \Lambda\in\mathcal{V}_{\Delta}, \tag{4.4}\]
which is a probability kernel from \(\mathcal{S}^{\Delta\setminus\Lambda}\) to \(\mathcal{F}_{\Delta}\). Then \(\gamma^{\Delta,\omega}=(\gamma_{\Lambda}^{\Delta,\omega})_{\Lambda\in\mathcal{ V}_{\Delta}}\) is a specification, which coincides with \(\gamma\) for \(\Delta=\mathsf{V}\) and any \(\omega\). By \(\mathcal{G}(\gamma^{\Delta,\omega})\) we denote the set of all probability measures on \(\mathcal{F}_{\Delta}\) which satisfy, cf. (1.2),
\[\mu\gamma_{\Lambda}^{\Delta,\omega}=\mu,\qquad\Lambda\in\mathcal{V}_{\Delta}. \tag{4.5}\]
**Lemma 4.1** (strong uniqueness).: _Let \(\Delta\), \(\omega\) and \(\gamma^{\Delta,\omega}\) be as just described. Then \(\mathcal{G}(\gamma^{\Delta,\omega})\) is a singleton whenever (2.12) is satisfied._
The proof of this lemma is based on a certain property of the functions \(h_{e}\), which we formulate now. Let \(\mathds{1}_{A}\) stand for the indicator of \(A\in\mathcal{S}\). For \(x\in\mathsf{V}\) and \(A\in\mathcal{S}\), we then set \(F_{x}^{A}(\sigma)=\mathds{1}_{A}(\sigma_{x})\). Since the functions \(h_{e}\) are separated away from zero, see (1.8), the following statement is a direct consequence of [24, Theorem 1.33, page 23].
**Proposition 4.2**.: _Let \(\Delta\), \(\omega\) and \(\gamma^{\Delta,\omega}\) be as in Lemma 4.1. Then for each \(\mu_{1},\mu_{2}\in\mathcal{G}(\gamma^{\Delta,\omega})\), the equality \(\mu_{1}(F_{x}^{A})=\mu_{2}(F_{x}^{A})\), holding for all \(x\in\Delta\) and \(A\in\mathcal{S}\), implies \(\mu_{1}=\mu_{2}\)._
### The proof of the lemma
Clearly, the lemma ought to be proved only for infinite \(\Delta\subset\mathsf{V}\). Let \(\{\Lambda_{k}\}_{k\in\mathbb{N}}\subset\mathcal{V}_{\Delta}\), see (4.3), be an ascending sequence that exhausts \(\Delta\). By (4.5), for each \(x\in\Delta\) and \(k\) such that \(x\in\Lambda_{k}\), we have
\[\mu_{1}(F_{x}^{A})-\mu_{2}(F_{x}^{A})=\int_{\Sigma}\int_{\Sigma}\left[\gamma_{ \Lambda_{k}}^{\Delta,\omega}(F_{x}^{A}|\sigma)-\gamma_{\Lambda_{k}}^{\Delta, \omega}(F_{x}^{A}|\sigma^{\prime})\right]\mu_{1}(d\sigma)\mu_{2}(d\sigma^{ \prime}).\]
Then the proof will be done if, for each \(x\in\Delta\), we construct a sequence \(\{\Lambda_{k}\}_{k\in\mathbb{N}}\subset\mathcal{V}_{\Delta}\) with the properties just mentioned, such that
\[\sup_{\sigma,\sigma^{\prime}\in\Sigma}\left|\gamma_{\Lambda_{k}}^{\Delta, \omega}(F_{x}^{A}|\sigma)-\gamma_{\Lambda_{k}}^{\Delta,\omega}(F_{x}^{A}| \sigma^{\prime})\right|\to 0,\qquad\text{as }\ k\to+\infty. \tag{4.6}\]
To proceed further, we recall that the family \((h_{\Lambda})_{\Lambda\in\mathcal{V}}\) is defined in (1.9) with \((h_{e})_{e\in\mathsf{E}}\) satisfying Assumption 1.1. In view of this and the lower bound in (1.8), we introduce
\[\bar{h}_{e}(\sigma_{e})=\frac{1}{m_{e}}h_{e}(\sigma_{e}). \tag{4.7}\]
For \(r>0\), \(x\in\Delta\) and \(e_{x}\in\mathsf{E}_{x}\) as in (4.1), (4.2), let \(\mathcal{A}_{r}(e_{x})\) be the corresponding collection of animals in \(\mathsf{L}(\mathsf{H})\), see (2.3), and \(\{N_{k}\}_{k\in\mathbb{N}}\) a sequence as in Definition 2.1. Set
\[\Lambda_{k}=\Lambda_{x,N_{k}}\cap\Delta,\qquad\partial\Lambda_{k}=\{z\in \Lambda_{k}^{c}:\exists e\ e\cap\Lambda_{k}\neq\varnothing\text{ and }z\in e\},\]
and also
\[\mathsf{D}_{k}^{o}=\{e:e\subset\Lambda_{k}\},\qquad\mathsf{D}_{k}=\{e:e\cap \Lambda_{k}\neq\varnothing\text{ and }e\cap\partial\Lambda_{k}\neq\varnothing\}.\]
For \(e\in\mathsf{D}_{k}\), we have the following possibilities: (a) \(d(e,e_{x})\leq N_{k}\); (b) \(d(e,e_{x})=N_{k}+1\). Note that \(d(e,e_{x})>N_{k}+1\) is impossible as \(e\cap\Lambda_{x,N_{k}}\neq\varnothing\). In case (a), each \(z\in e\setminus\Lambda_{k}\) lies in \(\Delta^{c}\), as it lies in \(\Lambda_{x,N_{k}}\) and \(z\in\Delta\) would imply \(z\in\Lambda_{k}\). In case (b), \(z\in e\) can be in \(\Lambda_{k}\), \(\Delta\setminus\Lambda_{k}\) and
\[\Gamma_{e}(\xi^{\omega},\eta^{\omega}):=\bar{h}_{e}(\xi_{e}^{\omega})\bar{h}_{e}( \eta^{\omega}_{e})-1\geq 0, \tag{4.13}\]
see (4.7), and
\[\Psi^{\omega}(\xi,\sigma;\eta,\sigma^{\prime}):=\prod_{e\in\mathsf{D}_{2,k}} \bar{h}_{e}(\xi_{e_{1}}\times\sigma_{e_{2}}\times\omega_{e_{3}})\bar{h}_{e}( \eta_{e_{1}}\times\sigma^{\prime}_{e_{2}}\times\omega_{e_{3}}). \tag{4.14}\]
Now we observe that the term in the second line of (4.12) is anti-symmetric with respect to the interchange \(\xi\leftrightarrow\eta\). Keeping this in mind we rewrite the product in this line in the following form
\[\prod_{e\in\breve{\mathsf{D}}_{k}}(1+\Gamma_{e}(\xi^{\omega},\eta^{\omega}))=\sum_{\mathsf{C}\subset\breve{\mathsf{D}}_{k}}\Gamma_{\mathsf{C}}(\xi^{\omega},\eta^{\omega}), \tag{4.15}\]
\[\Gamma_{\mathsf{C}}(\xi^{\omega},\eta^{\omega}):=\prod_{e\in\mathsf{C}} \Gamma_{e}(\xi^{\omega},\eta^{\omega}).\]
Then set, cf. (4.9),
\[\mathsf{D}^{-}_{2,k}=\{e\in\mathsf{S}_{N_{k}}(e_{x}):e\sim e^{\prime}\text{ for some }e^{\prime}\in\mathsf{D}_{2,k}\}. \tag{4.16}\]
Let \(\mathcal{C}\) denote the collection of subsets \(\mathsf{C}\subset\breve{\mathsf{D}}_{k}\) which satisfy: the graph \(\mathsf{C}\cup\{e_{x}\}\) has a connected component, say \(\mathsf{A}\), such that \(\mathsf{A}\cap\mathsf{D}_{2,k}^{-}\neq\varnothing\). Now we plug the first line of (4.15) in (4.12) and observe that the integral therein is nonzero only whenever the sum is taken over \(\mathsf{C}\in\mathcal{C}\), which follows from the anti-symmetry mentioned above. Indeed, the just mentioned condition means that there exists \(z\in e\cap e^{\prime}\) with \(e\in\mathsf{D}_{2,k}^{-}\) and \(e^{\prime}\in\mathsf{D}_{2,k}\). If this \(z\) lies in \(\Delta\), then the corresponding \(\xi_{z}\) and \(\eta_{z}\) are present in both \(\Gamma_{\mathsf{C}}(\xi^{\omega},\eta^{\omega})\) and \(\Psi^{\omega}(\xi,\sigma;\eta,\sigma^{\prime})\), see (4.14) and (4.8), which destroys the mentioned anti-symmetry. Furthermore, the graph \(\mathsf{A}\) contains a path, \(\vartheta(e_{x},e)\), connecting \(e_{x}\) to a certain \(e\in\mathsf{S}_{N_{k}}(e_{x})\), see (4.16); hence, \(|\vartheta(e_{x},e)|=:N\geq N_{k}\), which means that this path belongs to \(\Theta^{N}_{N_{k}}(e_{x})\) for some \(N\geq N_{k}\), see (2.4). Set
\[\Theta_{k}=\{\vartheta(e_{x},e):e\in\mathsf{D}_{2,k}^{-}\}.\]
Recall that \(\mathsf{A}_{\vartheta}\) denotes the subgraph generated by the vertices of \(\vartheta\). Since some of \(\mathsf{C}\in\mathcal{C}\) may contain several paths \(\vartheta(e_{x},e)\in\Theta_{k}\), it follows that
\[\sum_{\mathsf{C}\in\mathcal{C}}\Gamma_{\mathsf{C}}(\xi,\eta)\leq\sum_{ \vartheta\in\Theta_{k}}\Gamma_{\mathsf{A}_{\vartheta}}(\xi^{\omega},\eta^{ \omega})\sum_{\mathsf{C}\subset\breve{\mathsf{D}}_{k}\setminus\mathsf{A}_{ \vartheta}}\Gamma_{\mathsf{C}}(\xi^{\omega},\eta^{\omega}), \tag{4.17}\]
since certain \(\mathsf{A}_{\vartheta}\) may appear twice on the right-hand side of (4.17): once in the first sum, and then as a subset of \(\mathsf{C}\). We apply all these arguments in (4.12) and obtain
\[|M_{x,N_{k}}(A)| \leq \frac{2}{Z_{\Lambda_{k}}^{\Delta,\omega}(\sigma)Z_{\Lambda_{k}} ^{\Delta,\omega}(\sigma^{\prime})}\int_{S^{\Lambda_{k}}}\int_{S^{\Lambda_{k} }}\bar{h}_{e_{x}}(\xi^{\omega}_{e_{x}})\bar{h}_{e_{x}}(\eta^{\omega}_{e_{x}})\] \[\times \left(\sum_{\mathsf{C}\in\mathcal{C}}\Gamma_{\mathsf{C}}(\xi^{ \omega},\eta^{\omega})\right)\Psi^{\omega}(\xi,\sigma;\eta,\sigma^{\prime}) \chi^{\Lambda_{k}}(d\xi_{\Lambda_{k}})\chi^{\Lambda_{k}}(d\eta_{\Lambda_{k}})\] \[\leq \frac{2}{Z_{\Lambda_{k}}^{\Delta,\omega}(\sigma)Z_{\Lambda_{k}} ^{\Delta,\omega}(\sigma^{\prime})}\int_{S^{\Lambda_{k}}}\int_{S^{\Lambda_{k} }}\bar{h}_{e_{x}}(\xi^{\omega}_{e_{x}})\bar{h}_{e_{x}}(\eta^{\omega}_{e_{x}})\] \[\times \left(\sum_{\vartheta\in\Theta_{k}}\Gamma_{\mathsf{A}_{\vartheta }}(\xi^{\omega},\eta^{\omega})\right)\left(\sum_{\mathsf{C}\subset\breve{ \mathsf{D}}_{k}\setminus\mathsf{A}_{\vartheta}}\Gamma_{\mathsf{C}}(\xi^{ \omega},\eta^{\omega})\right)\Psi^{\omega}(\xi,\sigma;\eta,\sigma^{\prime})\] \[\times \chi^{\Lambda_{k}}(d\xi_{\Lambda_{k}})\chi^{\Lambda_{k}}(d\eta_{ \Lambda_{k}}).\]
The next step is to estimate the first two multipliers in the penultimate line of (4.18). By (1.13) and (4.7) it follows that
\[\bar{h}_{e}(\xi_{e})\leq e^{\delta(e)},\qquad\mbox{for all $\xi_{e}\in S^{e}$},\]
which means that, see (4.13),
\[\Gamma_{e}(\xi^{\omega},\eta^{\omega}) \leq e^{2\delta(e)}-1. \tag{4.19}\]
Since each \(\mathsf{A}\) is in \(\mathcal{A}_{N_{k}}(e_{x})\), by (4.19), (2.11) and (2.12) we get, see Definition 2.2,
\[\sum_{\vartheta\in\Theta_{k}}\Gamma_{\mathsf{A}_{\vartheta}}(\xi^{\omega},\eta^{\omega})=\sum_{N=N_{k}}^{\infty}\sum_{\vartheta\in\Theta_{N_{k}}^{N}(e_{x})}\prod_{e\in\mathsf{A}_{\vartheta}}\Gamma_{e}(\xi^{\omega},\eta^{\omega}) \tag{4.20}\] \[\leq\sum_{N=N_{k}}^{\infty}\sum_{\vartheta\in\Theta_{N_{k}}^{N}(e_{x})}\exp\left(N\mathfrak{D}(\vartheta)\right)\leq\sum_{N=N_{k}}^{\infty}e^{-\epsilon(N+1)}=\frac{e^{-\epsilon N_{k}}}{e^{\epsilon}-1},\]
where we have used also (2.5). At the same time
\[\sum_{\mathsf{C}\subset\breve{\mathsf{D}}_{k}\setminus\mathsf{A}_{\vartheta}} \Gamma_{\mathsf{C}}(\xi^{\omega},\eta^{\omega})\leq\sum_{\mathsf{C}\subset \breve{\mathsf{D}}_{k}}\Gamma_{\mathsf{C}}(\xi^{\omega},\eta^{\omega})=\prod_{ e\in\breve{\mathsf{D}}_{k}}\bar{h}_{e}(\xi^{\omega}_{e})\bar{h}_{e}(\eta^{ \omega}_{e}). \tag{4.21}\]
Now we use (4.21), (4.20) in (4.18), take into account (4.11) and arrive at
\[|M_{x,N_{k}}(A)|\leq 2(e^{\epsilon}-1)^{-1}e^{-\epsilon N_{k}},\]
which yields (4.6) and thus completes the proof.
### The proof of the theorems
We begin by proving our main statement.
_Proof of Theorem 2.6._ The uniqueness in question follows as a particular case of Lemma 4.1 corresponding to \(\Delta=\mathsf{V}\). The proof of the global Markov property of the unique \(\mu\in\mathcal{G}(\gamma)\) follows by Föllmer's arguments [20, pages 266, 267], which we repeat here for the reader's convenience.
Since \((\Sigma,\mathcal{F})\) is a standard Borel space, for \(\mu\in\mathcal{G}(\gamma)\) one may get
\[\mu^{\Delta^{c},\omega}(A)=\mu(A|\mathcal{F}_{\Delta^{c}})(\omega),\quad\mu^{ \partial\Delta,\omega}(A)=\mu(A|\mathcal{F}_{\partial\Delta})(\omega),\qquad A \in\mathcal{F}_{\Delta}. \tag{4.22}\]
Let \(F:\Sigma\to\mathds{R}\) and \(G_{\Lambda}:\Sigma\to\mathds{R}\), \(\Lambda\in\mathcal{V}_{\Delta}\), be bounded and: \(F\) is \(\mathcal{F}_{\Delta^{c}}\)-measurable; \(G_{\Lambda}\) is \(\mathcal{F}_{\Lambda}\)-measurable. Then
\[\mu(FG_{\Lambda})=\int_{\Sigma}F(\omega)\mu^{\Delta^{c},\omega}(G_{\Lambda}) \mu(d\omega). \tag{4.23}\]
At the same time, by (4.4) we have
\[\int_{\Sigma}F(\omega)\left((\mu^{\Delta^{c},\omega}\gamma_{\Lambda}^{\Delta, \omega})(G_{\Lambda})\right)\mu(d\omega)=\int_{\Sigma}F(\omega)\left((\mu^{ \Delta^{c},\omega}\gamma_{\Lambda})(G_{\Lambda})\right)(\omega)\mu(d\omega) \tag{4.24}\]
\[=\int_{\Sigma}F(\omega)\left((\mu^{\Delta^{c},\omega}(G_{\Lambda})\right)( \omega)\mu(d\omega)=\mu(FG_{\Lambda}).\]
As \(F\) one may take any \(\mathcal{F}_{\Delta^{c}}\)-measurable function; thus, (4.23) and (4.24) imply that \(\mu^{\Delta^{c},\omega}\) satisfies (4.5), which yields \(\mu^{\Delta^{c},\omega}\in\mathcal{G}(\gamma^{\Delta,\omega})\). Similarly, by repeating the same steps with \(\mathcal{F}_{\partial\Delta}\)-measurable functions \(F\) one gets that \(\mu^{\partial\Delta,\omega}\in\mathcal{G}(\gamma^{\Delta,\omega})\). Since the latter set is a singleton by Lemma 4.1, these two conditional measures coincide, which by (4.22) gives the global Markov property in question. \(\Box\)
_Proof of Theorem 3.1._ For random \(h_{e}\), the quantity introduced in (4.12) is a random variable. Thus, for fixed \(x\) and \(A\), we have a sequence of random variables
\[X_{k}^{K}=|M_{x,N_{k}}(A)|, \tag{4.25}\]
where we indicate the dependence on \(K\). Recall that \(\delta(e)=Kc_{e}\) in this case. Our aim is to show that this sequence is almost surely asymptotically degenerate at zero for each \(x\) and \(A\). By (4.18) we get
\[X_{k}^{K}\leq\sum_{\vartheta\in\Theta_{k}}\prod_{e\in\mathsf{A}_{\vartheta}}\left(e^{2Kc_{e}}-1\right). \tag{4.26}\]
Since the \(c_{e}\) are assumed to be independent and identically distributed, and in view of (3.10), the map
\[K\mapsto\tau(K):=\mathds{E}\left(e^{2Kc_{e}}-1\right)\]
is increasing and continuous (by the dominated convergence theorem), and satisfies \(\tau(0)=0\). Then either \(\tau(K)<e^{-\bar{a}}\) for all \(K>0\), or there exists a unique \(K_{*}>0\) such that \(\tau(K_{*})=e^{-\bar{a}}\) and \(\tau(K)<e^{-\bar{a}}\) for \(K<K_{*}\). Assuming \(K_{*}=+\infty\) in the former case, we have that the following holds, see (4.20), (4.26) and Remark 2.5,
\[\mathds{E}X_{k}^{K}\leq\sum_{N=N_{k}}^{\infty}\sum_{\vartheta\in\Theta_{N_{k}}^{N}(e_{x})}[\tau(K)]^{N}\leq\sum_{N=N_{k}}^{\infty}\left[e^{\bar{a}}\tau(K)\right]^{N}=\frac{[e^{\bar{a}}\tau(K)]^{N_{k}}}{1-e^{\bar{a}}\tau(K)},\qquad K<K_{*}.\]
Hence, the sequence \(\{X_{k}^{K}\}\) is asymptotically degenerate at zero in mean. Then it contains a subsequence, which converges to zero almost surely, see, e.g., [28, Theorem 3.4]. In view of Proposition 4.2, this yields the proof. |
2307.08368 | Gender mobility in the labor market with skills-based matching models | Skills-based matching promises mobility of workers between different sectors
and occupations in the labor market. In this case, job seekers can look for
jobs they do not yet have experience in, but for which they do have relevant
skills. Currently, there are multiple occupations with a skewed gender
distribution. For skills-based matching, it is unclear if and how a shift in
the gender distribution, which we call gender mobility, between occupations
will be effected. It is expected that the skills-based matching approach will
likely be data-driven, including computational language models and supervised
learning methods.
This work, first, shows the presence of gender segregation in language
model-based skills representation of occupations. Second, we assess the use of
these representations in a potential application based on simulated data, and
show that the gender segregation is propagated by various data-driven
skills-based matching models. These models are based on different language
representations (bag of words, word2vec, and BERT), and distance metrics
(static and machine learning-based). Accordingly, we show how skills-based
matching approaches can be evaluated and compared on matching performance as
well as on the risk of gender segregation. Making the gender segregation bias
of models more explicit can help in generating healthy trust in the use of
these models in practice. | Ajaya Adhikari, Steven Vethman, Daan Vos, Marc Lenz, Ioana Cocu, Ioannis Tolios, Cor J. Veenman | 2023-07-17T10:06:21Z | http://arxiv.org/abs/2307.08368v1 | # Gender mobility in the labor market with skills-based matching models
###### Abstract
Skills-based matching promises mobility of workers between different sectors and occupations in the labor market. In this case, job seekers can look for jobs they do not yet have experience in, but for which they do have relevant skills. Currently, there are multiple occupations with a skewed gender distribution. For skills-based matching, it is unclear if and how a shift in the gender distribution, which we call _gender mobility_, between occupations will be effected. It is expected that the skills-based matching approach will likely be data-driven, including computational language models and supervised learning methods.
This work, first, shows the presence of gender segregation in language model-based skills representation of occupations. Second, we assess the use of these representations in a potential application based on simulated data, and show that the gender segregation is propagated by various data-driven skills-based matching models. These models are based on different language representations (bag of words, word2vec, and BERT), and distance metrics (static and machine learning-based). Accordingly, we show how skills-based matching approaches can be evaluated and compared on matching _performance_ as well as on the risk of _gender segregation_. Making the gender segregation bias of models more explicit can help in generating healthy trust in the use of these models in practice.
## 1 Introduction
The skills-based matching approach aims to create a better fit between those looking for a job (candidates) and those offering a job (employers) by means of focusing on the skills necessary to be successful at the job. Previous research has shown that job satisfaction of employees increases when one's skills are well matched with the job activities [11]. Next, matching based on skills has the potential to allow more labor mobility of job seekers. The focus on skills stimulates the formulation of which tasks the job seekers can perform, instead of diplomas obtained or professional experiences in similar functions in a specific sector. The emphasis on what job seekers can do, relative to what they have previously experienced, allows for more job opportunities (possibly in a new sector) for job seekers as well as a bigger pool of applicants for employers, especially in bottleneck occupations. The potential of this approach has been recognized by various organizations through different initiatives [14, 15, 16].
Gender segregation is largely present in the labor market [17, 18]. Previous research has shown that the increase in diversity in teams is associated with increased innovation [13, 14], effectiveness [15], productivity [16], and fair decision making [12]. Moreover, the increase of diversity in occupations with a good representation of different groups can serve social justice goals for those groups that were historically banned or discouraged from practicing certain occupations [1]. Although labor mobility of workers may increase with skills-based matching, it is unclear whether a shift in gender distribution between occupations, which we call _gender mobility_, will be effected.
Recruitment using skills-based matching is most probably going to be data-driven [14], in which computational language models are used to interpret and represent skills formulations in machine-readable vectors. Using these vectors, machine learning has the opportunity to learn from historical candidate-job matches which skill sets of candidates fit well with the required skill sets of job openings.
This work investigates the effect of data-driven skills-based matching models on the gender segregation in occupations. We first examine whether gender segregation exists in different skills representations (bag of words [14], word2vec [15], and BERT [10]) of occupations. Second, we measure the impact of biases in skills representation on gender segregation for the application of skills-based matching based on simulated data. Typically, these biases are only studied in the language representations [13, 12] without taking the effect on possible downstream tasks into account. We examine various skills-based matching models based on the above-mentioned types of representations and different distance metrics (static and machine learning-based). This allows the evaluation of a model not only according to its performance but also on its effect on the propagation of gender segregation.
This paper is structured as follows. Section 2 elaborates on the related work. In Section 3, we describe the dataset and the preprocessing steps. Section 4 describes the experiments and their outcomes. Finally, we conclude in Section 5.
## 2 Related work
To put our contribution into perspective, we first highlight related work on gender segregation in the context of skills. Since we consider skills formulations as one of the sources of the segregation as well as the historical presence of genders in the respective jobs, we also review the literature on bias measurement with language models.
We follow related work by defining gender segregation as under (over) representation of a given gender in occupations [1]. One study showed that the female-dominated occupations are often associated with lower salary than male-dominated occupations, which partly stems from low valuation and visibility of female-associated skills [13]. Academics also point towards the fact that the type of skills matters. In countries with more focus on general skills rather than role- or firm-specific skills, less occupational gender segregation is present [1]. Here, general skills are argued to be more gender-neutral as women have higher concern for skill portability given potential career interruptions due to family responsibilities. In the U.S., patterns of skills demand signal an increase in gender segregation that reinforces gender income inequality [1]. This is because the shifts in demand towards lower-paid work concern female-associated skills and shifts towards higher-paid work concern male-associated skills. In Britain, evidence was found that skills commonly acquired by women are increasingly in demand in highly-skilled occupations, such that gender segregation in those occupations may decrease [1]. All in all, the interrelation of skills and gender segregation has been established in related work, where the level and type of skills one has may impact gender segregation. We contribute by investigating the interplay of skills and gender segregation within the scope of how risks for this bias can be measured within AI and language model applications that may foster skills-based matching.
These language models are based on our written texts which reflect the stereotypes and human biases that are embedded in our language [10, 11]. The field of trustworthy AI [10] has therefore established a large knowledge base on measuring the bias in these models [12]. We acknowledge the recent criticism that relevant bias measurement requires the context of the downstream task in which the language model is applied, i.e. a shift from measuring language bias in the models themselves towards measuring the impact language bias has when language models are used in e.g. decision support tools [1, 1]. In our contribution, we therefore measure the risk of language bias exacerbating gender segregation in the context of the application of skills-based matching.
## 3 Dataset
We position our work in the framework of skills of the O*NET database (O*NET 2022). The O*NET database provides a structured hierarchy of sectors to occupations and their corresponding required skills. The skills described as _detailed work activities_ are chosen for the experiments, as they have a similar level of tangibility in describing the required skills as is observed in job vacancies.
To assess the gender segregation we also need the gender distribution of the O*NET occupations. The average gender distribution per O*NET occupation is retrieved from the U.S. Bureau of Labor Statistics (BLS) [1].
We would, ideally, need real historical matching data between the skills of job seekers and the skills in job ads, such that we can train and test skills-based matching models. As skills-based matching is not yet widespread and known initiatives protect their data for privacy reasons, such data is not available. For this reason, we gauge the potential impact of skills-based matching models on simulated matches of candidates and job openings based on skill profiles of O*NET occupations. Two random sub-sets of skills sampled from the same occupation are regarded as a good match, while two random sub-sets of skills sampled from two different occupations are regarded as a bad match. For the experiments, we choose to sample sub-sets of size 5. The ability of a candidate is present in the combination of the different skills as a whole. One possible way to materialize this in the experiment is by concatenating the skill sets into one string. This results in a dataset of good and bad matches between two skills descriptions. In total, 3940 pairs were created with an equal amount of good and bad matches. We allow (possible) overlap when sampling two sub-sets of skills from the same occupation; this is also likely to be the case in reality, as people and job openings will overlap in skills. This set is further split equally into a train and a test set. For both the train and the test set we draw skill profiles from all occupations, while the specific skills of different occupations do not overlap between the train and test set. This is also the realistic scenario, as the model in practice would also train and test on skill sets of candidates and job openings related to the same occupations. To create a fair test set, some sub-sets of skills of a baker, for example, are trained upon, while the model is tested on other sub-sets of the total skill set of a baker.
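A minimal sketch of this pair-construction procedure is given below; the dictionary `skills_by_occupation` and the helper names are illustrative placeholders rather than the actual data-loading code of our pipeline.

```python
import random

def sample_profile(skills, k=5):
    """Concatenate a random sub-set of k skills into one string."""
    return " ".join(random.sample(skills, k))

def build_pairs(skills_by_occupation, n_pairs_per_class):
    """Create balanced good/bad matches between two skill descriptions.

    skills_by_occupation: dict mapping occupation -> list of skill strings
    (each list is assumed to contain at least 5 skills).
    A good match (label 1) pairs two profiles from the same occupation;
    a bad match (label 0) pairs profiles from two different occupations.
    """
    occupations = list(skills_by_occupation)
    pairs, labels = [], []
    for _ in range(n_pairs_per_class):
        # good match: both profiles sampled from the same occupation
        occ = random.choice(occupations)
        pairs.append((sample_profile(skills_by_occupation[occ]),
                      sample_profile(skills_by_occupation[occ])))
        labels.append(1)
        # bad match: profiles sampled from two different occupations
        occ_a, occ_b = random.sample(occupations, 2)
        pairs.append((sample_profile(skills_by_occupation[occ_a]),
                      sample_profile(skills_by_occupation[occ_b])))
        labels.append(0)
    return pairs, labels
```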
## 4 Experiments
This section describes the setup and results of two experiments: (1) presence of gender segregation in occupations according to their skills representation, and (2) the evaluation of skills-based matching models according to their performance and the risk of propagating gender segregation.
For both experiments, we consider three types of vectorizers with different levels of complexity to convert a string of skills descriptions into one vector, namely Bag of Words (BoW) [15], word2vec [12] and Sentence-BERT [10]. BoW counts the presence of single words or combinations of words,
while word2vec and Sentence-BERT are language models which encode the semantic meaning of a word in a vector. Word2vec encodes each word of a sentence independently while Sentence-BERT also encodes the specific sentence context of a word.
All experiments were run in Python 3.9.12. The BoW vectorization is implemented using 1-gram and 2-gram words in the Scikit-learn 1.1.1 library Pedregosa et al. (2011). For word2vec, the vectors of different words are averaged to create one vector using the Spacy 3.4.1 Honnibal and Montani (2017) library. Finally, the sentence-transformers 1.2.1 library (implementation of Sentence-BERT) is used, which can transform a description of skills into one vector.
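For illustration, the three vectorizers can be instantiated roughly as follows; the specific spaCy and Sentence-BERT model names are assumptions made for this sketch, not a record of the exact pre-trained models used.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
import spacy
from sentence_transformers import SentenceTransformer

texts = ["operate metal cutting machinery", "assist dental practitioners"]  # placeholder skills

# Bag of Words with 1-grams and 2-grams
bow = CountVectorizer(ngram_range=(1, 2))
bow_vectors = bow.fit_transform(texts).toarray()

# word2vec-style embeddings: spaCy averages its token vectors per document
nlp = spacy.load("en_core_web_md")                 # model name is an assumption
w2v_vectors = np.stack([nlp(t).vector for t in texts])

# Sentence-BERT: one contextual vector per skills description
sbert = SentenceTransformer("all-MiniLM-L6-v2")    # model name is an assumption
sbert_vectors = sbert.encode(texts)
```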
### Gender segregation in occupations based on skills
We investigate the current gender segregation in occupations according to their underlying skills. The list of skills of each O*NET occupation is concatenated together to create one large string per occupation. These strings are fed to the three above mentioned vectorizers. The resulting 3 vectors per occupation are further separately visualized in Figure 1 by mapping them into two dimensions using PCA, which preserves as much of the variance in the data as possible. Each point represents an occupation, and the color indicates the average female gender ratio of that occupation according to BLS.
Footnote 1: The interactive web version of the results of both experiments can be found in [https://fate2022-demo.tondatalab.nl/#/model](https://fate2022-demo.tondatalab.nl/#/model)
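The projection itself can be sketched as follows for any of the three vector sets; `occupation_vectors` and `female_ratio` stand for the per-occupation skill vectors and the BLS gender ratios described above.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

def plot_occupation_map(occupation_vectors, female_ratio):
    """Project per-occupation skill vectors to 2D with PCA and color each
    occupation by its average female gender ratio."""
    coords = PCA(n_components=2).fit_transform(np.asarray(occupation_vectors))
    plt.scatter(coords[:, 0], coords[:, 1], c=female_ratio, cmap="coolwarm")
    plt.colorbar(label="female gender ratio")
    plt.show()
```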
For all three types of vectorizers, we see clusters of occupations according to their gender ratio. For example, Sentence-BERT shows a cluster of male-majority occupations on the middle right and a cluster of female-majority occupations on the top left. The male-majority cluster contains mostly technical or labor-intensive occupations such as machinists, dishwashers and roofers, while the female-majority cluster contains mostly healthcare occupations such as dental assistants, registered nurses and respiratory therapists. Thus, this experiment indicates that there is a risk that language models interpreting the skill sets of occupations are influenced by gender segregation.
### Evaluation of skills-based matching models
In the second experiment, we put the perceived risk for gender segregation in the language representation of skills from the first experiment into the relevant context. In particular, we demonstrate how skills-matching models can be evaluated not only for their matching performance but also for their risk of propagating gender segregation. We demonstrate this setting with three types of the above mentioned vectorizers and three types of similarity measures namely Euclidean distance, cosine similarity and metric learning. This results in 9 versions of skills-based matching models. While Euclidean and Cosine metrics are pre-defined, metric learning is a supervised learning method which learns a task-specific similarity measure from the training data. The _Information Theoretic Metric Learning_ model with default parameters of the metric-learn 0.6.2 library de Vazelhes et al. (2020) is used in this experiment.
The dataset containing skills description pairs of good and bad matches described in Section 3 is used to train and test the skills-based matching models. Each pair of skills descriptions of the test set is first vectorized and the resulting two vectors are further provided as input to a distance or similarity metric resulting in a matching score. The performance evaluation of the 9 versions according to the AUC metric can be found in Table 1. We did not further optimize the metric learning algorithm as it is not the focus of this work.
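As an indication of how such matching scores are evaluated, the sketch below computes the AUC for the cosine-similarity variant on pre-computed pair vectors; it is a simplified stand-in for the evaluation code, not an exact reproduction of it.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_auc(left_vectors, right_vectors, labels):
    """AUC of cosine-similarity matching scores for pairs of skill vectors.

    left_vectors, right_vectors: arrays of shape (n_pairs, dim), one row per
    side of a pair; labels: 1 for good matches, 0 for bad matches.
    """
    left = np.asarray(left_vectors, dtype=float)
    right = np.asarray(right_vectors, dtype=float)
    cos = (left * right).sum(axis=1) / (
        np.linalg.norm(left, axis=1) * np.linalg.norm(right, axis=1))
    return roc_auc_score(labels, cos)
```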
To assess the risk of propagating gender segregation by the above mentioned skills-based matching models, we create a new test set. Similar to the training data, a random subset of 5 skills of each O*NET occupation is extracted and concatenated together. Per O*NET occupation the matching score is computed with the rest of the occupations. For each occupation, we find the 10 occupations with the highest matching score with respect to their skill set. Then, the overall _Gender Segregation Risk (GSR)_ of a skills-based matching model is measured by the Pearson correlation between the following variables: the female gender ratio of an occupation and the average female ratio of the 10 occupations with the highest matching score. An example of the correlation between these two variables for the BoW and cosine similarity version is shown in Figure 2. This score provides an indication of how likely job seekers are to end up in a new occupation with a similar gender distribution as their previous occupation if they follow one of the top 10 suggestions made by the matching model. Table 1 shows the GSR of the different matching models. Note that in this experiment, it is not an issue that there might be overlap between the train set and this new test set, because the model is not optimized to minimize the GSR.
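The GSR computation can be sketched as follows, here with cosine similarity as the matching score; `occupation_vectors` (one skill-profile vector per occupation) and `female_ratio` (the BLS gender ratios) are assumed to be given, and the distance-based variants would simply rank by lowest distance instead.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics.pairwise import cosine_similarity

def gender_segregation_risk(occupation_vectors, female_ratio, top_k=10):
    """Pearson correlation between each occupation's female ratio and the
    average female ratio of its top_k best-matching occupations."""
    female_ratio = np.asarray(female_ratio, dtype=float)
    sims = cosine_similarity(occupation_vectors)
    np.fill_diagonal(sims, -np.inf)               # exclude the occupation itself
    top_idx = np.argsort(-sims, axis=1)[:, :top_k]
    avg_match_ratio = female_ratio[top_idx].mean(axis=1)
    corr, _ = pearsonr(female_ratio, avg_match_ratio)
    return corr
```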
We see a positive correlation of higher than 0.5 for all models, indicating that gender segregation is propagated by these models. We also see that this risk is correlated with the performance of the model. This is expected because, for example, a bad model which randomly assigns a matching score will randomly suggest the top 10 occupations. The gender ratio of these suggested occupations will not be correlated with that of the query occupation.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Vectorizer & Similarity/Distance Metr. & AUC & GSR \\ \hline \multirow{3}{*}{BoW} & Metric learning & 0,90 & 0,60 \\ & Cosine & 0,94 & 0,72 \\ & Euclidean & 0,88 & 0,59 \\ \hline \multirow{3}{*}{Word2Vec} & Metric learning & 0,91 & 0,68 \\ & Cosine & 0,82 & 0,57 \\ & Euclidean & 0,83 & 0,60 \\ \hline \multirow{3}{*}{Sentence-BERT} & Metric learning & 0,91 & 0,69 \\ & Cosine & 0,94 & 0,70 \\ \cline{1-1} & Euclidean & 0,90 & 0,67 \\ \hline \hline \end{tabular}
\end{table}
Table 1: This table shows the trade-off between AUC performance and GSR for three types of vectorizers (BoW, Word2vec and Sentence-BERT embedding) in combination with three types of similarity/distance metrics (Euclidean distance, Cosine similarity and Metric learning).
This allows one to evaluate and compare different models according to their performance and GSR. In our simulated application, the combination of BoW and cosine similarity has the best performance. While BoW with metric learning performs slightly worse, it has the best trade-off between high performance and low GSR.
## 5 Conclusion
We measure the risk for gender segregation in skills descriptions and, in addition, demonstrate with simulated data the risk of propagating gender segregation within a potential skills-based matching application that uses language models to interpret skills descriptions. The measured language bias in skills shows the need to consider the risk for gender segregation when language models are put into use for skills-based matching. To facilitate this, our work provides a first exploration of how quantifying the measured risk for gender segregation can aid design choices in practice, by considering both the performance and the risk of propagating gender segregation when choosing a skills-based matching model. The quantification of this bias can contribute towards generating healthy trust in the use of skills-based matching models.
Future work is needed with real data to validate to what extent this risk is also present in applications and to establish how this risk can be integrated in design choices. Moreover, future research is advised to include more diversity aspects such as ethnicity and educational background.
## Acknowledgements
This work was part of FATE and Skills-matching projects which were funded by the Appl.AI program within TNO.
|
2305.01253 | The Role of Summarization in Generative Agents: A Preliminary
Perspective | Generative agents that simulate human society show tremendous potential for
further research and practical applications. Specifically, the generative agent
architecture comprising several meticulously designed modules constitutes the
most critical component. To facilitate progress in this research, this report
presents our integrated perspective on comprehending generative agents through
summarization, since we believe summarization is the most fundamental and
indispensable capacity of generative agents manifested across diverse
scenarios. We hope this report can provide insight into understanding the
importance of summarization capacity in generative agents and motivate future
research. | Xiachong Feng, Xiaocheng Feng, Bing Qin | 2023-05-02T08:35:09Z | http://arxiv.org/abs/2305.01253v1 | # The Role of Summarization in Generative Agents: A Preliminary Perspective
###### Abstract
Generative agents (Park et al., 2023) that simulate human society show tremendous potential for further research and practical applications. Specifically, the generative agent architecture comprising several meticulously designed modules constitutes the most critical component. To facilitate progress in this research, this report presents our integrated perspective on comprehending generative agents through summarization, since we believe summarization is the most fundamental and indispensable capacity of generative agents manifested across diverse scenarios. We hope this report can provide insight into understanding the importance of summarization capacity in generative agents and motivate future research.
## 1 Introduction
Recent advancements in Large Language Models (LLMs), such as ChatGPT and GPT-4 (OpenAI, 2023), have reshaped various domains including natural language processing (Yang et al., 2023), computer vision (Wu et al., 2023) and autonomous robotics (Mai et al., 2023). These cutting-edge models enable novel opportunities to achieve artificial general intelligence (AGI). Owing to the rapid progress of LLMs, there is an emerging consensus that LLMs have attained preliminary intelligence and now demonstrate comparable performance to humans on various tasks (Zhao et al., 2023).
In the current era of large language models, Park et al. (2023) propose Generative Agents: sophisticated computational software powered by fundamental language models that can simulate believable human behaviour within meticulously designed environments and protocols. This well-designed framework offers comprehensive opportunities for exploring and understanding human social dynamics, including long-term goal planning, information transformation, relationship establishment and coordination.
In this report, we present our view on generative agents from the perspective of automatic summarization and demonstrate how various functional components of such agents can be formalized as summarization tasks. Specifically, we identify several key summarization techniques that are integral to implementing generative agents: (1) The retrieve module contains the idea of unsupervised summarization (§3.1); (2) The reflection module is composed of two sub-modules: extreme summarization (§3.2) and citation-based summarization (§3.3); (3) Query-based summarization (§3.4) supports the following Plan and Act modules; (4) Summarization with emojis (§3.5) provides an intuitive visual interface; (5) The agent's movement in the environment can be abstracted to Graph Summarization (§3.6); and (6) Dialogue between agents is facilitated by Dialogue Summarization (§3.7). We hope this paper illuminates the potential of summarization techniques in advancing the development of future generative agents.
## 2 Generative Agents
Generative agents are AI-powered computational software that can simulate believable human behaviour. In this section, we provide a concise overview of the generative agent architecture including several main components. The overall architecture is shown in Figure 1. The fundamental **Memory** module is responsible for storing various types of information related to the agent itself, including basic _observations_ as well as high-level _reflections_ and generated _plans_. The **Retrieve** module then extracts appropriate memories from the memory stream to support downstream modules including _Plan_, _Reflect_ and _Act_. Afterwards, the **Reflect** module provides high-level abstractions of one agent's memories, which serve as another type of memory. Furthermore, the **Plan** module takes the agent's summary and the observed entity's summary into consideration and creates the plan in a
coarse-to-fine manner. Finally, agents **Act** with the world by performing actions or with other agents by initiating dialogues.
## 3 Key Summarization Techniques
### Unsupervised Summarization
The **Retrieve** module aims to offer pilot memories given the agent's current situation and the entire memory stream. This coincides with the objective of unsupervised summarization, which seeks to extract the essential information from a collection of documents given one desired query, based on various handcrafted features Carbonell and Goldstein-Stewart (1998).
Specifically, the retrieve function takes three distinct features, _Recency_, _Importance_, and _Relevance_, into consideration to effectively derive prominent information from the memory stream. In detail, _Recency_ posits that recently accessed memories are important, since we human beings frequently process short-term tasks. _Importance_ is inferred directly by prompting the LLM, which draws on its tremendous background knowledge. _Relevance_ assigns a higher score to those memories most relevant to the agent's current situation. The final retrieval score is a weighted sum of three scores: \(\mathrm{score}=\alpha_{\mathrm{recency}}\cdot\mathrm{recency}+\alpha_{\mathrm{importance}}\cdot\mathrm{importance}+\alpha_{\mathrm{relevance}}\cdot\mathrm{relevance}\). With the integration of the above three features, the retrieve module successfully conducts unsupervised summarization over the memory to produce a digest of information for the following steps.
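As an illustrative sketch (not the original implementation of Park et al. (2023)), the retrieval scoring can be written as follows, where the per-memory attributes, the weights, and the `relevance_fn` similarity function are placeholders:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    recency: float      # e.g. exponentially decayed since last access, in [0, 1]
    importance: float   # scored by prompting the LLM, normalized to [0, 1]

def retrieve(memories, relevance_fn, query, top_k=5,
             w_recency=1.0, w_importance=1.0, w_relevance=1.0):
    """Rank memories by the weighted sum of recency, importance and relevance."""
    def score(m):
        return (w_recency * m.recency
                + w_importance * m.importance
                + w_relevance * relevance_fn(query, m.text))
    return sorted(memories, key=score, reverse=True)[:top_k]
```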
### Extreme Summarization
**Reflection** is one of the most critical components of the generative agent, which summarizes the agent's recent situation and creates high-level thoughts. The whole reflection can be divided into two steps, of which the first is extreme summarization. Concretely speaking, it aims to condense the agent's 100 most recent memory records into three key topics in a question-generation manner.
Specifically, the module achieves the goal by prompting the LLM via "Given only the information above, what are 3 most salient high-level questions we can answer about the subjects in the statements?". The results include three highly condensed questions that can be viewed as extreme summaries of the agent's recent memories since several previous studies verify the tight connection between summarization and question generation Narayan et al. (2020); Feng et al. (2021).
### Citation-based Summarization
The second step of **Reflection** can be viewed as a citation-based summarization task, which receives several retrieved documents (memories) with indexes and aims to produce summaries with evidence references. This is also in line with the previous related work generation task and the open
domain reading comprehension task, both of which require providing concrete evidence to support their generated results Chen et al. (2021).

Figure 1: Illustration of the generative agent architecture and key summarization techniques inside the architecture.
Specifically, the module achieves the goal by prompting the LLM via "Statements about Klaus Mueller,..., What 5 high-level insights can you infer from the above statements? (example format: insight (because of 1, 5, 3))". The output abstracts relevant memories into the reflection with citations: "Klaus Mueller is dedicated to his research on gentrification (because of 1, 2, 8, 15)".
### Query-based Summarization
In fact, query-based summarization permeates the core architecture of the entire generative agent with the help of the **Retrieve** module. In this part, we mainly focus on three tasks that will support the subsequent **Plan** and **Act** modules.
**Agent's Summary Description.** Agent's summary description summarizes the agent's identity information, current occupation situation and self-assessment, which serves as a critical clue to making plans and taking reactions. In detail, relevant memories are first obtained via three queries "[name]'s core characteristics", "[name]'s current daily occupation", and "[name's] feeling about his recent progress in life", and then three resulting summaries are combined into the whole agent's summary description.
**Previous Day Summary.** Previous day summary plays an important role in the plan creation process, which ensures the agent achieves consistent and long-term goals. Although no detailed information is provided in the original paper Park et al. (2023), we assume the implicit query "[name]'s previous day plan" is used to retrieve relevant memories and produce the final summary.
**Observed Entity Summary.** The observed entity summary that compresses (1) the relationship between the agent and the entity and (2) the status of the entity is an important basis for whether the agent takes action. The summary consists of two parts obtained via queries "What is [observer]'s relationship with the [observed entity]?" and "[Observed entity] is [action status of the observed entity]". Taking both agent's summary description and observed entity summary into consideration, the agent decides whether or not to react by prompting the LLM "Should John react to the observation, and if so, what would be an appropriate reaction?"
### Summarization with Emojis
To give quick access to the agent's status, Park et al. (2023) implement a high-level emoji-based abstraction on the sandbox interface by prompting the LLM. For example, "Isabella Rodriguez is checking her emails" appears as a corresponding emoji. As the saying goes, a picture is worth a thousand words: the emoji interface intuitively summarizes the agent's current status and integrates into the whole system.
### Graph Summarization
Agents living in Smallville can perform movements to reach an appropriate location. Smallville is realized as a tree representation, where the root node denotes the entire world, children nodes describe areas and leaf nodes indicate objects. The agent's movement is decided by first transforming the tree representation into natural language and then prompting the LLM via "Which area should [name] go to?". In other words, the movement of an agent can be formalized as an implicit graph summarization task Kaushik (2003). Given the world graph, the agent finds one suitable path from the current location towards the target destination.
### Dialogue Summarization
The agents interact with each other through dialogue. At the initial point, one agent decides to trigger the dialogue based on the action given the agent's summary description and observed agent (entity) summary. To make the dialogue coherent and informative, the following utterances are generated by considering additional dialogue summaries. In the original paper Park et al. (2023), pure dialogue histories are used to facilitate dialogue generation. We believe that when facing long and verbose dialogue histories, dialogue summarization can be an effective method to address such a challenge. Additionally, on the demo page, the dialogue summary also provides a quick overview of the core contents of a dialogue.
## 4 Conclusion
In this report, we aim to understand generative agents from a unified view of summarization. We systematically analyze several key summarization techniques and show how individual modules inside the generative agent architecture can be formalized as traditional summarization tasks. We believe future generative agents can be substantially enhanced with advanced summarization abilities. |
2303.03912 | Document-level Relation Extraction with Cross-sentence Reasoning Graph | Relation extraction (RE) has recently moved from the sentence-level to
document-level, which requires aggregating document information and using
entities and mentions for reasoning. Existing works put entity nodes and
mention nodes with similar representations in a document-level graph, whose
complex edges may incur redundant information. Furthermore, existing studies
only focus on entity-level reasoning paths without considering global
interactions among entities cross-sentence. To these ends, we propose a novel
document-level RE model with a GRaph information Aggregation and Cross-sentence
Reasoning network (GRACR). Specifically, a simplified document-level graph is
constructed to model the semantic information of all mentions and sentences in
a document, and an entity-level graph is designed to explore relations of
long-distance cross-sentence entity pairs. Experimental results show that GRACR
achieves excellent performance on two public datasets of document-level RE. It
is especially effective in extracting potential relations of cross-sentence
entity pairs. Our code is available at https://github.com/UESTC-LHF/GRACR. | Hongfei Liu, Zhao Kang, Lizong Zhang, Ling Tian, Fujun Hua | 2023-03-07T14:14:12Z | http://arxiv.org/abs/2303.03912v1 | # Document-level Relation Extraction with Cross-sentence Reasoning Graph
###### Abstract
Relation extraction (RE) has recently moved from the sentence-level to document-level, which requires aggregating document information and using entities and mentions for reasoning. Existing works put entity nodes and mention nodes with similar representations in a document-level graph, whose complex edges may incur redundant information. Furthermore, existing studies only focus on entity-level reasoning paths without considering global interactions among entities cross-sentence. To these ends, we propose a novel document-level RE model with a **GR**aph information **A**ggregation and **C**ross-sentence **R**easoning network (GRACR). Specifically, a simplified document-level graph is constructed to model the semantic information of all mentions and sentences in a document, and an entity-level graph is designed to explore relations of long-distance cross-sentence entity pairs. Experimental results show that GRACR achieves excellent performance on two public datasets of document-level RE. It is especially effective in extracting potential relations of cross-sentence entity pairs. Our code is available at [https://github.com/UESTC-LHF/GRACR](https://github.com/UESTC-LHF/GRACR).
Keywords: Deep learning, Relation extraction, Document-level RE.
## 1 Introduction
Relation extraction (RE) is to identify the semantic relation between a pair of named entities in text. Document-level RE requires the model to extract relations from the document and faces some intractable challenges. Firstly, a document contains multiple sentences, thus the relation extraction task needs to deal with richer and more complex semantic information. Secondly, subject and object entities in the same triple may appear in different sentences, and some entities have aliases, which appear in the text as distinct named entity mentions. Hence, the information utilized by document-level RE may not come from a single sentence. Thirdly, there may be interactions among different triples. Extracting the relation between two entities from different triples requires reasoning with contextual features. Figure 1 shows an example from DocRED dataset [21]. It is easy to predict intra-sentence relations because the subject and object appear in the same sentence. However, it is more difficult to identify the inter-sentence relation between "Swedish" and "Royal
Swedish Academy", whose mentions are distributed across different sentences and there exists long-distance dependencies.
[21] proposed the DocRED dataset, which contains large-scale human-annotated documents, to promote the development of sentence-level RE to document-level RE. In order to make full use of the complex semantic information of documents, recent works design document-level graphs and propose models based on graph neural networks (GNN) [4]. [1] proposed an edge-oriented model that constructs a document-level graph with different types of nodes and edges to obtain a global representation for relation classification. [12] defined the document-level graph as a latent variable and induced it based on structured attention to improve the performance of document-level RE models by optimizing the structure of the document-level graph. [17] proposed a model that learns global representations of entities through a document-level graph, and learns local representations of entities based on their contexts. However, these models simply average the embeddings of mentions to obtain entity embeddings and feed them into classifiers to obtain relation labels. Entity and mention nodes share a similar embedding if a certain entity has only one mention. Therefore, putting them in the same graph will introduce redundant information and reduce discriminability.
To address the above issues, we propose a novel GNN-based document-level RE model with two graphs constructed by semantic information from the document. Our key idea is to build a document-level graph and an entity-level graph to fully exploit the semantic information of documents and reason about relations between entity pairs across sentences. Specifically, we solve two problems:
First, how to integrate rich semantic information of a document to obtain entity representations? We construct a document-level graph to integrate complex semantic information, which is a heterogeneous graph containing mention nodes and sentence nodes. Representations of mention nodes and sentence nodes are computed by the pre-trained language model BERT [3]. The constructed document-level graph is fed into an R-GCN [13], a relational graph convolutional network, so that each node aggregates the information of its neighbor nodes. Then, representations of entities are obtained by performing a logsumexp pooling operation on the representations of mention nodes. In previous methods, representations of entity nodes are obtained from representations of mention nodes. Hence putting them in the same graph will introduce redundant information and reduce discriminability. Unlike previous document-level graph construction, our document-level graph contains only sentence nodes and mention nodes to avoid redundant information caused by repeated node representations.
Second, how to use connections between entities for reasoning? In this paper, we exploit connections between entities and propose an entity-level graph for reasoning.
Figure 1: An example of document-level RE excerpted from DocRED dataset.
The entity-level graph is built by the positional connections between sentences and entities to make full use of cross-sentence information. It connects long-distance cross-sentence entity pairs. Through the learning of GNN, each entity node can aggregate the information of its most relevant entity nodes, which is beneficial to discover potential relations of long-distance cross-sentence entity pairs.
In summary, we propose a novel model called GRACR for document-level RE. Our main contributions are as follows:
\(\bullet\) We propose a simplified document-level graph to integrate rich semantic information. The graph contains sentence nodes and mention nodes but not entity nodes, which avoids introducing redundant information caused by repeated node representations.
\(\bullet\) We propose an entity-level graph for reasoning to discover potential relations of long-distance cross-sentence entity pairs. An attention mechanism is applied to fuse document embedding, aggregation, and inference information to extract relations of entity pairs.
\(\bullet\) We conduct experiments on two public document-level relation extraction datasets. Experimental results demonstrate that our model outperforms many state-of-the-art methods.
## 2 Related Work
The research on document-level RE has a long history. The document-level graph provides more features for entity pairs. The relevance between entities can be captured through graph learning using GNN [10]. For example, [2] utilized GNN to aggregate the neighborhood information of text graph nodes for text classification. Following this, [1] constructed a document-level graph with heterogeneous nodes and proposed an edge-oriented model to obtain a global representation. [7] characterized the interaction between sentences and entity pairs to improve inter-sentence reasoning. [25] introduced context of entity pairs as edges between entity nodes to model semantic interactions among multiple entities. [24] constructed a dual-tier heterogeneous graph to encode the inherent structure of document and reason multi-hop relations of entities. [17] learned global representations of entities through a document-level graph, and learned local representations based on their contexts. [12] defined the document-level graph as a latent variable to improve the performance of RE models by optimizing the structure of the document-level graph. [23] proposed a double graph-based graph aggregation and inference network (GAIN). Different from GAIN, our entity-level graph is a heterogeneous graph and we use R-GCNs to enable interactions between entity nodes to discover potential relations of long-distance cross-sentence entity pairs. [18] constructed a document-level graph with rhetorical structure theory and used evidence to reasoning. [14] constructed the input documents as heterogeneous graphs and utilized Graph Transformer Networks to generate semantic paths.
Unlike the above document-level graph construction methods, our document-level graph contains only sentence nodes and mention nodes to avoid introducing redundant information. Moreover, previous works do not directly deal with cross-sentence entity pairs. Although entities in different sentences are indirectly connected in the graph, information must pass through intermediate nodes: in GLRE [17], for example, the minimum distance between entities across sentences is 3, so information passes through two different nodes during interaction. We directly connect cross-sentence entity pairs with potential relations through bridge entities to shorten the distance of information transmission, which reduces the introduction of noise.
In addition, there are some works that try to use pre-trained models directly instead of introducing graph structures. [16] applied a hierarchical inference method to aggregate the inference information of different granularity. [22] captured the coreferential relations in context by a pre-training task. [9] proposed a mention-based reasoning network to capture local and global contextual information. [20] used mention dependencies to construct structured self-attention mechanism. [26] proposed adaptive thresholding and localized context pooling to solve the multi-label and multi-entity problems. These models take advantage of the multi-head attention of Transformer instead of GNN to aggregate information.
However, these studies focus on local entity representations and overlook the interaction between entities distributed in different sentences [11]. To discover potential relations of long-distance cross-sentence entity pairs, we introduce an entity-level graph built from the positional connections between sentences and entities for reasoning.
## 3 Methodology
In this section, we describe our proposed GRACR model, which constructs a document-level graph and an entity-level graph to improve document-level RE. As shown in Figure 2, GRACR mainly consists of four modules: an encoding module, a document-level graph aggregation module, an entity-level graph reasoning module, and a classification module. First, in the encoding module, we use a pre-trained language model such as BERT [3] to encode the document. Next, in the document-level graph aggregation module, we construct a heterogeneous graph containing mention nodes and sentence nodes to integrate the rich semantic information of the document. Then, in the entity-level graph reasoning module, we propose a graph for reasoning to discover potential relations of long-distance cross-sentence entity pairs. Finally, in the classification module, we merge the context
Figure 2: Architecture of our proposed model.
information of relation representations obtained by self-attention [15] to make the final relation prediction.
### Encoding Module
To better capture the semantic information of the document, we choose BERT as the encoder. Given an input document \(D=[w_{1},w_{2},\ldots,w_{k}]\), where \(w_{j}\) (\(1\leq j\leq k\)) is the \(j^{th}\) word, we input the document into BERT to obtain the embeddings:
\[\mathbf{H}\!=\![\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{k}]\!=\! \mathrm{Encoder}([w_{1},w_{2},\ldots,w_{k}]) \tag{1}\]
where \(\mathbf{h}_{j}\in\mathbb{R}^{d_{w}}\) is the hidden state of \(w_{j}\) output by the last layer of BERT.
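For illustration, a minimal sketch of this encoding step is given below; it assumes the HuggingFace `transformers` package, and the example document, variable names, and truncation choice are ours rather than part of the model description.

```python
# Minimal sketch of the encoding step (assumes the HuggingFace `transformers`
# package); documents longer than 512 tokens would need chunking, omitted here.
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

document = ("Johan Gottlieb Gahn was a Swedish chemist. "
            "He was elected a member of the Royal Swedish Academy of Sciences.")
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    outputs = encoder(**inputs)

H = outputs.last_hidden_state  # shape (1, k, d_w) with d_w = 768, as in Eq. (1)
```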
To accumulate weak signals from mention tuples, we employ logsumexp pooling [5] to get the embedding \(e_{i}^{h}\) of entity \(\mathbf{e}_{i}\) as initial entity representation.
\[\mathbf{e}_{i}^{h}=\log\sum_{j=1}^{N_{\mathbf{e}_{i}}}\exp\left(\boldsymbol{h }_{\mathbf{m}_{j}^{i}}\right) \tag{2}\]
where \(\mathbf{m}_{j}^{i}\) is the mention \(\mathbf{m}_{j}\) of entity \(\mathbf{e}_{i}\), \(h_{\mathbf{m}_{j}^{i}}\) is the embedding of \(\mathbf{m}_{j}^{i}\), \(N_{\mathbf{e}_{i}}\) is the number of mentions of entity \(\mathbf{e}_{i}\) in \(D\).
As shown in Eq. (2), logsumexp pooling generates an embedding for each entity by accumulating the embeddings of all its mentions across the whole document.
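A minimal PyTorch sketch of Eq. (2), with tensor shapes and names chosen purely for illustration:

```python
import torch

def logsumexp_pool(mention_embeddings: torch.Tensor) -> torch.Tensor:
    """Eq. (2): pool the embeddings of all mentions of one entity.

    mention_embeddings: (N_e, d_w), one row per mention of the entity.
    Returns the (d_w,) initial entity representation e_i^h.
    """
    return torch.logsumexp(mention_embeddings, dim=0)

# Example: an entity with three mentions, d_w = 768.
mentions = torch.randn(3, 768)
e_h = logsumexp_pool(mentions)  # shape (768,)
```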
### Document-level Graph Aggregation Module
To integrate rich semantic information of a document to obtain entity representations, we construct a document-level graph (Dlg) based on \(\mathbf{H}\).
Dlg has two different kinds of nodes:
\(Sentence\ nodes\), which represent sentences in \(D\). The representation of a sentence node \(s_{i}\) is obtained by averaging the representations of the words it contains. We concatenate a node type representation \(\mathbf{t}_{s}\in\mathbb{R}^{d_{t}}\) to differentiate node types. Therefore, the representation of \(s_{i}\) is \(\mathbf{h}_{s_{i}}=\left[\mathrm{avg}_{w_{j}\in s_{i}}\left(\mathbf{h}_{j}\right);\mathbf{t}_{s}\right]\), where \([;]\) is the concatenation operator.
\(Mention\ nodes\), which represent mentions in \(D\). The representation of a mention node \(m_{i}\) is achieved by averaging the representations of words that make up the mention. We concatenate a node type representation \(\mathbf{t}_{m}\in\mathbb{R}^{d_{t}}\). Similar to sentence nodes, the representation of \(m_{i}\) is \(\mathbf{h}_{m_{i}}=\left[\mathrm{avg}_{w_{j}\in m_{i}}\left(\mathbf{h}_{j} \right);\mathbf{t}_{m}\right]\).
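For concreteness, a small sketch of how both node types could be assembled from \(\mathbf{H}\) follows; the token spans, the type-embedding table, and the value of \(d_{t}\) are illustrative assumptions.

```python
import torch

d_w, d_t = 768, 20                       # hidden size; assumed type-embedding size
type_emb = torch.nn.Embedding(2, d_t)    # index 0: sentence node, index 1: mention node
H = torch.randn(1, 128, d_w)             # stand-in for the BERT hidden states of Eq. (1)

def node_repr(token_span: slice, node_type: int) -> torch.Tensor:
    """Average the word representations in the span and append the type embedding."""
    avg = H[0, token_span].mean(dim=0)              # avg_{w_j in span} h_j
    t = type_emb(torch.tensor(node_type))           # t_s or t_m
    return torch.cat([avg, t], dim=-1)              # [avg ; t]

h_s = node_repr(slice(0, 15), node_type=0)   # a sentence node over tokens 0-14
h_m = node_repr(slice(3, 6), node_type=1)    # a mention node over tokens 3-5
```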
There are three types of edges in Dlg:
\(\bullet\) Mention-mention edge. To exploit the co-occurrence dependence between mention pairs, we create a mention-mention edge. Mention nodes of two different entities are connected by mention-mention edges if their mentions co-occur in the same sentence.
\(\bullet\) Mention-sentence edge. Mention-sentence edge is created to better capture the context information of mention. Mention node and sentence node are connected by mention-sentence edges if the mention appears in the sentence.
\(\bullet\) Sentence-sentence edge. All sentence nodes are connected by sentence-sentence edges to eliminate the effect of sentences sequence in the document and facilitate inter-sentence interactions.
Then, we use L-layer stacked R-GCNs [13] to learn over the document-level graph. R-GCNs can model heterogeneous graphs with various edge types better than a plain GCN. Specifically, the node forward-pass update for the \((l+1)^{th}\) layer is defined as follows:
\[\mathbf{n}_{i}^{l+1}\!=\!\sigma\left(\mathbf{W}_{0}^{l}\mathbf{n}_{i}^{l}\!+\! \sum_{x\in X}\sum_{j\in N_{i}^{x}}\frac{1}{|N_{i}^{x}|}\mathbf{W}_{x}^{l} \mathbf{n}_{j}^{l}\right) \tag{3}\]
where \(\sigma(\cdot)\) denotes the activation function, \(N_{i}^{x}\) denotes the set of neighbors of node \(i\) linked with edge type \(x\), and \(X\) denotes the set of edge types. \(\mathbf{W}_{x}^{l},\mathbf{W}_{0}^{l}\in\mathbb{R}^{d_{n}\times d_{n}}\) are trainable parameter matrices and \(d_{n}\) is the dimension of node representations.
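A direct, unoptimized rendering of the update rule in Eq. (3) is sketched below; a practical implementation would typically use a graph library, and the dense adjacency tensors and sizes here are purely illustrative.

```python
import torch
import torch.nn as nn

class RGCNLayer(nn.Module):
    """One R-GCN layer implementing Eq. (3) with per-edge-type mean aggregation."""

    def __init__(self, d_n: int, num_edge_types: int):
        super().__init__()
        self.W0 = nn.Linear(d_n, d_n, bias=False)                  # self connection W_0^l
        self.Wx = nn.ModuleList(nn.Linear(d_n, d_n, bias=False)
                                for _ in range(num_edge_types))    # one W_x^l per edge type

    def forward(self, nodes: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, d_n); adj: (num_edge_types, num_nodes, num_nodes), binary
        out = self.W0(nodes)
        for x, Wx in enumerate(self.Wx):
            deg = adj[x].sum(dim=1, keepdim=True).clamp(min=1)     # |N_i^x|
            out = out + Wx((adj[x] @ nodes) / deg)                 # (1/|N_i^x|) sum_j W_x n_j
        return torch.relu(out)                                     # sigma(.)

# Example: 6 nodes, 3 edge types (mention-mention, mention-sentence, sentence-sentence).
layer = RGCNLayer(d_n=788, num_edge_types=3)
nodes = torch.randn(6, 788)
adj = torch.zeros(3, 6, 6)
adj[1, 0, 4] = adj[1, 4, 0] = 1.0          # a mention-sentence edge
out = layer(nodes, adj)
```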
We use the representations of mention nodes after graph convolution to compute the preliminary representation \(e_{i}^{\text{pre}}\) of entity node \(e_{i}\) by logsumexp pooling, which incorporates the semantic information of \(e_{i}\) throughout the whole document. However, information from the whole document inevitably introduces noise. We therefore employ an attention mechanism to fuse the initial embedding information and the semantic information of entities to reduce noise. Specifically, we define the entity representation \(e_{i}^{\text{Dlg}}\) as follows:
\[e_{i}^{\text{Dlg}}=\mathrm{softmax}\left(\frac{e_{i}^{\text{pre}}\mathbf{W}_{ i}^{e_{i}^{\text{pre}}}\left(e_{i}^{h}\mathbf{W}_{i}^{e_{i}^{h}}\right)^{T}}{ \sqrt{d_{e_{i}^{h}}}}\right)e_{i}^{h}\mathbf{W}_{i}^{e_{i}^{h}} \tag{4}\]
and
\[e_{i}^{\text{pre}}=\log\sum_{j=1}^{N_{\mathbf{e}_{i}}}\exp\left(n_{m_{j}^{i}}\right) \tag{5}\]
where \(\mathbf{W}_{i}^{e_{i}^{\text{pre}}}\) and \(\mathbf{W}_{i}^{e_{i}^{h}}\in\mathbb{R}^{d_{n}\times d_{n}}\) are trainable parameter matrices, \(n_{m_{j}^{i}}\) is the representation of mention \(\mathbf{m}_{j}^{i}\) after graph convolution, and \(d_{e_{i}^{h}}\) is the dimension of \(e_{i}^{h}\).
### Entity-level Graph Reasoning Module
To discover potential relations of long-distance cross-sentence entity pairs, we introduce an entity-level graph (_Elg_) reasoning module. _Elg_ contains only one kind of node:
\(Entity\ node\), which represents entities in \(D\). The representation of an entity node \(e_{i}\) is obtained from the document-level graph as defined by Eq. (5). We concatenate a node type representation \(\mathbf{t}_{e}\in\mathbb{R}^{d_{t}}\). The representation of \(e_{i}\) is \(\mathbf{h}_{e_{i}}=[e_{i}^{pre};\mathbf{t}_{e}]\).
There are two kinds of edges in _Elg_:
\(\bullet\) Intra-sentence edge. Two different entities are connected by an intra-sentence edge if their mentions co-occur in the same sentence. For example, _Elg_ uses an intra-sentence edge to connect entity nodes \(e_{i}\) and \(e_{j}\) if there is a path \(PI_{i,j}\) denoted as \(\mathbf{m}_{i}^{\mathbf{s}_{1}}\rightarrow\mathbf{s}_{1}\rightarrow\mathbf{ m}_{j}^{\mathbf{s}_{1}}\). \(\mathbf{m}_{i}^{\mathbf{s}_{1}}\) and \(\mathbf{m}_{j}^{\mathbf{s}_{1}}\) are mentions of an entity pair \(<\)\(\mathbf{e}_{i}\), \(\mathbf{e}_{j}\)\(>\) and they appear in sentence \(\mathbf{s}_{1}\). "\(\rightarrow\)" denotes one reasoning step on the reasoning path from entity node \(e_{i}\) to \(e_{j}\).
\(\bullet\) Logical reasoning edge. If the mention of entity \(\mathbf{e}_{k}\) has co-occurrence dependencies with the mentions of two other entities in different sentences, we suppose that \(\mathbf{e}_{k}\) can be used as a bridge between those entities. Two entities distributed in different sentences are connected by a logical reasoning edge if a bridge entity connects them. There is then a logical reasoning path \(PL_{i,j}\) denoted as \(\mathbf{m}_{i}^{\mathbf{s}_{1}}\rightarrow\mathbf{s}_{1}\rightarrow\mathbf{m}_{k}^{\mathbf{s}_{1}}\rightarrow\mathbf{m}_{k}^{\mathbf{s}_{2}}\rightarrow\mathbf{s}_{2}\rightarrow\mathbf{m}_{j}^{\mathbf{s}_{2}}\), and we apply a logical reasoning edge to connect entity nodes \(e_{i}\) and \(e_{j}\).
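Both edge types can be derived purely from mention-sentence co-occurrence. The following sketch illustrates this on the entities of Figure 3; the data layout and the sentence indices are assumptions made for illustration.

```python
from itertools import combinations

# entity -> set of sentence ids in which one of its mentions appears
entity_sents = {
    "Gahn": {0, 6},
    "Swedish": {0},
    "Royal Swedish Academy of Sciences": {6},
}

intra_edges, logical_edges = [], []
for e_i, e_j in combinations(entity_sents, 2):
    if entity_sents[e_i] & entity_sents[e_j]:
        # mentions of e_i and e_j co-occur in at least one sentence
        intra_edges.append((e_i, e_j))
    else:
        # look for a bridge entity e_k sharing (different) sentences with both
        for e_k in entity_sents:
            if e_k in (e_i, e_j):
                continue
            if entity_sents[e_k] & entity_sents[e_i] and entity_sents[e_k] & entity_sents[e_j]:
                logical_edges.append((e_i, e_j))
                break

# intra_edges   -> [('Gahn', 'Swedish'), ('Gahn', 'Royal Swedish Academy of Sciences')]
# logical_edges -> [('Swedish', 'Royal Swedish Academy of Sciences')]
```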
Similar to Dlg, we apply L-layer stacked R-GCNs to convolve over the entity-level graph and obtain the reasoned entity representation \(e_{i}^{\textit{Elg}}\). To better integrate the information of entities, we employ the attention mechanism to fuse the aggregated information, the reasoned information, and the initial information of each entity to form its final representation.
\[e_{i}^{rep} = \operatorname{softmax}\left(\frac{e_{i}^{\textit{Dlg}}\mathbf{W}_{i}^{e_{i}^{\textit{Dlg}}}\left(e_{i}^{\textit{Elg}}\mathbf{W}_{i}^{e_{i}^{\textit{Elg}}}\right)^{T}}{\sqrt{d_{e_{i}^{\textit{Elg}}}}}\right)e_{i}^{h}\mathbf{W}_{i}^{e_{i}^{h}} \tag{6}\]
where \(\mathbf{W}_{i}^{e_{i}^{\textit{Dlg}}}\) and \(\mathbf{W}_{i}^{e_{i}^{\textit{Elg}}}\in\mathbb{R}^{d_{n}\times d_{n}}\) are trainable parameter matrices. \(d_{e_{i}^{\textit{Elg}}}\) is the dimension of \(e_{i}^{\textit{Elg}}\).
### Classification Module
To classify the target relation \(r\) for an entity pair \(<\)\(e_{m}\), \(e_{n}\)\(>\), we concatenate entity final representations and relative distance representations to represent one entity pair:
\[\hat{e}_{m}=\left[e_{m}^{rep};s_{mn}\right],\hat{e}_{n}=\left[e_{n}^{rep};s_{ nm}\right] \tag{7}\]
where \(s_{mn}\) denotes the embedding of relative distance from the first mention of \(e_{m}\) to that of \(e_{n}\) in the document. \(s_{nm}\) is similarly defined.
Then, we concatenate the representations of \(\hat{e}_{m}\), \(\hat{e}_{n}\) to form the target relation representation \(\mathbf{o}_{r}=\left[\hat{e}_{m};\hat{e}_{n}\right]\).
Furthermore, following [17], we employ self-attention [15] to capture context relation representations, which can help us exploit the topic information of the document:
\[\mathbf{o}_{c}=\sum_{i=1}^{p}\theta_{i}\mathbf{o}_{i}=\sum_{i=1}^{p}\frac{ \exp\left(\mathbf{o}_{i}\mathbf{W}\mathbf{o}_{r}^{T}\right)}{\sum_{j=1}^{p} \exp\left(\mathbf{o}_{j}\mathbf{W}\mathbf{o}_{r}^{T}\right)}\mathbf{o}_{i} \tag{8}\]
where \(\mathbf{W}\in\mathbb{R}^{d_{r}\times d_{r}}\) is a trainable parameter matrix, \(d_{r}\) is the dimension of target relation representations. \(\mathbf{o}_{i}\) is the relation representation of the \(i^{th}\) entity pair. \(\theta_{i}\) is the attention weight for \(\mathbf{o}_{i}\). \(p\) is the number of entity pairs.
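A sketch of Eq. (8) in PyTorch, treating the relation representations of all \(p\) entity pairs as one matrix (shapes are illustrative):

```python
import torch

p, d_r = 10, 200                    # number of entity pairs; relation-representation size
O = torch.randn(p, d_r)             # o_1, ..., o_p: relation representations of all pairs
o_r = O[0]                          # target relation representation (say, the first pair)
W = torch.randn(d_r, d_r)           # trainable parameter matrix in Eq. (8)

scores = O @ W @ o_r                # o_i W o_r^T for every i, shape (p,)
theta = torch.softmax(scores, dim=0)
o_c = theta @ O                     # context relation representation, shape (d_r,)
```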
Finally, we use a feed-forward neural network (FFNN) on the target relation representation \(o_{r}\) and the context relation representation \(o_{c}\) for prediction. Moreover, we transform the multi-class classification problem into multiple binary classification problems, since an entity pair may hold several relations. The predicted probability for each relation \(r\) in the set \(R\) of all relations is defined as follows:
\[y_{r}=\operatorname{sigmoid}\left(\operatorname{FFNN}\left([\mathbf{o}_{r}; \mathbf{o}_{c}]\right)\right) \tag{9}\]
where \(y_{r}\in[0,1]\) is the predicted probability of relation \(r\).
We define the loss function as follows:
\[\mathit{L}=-\sum_{r\in R}\left(y_{r}^{*}\log\left(y_{r}\right)+\left(1-y_{r}^{ *}\right)\log\left(1-y_{r}\right)\right) \tag{10}\]
where \(y_{r}^{*}\in\{0,1\}\) denotes the true label of \(r\). We employ Adam optimizer to optimize this loss function.
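A minimal sketch of the prediction head and loss in Eqs. (9)-(10); the layer sizes and the number of relations are illustrative assumptions.

```python
import torch
import torch.nn as nn

d_r, num_relations = 200, 96         # illustrative sizes
ffnn = nn.Sequential(
    nn.Linear(2 * d_r, d_r),
    nn.ReLU(),
    nn.Linear(d_r, num_relations),
)

o_r = torch.randn(d_r)               # target relation representation
o_c = torch.randn(d_r)               # context relation representation
logits = ffnn(torch.cat([o_r, o_c], dim=-1))
y = torch.sigmoid(logits)            # Eq. (9): one probability per candidate relation

y_true = torch.zeros(num_relations)  # multi-hot gold labels y_r^*
y_true[3] = 1.0
loss = nn.functional.binary_cross_entropy(y, y_true)   # Eq. (10)
```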
## 4 Experiments and Results
### Dataset
We evaluate our model on DocRED and CDR dataset. The dataset statistics are shown in Table 1. The DocRED dataset [21], a large-scale human-annotated dataset constructed from Wikipedia, has 3,053 documents, 132,275 entities, and 56,354 relation facts in total. DocRED covers a wide variety of relations related to science, art, time, personal life, etc. The Chemical-Disease Relations (CDR) dataset [8] is a human-annotated dataset, which is built for the BioCreative V challenge. CDR contains 1,500 PubMed abstracts about chemical and disease with 3,116 relational facts.
### Experiment Settings and Evaluation Metrics
To implement our model, we choose uncased BERT-base [3] as the encoder on DocRED and set the embedding dimension to 768. For the CDR dataset, we use BioBERT-Base v1.1 [6], which is re-trained from the BERT-base-cased model on biomedical corpora.
All hyper-parameters are tuned based on the development set. Other parameters in the network are all obtained by random orthogonal initialization [17] and updated during training.
For a fair comparison, we follow the same experimental settings as previous works. We apply \(F_{1}\) and Ign \(F_{1}\) as the evaluation metrics on DocRED. \(F_{1}\) scores on the test set are obtained through an online evaluation interface. Furthermore, Ign \(F_{1}\) denotes the \(F_{1}\) score that ignores the relational facts shared by the training and development/test sets. We compare our model with three categories of models. Sequence-based models use neural architectures such as CNNs and bidirectional LSTMs as encoders to acquire entity embeddings. Graph-based models construct document graphs and use GNNs to learn graph structures and perform inference. Instead of using a document graph, transformer-based models adopt pre-trained language models to extract relations.
For the CDR dataset, we use the training subset to train the model. Depending on whether the relation between two entities occurs within one sentence or not, \(F_{1}\) can be further split into intra-\(F_{1}\) and inter-\(F_{1}\) to evaluate the model's performance on intra-sentence and inter-sentence relations. To make a comprehensive comparison, we also measure the corresponding \(F_{1}\), intra-\(F_{1}\), and inter-\(F_{1}\) scores on the development set.
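As a concrete reading of the Ign \(F_{1}\) metric, the following sketch computes \(F_{1}\) over sets of relational facts while optionally ignoring facts shared with the training set; the triple representation and the example facts are assumptions for illustration.

```python
def f1_score(pred: set, gold: set, ignore: set = frozenset()) -> float:
    """F1 over relational facts, optionally ignoring facts seen in training (Ign F1)."""
    pred, gold = pred - ignore, gold - ignore
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# facts as (head entity, tail entity, relation) triples (illustrative labels)
gold = {("Gahn", "Swedish", "country"), ("Gahn", "RSAS", "member_of")}
pred = {("Gahn", "Swedish", "country"), ("Swedish", "RSAS", "country")}
train_overlap = {("Gahn", "Swedish", "country")}

f1 = f1_score(pred, gold)                        # standard F1
ign_f1 = f1_score(pred, gold, train_overlap)     # Ign F1
```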
### Main Results
**Results on DocRED**. As shown in Table 2, our model outperforms all baseline methods on both the development and test sets. Compared with graph-based models, both \(F_{1}\) and Ign \(F_{1}\) of our model are significantly improved. Compared to GLRE, which is the approach most relevant to ours, the performance improves by 1.07% in \(F_{1}\) and 1.14% in Ign \(F_{1}\) on the test set. Furthermore, compared to the transformer-based model SSAN, our method improves by 0.54% in \(F_{1}\) and 0.84% in Ign \(F_{1}\) on the development set. With respect to sequence-based methods, the improvement is considerable.
**Results on CDR**. Table 3 depicts the comparisons with state-of-the-art models on CDR. Compared to MRN [9], the performance of our model improves by about 2.9% in \(F_{1}\), 3.9% in intra-\(F_{1}\), and 1.6% in inter-\(F_{1}\). DHG and MRN produce similar results. In summary, these results demonstrate that our method is effective in extracting both intra-sentence and inter-sentence relations.
### Ablation Study
We conduct a thorough ablation study to investigate the effectiveness of the two key modules in our method: the aggregation module and the reasoning module. From Table 4, we can observe that all components contribute to model performance.

(1) When the reasoning module is removed, the performance of our model on the DocRED development set drops by 0.41% in Ign \(F_{1}\) and 0.43% in \(F_{1}\). Furthermore, we analyze the role of each edge type in the reasoning module: \(F_{1}\) drops by 0.23% or 0.25% when we remove the intra-sentence edges or the logical reasoning edges, respectively. Likewise, removing the aggregation module results in drops of 0.24% in Ign \(F_{1}\) and 0.16% in \(F_{1}\). This verifies the effectiveness of the aggregation module and the reasoning module.

(2) A larger drop occurs when both modules are removed: the \(F_{1}\) score drops from 59.73% to 59.16% and the Ign \(F_{1}\) score drops from 57.85% to 57.33%. This validates that the modules working together handle the RE task more effectively.

(3) When we apply a document-level graph with entity nodes and more complex edge types, as in GLRE, the \(F_{1}\) score drops from 59.73% to 58.97% and the Ign \(F_{1}\) score drops from 57.85% to 57.13%. This result suggests that a document-level graph containing complex and repetitive node information and edges leads to information redundancy and degrades model performance.
### Intra- and Inter-sentence Relation Extraction
In this subsection, we further analyze both intra- and inter-sentence RE performance on DocRED. The experimental results are listed in Table 5, from which we can see that GRACR outperforms the compared models in terms of both intra- and inter-\(F_{1}\). For example, our model obtains gains of 0.62% in intra-\(F_{1}\) and 0.44% in inter-\(F_{1}\) on DocRED. The improvements suggest that GRACR not only handles intra-sentence relations well, but also handles long-distance inter-sentence relations.
### Case Study
As shown in Figure 3, GRACR infers the relation of \(<\)Swedish, Royal Swedish Academy of Sciences\(>\) based on the information in \(S1\) and \(S7\). "Swedish" and "Royal Swedish Academy of Sciences", which are distributed in different sentences, are connected in the entity-level graph because each appears in the same sentence as "Johan Gottlieb Gahn". The entity-level graph connects them to facilitate reasoning about their relation. More importantly, our method is in line with human logical reasoning. For example, from the ground truth we know that "Gahn"'s country is "Swedish". Therefore,
Figure 3: Case study on the DocRED development set. Entities are colored accordingly.
we can speculate that there is a high possibility that the organization he joined has a relation with "Swedish".
## 5 Conclusion
In this paper, we propose GRACR, a graph information aggregation and logical cross-sentence reasoning network, to better cope with document-level RE. GRACR applies a document-level graph and an attention mechanism to model the semantic information of all mentions and sentences in a document. It also constructs an entity-level graph that exploits the interactions among different entities to reason about their relations. Finally, it uses an attention mechanism to fuse document embedding, aggregation, and inference information to help identify relations. Experimental results show that our model achieves excellent performance on DocRED and CDR.
#### Acknowledgements
This work was supported by the National Natural Science Foundation of China (Nos. 62276053, 62271125) and the Sichuan Science and Technology Program (No. 22ZDYF3621).
|
2307.00734 | On the choice of training data for machine learning of geostrophic
mesoscale turbulence | 'Data' plays a central role in data-driven methods, but is not often the
subject of focus in investigations of machine learning algorithms as applied to
Earth System Modeling related problems. Here we consider the case of eddy-mean
interaction in rotating stratified turbulence in the presence of lateral
boundaries, a problem of relevance to ocean modeling, where the eddy fluxes
contain dynamically inert rotational components that are expected to
contaminate the learning process. An often utilized choice in the literature is
to learn from the divergence of the eddy fluxes. Here we provide theoretical
arguments and numerical evidence that learning from the eddy fluxes with the
rotational component appropriately filtered out results in models with
comparable or better skill, but substantially improved robustness. If we simply
want a data-driven model to have predictive skill then the choice and/or quality
of data may not be critical, but we argue it is highly desirable
and perhaps even necessary if we want to leverage data-driven methods to aid in
discovering unknown or hidden physical processes within the data itself. | F. E. Yan, J. Mak, Y. Wang | 2023-07-03T03:43:21Z | http://arxiv.org/abs/2307.00734v1 | # On the choice of training data for machine learning of geostrophic mesoscale turbulence
###### Abstract
'Data' plays a central role in data-driven methods, but is not often the subject of focus in investigations of machine learning algorithms as applied to Earth System Modeling related problems. Here we consider the case of eddy-mean interaction in rotating stratified turbulence in the presence of lateral boundaries, a problem of relevance to ocean modeling, where the eddy fluxes contain dynamically inert rotational components that are expected to contaminate the learning process. An often utilized choice in the literature is to learn from the divergence of the eddy fluxes. Here we provide theoretical arguments and numerical evidence that learning from the eddy fluxes with the rotational component appropriately filtered out results in models with comparable or better skill, but substantially improved robustness. If we simply want a data-driven model to have predictive skill then the choice and/or quality of data may not be critical, but we argue it is highly desirable and perhaps even necessary if we want to leverage data-driven methods to aid in discovering unknown or hidden physical processes within the data itself.
## Plain Language Summary
Data-driven methods and machine learning are increasingly being utilized in various problems relating to Earth System Modeling. While there are many works focusing on the machine learning algorithms or the problems themselves, there have been relatively few investigations into the impact of data choice or quality, given the central role the data plays. We consider here the impact of data choice for a particular problem of eddy-mean interaction of relevance to ocean modeling, and provide theoretical arguments and numerical evidence to suggest that one choice (informed by our theoretical understanding of the underlying problem) is preferable over a more standard choice utilized in the literature. While the choice and/or quality of data may not be critical if we simply want a data-driven model to 'work', we argue it is highly desirable (possibly even a necessity) if we want to go beyond having models that just 'work', such as leveraging data-driven methods to help us in discovering unknown or hidden physical processes within the data itself.
## 1 Introduction
Data-driven methods and machine learning algorithms are increasingly being utilized in problems relating to Earth system and/or climate modeling, and there is no doubt such methods have a strong potential in greatly enhancing model skill and/or reducing computation cost in various numerical models. Some examples of usage include dynamical processes in the atmosphere (e.g., Brenowitz & Bretherton, 2019; Yuval & O'Gorman, 2020; Mooers et al., 2021; Connolly et al., 2023; Sun et al., 2023), climate modeling (e.g., Besombes et al., 2021; Sonnewald & Lguensat, 2021), sea ice prediction (e.g., Bolibar et al., 2020; Andersson et al., 2021), identification problems in oceanography (e.g., Jones et al., 2019; Thomas et al., 2021; Sonnewald et al., 2019, 2023), and our primary focus here, ocean mesoscale turbulence (e.g., Bolton & Zanna, 2019; Zanna & Bolton, 2021; Guillaumin & Zanna, 2021). We refer the reader to the works of Reichstein et al. (2019), Irrgang et al. (2021), Sonnewald et al. (2021) and Camps-Valls et al. (2023) for a more comprehensive review.
One criticism of some data-driven methods and machine learning algorithms is the 'black-box' nature of the resulting models. In general, for a problem with input \(x\) and output \(y\), a focus of data-driven methods is to find some mapping \(f\) such that \(f(x)=y\), where \(f\) could be deterministic or probabilistic depending on the algorithm used to obtain \(f\). The lack of interpretability for \(f\) in certain instances brings into question several important issues with the use of data-driven methods. The first is robustness and applicability in different regimes: are the models doing the right things for the 'right'
reasons (or at least not the 'wrong' ones)? If for the 'wrong' reasons, then it is perfectly plausible that trained up models can behave erratically when taken outside the trained regimes and, given the nonlinear and convoluted nature of the model itself, generate subtly wrong results that might be close to impossible to check. The second relates to further utilities of the methods themselves: is it possible to use such methods to aid process discovery from the data itself? A lack of interpretability would suggest a negative answer to that question. With that in mind, there has been an increasing focus on physically constrained and/or interpretable/explainable models (e.g., Zhang & Lin, 2018; Brenowitz et al., 2020; Zanna & Bolton, 2021; Beucler et al., 2021; Kashinath et al., 2021; Sonnewald & Lguensat, 2021; Yuval et al., 2021; Barnes et al., 2022; Clare et al., 2022; Lopez-Gomez et al., 2022; Guan et al., 2023). While the tools and algorithms do exist, this is a fundamentally harder problem, since the training step ultimately becomes a constrained optimization problem.
While the algorithms and nature of the resulting model \(f\) (e.g. linear vs. nonlinear, generative vs. discriminative, model complexity) are important details, at the very base level we are really dealing with the problem of _data regression_. We would thus expect _data choice_ and/or _data quality_ to critically affect the training, the performance or the useful information that could be extracted/encoded by the model, but these are issues that have not received much investigation. If we simply want a model that 'works' in the sense of producing a 'skilled' prediction in whatever metric we think is relevant, then the issue of data quality and/or content may not be critical, since we are simply looking for some optimal fit. If on the other hand we are interested in the harder problem of optimal fit with constraints, such as having a model that works for the 'right' reasons (e.g. satisfying physical conservation laws), or using data-driven methods for process discovery (e.g. telling us about the underlying physics of a problem), then one might imagine the choice and quality of data exposed to the model should be important. Furthermore, certain data may be more accessible for the machine learning algorithms to extract/predict features from (e.g. smoothness and/or spatio-temporal scale of data), which has practical consequences for the optimization procedure at the model training and prediction step.
To demonstrate that not all choices of data are equal, we consider in this work the problem of eddy-mean interaction in rotating stratified turbulence in the presence of boundaries, a setting that is particularly relevant to ocean modeling and parameterization of geostrophic mesoscale eddies. The problem relates to the presence of rotational fluxes (e.g. J. C. Marshall & Shutts, 1981; Fox-Kemper et al., 2003; Maddison et al., 2015), and we provide some theoretical arguments and evidence on why learning from the _eddy force function_, which is one method to deal with the presence of dynamically inert rotational fluxes, might be preferable to learning from the divergence of the eddy fluxes. We will largely leverage the experimental procedure of Bolton and Zanna (2019), albeit with important differences to be detailed. While the present investigation is largely empirical and relies on input of external knowledge that is somewhat specific to the present problem, the present work serves to open a discussion into data choice and/or quality, as well as probing the available information content in data in the general case, possibly in a more systematic and objective fashion than one performed here.
The technical problem statement relating to rotational fluxes and their plausible impact on data quality for data-driven methods are outlined in §2. In §3 we outline our experimental procedure, the numerical model used and the data-driven method. §4 summarizes the impact of data choice on the skill of the trained models. §5 considers the issue of model robustness via investigating the models' skill and their sensitivity to noise in the training data. We close in §6 and provide outlooks, focusing particularly on further experiments to probe the information content of data for use in data-driven methods of relevance to the present eddy-mean interaction problem.
## 2 Rotational fluxes and the eddy force function
### Formulation
For this particular work we consider turbulent motion under the influence of strong rotation and stratification. Specifically, we consider the Quasi-Geostrophic (QG) limit (e.g. Vallis, 2006), which is a widely-used and applicable limit for oceanic mesoscale dynamics where the motion is geostrophic at leading order. If we consider the standard Reynolds decomposition with
\[A=\overline{A}+A^{\prime},\qquad\overline{A+B}=\overline{A}+\overline{B},\quad \overline{A^{\prime}}=0, \tag{1}\]
where the overbar denotes a mean (with the projection operator assumed to commute with all relevant derivatives), and a prime denotes a deviation from the mean, the mean QG Potential Vorticity (PV) equation takes the form
\[\frac{\partial\overline{q}}{\partial t}+\nabla\cdot(\overline{\mathbf{u}}\overline {q})=-\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}+\overline{Q}. \tag{2}\]
Here, \(t\) denotes the time, \(\nabla\) denotes the horizontal gradient operator, so that the PV \(q\) is defined as
\[q=\nabla^{2}\psi+\beta y+\frac{\partial}{\partial z}\frac{f_{0}}{N_{0}^{2}} \frac{\partial b}{\partial z}, \tag{3}\]
where \(\psi\) is the streamfunction, \(f=f_{0}+\beta y\) is the Coriolis frequency (background value and leading order meridional variation), \(N_{0}\) is the (static) buoyancy frequency related to the imposed background stratification, \(b=f_{0}\partial\psi/\partial z\) is the buoyancy, \(\mathbf{u}=\nabla^{\perp}\psi=(-\partial\psi/\partial y,\partial\psi/\partial x)\) is the non-divergent geostrophic velocity, and \(Q\) represents all forcing and dissipation.
An aim in studies of eddy-mean interaction is to understand the inter-dependence of the nonlinear eddy flux terms on the right hand side of Eq. (2) and the mean state. A particular goal with eddy parameterization is to relate the eddy flux term \(\overline{\mathbf{u}^{\prime}q^{\prime}}\) with some large-scale mean state, normally as
\[\overline{\mathbf{u}^{\prime}q^{\prime}}\sim f(\overline{q},\ldots;\kappa,\ldots), \tag{4}\]
where \(f\) is some mapping from mean states (such as \(\overline{q}\)) and associated parameters (such as \(\kappa\)) to the eddy fluxes. Once such a relation exists, we take a divergence, from which we obtain the eddy forcing on the mean. A notable example would be PV diffusion (e.g., Green, 1970; J. C. Marshall, 1981; Rhines & Young, 1982), where we directly postulate the form of \(f\) as
\[\overline{\mathbf{u}^{\prime}q^{\prime}}=-\kappa\nabla\overline{q}\qquad\Rightarrow \qquad-\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}=\nabla\cdot(\kappa \nabla\overline{q})\,. \tag{5}\]
We emphasize the ordering of the operations here: we obtain a functional relation between the mean and eddy fluxes first, then we take a divergence to obtain the eddy forcing (cf. Fickian diffusion closures).
### The issue of rotational fluxes
The form as given in Eq. (4) is suggestive that data-driven approaches would be useful by either directly regressing/learning for the mapping \(f\), or when a mapping (cf. parameterization) such as Eq. (5) is given, to learn for the parameters such as \(\kappa\). However, there is a subtlety involved here, arising from the fact that it is the divergence of the eddy fluxes that arises (and is generic beyond the QG system, where the eddy forcing arises from a divergence of the Eliassen-Palm flux tensor, with the eddy fluxes as the tensor components, e.g. Young, 2012; Maddison & Marshall, 2013). A two-dimensional vector field such as \(\overline{\mathbf{u}^{\prime}q^{\prime}}\) can, via a Helmholtz decomposition, be written as
\[\overline{\mathbf{u}^{\prime}q^{\prime}}=\nabla\tilde{\Psi}+\hat{\mathbf{e}}_{z}\times\nabla\tilde{\Phi}+\tilde{\mathbf{H}}, \tag{6}\]
where \(\hat{\mathbf{e}}_{z}\) is the unit vector pointing in the vertical, and the terms are respectively a divergent (vanishing under a curl), rotational (vanishing under a divergence), and harmonic component (vanishing under both a curl and a divergence). Since the eddy forcing on the mean appears as a divergence, the rotational (and harmonic) eddy fluxes are dynamically inert, and one might expect that the presence of such dynamically inert fluxes is going to be detrimental to the regression/learning by data-driven methods. Similar issues arise, for example, in a diagnostic problem for the PV diffusivity \(\kappa\), where rotational fluxes are known to severely contaminate the calculation (e.g. Fig. 1 of Mak et al., 2016).
One way to get around this problem is to perform a Helmholtz decomposition as above and perform learning/regression/diagnoses using only the divergent term \(\nabla\tilde{\Psi}\). This approach is however complicated by the issue of gauge freedom in the presence of boundaries (e.g., Fox-Kemper et al., 2003; Maddison et al., 2015; Mak et al., 2016). The standard Helmholtz decomposition as commonly employed (e.g. in electromagnetism problems) is unique because we have periodic or rapidly decaying boundary conditions. The non-uniqueness of the Helmholtz decomposition in the presence of boundaries arises from the fact that there is generically no inherited natural boundary condition for arbitrary choices of vector fields (although there may be ones that are physically relevant depending on the problem), and that the divergent term \(\nabla\tilde{\Psi}\) is unique only up to an arbitrary rotational gauge.
One possibility might be to utilize the divergence of the eddy flux directly (e.g. \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\)). This is somewhat the approach taken for example in the works of Bolton and Zanna (2019) and Zanna and Bolton (2021), who consider applying data-driven methods to learn about sub-grid momentum forcing in an ocean relevant model. While they report positive results from data-driven methods in their work, there are some points that are worth revisiting, particularly regarding learning from the divergence of the eddy flux. One issue is the spatial resolution of the data itself: the eddy flux data is already small-scale, and now we want its divergence, which is an even finer scale quantity, so there could be sensitivity of the data to the numerical model resolution itself. Following on from this point is the issue of _robustness_. The learning problem here is trying to find a mapping between very small-scale data and large-scale data (e.g., divergence of eddy flux and say some function of the streamfunction), and questions arise whether this leads to sensitivity to the training data, or whether such a choice is unnecessarily taxing on the machine learning algorithms. A final point is more subtle and more speculative, to do with _commutativity_, i.e. ordering of operations. Eddy parameterizations are usually formulated as in Eq. (4): we learn an \(f(\ldots)\,=\,\overline{\mathbf{u}^{\prime}q^{\prime}}\), from which we take a divergence of the learned \(f\) to get the eddy forcing. If we are learning from \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\), then the ordering is different, because we are really learning for some \(\hat{f}(\ldots)\,=\,\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\), where we would hope that \(\hat{f}\,=\,\nabla\cdot f\). There is however no reason to expect such an equality in general, since the resulting mappings \(f\) or \(\hat{f}\) obtained from machine learning algorithms are nonlinear.
If we are simply interested in something that just 'works', then these aforementioned points may not actually matter. If, on the other hand, we are interested in learning about the underlying physics via data-driven methods, then it is not clear whether the aforementioned properties (or the lack thereof) become fundamental limitations in the applicability of the procedure.
### The eddy force function
If we instead consider learning from data at the eddy flux level, then we probably want to filter out the rotational component in some way, ideally in a unique fashion. While the statement about the non-uniqueness of the Helmholtz decomposition holds for generic tracer fluxes in the presence of boundaries, it turns out, for the QG system and for the
eddy PV flux, there is in fact a natural boundary condition that is inherited from the no-normal flow condition (Maddison et al., 2015). The decomposition
\[\overline{\mathbf{u^{\prime}q^{\prime}}}=-\nabla\Psi^{q}_{\rm eff}+\hat{\mathbf{e}}_{z} \times\nabla\Phi^{q}_{\rm eff}+\mathbf{H}^{q}, \tag{7}\]
where \(\Psi^{q}_{\rm eff}\) denotes the eddy force function (note the extra minus sign on the gradient term compared to Eq. 6), and may be obtained from solving the Poisson equation
\[\nabla\cdot\overline{\mathbf{u^{\prime}q^{\prime}}}=-\nabla^{2}\Psi^{q}_{\rm eff} \tag{8}\]
subject to homogeneous Dirichlet boundary conditions \(\Psi^{q}_{\rm eff}=0\). Such an object is uniquely defined (from fixing the gauge freedom via the naturally inherited boundary condition), and \(\Psi^{q}_{\rm eff}\) can be proved to be optimal in the \(H^{0}_{0}\) sense, i.e. \(-\nabla\Psi^{q}_{\rm eff}\) is a minimizer in \(L^{2}\), or that the dynamically active part of the eddy flux encoded by divergent part is as "uncontaminated' as possible, at least in a simply connected domain (see Appendix A of Maddison et al., 2015). Furthermore, via the linearity assumption of the eddy force function and boundary condition inheritance (Maddison et al., 2015), we can define an eddy force function for the components that contribute towards the definition of eddy PV flux: for example, from the definition of PV given in Eq. (3), we can decompose
\[\overline{\mathbf{u^{\prime}}\zeta^{\prime}}=-\nabla\Psi^{\zeta}_{\rm eff}+\hat{ \mathbf{e}}_{z}\times\nabla\Phi^{\zeta}_{\rm eff}+\mathbf{H}^{\zeta}, \tag{9}\]
where \(\zeta=\nabla^{2}\psi\) is the relative vorticity, giving rise to a relative vorticity or momentum eddy force function \(\Psi^{\zeta}_{\rm eff}\) (related to the Reynolds stress via the Taylor identity, e.g. Maddison & Marshall, 2013), computed via an analogous Poisson equation to Eq. (8), also with homogeneous Dirichlet boundary conditions, and similarly for a buoyancy eddy force function \(\Psi^{b}_{\rm eff}\). For concreteness, the discussion will focus on the PV eddy force function \(\Psi^{q}_{\rm eff}\), but we document results from all three contributions in the later sections.
The eddy force functions have been previously demonstrated to be a useful quantity for diagnoses problems (e.g., Mak et al., 2016, in diagnosing eddy diffusivities via inverse approaches), and we might expect that it would be a useful quantity for data-driven methods applied to eddy parameterization of rotating stratified turbulence. To compare with the discussion above, the eddy force function is a larger-scale object, which might lead to weaker sensitivity during the training phase compared to training on \(\nabla\cdot\overline{\mathbf{u^{\prime}q^{\prime}}}\). The gradient of the eddy force function \(-\nabla\Psi^{q}_{\rm eff}\) uniquely defines the dynamically relevant eddy flux, suggesting that \(-\nabla\Psi^{q}_{\rm eff}\) would serve as a better choice of data compared to training on \(\overline{\mathbf{u^{\prime}q^{\prime}}}\), since the latter contains dynamically irrelevant data. Additionally, given parameterizations are more naturally formulated as a relation between the eddy fluxes and the mean state (cf. Eq. 4), \(-\nabla\Psi^{q}_{\rm eff}\) avoids the possible issue with commutativity mentioned above.
## 3 Model details
Taking into account the above discussion, we explore here whether the eddy force function serves as a potentially useful object for machine learning of ocean mesoscale turbulence. For a problem \(y=f(x)\), the focus here is principally on the skill of the models \(f\), trained on various output data \(y\) for the same inputs \(x\), where skill is to be measured by various mismatches between \(y_{\rm data}\) and \(y_{\rm predict}~{}=~{}f(x_{\rm data})\). We detail here a set of experiments to test and explore the following hypotheses:
1. models trained upon the filtered eddy flux \(-\nabla\Psi^{q}_{\rm eff}\) would be more skillful than ones trained upon the full eddy flux \(\overline{\mathbf{u^{\prime}q^{\prime}}}\),
2. models trained upon the filtered eddy flux \(-\nabla\Psi^{q}_{\rm eff}\) would possibly be comparable in skill to ones trained upon the divergence of the eddy flux \(\nabla\cdot\overline{\mathbf{u^{\prime}q^{\prime}}}\), but the latter models might be more sensitive to data quality.
The experimental approach will largely mirror that of Bolton and Zanna (2019). However, one important fundamental difference of our work is the choice of average, which impacts the definition of eddies in Eq. (1). Where Bolton and Zanna (2019) take a low-pass spatial filter as the projection operator, here we employ a time-average, which has the property that \(\overline{A^{\prime}}=0\) in line with the properties of a Reynolds operator. Our eddy forcing is then in the more familiar form of a nonlinear eddy flux divergence (e.g. \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\)), rather than a difference between spatially averaged quantities (e.g., \(\mathbf{S}=\overline{\nabla\cdot(\mathbf{u}q)}-\nabla\cdot(\overline{\mathbf{u}}\,\overline{q})\), Eq. 7 of Bolton and Zanna, 2019). The current definition of the eddy force function \(\Psi^{q}_{\text{eff}}\) assumes a Reynolds average (Maddison et al., 2015), and while extensions and relaxations of assumptions are likely possible, for simplicity we do not pursue this avenue and utilize time-averaging.
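As a concrete sketch of this choice of average (array sizes are illustrative; in practice the snapshots come from the QG model described below), the time-mean eddy PV fluxes follow directly from the Reynolds decomposition in Eq. (1):

```python
import numpy as np

# u, v, q: model snapshots, shape (n_time, ny, nx)
n_time, ny, nx = 100, 64, 64          # illustrative sizes (the model itself is 512 x 512)
rng = np.random.default_rng(0)
u, v, q = (rng.standard_normal((n_time, ny, nx)) for _ in range(3))

u_bar, v_bar, q_bar = u.mean(axis=0), v.mean(axis=0), q.mean(axis=0)   # time means
u_p, v_p, q_p = u - u_bar, v - v_bar, q - q_bar                        # eddies, mean(A') = 0

# time-mean eddy PV flux components, overline{u'q'} and overline{v'q'}
uq_flux = (u_p * q_p).mean(axis=0)
vq_flux = (v_p * q_p).mean(axis=0)
```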
### Numerical ocean model setup
The physical setup we consider is essentially the same three-layer QG square double gyre configuration as Bolton and Zanna (2019) (cf. Berloff, 2005; Karabasov et al., 2009; D. P. Marshall et al., 2012; Mak et al., 2016), but solved with a pseudo-spectral method instead of using the finite difference CABARET scheme of Karabasov et al. (2009). The numerical model (qgm2) generating the data presented in this work utilizes the parameters detailed in Mak et al. (2016), with the stratification parameters chosen such that the first and second Rossby deformation radii are 32.2 and 18.9 km, with a horizontal grid spacing of \(\Delta x=\Delta y=7.5\) km (which is 512 by 512 in horizontal grid points), a horizontal viscosity value of \(\nu=50\) m\({}^{2}\) s\({}^{-1}\), and a time-step of \(\Delta t=30\) mins. A wind forcing with peak wind stress of \(\tau_{0}=0.8\) N m\({}^{-2}\) is used (correcting a typo in Table 1 of Mak et al., 2016). The model is spun up from rest for 20,000 days, and a further integration period of 5,000 days after this spin up is performed for computing time-averages.
The accumulated time-averages of the eddy fluxes are used to compute the eddy force function \(\Psi_{\text{eff}}\) via solving the Poisson equation in Eq. (8) with homogeneous Dirichlet boundary conditions, performed per layer. For this procedure, we leverage the FEniCS software (Logg and Wells, 2010; Logg et al., 2012; Alnaes et al., 2014) following the previous works of Maddison et al. (2015) and Mak et al. (2016), making use of the high level abstraction, automatic code generation capabilities and the numerous inbuilt solvers that are particularly suited to the elliptic equations we have here. The data from each grid point of the numerical model are the nodal values on a regular structured triangular mesh, with a projection onto a piecewise linear basis (CG1). All derivative operations are performed on the finite element mesh, and the nodal values of the relevant fields are restructured into arrays for feeding into the machine learning algorithms.
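A skeletal version of this solve is sketched below, assuming the legacy FEniCS/DOLFIN Python interface; the mesh construction and the way the flux field is populated are placeholders rather than the actual diagnostic code.

```python
# Sketch of Eq. (8) with homogeneous Dirichlet conditions in legacy FEniCS/DOLFIN.
from fenics import (RectangleMesh, Point, FunctionSpace, VectorFunctionSpace,
                    Function, TrialFunction, TestFunction, DirichletBC,
                    Constant, inner, grad, dx, solve)

Lx = Ly = 3840e3                                   # domain size in metres (512 x 7.5 km)
mesh = RectangleMesh(Point(0.0, 0.0), Point(Lx, Ly), 512, 512)
V = FunctionSpace(mesh, "CG", 1)                   # piecewise linear (CG1) basis
W = VectorFunctionSpace(mesh, "CG", 1)

flux = Function(W)     # to be filled with the time-mean eddy PV flux nodal values

psi = TrialFunction(V)
v = TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")  # Psi_eff = 0 on the boundary

# weak form of -div(grad(Psi_eff)) = div(flux), integrating both sides by parts
a = inner(grad(psi), grad(v)) * dx
L = -inner(flux, grad(v)) * dx

psi_eff = Function(V)
solve(a == L, psi_eff, bc)
```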
Fig. 1 shows some sample output data in the surface layer. The two horizontal components of the time-averaged eddy PV fluxes in panels \((b,c)\) are the datasets returned by the numerical model, which is sampled onto a finite element mesh as a vector object. The resulting object's divergence can then be computed, and the result is given in panel \((a)\). As expected, the divergence of the eddy PV flux has more small-scale fluctuations and is less smooth than the eddy PV fluxes. Solving the relevant Poisson equation in FEniCS, the PV eddy force function \(\Psi^{q}_{\text{eff}}\) is shown in panel \((d)\). From Maddison et al. (2015), the gradient of the eddy force function \(\nabla\Psi^{q}_{\text{eff}}\) has a physical interpretation when considered together with the time-mean streamfunction \(\overline{\psi}\) (not shown, but see Maddison et al., 2015): the eddies are accelerating the mean-flow if \(\nabla\Psi^{q}_{\text{eff}}\cdot\nabla\overline{\psi}>0\) (an input of energy into the mean by eddies) or decelerating the mean flow if \(\nabla\Psi^{q}_{\text{eff}}\cdot\nabla\overline{\psi}<0\) (an extraction of energy from the mean by eddies). Here, the eddy force function can be shown to correspond to the regimes where the eddies are slowing down the mean-flow via baroclinic instability when the Western Boundary Current first separates (the first positive-negative pattern emanating from the western boundary, which is anti-correlated with \(\nabla\overline{\psi}\)), while the next dipole pattern (the first negative-positive patterns, which is correlated with \(\nabla\overline{\psi}\)) is an eddy forcing of the
mean-flow corresponding to an eddy driven regime (cf. Waterman and Jayne, 2011; Waterman and Hoskins, 2013).
From this \(\Psi_{\rm eff}^{q}\), the horizontal components of the gradient lead to the eddy PV fluxes with the rotational component removed, shown in panels \((e,f)\). While not obvious at first sight, the divergence of the full eddy PV flux (panels \(b,c\)) and the divergence of the filtered eddy PV flux (panels \(e,f\)) are both equal to \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\) (panel \(a\)) up to numerical solver errors (here at least four orders of magnitude smaller than the data). In this instance, note also that the filtered eddy flux has qualitatively different spatial patterns to the full eddy flux, and that the filtered eddy flux is around an order of magnitude smaller than the full eddy fluxes. The behavior is consistent with observations that the rotational eddy fluxes can be large (e.g. Griesel et al., 2009), and suggests we probably do want to filter the dynamically inert component out should we utilize eddy flux data to learn about geostrophic turbulence.
### Model training procedure
Following Bolton and Zanna (2019) we employ Convolutional Neural Networks (CNNs; e.g., §9 of Goodfellow et al., 2016) to map between the specified inputs and targets. In line with the intended investigation, the choice of parameters for training the CNNs is kept fixed and chosen as in Bolton and Zanna (2019), and the main quantity we vary is the choice of output data. The mappings that are returned as a CNN are denoted:
* \(f_{\rm div}^{q}(\ldots)\), with output data as the divergence of the eddy PV flux \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\),
* \(f_{\rm full}^{q}(\ldots)\), with output target data as the full eddy PV flux \(\overline{\mathbf{u}^{\prime}q^{\prime}}\),
* \(f_{\rm eff}^{q}(\ldots)\), with output data as the dynamically active eddy PV flux as defined through a gradient of the PV eddy force function (cf. Eq. 8) \(-\nabla\Psi_{\rm eff}^{q}\).
Figure 1: \((a)\) The divergence of the eddy PV flux (units of s\({}^{-2}\)), calculated from the diagnosed time-averaged \((b)\) zonal and (c) meridional component of the PV fluxes (units of m s\({}^{-2}\)). \((d)\) The associated eddy force function \(\Psi_{\rm eff}^{q}\) (units of m\({}^{2}\) s\({}^{-2}\)) calculated from the data shown in panel \(a\), and the \((e)\) zonal and \((f)\) meridional component of \(-\nabla\Psi_{\rm eff}^{q}\), the associated eddy PV fluxes with the dynamically inert rotational component removed (units of m s\({}^{-2}\)). Note the different choices of colorbar limits between the data range in panels \((b,c)\) and \((e,f)\).
Note that \(f^{q}_{\text{div}}(\ldots)\) predicts a scalar field, while \(f^{q}_{\text{full/eff}}(\ldots)\) return vector fields. A possible choice could be to train a model on the eddy force function, and from the trained model's predicted eddy force function compute its Laplacian to obtain the divergence of the eddy flux. As mentioned above, this is an extremely difficult test for model skill since gradient operations amplify mismatches, and we comment on related results and observations in the conclusions section.
To train up these mappings in the present time-averaged case, we follow the schematic given in Fig. 2, partially inspired by the approach of Bolton and Zanna (2019). The model domain is partitioned into small overlapping boxes. The input and output data associated with each of these boxes are paired up, and the pairs are each assigned a number and randomly shuffled (i.e. sampling from a uniform probability distribution function) depending on a choice of a random seed, and subsequently assigned to the training set (for training up the model) and the validation set (for tuning the hyperparameters in order to minimize a specified loss function) with an 80:20 ratio. A model is trained up, and the skill of the model is its ability to predict the global field. In the 512 by 512 pixel domain, we take the small boxes to be 40 by 40 pixels, with a stride of six, resulting in a collection of \(80^{2}=6400\) images of the domain. For statistical significance, an ensemble of 20 such models was trained up, each ensemble member differing only in the choice of the random seed, and the same sets of random seeds are used for the ensembles to be compared against. The CNNs are built using the PyTorch platform (Paszke et al., 2019), where the CNN architecture consists of three hidden convolutional layers with square kernels (of size 8, 4 and 4 respectively), with a two-dimensional max pooling layer with square kernel of size 2, and a fully-connected linear activation layer as the output. The CNNs are trained with a batch size of 64, using the Adam optimizer (Kingma and Ba, 2015) with a mean squared error loss function. An early stopping criterion is used to monitor the loss function during the training to avoid over-fitting; for simplicity, we use a constant learning rate of \(10^{-4}\) during training.
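For concreteness, a minimal PyTorch sketch of a CNN with this structure is given below. The channel counts, padding choices and the assumption that the network predicts the target field over the same 40 by 40 patch are illustrative guesses, not taken from the actual training code.

```python
import torch
import torch.nn as nn

class EddyCNN(nn.Module):
    """Three convolutional layers (kernels 8, 4, 4), a 2x2 max pool,
    and a fully connected linear output, as described in the text."""
    def __init__(self, in_channels=1, out_channels=1, patch=40):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=4), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=4), nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),
        )
        with torch.no_grad():  # infer flattened size for the linear layer
            n_flat = self.features(torch.zeros(1, in_channels, patch, patch)).numel()
        self.head = nn.Linear(n_flat, out_channels * patch * patch)
        self.out_shape = (out_channels, patch, patch)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z).view(-1, *self.out_shape)

model = EddyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # trained with batch size 64 plus early stopping
```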
Figure 2: Model training strategy demonstrated here with a snapshot of the instantaneous PV from model output. The domain is partitioned into small square regions of size 40 by 40 pixels, offset in the \(x\) and \(y\) directions by a stride of 6 pixels, resulting in 6400 entries of input and output data. Each pair of input and output data is randomly assigned to the training set or the validation set at an 80:20 ratio, from which a trained model results. An ensemble of models with 20 members is created and tested according to the procedure detailed in the text.
## 4 Model skill
We first evaluate the predictive skill of the various models with respect to the choice of target data. The skill of each model is judged by its ability to reduce mismatches of the divergence of the eddy PV flux, via repeated predictions of smaller patches (here taken with a stride of 2 pixels), with averages taken as necessary. Note that while \(f^{q}_{\rm div}(\ldots)\) already predicts the divergence of the eddy PV flux, we will take a divergence of the outcome of \(f^{q}_{\rm full/eff}(\ldots)\) to give the predicted divergence of the eddy PV flux. The normalized mismatch between data and prediction will be judged as
\[\epsilon^{q}_{L^{2}}(F^{q}_{(\cdot)})=\frac{\|\nabla\cdot\overline{\mathbf{u}^{ \prime}q^{\prime}}-F^{q}_{(\cdot)}(\ldots)\|^{2}_{L^{2}}}{\|\nabla\cdot \overline{\mathbf{u}^{\prime}q^{\prime}}\|^{2}_{L^{2}}}, \tag{10}\]
where \(F^{q}_{(\cdot)}\) denotes the divergence of the eddy PV flux from the models \(f^{q}_{(\cdot)}(\ldots)\), and the \(L^{2}\) norm is defined as
\[\|g\|^{2}_{L^{2}}=\int_{A}g^{2}\;\mathrm{d}A \tag{11}\]
for some scalar field \(g\). Each ensemble member will make a set of predictions with an associated mismatch, and the associated averages and standard deviations are computed to gauge model skill.
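A sketch of the prediction step used for this evaluation is given below: the global field is reconstructed by averaging overlapping patch predictions taken with a stride of 2 pixels. The function and variable names are illustrative assumptions rather than the actual evaluation code.

```python
import torch

def predict_global(model, field, patch=40, stride=2):
    """Average overlapping patch predictions into a global field."""
    ny, nx = field.shape
    acc = torch.zeros(ny, nx)
    count = torch.zeros(ny, nx)
    with torch.no_grad():
        for j in range(0, ny - patch + 1, stride):
            for i in range(0, nx - patch + 1, stride):
                tile = field[j:j + patch, i:i + patch][None, None, :, :]
                pred = model(tile).squeeze()      # (patch, patch) prediction
                acc[j:j + patch, i:i + patch] += pred
                count[j:j + patch, i:i + patch] += 1.0
    return acc / count.clamp(min=1.0)
```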
We note that the test for skill chosen here is inherently harder and biased _against_ the models trained on the eddy PV fluxes (filtered or otherwise), since an extra divergence operation is required in computing the mismatches. The above choice to compare the divergence of the eddy PV flux was taken noting that we want a quantity that is comparable across the three sets of models, and there is a theoretical issue in comparing quantities at the eddy PV flux level (since that requires integrating the prediction of \(F^{q}_{\rm div}(\ldots)\), which is then subject to a choice of boundary condition). One could argue whether it is the \(L^{2}\) mismatches we are ultimately interested in, since we may for example be interested in the patterns of the forcing, rather than the exact locations of the forcing. As a compromise, we consider the Sobolev semi-norms (e.g. Thiffeault, 2012) given by
\[\|g\|^{2}_{\dot{H}^{p}}=\int_{A}|(-\nabla^{2})^{p/2}g|^{2}\;\mathrm{d}A=\sum_{k^{2}+l^{2}\neq 0}(k^{2}+l^{2})^{p}|\hat{g}_{k,l}|^{2}, \tag{12}\]
where \(\hat{g}_{k,l}\) are the Fourier coefficients of \(g\), \((k,l)\) are the respective wavenumbers, and the link between integral and sum follows from Parseval's theorem (e.g. if \(p=0\) then it is the \(L^{2}\) norm above when the \(k\,=\,l\,=\,0\) mode is included). Sobolev semi-norms with negative \(p\) will weigh the lower wavenumbers (i.e. the larger-scale patterns) more, and in this instance a lower normalized mismatch
\[\epsilon^{q}_{\dot{H}^{p}}(F^{q}_{(\cdot)})=\frac{\|\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}-F^{q}_{(\cdot)}(\ldots)\|^{2}_{\dot{H}^{p}}}{\|\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\|^{2}_{\dot{H}^{p}}} \tag{13}\]
indicates that the mismatches at the large scales are smaller. Since we are dealing with finite approximations so that \(k^{2}+l^{2}<\infty\), we can perform the computation, although the formal definition for the \(\dot{H}^{p}\) semi-norms is generally for fields with zero mean and on a periodic domain and such that the infinite sum converges. For the work here we will focus on the case of \(p=-1/2\), sometimes referred to as the mix-norm (e.g. Thiffeault, 2012); conclusions below are qualitatively the same if \(p=-1\) or \(p=-2\) were chosen (not shown).
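The two mismatch measures in Eq. (10) and (13) can be computed directly from gridded fields; the short sketch below is one assumed implementation, treating the fields as doubly periodic so that the Fourier-based definition of the semi-norm applies.

```python
import numpy as np

def sobolev_seminorm_sq(g, p=-0.5):
    """Sum over (k, l) != (0, 0) of (k^2 + l^2)^p |g_hat|^2."""
    ny, nx = g.shape
    k = np.fft.fftfreq(nx) * nx
    l = np.fft.fftfreq(ny) * ny
    K2 = k[None, :] ** 2 + l[:, None] ** 2
    ghat = np.fft.fft2(g) / (nx * ny)
    mask = K2 > 0                       # drop the k = l = 0 mode
    return np.sum(K2[mask] ** p * np.abs(ghat[mask]) ** 2)

def normalized_mismatch(target, prediction, p=None):
    """Eq. (10) when p is None, Eq. (13) otherwise."""
    diff = target - prediction
    if p is None:
        return np.sum(diff ** 2) / np.sum(target ** 2)
    return sobolev_seminorm_sq(diff, p) / sobolev_seminorm_sq(target, p)
```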
### Models trained on eddy PV fluxes
We first focus on models trained up on the data based on the eddy PV flux \(\overline{\mathbf{u}^{\prime}q^{\prime}}\) with the time-mean streamfunction \(\overline{\psi}\) as the input. Fig. 3 shows the predicted divergence of the eddy PV flux \(F^{q}_{\rm div/full/eff}(\overline{\psi})\) as an output from one of the model ensemble members. Compared to the target given in Fig. 1\((a)\), the predictions are smoother with
fewer small-scale features, arising from a combination of the fact that CNNs were used, and that our prediction step leads to some averaging of the overlapping regions. Visually, the predictions \(F^{q}_{\rm div}(\overline{\psi})\) and \(F^{q}_{\rm eff}(\overline{\psi})\) are almost indistinguishable (the latter having a slightly stronger signal downstream of the Western Boundary Current). On the other hand, the prediction \(F^{q}_{\rm full}(\overline{\psi})\) shows more fluctuation features than the other two cases. The larger amount of small-scale features in \(F^{q}_{\rm full}(\overline{\psi})\) likely arises because the model is predicting the eddy PV flux first, before taking a numerical divergence of the data, so any small fluctuations that arise from the prediction are amplified by the divergence operation. In that regard, the fact that the prediction \(F^{q}_{\rm eff}(\overline{\psi})\) is so similar to \(F^{q}_{\rm div}(\overline{\psi})\) is rather remarkable.
Fig. 4 shows a more quantitative measure, computing the normalized mismatches of Eq. (10) and (13), based on the \(L^{2}\) norm and the \(\dot{H}^{-1/2}\) semi-norm defined in Eq. (11) and (12) respectively. The results show that the models trained upon the filtered eddy PV flux \(-\nabla\Psi^{q}_{\rm eff}\) outperform the models trained upon the full eddy PV flux \(\overline{\mathbf{u}^{\prime}q^{\prime}}\), and have a comparable or even better performance compared to the models trained up on the divergence of the eddy PV flux \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\). The differences in skill are visually obvious between the models trained on the full eddy flux \(\overline{\mathbf{u}^{\prime}q^{\prime}}\) and the filtered eddy flux \(-\nabla\Psi^{q}_{\rm eff}\). The difference between the models trained from the filtered eddy flux \(-\nabla\Psi^{q}_{\rm eff}\) and the divergence of the eddy flux \(\nabla\cdot\overline{\mathbf{u}^{\prime}q^{\prime}}\), while notable in the \(\dot{H}^{-1/2}\) measure, is too close to call in the \(L^{2}\) measure (e.g. we do not have \(p<0.05\) using the Student's \(t\)-test (Student, 1908) under the null hypothesis that the means of \(F^{q}_{\rm div}(\overline{\psi})\) and \(F^{q}_{\rm eff}(\overline{\psi})\) are the same).
The results here lend support to our expectation that the presence of rotational fluxes contaminates and degrades the accuracy of a trained up model, and that the eddy force function provides a viable alternative for use in machine learning approaches that addresses the problem of dynamically inert rotational fluxes, leading to at least comparable performance from a skill point of view (and some evidence to suggest it might be better, although that is dependent on the choice of metric). The observation that \(F^{q}_{\rm eff}(\overline{\psi})\) is comparable to \(F^{q}_{\rm div}(\overline{\psi})\) is all the more remarkable when we note that tests based on the models' ability in reproducing the divergence of the eddy flux are intrinsically harder and biased against models trained on \(-\nabla\Psi^{q}_{\rm eff}\), since an additional divergence operation that is expected to amplify errors is required to produce \(F^{q}_{\rm eff}(\overline{\psi})\).
### Other choice of eddy fluxes and inputs
By the linearity assumption in deriving the eddy force function and the definition of PV, analogous eddy force functions for momentum and buoyancy may be defined by
Figure 3: Prediction of the divergence of eddy PV flux (units of s\({}^{-2}\)) from one of the ensemble members of models. (_a_) \(F^{q}_{\rm div}(\overline{\psi})\), (_b_) \(F^{q}_{\rm full}(\overline{\psi})\), (_c_) \(F^{q}_{\rm eff}(\overline{\psi})\). The target reference data is shown in Fig. 1\(a\).
a similar decomposition but using the eddy relative vorticity flux \(\overline{\mathbf{u^{\prime}}\zeta^{\prime}}\) (related to the Reynolds stress via the Taylor identity) and \(\overline{\mathbf{u^{\prime}}b^{\prime}}\) (related to the form stress). Following the notation outlined above, Fig. 5 shows the target data \(\nabla\cdot\overline{\mathbf{u^{\prime}}\zeta^{\prime}}\) and \(\nabla\cdot\overline{\mathbf{u^{\prime}}b^{\prime}}\), and the analogous predictions of the divergence of the fluxes denoted by \(F^{\zeta/b}_{\text{div}/\text{full}/\text{eff}}(\overline{\psi})\).
For the models trained on the data relating to the eddy PV flux shown in Fig. 1, the predictions are smoother than the diagnosed target data, which is particularly noticeable for the prediction of the divergence of the eddy relative vorticity flux in Fig. 5(\(b,c,d\)). For the eddy buoyancy case, the diagnosed target data is already relatively smooth. We note that, visually, \(F^{b}_{\text{full}}(\overline{\psi})\) in Fig. 5(\(g\)) seems to possess extra features particularly in the downstream region, while \(F^{b}_{\text{div}}(\overline{\psi})\) and \(F^{b}_{\text{eff}}(\overline{\psi})\) in Fig. 5(\(f,h\)) seem to be capturing
Figure 4: Ensemble average and quartiles of the mismatch as measured by the normalized (\(a\)) \(L^{2}\) norm (\(b\)) \(\dot{H}^{-1/2}\) semi-norm, given by Eq. (10) and (13) respectively, for the models predicting the divergence of the eddy PV flux (Fig. 1\(a\)). Blue denotes models trained on the divergence of the eddy fluxes, orange denotes models trained on the full eddy fluxes, and green denotes models trained on the filtered eddy fluxes.
Figure 5: Target data and predictions associated with (top row) eddy relative vorticity flux (related to the Reynolds stress, units of m s\({}^{-2}\)) and (bottom row) eddy buoyancy flux (related to the form stress, units also of m s\({}^{-2}\) taking into account the extra factors). Showing (\(a,e\)) the divergence of the time-averaged eddy relative vorticity and buoyancy flux, and a sample (\(b,f\)) \(F^{\zeta/b}_{\text{div}}(\overline{\psi})\), (\(c,g\)) \(F^{\zeta/b}_{\text{full}}(\overline{\psi})\), (\(d,h\)) \(F^{\zeta/b}_{\text{eff}}(\overline{\psi})\) from one of the ensemble members.
the patterns in the target data well, with some visual hints that the prediction from \(F_{\rm div}^{b}(\overline{\psi})\) has slightly sharper features.
For a more quantitative measure, we show in Fig. 6 the \(L^{2}\) and \(\dot{H}^{-1/2}\) mismatches in \(F_{\rm div/full/eff}^{q/\zeta/b}(\overline{\psi}/\overline{q}/\overline{\zeta})\), totaling \(3^{3}=27\) possible combinations. The conclusions over all these possible choices are largely the same as those drawn before, with minor differences. The models trained up on the filtered eddy fluxes outperform those trained upon the full eddy fluxes (except for the case of eddy relative vorticity fluxes), and are comparable to or better than models trained on the divergence of the flux (except in the case of the eddy buoyancy fluxes).
Noting that eddy PV fluxes have contributions from the eddy buoyancy as well as eddy relative vorticity fluxes, it is curious that models trained on the filtered eddy fluxes appear to perform worse than models trained on the divergence of the flux for the eddy buoyancy flux (bottom row of Fig. 6), but have reasonable performance in the eddy relative vorticity flux case (middle row of Fig. 6), such that, together, the resulting skill in the eddy PV flux (top row of Fig. 6) still remains comparable (and possibly slightly better in the \(\dot{H}^{-1/2}\) semi-norm, indicating better matching in terms of large-scale patterns). One possible explanation for the degradation in performance for eddy buoyancy fluxes is that \(\nabla\cdot\overline{u^{\prime}b^{\prime}}\) is already relatively smooth and larger-scale (Fig. 5\(e\)), which might be favorable for direct use as training data. On the other hand, the eddy relative vorticity fluxes are inherently smaller-scale (Fig. 5\(a\)), and the presence of small-scale fluctuations might be unfavorable for direct use as training data, but does not affect models trained on the filtered fluxes as such since the training data is by definition
smoother. The performance of models based on the full eddy relative vorticity fluxes is somewhat surprising, but may be to do with the relatively smaller rotational component: examining the decomposition into divergent and rotational parts via the eddy force function (cf. Fig. 1\(b,c,e,f\), not shown) it is found that the divergent component is smaller by about a factor of 2 in the eddy relative vorticity flux, but by a factor of 10 in the eddy buoyancy and PV flux. The results seem to suggest that the main benefits of filtering dynamically inert rotational fluxes would be in the eddy buoyancy and PV fluxes.
For completeness, we show in Fig. 7 the analogous eddy force functions associated with the predictions from the trained models from one of the ensemble members (although the observations detailed here are robust upon examining the outputs from other members); note the appropriate mismatches would be closely related to the \(\dot{H}^{-2}\) semi-norm as defined in Eq. (12), but with a difference in the boundary conditions. The predictions from models trained on the filtered eddy fluxes (panels \(d,h,l\)) have patterns that are largely aligned with the diagnosed eddy force functions from the data (panels \(a,e,i\)) up to minor discrepancies (e.g. downstream patterns in panel \(d\) compared to panel \(a\), and panel \(l\) compared to panel \(i\)). The predictions from models trained on the full eddy fluxes (panels \(c,g,k\)) show similar patterns although with somewhat more mismatches, particularly in the PV and buoyancy eddy force functions. By contrast, the predictions from the divergence of the eddy fluxes (panels \(b,f,j\)) show large-scale disagreements in all three variables, the mismatches being visually the gravest in the PV and buoyancy variables. Given that the eddy force function encodes the dynamically active eddy fluxes, and has an interpretation that \(\nabla\Psi_{\rm eff}\cdot\nabla\overline{\psi}\) encodes the sign of energy exchange between the mean and
Figure 7: \((a,e,i)\) Target eddy force functions \(\Psi_{\rm eff}^{q/\zeta/b}\), and eddy force functions associated with prediction from \((b,f,j)\) divergence of the eddy PV, relative vorticity and buoyancy fluxes, \((c,g,k)\) full eddy PV, relative vorticity and buoyancy fluxes, and \((d,h,l)\) filtered eddy PV, relative vorticity and buoyancy fluxes, from one of the ensemble members. All data shown here are in units of m\({}^{2}\) s\({}^{-2}\).
eddy component (Maddison et al., 2015), the finding here suggests the predictions from models trained on the divergence of the eddy fluxes are very likely to represent erroneous energy transfers, particularly for processes associated with eddy buoyancy fluxes.
## 5 Model robustness
The above observations of model skill and its sensitivity to small-scale fluctuations bring into question the issue of robustness, particularly for the models trained on the divergence of the eddy fluxes. To explore the sensitivity of skill to noise in the data, we consider a set of experiments where we add noise \(\eta(x,y)\) to the data at the _training_ stage, and judge the models' performance on their ability to predict the target data without noise. To make sure we are comparing models in a consistent manner, we add an appropriately scaled Gaussian distributed noise \(\eta(x,y)\) to the eddy fluxes \((\overline{\mathbf{u}^{\prime}q^{\prime}},\overline{\mathbf{u}^{\prime}\zeta^{\prime}},\overline{\mathbf{u}^{\prime}b^{\prime}})\), from which we compute the divergence of the eddy flux as well as the eddy force function from the noisy data, and train up the models using the procedure outlined above. In that sense the whole set of models is exposed to the _same_ choice of noise, since 1 unit of noise at the divergence level is not necessarily the same as 1 unit of noise at the streamfunction level. The noise level here is measured in units of the standard deviation of the eddy flux data. The hypothesis is that the models trained on the filtered eddy fluxes are more robust than those trained on the divergence of the eddy fluxes, and able to maintain model skill with increased levels of noise.
A note to make here is that the stochastic noise \(\eta(x,y)\) in this regard is formally non-differentiable in space, so that the divergence operation on it is not well-defined. In terms of numerical implementation, however, the random numbers sampled from the appropriately scaled Gaussian distribution are the nodal values of the finite element mesh used in FEniCS, and there is a projection onto a linear basis, so that a derivative operation on the projected \(\eta(x,y)\) is allowed within FEniCS, though the operation may be numerically sensitive. The approach we take here is to filter the noise field: we solve for some \(\tilde{\eta}(x,y)\) satisfying
\[(1-L^{2}\nabla^{2})^{2}\tilde{\eta}=\eta \tag{14}\]
with no-flux boundary conditions, and it is the resulting \(\tilde{\eta}(x,y)\) that is added to the training data. The resulting \(\tilde{\eta}\) is by construction differentiable at least once so that a divergence is well-defined. For the operator \((1{-}L^{2}\nabla^{2})^{2}\), the associated Green's function has a characteristic length-scale \(L\) that can be interpreted as a filtering length-scale, where the radial spectral power density decreases significantly at scales smaller than \(L\) (closely related to the Matern auto-covariance, e.g. Whittle, 1963; Lindgren et al., 2018). Note that 'noise level' here refers to the magnitude of \(\eta(x,y)\), and that \(\max|\tilde{\eta}(x,y)|<\max|\eta(x,y)|\) by construction.
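One assumed way to implement this filtering step in FEniCS is to apply the operator \((1-L^{2}\nabla^{2})\) twice through two successive Helmholtz solves with natural (no-flux) boundary conditions; the sketch below illustrates this, with the mesh, length-scale value and variable names chosen for illustration only.

```python
# Minimal sketch, not the actual implementation used for the experiments.
from dolfin import (RectangleMesh, Point, FunctionSpace, TrialFunction,
                    TestFunction, Function, Constant, inner, grad, dx, solve)
import numpy as np

mesh = RectangleMesh(Point(0.0, 0.0), Point(3840.0e3, 3840.0e3), 511, 511)
V = FunctionSpace(mesh, "CG", 1)

L_filt = 75.0e3                      # filtering length-scale (illustrative)

eta = Function(V)                    # Gaussian noise at the mesh nodes
eta.vector().set_local(np.random.randn(eta.vector().get_local().size))

u, v = TrialFunction(V), TestFunction(V)
# Weak form of (1 - L^2 nabla^2) u = f with natural (no-flux) conditions
a = (u * v + Constant(L_filt ** 2) * inner(grad(u), grad(v))) * dx

w = Function(V)
solve(a == eta * v * dx, w)          # first Helmholtz solve
eta_tilde = Function(V)
solve(a == w * v * dx, eta_tilde)    # second solve applies the squared operator
```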
The \(L^{2}\) and \(\dot{H}^{-1/2}\) mismatches of \(F^{q/\zeta/b}_{\text{div/full/eff}}(\overline{\psi})\) to the data as a function of noise level for the ensemble of models are shown in Fig. 8, and consistently we find that the models trained up on the eddy force function outperform the models trained upon the divergence of the eddy flux. The former show a relative insensitivity to noise level, while the latter show a rapid degradation in skill with noise level. It would seem that the use of eddy force function data alleviates the sensitivity to small fluctuations in data, at least in the present measure and approach.
The reduced sensitivity to noise might have been anticipated, since the eddy force function is a result of an elliptic solve of a Poisson equation, where the noisy data is acted upon by an inverse Laplacian operator that leads to substantial smoothing. We would however argue that the relative insensitivity to noise is somewhat surprising, since there is no guarantee the presence of even reduced fluctuations at the streamfunction level would stay small after spatial derivative operations, since we are using the divergence of the eddy flux as the target for the measure of skill. While one could also argue that the present
robustness test is inherently a hard test for models trained upon the divergence of the eddy flux, we argue the conclusions are robust regardless of whether the noise is added at the flux, divergence of flux or streamfunction level. In fact, the use of the divergence of a flux as training data _is_ likely the cause for sensitivity to noise: an inherently small-scale field is sensitive to the presence of noise in the data, and so is likely to lead to issues with robustness.
The conclusions in the above are qualitatively robust for different choices of the filtering length-scale \(L\): with reduced \(L\), the degradation of skill in models trained on the divergence of the eddy fluxes is more rapid with noise level, but the skill of models trained on the filtered eddy fluxes is still relatively insensitive to noise level, and consistently more skillful than models trained on the divergence of the eddy fluxes. The conclusions are also robust for different choices of inputs (\(\overline{\zeta}\) and \(\overline{q}\)), and with sample calculations employing other choices of smoothing, coarse-graining (e.g., Aluie, 2019) or filtering (e.g., Grooms et al., 2021) of the noise field \(\eta(x,y)\).
## 6 Conclusions and outlooks
Data-driven methods are increasingly being employed in problems of Earth System Modeling, and there is no doubt that such methods provide a powerful tool that can in principle be leveraged to not only improve our modeling efforts, but also deepen our underlying understanding of the problems. Most works in the literature thus far have focused on demonstrating the efficacy of the machine-learning methods and algorithms.
Here we take a complementary line of investigation in considering the choice and quality of the data itself being fed into the algorithms, for a case where we have some theoretical understanding to inform our choice of data. While one could argue this is not en
Figure 8: Ensemble average and quartiles of the normalized (\(a\)) \(L^{2}\) norm (\(b\)) \(\dot{H}^{-1/2}\) semi-norm, given by Eq. (10) and (13) respectively, for the models predicting the divergence of the eddy (rows) PV flux, relative vorticity (cf. momentum) flux, and buoyancy flux as a function of noise level (in units of standard deviation of the training data), for models using the time-mean streamfunction \(\overline{\psi}\) as the input. Blue denotes models trained on the divergence of the eddy fluxes, orange denotes models trained on the full eddy fluxes, and green denotes models trained on the filtered eddy fluxes.
tirely necessary if we just want something that 'works' in the relevant metric(s) for the problem, we argue it is incredibly useful, if not necessary, if we want to leverage data-driven methods to learn about the underlying physical problems, and/or to go beyond 'black-box' models. Furthermore, the choice of data can in principle improve the training and/or the performance of the data-driven models themselves, so there is a need for such an investigation into data quality and information content.
For this work we focused on the problem of eddy-mean interaction in rotating stratified turbulence in the presence of boundaries, relevant to the modeling and parameterization of ocean dynamics. In such systems it is known that the large-scale mean affects and is affected by the small-scale eddy fluxes, and while we might want to leverage data-driven methods to learn about the relationship between the mean and the eddy fluxes, it is known that in the presence of boundaries the eddy feedback onto the mean is invariant up to a rotational gauge (e.g. J. C. Marshall & Shutts, 1981; Fox-Kemper et al., 2003; Eden et al., 2007). In practice the dynamically inert component could be quite large (e.g., Griesel et al., 2009; Fig. 1 here), and its presence might be expected to contaminate diagnoses and/or performance of data-driven models. One possible way around this is to train models based on its divergence (e.g., Bolton & Zanna, 2019; Zanna & Bolton, 2021). Here we propose that data with the dynamically inert eddy fluxes filtered out could be used instead. The approach outlined here, we argued, may have the advantage that the resulting field is inherently larger-scale, which would help with model training and sensitivity, and be theoretically more appropriate to use if we want to learn about the underlying physics of the problem, because we do not expect the operations to be commutative (i.e. given the nonlinearity, learning from the divergence is not guaranteed to be the same as the divergence of the learned result).
The experimental approach here largely follows that of Bolton and Zanna (2019), where we diagnose the various data from a quasi-geostrophic double gyre model to train the model, and compare the model's performance in its prediction. For filtering the eddy flux we employ the eddy force function (e.g. D. P. Marshall & Pillar, 2011; Maddison et al., 2015; Mak et al., 2016), which in the present simply connected quasi-geostrophic system is provably optimal in the \(L^{2}\) norm (and thus unique; see Appendix of Maddison et al., 2015). We made the choice here to measure a model's skill in being able to reproduce the divergence of the eddy fluxes over an ensemble of models with 20 members and over a variety of input choices. The findings here are that the models trained on the eddy force function are \((a)\) more skillful than those trained on the full eddy flux (except for the relative vorticity eddy fluxes), \((b)\) at least comparable in skill to models trained on the divergence of the eddy fluxes (except for the buoyancy eddy fluxes), and on occasion better, especially in the \(\dot{H}^{-1/2}\) semi-norm compared to the \(L^{2}\) norm, where the former emphasizes matching of the large-scale patterns of the resulting predictions, and \((c)\) more robust in that the models are less sensitive to noise in the training data. The first finding is perhaps not unexpected. The latter two findings we argue are not entirely obvious, given divergence operations acting at various steps. For example, sample calculations where a model is trained on the eddy force function directly (and then taking a Laplacian to obtain a prediction of the divergence of eddy flux) lead to larger mismatches, which we attribute to the fact that any mismatches in the predicted eddy force function are significantly amplified by the two derivative operations. With that in mind, the fact that the models reported here, trained on fluxes filtered via the eddy force function, achieve comparable or better skill and superior robustness is a non-trivial result.
Exceptions to the above conclusions are that models trained on the divergence of the eddy buoyancy flux are more skillful (bottom row of Fig. 6), and models trained on the eddy relative vorticity flux appear comparable whether the rotational component is filtered out or not (middle row of Fig. 6). The former might be justified in that the eddy buoyancy flux is already relatively smooth and somewhat larger-scale, so that training
on its divergence is not such an issue; however, we also note that the buoyancy eddy force functions associated with the predictions of the model trained on the divergence of the eddy buoyancy flux seem to perform the worst (bottom row of Fig. 7), implying erroneous predictions of eddy energy pathways. The latter behavior is possibly to do with the observation that the dynamically inert rotational component is comparable to the dynamically active divergent component in the eddy relative vorticity flux (as opposed to the divergent component being around a factor of ten smaller in the eddy PV and buoyancy flux; see Fig. 1\(b,c,e,f\) for the eddy PV flux), so the effect of filtering is somewhat marginal. One saving grace is that, in the quasi-geostrophic system, the potential vorticity (with contributions from relative vorticity and buoyancy) is the master variable, and that while models trained up on the relative vorticity or buoyancy fluxes perform better separately, the models trained up on the eddy force function have skill and robustness in the master variable. We note that the conclusions reported here appear to be robust even if we use data with only some of the rotational component filtered out in sample calculations (e.g., solving for \(\bar{\psi}\) in Eq. 6 with no normal flux boundary conditions, not shown), although we lose a little bit of skill and the physical interpretation associated with the eddy force function.
One thing we caution against here is drawing a one-to-one comparison of the present work with that of Bolton and Zanna (2019) and Zanna and Bolton (2021). While it is true those works utilize a similar model, experimental procedure and data, the main theoretical difference is that the choice of average is different: their work utilizes a spatial average, and the eddy flux data there is defined as the difference between the filtered divergence and the divergence of the filtered field (if making an assumption of the zero divergence condition on the resulting velocities). Here we utilize a _time_ average, which is in line with the definition of the eddy force function in Maddison et al. (2015), which requires a Reynolds average. While we have not attempted a similar investigation in the case of spatial averaging, it is not implausible that there is an analogous object to the eddy force function when a spatial average is employed, or that a simple Helmholtz-type decomposition could yield the desired filtering of the dynamically inert rotational component, but this is beyond the scope of the present investigation.
Because of the choice of time average, we have limited data in time, and one could wonder whether our conclusions are simply to do with the limited data availability. This is unlikely to be the case: we also carried out an analogous investigation with rolling time averages as well as _ensemble_ averages (not shown), and the conclusions drawn from those results are essentially identical to those here. This is perhaps not surprising noting that the rolling time averages for a long enough window and the ensemble averages show no strong deviations from each other, but we note this is likely only true for a sufficiently simple system with no strong evidence of internal modes of variability, such as the one employed here.
The main intention of the present work is to demonstrate that not all data choices are equal when fed to data-driven methods, and it is not always advisable to throw all the available data at the machine and trust that the machine will figure out what to do with it (although one could argue that might reduce the inherent biases). For the case of rotating stratified turbulence, the eddy force function is potentially a useful quantity if we aim to leverage data-driven methods for model skill or for learning about the underlying physics of the problem, given the various theoretical expectations highlighted in this work. Other choices may be possible: in a periodic domain often used in rotating turbulence studies (e.g., Frezat et al., 2022; Ross et al., 2023), a standard Helmholtz decomposition could be used to solve for the divergent component, although the eddy force function could still be used for physical interpretation. We note that while skill in reproducing eddy forcing is one target, we have not examined here the ability of the model to reproduce the mean state, and the present procedure might be termed an 'offline' approach. Learning 'online' (e.g., Frezat et al., 2022) may be more appropriate for param
eterization purposes to improve on the mean response, and it would be of interest to see whether filtering of the eddy flux as discussed here would confer any benefits to model learning.
The present work also highlights questions relating to information content of data. While quantifying absolute data information content is likely quite difficult, it should be at least possible to compute a relative measure, even if empirically. Preliminary investigation indicates that as the amount of data exposed to the machine learning algorithm is reduced, the accuracy of models trained upon the full eddy flux or the divergence of the eddy flux degrades much faster than models trained upon the eddy force function. One might ask an analogous question of the input data. The work of Bolton and Zanna (2019) suggests for example that training with data from regions with higher eddy kinetic energy leads to better model performance in terms of accuracy, suggestive of higher information content in said region. Within the present experimental framework, instead of training using all the data and performing a random sampling of the sub-regions considered in this work, we could consider instead not using all the data, and perform training based on a biased sampling that favor regions with higher eddy energy content, with the hypothesis that the latter case leads to models with higher accuracy from a statistical point of view. Further, we could investigate the case of multiple inputs, where we hypothesize that eddy energy and a mean state variable as inputs might lead to improved performance compared to say two mean state variables: in the current quasi-geostrophic setting, the mean state variables are functionally related to each other, possibly leading to redundant information, while the eddy energy might be dependent on the mean state, but is capturing eddy statistics instead and providing complementary information. This investigation is ongoing and will be reported elsewhere in due course.
## Data Availability Statement
This work utilizes FEniCS (2019.1.0), which is available as a Python package. The source code for the model (qgm2, from James Maddison), sample model data and scripts used for generating the plots in this article from the processed data are available through [http://dx.doi.org/10.5281/zenodo.8072817](http://dx.doi.org/10.5281/zenodo.8072817).
## Acknowledgments
This research was funded by both RGC General Research Fund 16304021 and the Center for Ocean Research in Hong Kong and Macau, a joint research center between the Qingdao National Laboratory for Marine Science and Technology and Hong Kong University of Science and Technology. We thank James Maddison and Liiyung Yeow for various scientific and technical comments in relation to the present investigation, and the former for providing the qgm2 code for use in the present work.
|
2302.05092 | Eadro: An End-to-End Troubleshooting Framework for Microservices on
Multi-source Data | The complexity and dynamism of microservices pose significant challenges to
system reliability, and thereby, automated troubleshooting is crucial.
Effective root cause localization after anomaly detection is crucial for
ensuring the reliability of microservice systems. However, two significant
issues rest in existing approaches: (1) Microservices generate traces, system
logs, and key performance indicators (KPIs), but existing approaches usually
consider traces only, failing to understand the system fully as traces cannot
depict all anomalies; (2) Troubleshooting microservices generally contains two
main phases, i.e., anomaly detection and root cause localization. Existing
studies regard these two phases as independent, ignoring their close
correlation. Even worse, inaccurate detection results can deeply affect
localization effectiveness. To overcome these limitations, we propose Eadro,
the first end-to-end framework to integrate anomaly detection and root cause
localization based on multi-source data for troubleshooting large-scale
microservices. The key insights of Eadro are the anomaly manifestations on
different data sources and the close connection between detection and
localization. Thus, Eadro models intra-service behaviors and inter-service
dependencies from traces, logs, and KPIs, all the while leveraging the shared
knowledge of the two phases via multi-task learning. Experiments on two
widely-used benchmark microservices demonstrate that Eadro outperforms
state-of-the-art approaches by a large margin. The results also show the
usefulness of integrating multi-source data. We also release our code and data
to facilitate future research. | Cheryl Lee, Tianyi Yang, Zhuangbin Chen, Yuxin Su, Michael R. Lyu | 2023-02-10T07:22:23Z | http://arxiv.org/abs/2302.05092v1 | # Eadro: An End-to-End Troubleshooting Framework for Microservices on Multi-source Data
###### Abstract
The complexity and dynamism of microservices pose significant challenges to system reliability, and thereby, automated troubleshooting is crucial. Effective root cause localization after anomaly detection is crucial for ensuring the reliability of microservice systems. However, two significant issues rest in existing approaches: (1) Microservices generate traces, system logs, and key performance indicators (KPIs), but existing approaches usually consider traces only, failing to understand the system fully as traces cannot depict all anomalies; (2) Troubleshooting microservices generally contains two main phases, i.e., anomaly detection and root cause localization. Existing studies regard these two phases as independent, ignoring their close correlation. Even worse, inaccurate detection results can deeply affect localization effectiveness. To overcome these limitations, we propose _Eadro_, the first end-to-end framework to integrate anomaly detection and root cause localization based on multi-source data for troubleshooting large-scale microservices. The key insights of Eadro are the anomaly manifestations on different data sources and the close connection between detection and localization. Thus, Eadro models intra-service behaviors and inter-service dependencies from traces, logs, and KPIs, all the while leveraging the shared knowledge of the two phases via multi-task learning. Experiments on two widely-used benchmark microservices demonstrate that Eadro outperforms state-of-the-art approaches by a large margin. The results also show the usefulness of integrating multi-source data. We also release our code and data to facilitate future research.
Microservices, Root Cause Localization, Anomaly Detection, Traces
## I Introduction
Microservice systems are increasingly appealing to cloud-native enterprise applications for several reasons, including resource flexibility, loosely-coupled architecture, and lightweight deployment [1]. However, anomalies are inevitable in microservices due to their complexity and dynamism. An anomaly in one microservice could propagate to others and magnify its impact, resulting in considerable revenue and reputation loss for companies [2]. Figure 1 shows an example where a failure in one microservice may delay all microservices on the invocation chain.
Therefore, developers must closely monitor the microservice status via run-time information (e.g., traces, system logs, and KPIs) to discover and tackle potential failures as early as possible. Yet, thousands of microservices are usually running on distributed machines in a large-scale industrial microservice system. As each microservice can launch multiple instances, a system can produce billions of run-time records per day [1, 2]. The explosion of monitoring data makes automated troubleshooting techniques imperative.
Many efforts have been devoted to this end, focusing either on anomaly detection [3, 4, 5] or on root cause localization [6, 7, 8, 9, 10, 11]. Anomaly detection tells whether an anomaly exists, and root cause localization identifies the culprit microservice upon the existence of an anomaly. Previous approaches usually leverage statistical models or machine learning techniques to mine information from traces, as traces profile and monitor microservice executions and record essential inter-service information (e.g., request duration). However, we identify two main limitations of the existing troubleshooting approaches.
(1) _Insufficient exploitation of monitoring data_: unlike operation teams, which pay close attention to diverse sources of run-time information, existing research relies heavily on traces and exploits other data sources insufficiently. This gap stems from the complexity of multi-source data analysis, which is much harder than single-source data analysis, as multi-source data is heterogeneous, frequently interacting, and very large [12]. However, on the one hand, traces contain important information for troubleshooting but are insufficient to reveal all typical types of anomalies. On the other hand, different types of data, such as logs and KPIs, can reveal anomalies collaboratively and bring more clues about potential failures. For example, a CPU exhaustion fault can cause abnormally high values in the CPU usage indicator and trigger warnings recorded in logs, but the traces may not exhibit abnormal patterns (such as high latency).
(2) _Disconnection in closely related tasks_: Generally, root cause localization follows anomaly detection since we must discover an anomaly before analyzing it. Current studies of microservice reliability regard the two phases as independent, despite their shared inputs and knowledge about the
Fig. 1: A failure in “order” indirectly delays other microservices on the invocation chain, while microservices off the chain are unaffected.
microservice status. Existing approaches usually deal with the same inputs redundantly and waste the rich correlation information between anomaly detection and root cause localization. Moreover, the contradiction between computing efficiency and accuracy limits the simple combination of state-of-the-art anomaly detectors and root cause localizers. In a two-stage troubleshooting approach, running an advanced anomaly detector before analyzing the root cause generally introduces too much delay. Thus, root cause localization-focused studies usually apply oversimplified anomaly detectors (e.g., N-sigma), and unfortunately, the resulting detection outputs can contain many noisy labels and thereby affect the effectiveness of downstream root cause localization.
To overcome the above limitations, we propose **Eadro**, the first End-to-end framework integrating Anomaly Detection and Root cause lOcalization to troubleshoot microservice systems based on multi-source monitoring data. The key ideas are 1) learning discriminative representations of the microservice status via multi-modal learning and 2) forcing the model to learn fundamental features revealing anomalies via multi-task learning. Therefore, Eadro can fully exploit meaningful information from different data sources that can all manifest anomalies. Also, it allows information to be input once and used for both anomaly detection and root cause localization, and it avoids incorrect detection results hindering the subsequent root cause localization.
Specifically, Eadro consists of three components: _(1) Modal-wise learning_ contains modality-specific modules for learning intra-service behaviors from logs, KPIs, and traces. We apply a Hawkes process [13] and a fully connected (FC) layer to model the log event occurrences. KPIs are fed into a dilated causal convolution (DCC) layer [14] to learn temporal dependencies and inter-series associations. We also use DCC to capture meaningful fluctuations of latency in traces, such as extremely high values. _(2) Dependency-aware status learning_ aims to model the intra- and inter-dependencies between microservices. It first fuses the multi-modal representations via gated concatenation and feeds the fused representation into a graph attention network (GAT), where the topological dependency is built on historical invocations. _(3) Joint detection and localization_ contains an anomaly detector and a root cause localizer sharing representations and an objective. It predicts the existence of anomalies and the probability of each microservice being the culprit upon an anomaly alarm.
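As an illustration of the dilated causal convolution used for the KPI and latency modalities, a minimal PyTorch sketch is given below; the channel sizes, kernel size and dilation schedule are assumptions for illustration, not the configuration used by Eadro.

```python
import torch
import torch.nn as nn

class DilatedCausalConv(nn.Module):
    """One dilated causal convolution block over a (batch, channels, time) series."""
    def __init__(self, in_channels, out_channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation      # left-pad only => causal
        self.conv = nn.Conv1d(in_channels, out_channels,
                              kernel_size, dilation=dilation)

    def forward(self, x):
        x = nn.functional.pad(x, (self.pad, 0))
        return torch.relu(self.conv(x))

# Example: k KPI indicators over a window of T steps for one microservice
k, T = 4, 60
kpis = torch.randn(8, k, T)                          # batch of 8 windows
block = nn.Sequential(DilatedCausalConv(k, 16, dilation=1),
                      DilatedCausalConv(16, 16, dilation=2))
features = block(kpis)                               # -> (8, 16, T)
```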
Experimental results on two datasets collected from two widely-used benchmark microservice systems demonstrate the effectiveness of Eadro. For anomaly detection, Eadro surpasses all compared approaches by a large margin in _F1_ (53.82%-92.68%), and also increases _F1_ by 11.47% on average compared to our derived multi-source data-based methods. For root cause localization, Eadro achieves state-of-the-art results with 290%-5068% higher in _HR@1_ (Top-1 Hit Rate) than five advanced baselines and outperforms our derived methods by 43.06% in _HR@1_ on average. An extensive ablation study further confirms the contributions of modeling different data sources.
Our main contributions are highlighted as follows:
* We identify two limitations of existing approaches for troubleshooting microservices, motivated by which we are the first to explore the opportunity and necessity to integrate anomaly detection and root cause localization, as well as exploit logs, KPIs, and traces together.
* We propose the first end-to-end troubleshooting framework (Eadro) to jointly conduct anomaly detection and root cause localization for microservices based on multi-source data. Eadro models intra-service behaviors and inter-service dependencies.
* We conduct extensive experiments on two benchmark datasets. The results demonstrate that Eadro outperforms all compared approaches, including state-of-the-art approaches and derived multi-source baselines on both anomaly detection and root cause localization. We also conduct ablation studies to further validate the contributions of different data sources.
* Our code and data 1 are made public for practitioners to adopt, replicate or extend Eadro. Footnote 1: [https://github.com/BEbillionaireUSD/Eadro](https://github.com/BEbillionaireUSD/Eadro)
## II Problem Statement
This section introduces important terminologies and defines the problem of integrating anomaly detection and root cause localization with the same inputs.
### _Terminologies_
Traces record the process of the microservice system responding to a user request (e.g., clicking "create an order" on an online shopping website). Different microservice instances then conduct a series of small actions to respond to the request. For example, the request "create an order" may contain the steps "create an order in pending", "reserve credit", and "update the order state." A microservice (caller) can _invoke_ another microservice (callee) to conduct the following action (e.g., microservice "Query" asks microservice "Check" to check the order after finishing the action "query the stock of goods"), and the callee will return the result of the action to the caller. We call this process an _invocation_. The time consumed by the whole invocation (i.e., from initializing the invocation to returning the result) is called the invocation _latency_, including the request processing time inside a microservice and the time spent on communication between the caller and the callee. A _trace_ records the information during the processing of a user request [15] (including multiple invocations), such as the invocation latency, the total time of processing the request, the HTTP response code, etc.
Meanwhile, system logs are generated when system events are triggered. A _log message_ (or _log_ for short) is a line of the standard output of logging statements, composed of constant strings (written by developers) and variable values (determined by the system) [16]. If the variable values are removed, the remaining constant strings constitute a _log event_. _KPIs_ are the numerical measurements of system performance (e.g., disk I/O
rate) and the usage of resources (e.g., CPU, memory, disk) that are sampled uniformly.
### _Problem Formulation_
Consider a large-scale system with \(M\) microservices, where system logs, KPIs, and traces are aggregated individually at each microservice. In a \(T\)-length observation window (data obtained in a window constitute a sample), we have multi-source data defined as \(\mathbf{X}=\{(\mathbf{X}_{m}^{\mathcal{L}},\mathbf{X}_{m}^{\mathcal{K}},\mathbf{X}_{m}^{\mathcal{T}})\}_{m=1}^{M}\), where at the \(m\)-th microservice, \(\mathbf{X}_{m}^{\mathcal{L}}\) represents the log events chronologically arranged; \(\mathbf{X}_{m}^{\mathcal{K}}\) is a multivariate time series consisting of \(k\) indicators; \(\mathbf{X}_{m}^{\mathcal{T}}\) denotes the trace records. Our work attempts to build an end-to-end framework achieving a two-stage goal: given \(\mathbf{X}_{[1:M]}\), the framework predicts the existence of anomalies, denoted by \(y\), a binary indicator represented as 0 (normal) or 1 (abnormal). If \(y\) equals one, a localizer is triggered to estimate the probability of each microservice being the culprit, denoted by \(\mathbf{P}=[p_{1}\cdots p_{M}]\in[0,1]^{M}\). The framework is built on a parameterized model \(\mathcal{F}:\mathbf{X}\rightarrow(y,\mathbf{P})\).
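The two-headed output implied by this formulation can be sketched as follows; this is a hypothetical illustration of the joint mapping \(\mathcal{F}:\mathbf{X}\rightarrow(y,\mathbf{P})\) with a shared status representation, not Eadro's actual architecture.

```python
import torch
import torch.nn as nn

class JointHead(nn.Module):
    """Shared representation -> anomaly flag y and root-cause distribution P."""
    def __init__(self, hidden_dim, num_services):
        super().__init__()
        self.detector = nn.Linear(hidden_dim, 1)              # predicts y
        self.localizer = nn.Linear(hidden_dim, num_services)  # predicts P

    def forward(self, status):               # status: (batch, hidden_dim)
        y_prob = torch.sigmoid(self.detector(status)).squeeze(-1)
        p = torch.softmax(self.localizer(status), dim=-1)
        return y_prob, p
```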
## III Motivation
This section introduces the motivation for this work, which aims to achieve effective root cause localization by jointly integrating an accurate anomaly detector and by being driven by multi-source monitoring data. The examples are taken from data collected from a benchmark microservice system, TrainTicket [17]. Details about data collection will be introduced in § V-A.
### _Can different sources of data besides traces be helpful?_
We find that _traces are insufficient to reveal all potential faults despite their wide usage._ Most, if not all, previous related works [18, 19, 20, 21, 3, 4, 7, 18, 22] are trace-based, indicating traces are informative and valuable. However, traces focus on recording interactions between microservices and provide a holistic view of the system in practice. Such high-level information only enables basic queries for coarse-grained information rather than intra-service information. For example, latency or error rate in traces can suggest a microservice's availability, yet fine-grained information like memory usage reflecting the intra-service status is unknowable. This is consistent with our observation that latency is sensitive to network-related issues but cannot adequately reflect resource exhaustion-related anomalies. Figure 2 shows an example where a point denotes an invocation taking the microservice "travel" as the callee. When Network Jam or Packet Loss is injected, the latency is abnormally high (marked with stars), but the latency during the CPU exhaustion injection period does not display obviously abnormal patterns. This case reminds us to be careful of relying on traces only. Since traces are informative but cannot reveal all anomalies, trace-based methods may omit potential failures. We need extra information to mitigate the anomaly omission problem.
We also notice that _system logs and KPIs provide valuable information manifesting anomalies in microservices._
As for logs, we first parse all logs into events via Drain [22], a popular log parser showing effectiveness in many studies [16, 23]. It is evident that some logs can report anomalies semantically by including keywords such as "exception", "fail", and "errors". The event "Exception in monitor thread while connecting to server \(<\)*>." can be a good example.
Event occurrences can also manifest anomalies besides semantics. Take the event "Route id: \(<\)*>" recorded by the microservice "route" as an example. This event occurs when the microservice completes the routing request. Figure 3 shows that when network-related faults are injected, the example event's occurrence experiences a sudden drop and remains at low values. The reason is that the routing invocations become less since the communication between "route" and its parent microservices (callers) is blocked. This case further supports our intuition that system logs can provide clues about microservice anomalies.
KPIs are responsive to anomalies by continuously recording run-time information. An example in Figure 4 gives a closer look, which displays "total CPU usage" of microservice "payment" during the period covering fault injections. Clearly, "total CPU usage" responds to the fault CPU exhaustion by showing irregular jitters and abnormally high values. This observation aligns with our a priori knowledge that KPIs provide an external view of a microservice's resource usage and performance. Their fine-grained information can well reflect anomalies, especially resource-related issues, which require detailed analysis.
However, only using logs and KPIs is not sufficient since they are generated by each microservice individually at a local level. As the example in Figure 1 (§ I) shows, we need traces to obtain inter-service dependencies, analyze the anomaly propagation, and thus draw a global picture of the system to locate the root cause.
Fig. 2: Network-related faults incur obvious anomalies in latency of “travel”, but the CPU exhaustion fault does not.
Fig. 3: The occurrences of related logs can reflect issues such as poor communication.
Fig. 4: A CPU exhaustion fault incurs abnormal jitters and high values in “total CPU usage”.
Traces are informative yet not sufficient to reflect all anomalies. System logs and metrics provide valuable information manifesting anomalies by presenting abnormal patterns, so they can serve as additional information.
### _Can current anomaly detectors provide accurate results?_
This section demonstrates that _current detectors attached with localizers cannot deliver satisfying accuracy_.
As far as we know, existing root cause localization approaches for microservices follow a two-stage pipeline: 1) conduct anomaly detection, and 2) if an anomaly alarm is raised, the localizer is triggered. That is, the anomaly detector and root cause localizer work separately. Unfortunately, incorrect anomaly detection results can exert a negative impact on the following root cause localization by introducing noisy labels. To investigate whether current anomaly detectors are satisfactory for downstream localizers, we first summarize three main kinds of anomaly detection approaches used in root cause localization papers. Note that since this paper targets root cause localization, the listed approaches are root cause localization-oriented anomaly detectors rather than sophisticated approaches for general anomaly detection.
* N-sigma, used in [20, 21], computes the mean (\(\mu\)) and the standard deviation (\(\sigma\)) of historical fault-free data. If the maximum latency of the current observation window is larger than \(\mu+n\cdot\sigma\), an alarm will be triggered, where \(n\) is an empirical parameter (a minimal code sketch follows this list).
* Feature engineering + machine learning (FE+ML) [9, 24] feeds manually derived features from traces into a machine learning-based model such as OC-SVM [9] to detect anomalies in a one-class-classification manner.
* SPOT [25] is an advanced algorithm for time series anomaly detection based on the Extreme Value Theory. Recent root cause analysis studies [6, 7] have applied it for detecting anomalies.
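To illustrate the simplest of these detectors, the sketch below (an assumption about typical usage, taking latency arrays as input) implements the N-sigma rule from the first bullet:

```python
# A minimal sketch of the N-sigma detector: alarm when the window's maximum latency
# exceeds mu + n * sigma computed from historical fault-free latency.
import numpy as np

def n_sigma_alarm(history_latency: np.ndarray, window_latency: np.ndarray, n: float = 3.0) -> bool:
    mu, sigma = history_latency.mean(), history_latency.std()
    return bool(window_latency.max() > mu + n * sigma)
```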
We conduct effectiveness measurement experiments based on our data on the three anomaly detectors following [6, 9, 21], respectively. We focus on the false omission rate (_FOR_\(=\frac{FN}{FN+TN}\)) and the false discovery rate (_FDR_\(=\frac{FP}{FP+TP}\)), where \(TN\) is the number of successfully predicted normal samples; \(FN\) is the number of undetected anomalies; \(FP\) is the number of normal samples incorrectly triggering alarms. Besides, #Infer/ms denotes the average inference time in milliseconds.
Table I lists the experimental results, demonstrating substantial room for improvement in these anomaly detectors. The high _FOR_ and _FDR_ indicate that the inputs of the root cause localizer contain lots of noisy labels, thereby substantially influencing localization performance. We attribute this partly to the closed-world assumption relied on by these methods, that is, regarding normal but unseen data patterns as abnormal, thereby incorrectly forcing the downstream localizer to search for the nonexistent root cause based on normal data. Also, latency is insufficient to reveal all anomalies, as stated before, especially those that do not severely delay inter-service communications, as reflected by the high _FOR_.
In addition, complex methods (FE+ML and SPOT) have better effectiveness than N-sigma yet burden the troubleshooting process by introducing extra computation. Since root cause localization requires anomaly detection first, the detector must be lightweight to mitigate the efficiency reduction. Even worse, these machine learning-based approaches require extra hyperparameter tuning, making the entire troubleshooting approach less practical.
Root cause localization requires anomalous data detected by anomaly detectors, but current localization-oriented detectors either deliver unsatisfactory accuracy and introduce noisy data or reduce efficiency, making the following localization troublesome.
In summary, these examples motivate us to design an end-to-end framework that integrates effective anomaly detection and root cause localization in microservices based on multi-source information, i.e., logs, KPIs, and traces. Logs, KPIs, and latency in traces provide local information on intra-service behaviors, while invocation chains recorded in traces depict the interactions between microservices, thereby providing a global view of the system status. This results in Eadro, the first work to enable jointly detecting anomalies and locating the root cause, all the while attacking the above-mentioned limitations by learning the microservice status concerning both intra- and inter-service properties from various types of data.
## IV Methodology
The core idea of Eadro is to learn the intra-service behaviors based on multi-modal data and capture dependencies between microservices to infer a comprehensive picture of the system status. Figure 5 displays the overview of Eadro, containing three phases: _modal-wise learning_, _dependency-aware status learning_, and _joint detection and localization_.
### _Modal-wise Learning_
This phase aims to model the different sources of monitoring data individually. We apply modality-specific models to learn an informative representation for each modality.
#### IV-A1 Log Event Learning
We observe that both log semantics and event occurrences can reflect anomalies (§ III-A), yet we herein focus on event occurrences for two reasons: 1) the logging behavior of microservices highly relies on the
developers' expertise, so the quality of log semantics cannot be guaranteed [16]; 2) the complexity of microservices necessitates lightweight techniques. As semantic extraction requires computation-intensive natural language processing technologies, log semantic-based methods may pose challenges in practical usage.
Therefore, we focus on modeling the occurrences of log events instead of log semantics. An insight facilitates this modeling: a past occurrence of an event increases the likelihood of the same event occurring in the near future, which fits the assumption of a self-exciting process [26]. Hence, we propose to adopt the Hawkes process [13], a kind of self-exciting point process, to model the event occurrences, which is defined by the conditional intensity function:
\[\lambda_{l}^{*}(t)=\mu_{l}(t)+\sum_{\tau<t}\phi_{l}(t-\tau) \tag{1}\]
where \(l=1,...,L\) and \(L\) is the number of event types; for the \(l\)-th event, \(\mu_{l}\) is an estimated baseline parameter and \(\phi_{l}(\cdot)\) is a user-defined triggering kernel function. We use an exponential parametrisation of the kernels herein following [27]: \(\phi_{l}(t)=\alpha_{l}\beta\exp(-\beta t)\) for \(t>0\), where \(\alpha_{1},\cdots,\alpha_{L}\) are estimated parameters and \(\beta\) is a hyper-parameter.
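The following sketch (plain NumPy, treating the baseline \(\mu_{l}\) as a constant, which is an assumption rather than the Tick implementation) evaluates the conditional intensity of Equation 1 with the exponential kernel:

```python
# A minimal sketch of lambda*_l(t) = mu_l + sum_{tau < t} alpha_l * beta * exp(-beta * (t - tau)).
import numpy as np

def hawkes_intensity(t: float, event_times: np.ndarray, mu: float, alpha: float, beta: float) -> float:
    past = event_times[event_times < t]          # only events strictly before t excite the intensity
    return mu + alpha * beta * np.sum(np.exp(-beta * (t - past)))
```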
In brief, log learning is done in a three-step fashion:
1. Parsing: Eadro starts with parsing logs into events via Drain [22] by removing variables in log messages.
2. Estimating: we then record the timestamps of event occurrences (relative to the starting timestamp of the observation window) to estimate the parameters of the Hawkes model with an exponential decay kernel. The estimation is implemented via an open-source toolkit Tick [28]. In this way, events \(\mathbf{X}^{\mathcal{L}}\) at each microservice inside a window are transformed into an intensity vector \(\mathbf{\Lambda}=[\lambda_{1}^{*},\cdots,\lambda_{L}^{*}]\in\mathbb{R}^{L}\).
3. Embedding: the intensity vector \(\mathbf{\Lambda}\) is embedded into a dense vector \(\mathbf{\boldsymbol{H}}^{\mathcal{L}}\in\mathbb{R}^{E^{\mathcal{L}}}\) in the latent space via a fully connected layer with the hidden size of \(E^{\mathcal{L}}\).
#### IV-A2 KPI Learning
We first organize the KPIs \(\mathbf{X}^{\mathcal{K}}\) with \(k\) indicators of each microservice into a \(k\)-variate time series with the length of \(T\). Then we use a 1D dilated causal convolution (DCC) [14] layer that is lightweight and parallelizable to learn the temporal dependencies and cross-series relations of KPIs. Previous studies have demonstrated DCC's computational efficiency and accuracy in feature extraction of time series [29]. Afterward, we apply a self-attention [30] operation to compute more reasonable representations, and the attention weights are as computed in Equation 2.
\[Attn(X)=\mathsf{softmax}\left(\frac{W_{q}X\cdot(W_{k}X)^{\mathrm{T}}}{\sqrt{d}}\right)(W_{v}X) \tag{2}\]
where \(W_{q}\), \(W_{k}\), and \(W_{v}\) are learnable parameters, and \(d\) is an empirical scaling factor. This phase outputs \(\mathbf{\boldsymbol{H}}^{\mathcal{K}}\in\mathbb{R}^{E^{\mathcal{K}}}\) representing KPIs, where \(E^{\mathcal{K}}\) is the number of convolution filters.
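A minimal PyTorch sketch of this step is given below; it is one plausible realization (the exact layer sizes, pooling, and padding scheme are assumptions), combining a left-padded, hence causal, dilated 1D convolution with the self-attention of Equation 2:

```python
# A sketch of KPI learning: 1D dilated causal convolution followed by scaled dot-product self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KPIEncoder(nn.Module):
    def __init__(self, k_indicators: int, filters: int = 64, kernel: int = 3, dilation: int = 2):
        super().__init__()
        self.pad = (kernel - 1) * dilation               # left padding keeps the convolution causal
        self.conv = nn.Conv1d(k_indicators, filters, kernel, dilation=dilation)
        self.w_q = nn.Linear(filters, filters, bias=False)
        self.w_k = nn.Linear(filters, filters, bias=False)
        self.w_v = nn.Linear(filters, filters, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:         # x: (batch, T, k)
        h = self.conv(F.pad(x.transpose(1, 2), (self.pad, 0)))  # (batch, filters, T)
        h = h.transpose(1, 2)                                    # (batch, T, filters)
        q, k, v = self.w_q(h), self.w_k(h), self.w_v(h)
        attn = torch.softmax(q @ k.transpose(1, 2) / (h.size(-1) ** 0.5), dim=-1)
        return (attn @ v).mean(dim=1)                            # H^K: (batch, filters), mean-pooled over time
```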
#### IV-A3 Trace Learning
Inspired by previous works [3, 6, 31], we extract latency from trace files and transform it into a time series by calculating the average latency at a time slot for each callee. We obtain a \(T\)-length univariate latency time series at each microservice (i.e., callee). Similarly, the latency time series is fed into a 1D DCC layer followed by a self-attention operation to learn the latent representation \(\mathbf{\boldsymbol{H}}^{\mathcal{T}}\in\mathbb{R}^{E^{\mathcal{T}}}\), where \(E^{\mathcal{T}}\) is the pre-defined number of filters. Note that we simply pad time slots without corresponding invocations with zeros.
### _Dependency-aware Status Learning_
In this phase, we aim to learn microservices' overall status and draw a comprehensive picture of the system. This module consists of three steps: dependency graph construction, multi-modal fusion, and dependency graph modeling. We first extract a directional graph depicting the relationships among microservices from historical traces. Afterward, we fuse the multi-modal representations obtained from the previous phases into latent node embeddings to represent the service-level status. Messages within the constructed graph will be propagated through a graph neural network so as to learn the neighboring dependencies represented in the edge weights. Eventually, we can obtain a dependency-aware representation representing the overall status of the microservice system.
#### IV-B1 Dependency Graph Construction
By regarding microservices as nodes and invocations as directional edges, we can extract a dependency graph \(\mathcal{G}=\{\mathbb{V},\mathbb{E}\}\) from historical traces to depict the dependencies between microservices. Specifically, \(\mathbb{V}\) is the node set and \(|\mathbb{V}|=M\), where \(M\) is the number of microservices; \(\mathbb{E}\) is the set of edges, and \(\vec{e}_{a,b}=(v_{a},v_{b})\in\mathbb{E}\) denotes an edge directed from \(v_{a}\) to \(v_{b}\), that is, \(v_{b}\) has invoked \(v_{a}\) at least once in the history.
Fig. 5: Overview of Eadro
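A minimal sketch of this construction is shown below; it assumes (hypothetically) that each historical trace record exposes its caller and callee identifiers:

```python
# A sketch of extracting the dependency graph: edge (a, b) means microservice b has invoked a at least once.
from typing import Dict, Iterable, Set, Tuple

def build_dependency_graph(trace_records: Iterable[Dict[str, str]]) -> Set[Tuple[str, str]]:
    edges: Set[Tuple[str, str]] = set()
    for rec in trace_records:                      # rec: {"caller": ..., "callee": ...} (assumed fields)
        edges.add((rec["callee"], rec["caller"]))  # e_{a,b} with a = callee, b = caller
    return edges
```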
#### IV-B2 Multi-modal Fusion
In general, there are three fusion strategies [32]: early fusion carried out at the input level, intermediate fusion for fusing cross-modal representations, and late fusion at the decision level (e.g., voting). Research in cross-modal learning [33, 34] and neuroscience [35, 36] suggests that intermediate fusion usually facilitates modeling, so we transform single-modal representations to a compact multi-modal representation via intermediate fusion.
The fusion contains two steps:
1. We concatenate (\([\cdot||\cdot]\)) all representations of each microservice obtained from the previous phase to retain exhaustive information. The resulting vector \([\mathbf{H}^{\mathcal{L}}||\mathbf{H}^{\mathcal{K}}||\mathbf{H}^{\mathcal{T}}]\) is subsequently fed into a fully connected layer to be projected into a lower-dimensional space, denoted by \(\mathbf{H}^{\prime\mathcal{S}}\in\mathbb{R}^{2E}\), where \(2E<E^{\mathcal{L}}+E^{\mathcal{K}}+E^{\mathcal{T}}\) is an even number.
2. \(\mathbf{H}^{\prime\mathcal{S}}\) passes through a Gated Linear Unit (GLU) [37] to fuse representations in a non-linear manner and filter potential redundancy. GLU controls the bandwidth of information flow and diminishes the vanishing gradient problem. It also possesses extraordinary resilience to catastrophic forgetting. As we have massive data and complex stacked neural layers, GLU fits our scenario well. The computation follows \(\mathbf{H}^{\mathcal{S}}=GLU(\mathbf{H}^{\prime\mathcal{S}})=\mathbf{H}^{\prime\mathcal{S}}_{(1)}\otimes\sigma(\mathbf{H}^{\prime\mathcal{S}}_{(2)})\), where \(\mathbf{H}^{\prime\mathcal{S}}_{(1)}\) is the first half of \(\mathbf{H}^{\prime\mathcal{S}}\) and \(\mathbf{H}^{\prime\mathcal{S}}_{(2)}\) is the second half; \(\otimes\) denotes element-wise product, and \(\sigma\) is a sigmoid function.
Finally, we obtain \(\mathbf{H}^{\mathcal{S}}\in\mathbb{R}^{E}\), a service-level representation of each microservice.
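The two fusion steps can be sketched in PyTorch as follows; the layer dimensions are illustrative assumptions:

```python
# A sketch of intermediate fusion: concatenate modality representations, project to 2E, then apply a GLU.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Fusion(nn.Module):
    def __init__(self, e_log: int, e_kpi: int, e_trace: int, e_out: int):
        super().__init__()
        self.proj = nn.Linear(e_log + e_kpi + e_trace, 2 * e_out)   # step 1: concat + linear projection

    def forward(self, h_log, h_kpi, h_trace):
        h = self.proj(torch.cat([h_log, h_kpi, h_trace], dim=-1))   # H'^S in R^{2E}
        return F.glu(h, dim=-1)    # step 2: H^S = H'^S_(1) * sigmoid(H'^S_(2)), in R^E
```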
#### IV-B3 Dependency Graph Learning
As interactions between microservices can be naturally described by dependency graphs, we apply graph neural networks to perform triage inference. Particularly, we employ Graph Attention Network (GAT) [38] to learn the dependency-aware status of the microservice system. GAT enables learning node and edge representations and dynamically assigns weights to neighbors without requiring computation-consuming spectral decompositions. Hence, the model can pay attention to microservices with abnormal behaviors or at the communication hub.
The local representation \(\mathbf{H}^{\mathcal{S}}\) serves as the node feature, and GAT learns the whole graph's representation, where dynamic weights of edges are computed as Equation 3.
\[\omega_{a,b}=\frac{\exp(\text{LeakyReLU}(v^{\text{T}}[\mathbf{W}\mathbf{H}^{\mathcal{S}}_{a}||\mathbf{W}\mathbf{H}^{\mathcal{S}}_{b}]))}{\sum_{k\in\mathbb{N}_{a}}\exp(\text{LeakyReLU}(v^{\text{T}}[\mathbf{W}\mathbf{H}^{\mathcal{S}}_{a}||\mathbf{W}\mathbf{H}^{\mathcal{S}}_{k}]))} \tag{3}\]
where \(\omega_{a,b}\) is the computed weight of edge \(\vec{e}_{a,b}\); \(\mathbb{N}_{a}\) is the set of neighbor nodes of node \(a\); \(\mathbf{H}^{\mathcal{S}}_{a}\) is the input node feature of \(a\); \(\mathbf{W}\in\mathbb{R}^{E^{\mathcal{G}}\times E}\) and \(v\in\mathbb{R}^{2E^{\mathcal{G}}}\) are learnable parameters. \(E^{\mathcal{G}}\) is the dimension of the output node representation, which is computed as \(\mathbf{\hat{H}}^{\mathcal{S}}_{a}=\psi(\sum_{b\in\mathbb{N}_{a}}\omega_{a,b}\mathbf{W}\mathbf{H}^{\mathcal{S}}_{b})\), where \(\psi(\cdot)\) is a customized activation function, usually ReLU. Eventually, we perform global attention pooling [39] on the multi-modal representations of all nodes. The final output is \(\mathbf{H}^{\mathcal{F}}\in\mathbb{R}^{E^{\mathcal{F}}}\), a dependency-aware representation of the overall system status.
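A compact sketch of this phase, assuming PyTorch Geometric's `GATConv` is available and replacing the global attention pooling [39] with a simplified hand-written gate, is:

```python
# A sketch of dependency-aware status learning: GAT over the dependency graph, then attention pooling.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv

class StatusLearner(nn.Module):
    def __init__(self, e_in: int, e_graph: int, heads: int = 4):
        super().__init__()
        self.gat = GATConv(e_in, e_graph, heads=heads, concat=False)  # averages the attention heads
        self.gate = nn.Linear(e_graph, 1)                              # scores each node for pooling

    def forward(self, h_nodes: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.gat(h_nodes, edge_index))   # node embeddings, shape (M, e_graph)
        w = torch.softmax(self.gate(h), dim=0)          # attention weights over microservices
        return (w * h).sum(dim=0)                       # H^F: dependency-aware system representation
```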
### _Joint Detection and Localization_
Lastly, Eadro predicts whether the current observation window is abnormal and, if so, identifies which microservice is the root cause. As demonstrated in § III-B, existing troubleshooting methods regard anomaly detection and root cause localization as independent and ignore their shared knowledge. Besides, current anomaly detectors deliver unsatisfactory results and affect the next-stage localization by incorporating noisy labels. Therefore, we fully leverage the shared knowledge and integrate the two closely related tasks into an end-to-end model.
In particular, based on the previously obtained representation \(\mathbf{H}^{\mathcal{F}}\), a detector first conducts binary classification to decide the existence of anomalies. If no anomaly exists, Eadro directly outputs the result; otherwise, a localizer ranks the microservices according to their probabilities of being the culprit. The detector and the localizer are both composed of stacked fully-connected layers and jointly trained by sharing an objective. The detector aims to minimize the binary cross-entropy loss:
\[\mathfrak{L}_{1}=\sum_{i=1}^{N}[-(y_{i}\log(\hat{y_{i}})+(1-y_{i})\log(1-\hat{y _{i}}))] \tag{4}\]
where \(N\) is the number of historical samples; \(y_{i}\in\{0,1\}\) is the ground truth indicating the presence of anomalies (1 denotes presence while 0 denotes absence), and \(\hat{y_{i}}\in[0,1]\) is the predicted indicator. Subsequently, all samples predicted as normal (0) are masked, and samples predicted as abnormal (1) pass through the localizer. The localizer attempts to narrow the distance between the predicted and ground-truth probabilities, whose objective is expressed by:
\[\mathfrak{L}_{2}=-\sum_{i=1}^{N}\sum_{s=1}^{M}c_{i,s}\log(p_{i,s}) \tag{5}\]
where \(M\) is the number of involved microservices. In the \(i\)-th sample, \(c_{i,s}\in\{0,1\}\) is 1 if the culprit microservice is \(s\) and 0 otherwise; \(p_{i,s}\) is the predicted probability of microservice \(s\) being the culprit. The objective of Eadro is the weighted sum of the two sub-objectives \(\mathfrak{L}=\beta\cdot\mathfrak{L}_{1}+(1-\beta)\cdot\mathfrak{L}_{2}\), where \(\beta\) is a hyper-parameter balancing the two tasks. Eventually, Eadro outputs a ranked list of microservices to be checked according to their predicted probabilities of being the root cause.
Fig. 6: A demo for reviewing the suspicious status.
To sum up, Eadro can provide explicit clues about the microservice status. Hence, troubleshooting is much more convenient for operation engineers with the ranked list of microservices. Figure 6 presents a visualized demo.
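For concreteness, a minimal PyTorch sketch of the joint objective \(\mathfrak{L}=\beta\cdot\mathfrak{L}_{1}+(1-\beta)\cdot\mathfrak{L}_{2}\) is given below; masking by the ground-truth label during training (instead of the predicted one) is a simplifying assumption:

```python
# A sketch of the joint objective: detection BCE plus localization cross-entropy on abnormal samples.
import torch
import torch.nn.functional as F

def joint_loss(y_hat, y, root_logits, root_idx, beta: float = 0.5):
    """y_hat: (N,) anomaly probabilities; y: (N,) 0/1 labels;
    root_logits: (N, M) localizer scores; root_idx: (N,) ground-truth culprit indices."""
    l1 = F.binary_cross_entropy(y_hat, y.float())
    abnormal = y == 1
    l2 = F.cross_entropy(root_logits[abnormal], root_idx[abnormal]) if abnormal.any() else y_hat.new_tensor(0.0)
    return beta * l1 + (1 - beta) * l2
```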
## V Evaluation
This section answers the following research questions:
* **RQ1**: How effective is Eadro in anomaly detection?
* **RQ2**: How effective is Eadro in root cause localization?
* **RQ3**: How much does each data source contribute?
### _Data Collection_
Since existing data collections of microservice systems [40, 41] contain traces only, we deploy two benchmark microservice systems and generate requests to collect multi-source data, including logs, KPIs, and traces. Afterward, we inject typical faults to simulate real-world anomalies. To our best knowledge, it is the first triple-source data collection with injected faults in the context of microservices.
#### V-A1 Benchmark microservice systems
We first deploy two open-source microservice benchmarks: TrainTicket [17] (TT) and SocialNetwork [42] (SN). TT provides a railway ticketing service where users can check, book, and pay for train tickets. It is widely used in previous works [3, 15] with 41 microservices actively interacting with each other, and 27 of them are business-related. SN implements a broadcast-style social networking site. Users can create, read, favorite, and repost posts. In this system, 21 microservices communicate with each other via Thrift RPCs [43], 14 of which are related to business logic.
We construct a distributed testbed to deploy the two systems running in Docker containers and develop two request simulators to simulate valid user requests. A series of open-source monitoring tools are deployed for data collection. Microservice instances send causally-related traces to a collector Jaeger [44]. We employ cAdvisor [45] and Prometheus [46] to monitor the KPIs per second of each microservice. The KPIs are stored in an instance of InfluxDB [47], including "CPU system usage", "CPU total usage", "CPU user usage", "memory usage", the amount of "working set memory", "rx bytes" (received bytes), and "tx bytes" (transmitted bytes). We also utilize Elasticsearch [48], Fluentd [49], and Kibana [50] to collect, aggregate, and store logs, respectively.
#### V-A2 Fault Injection
Eadro can troubleshoot anomalies that manifest themselves in performance degradations (logs and KPIs) or latency deviations (traces). Referring to previous studies [6, 21, 24], we inject three typical types of faults via Chaosblade [51]. Specifically, we simulate CPU exhaustion by running a hog that heavily consumes CPU resources. To simulate a network jam, we delay the network packets of a microservice instance. We also randomly drop network packets to simulate stochastic packet loss that frequently occurs when excessive data packets flood a network.
We generate 0.2\(\sim\)0.5 and 2\(\sim\)3 requests per second for TT and SN at a uniform rate, respectively. Before fault injection, we collect normal data under a fault-free setting for 7 hours for TT and 1.2 hours for SN. Then, we set each fault duration to 10 mins (with a 2-min interval between two injections) for TT, while for SN the fault duration is 2 mins and the interval is half a minute. Each fault is injected into one microservice once. In total, we conduct 162 and 72 injection operations in TT and SN, respectively. Such different setups are attributed to the different processing capacities of the two systems, i.e., TT usually takes more time to process a request than SN.
In this way, we collect two datasets (\(\mathcal{TT}\) and \(\mathcal{SN}\)) with 48,296 and 126,384 traces, respectively. Data produced in different periods are divided into training (60%) data and testing (40%) data, respectively. The data divisions share similar distributions in abnormal/normal ratios and root causes.
### _Baselines_
We compare Eadro with previous approaches and with derived methods integrating multi-source data. As our task is relatively novel in incorporating more information than existing single-source studies, comparing our model with previous approaches alone would be somewhat unfair.
#### V-B1 Advanced baselines
In terms of anomaly detection, we consider two state-of-the-art baselines. TraceAnomaly [3] uses a variational auto-encoder (VAE) to discover abnormal invocations. MultimodalTrace [4] extracts operation sequences and latency time series from traces and uses a multi-modal Long Short-term Memory (LSTM) network to model the temporal features. For root cause localization, we compare Eadro with five trace-based baselines: TBAC [10], NetMedic [52], MonitorRank [8], CloudRanger [11], and DyCause [6]. As far as we know, no root cause localizers for microservices rely on multi-modal data.
These methods use statistical models or heuristic methods to locate the root cause. For example, TBAC, MonitorRank, and DyCause applied the Pearson correlation coefficient, and MonitorRank and DyCause also leveraged Random Walk. We implement these baselines referring to the codes provided by the original papers [3, 6, 21]. For the papers without open-source codes, we carefully follow the papers and refer to the baseline implementation released by [6].
#### V-B2 Derived multi-source baselines
We also derive four multi-source data-based methods for further comparison. Inspired by [4], we transform all data sources into time series and use learning-based algorithms for status inference. Specifically, logs are represented by event occurrence sequences; traces are denoted by latency time series; KPIs are natural time series. Since previous studies are mainly machine learning-based, we train practical machine learning methods, i.e., Random Forest (RF) and Support Vector Machine (SVM), on the multi-source time series. We derive MS-RF-AD and MS-SVM-AD for anomaly detection as well as MS-RF-RCL and MS-SVM-RCL
for root cause localization. We also derive two methods (MS-LSTM and MS-DCC) that employ deep learning techniques, i.e., LSTM and 1D DCC, to extract representations from multi-modal time series. The learned representations are fed into the module of joint detection and localization, which is described in § IV-C.
### _Implementation_
The experiments are conducted on a Linux server with an NVIDIA GeForce GTX 1080 GPU via Python 3.7. As for the hyper-parameters, the hidden size of all fully-connected layers is 64, and every DCC layer shares the same filter number of 64 with a kernel size of three. The GAT's hidden size and the fusion dimension (i.e., \(2E\)) are 128. We use a 4-head mechanism of GAT's attention layer, and the layer number of all modalities' models is only one for speeding up. Moreover, Batch Normalization [53] is added after DCCs to mitigate overfitting. We train Eadro using the Adam [54] optimizer with an initial learning rate of 0.001, a batch size of 256, and an epoch number of 50. All the collected data and our code are released for replication.
### _Evaluation Measurements_
The anomaly detection challenge is modeled in a binary classification manner, so we apply the widely-used binary classification measurements to gauge the performance of models: Recall (_Rec_)\(=\frac{TP}{TP+FN}\), Precision (_Pre_)\(=\frac{TP}{TP+FP}\), F1-score (_F1_)\(=\frac{2\cdot Pre\cdot Rec}{Pre+Rec}\), where \(TP\) is the number of discovered abnormal samples; \(FN\) and \(FP\) are defined in § III-B.
For root cause localization, we introduce the Hit Rate of top-k (_HR@k_) and Normalized Discounted Cumulative Gain of top-k (_NDCG@k_) for localizer evaluation. Herein, we set \(k=1,3,5\). _HR@k\(=\frac{1}{N}\sum_{i=1}^{N}(s_{i}^{t}\in S_{i,[1:k]}^{p})\)_ calculates the overall probability that the culprit microservice appears within the top-k predicted candidates \(S_{i,[1:k]}^{p}\), where \(s_{i}^{t}\) is the ground-truth root cause for the \(i\)-th observation window, and \(N\) is the number of samples to be tested. _NDCG@k\(=\frac{1}{N}\sum_{i=1}^{N}(\sum_{j=1}^{M}\frac{p_{j}}{\log_{2}(j+1)})\)_ measures the ranking quality, where \(p_{j}\) is the predicted probability of the \(j\)-th microservice, and \(M\) is the number of microservices. _NDCG@1_ is left out because it is the same as _HR@1_ in our scenario. The two evaluation metrics measure how easily engineers find the culprit microservice. _HR@k_ directly measures how likely the root cause will be found within \(k\) checks. _NDCG@k_ measures to what extent the root cause appears higher up in the ranked candidate list. Thus, the higher the above measurements, the better.
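A minimal sketch of the two measurements is shown below; the NDCG variant uses the standard \(\log_{2}\) discount with binary relevance for the single ground-truth culprit, which is an assumption about the exact computation:

```python
# A sketch of HR@k and NDCG@k for root cause localization.
import numpy as np

def hr_at_k(probs: np.ndarray, truth: np.ndarray, k: int) -> float:
    """probs: (N, M) predicted culprit probabilities; truth: (N,) ground-truth indices."""
    topk = np.argsort(-probs, axis=1)[:, :k]
    return float(np.mean([truth[i] in topk[i] for i in range(len(truth))]))

def ndcg_at_k(probs: np.ndarray, truth: np.ndarray, k: int) -> float:
    ranking = np.argsort(-probs, axis=1)
    gains = []
    for i in range(len(truth)):
        pos = int(np.where(ranking[i] == truth[i])[0][0])         # 0-based rank of the true culprit
        gains.append(1.0 / np.log2(pos + 2) if pos < k else 0.0)  # IDCG = 1 for a single relevant item
    return float(np.mean(gains))
```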
### _RQ1: Effectiveness in Anomaly Detection_
Ground truths are based on the known injection operations, i.e., if a fault is injected, then the current observation window is abnormal; otherwise, it is normal. Table II displays a comparison of anomaly detection, from which we draw three observations:
(1) Eadro outperforms all competitors significantly and achieves very high scores in _F1_ (0.988), _Rec_ (0.996), and _Pre_ (0.981), illustrating that Eadro generates very few missing anomalies or false alarms. Eadro's excellence can be attributed to 1) Eadro applies modality-specific designs to model various sources of data as well as a multi-modal fusion to wrangle these modalities so that it can learn a distinguishable representation of the status; 2) Eadro learns dependencies between microservices to enable extraction of anomaly propagation to facilitate tracing back to the root cause.
(2) Generally, multi-source data-based approaches, including Eadro, perform much better than baselines that rely on traces only, because they incorporate extra essential information (i.e., logs and KPIs) besides traces. The results align with our observations in § III-A that logs and KPIs provide valuable clues about microservice anomalies, while traces cannot reveal all anomalies. Trace-based methods can only detect anomalies that have an enormous impact on invocations, so they ignore anomalies reflected by other data sources.
(3) Moreover, Eadro, MS-LSTM, and MS-DCC perform better than MS-SVM and MS-RF. The superiority of the former ones lies in applying deep learning and joint learning. Deep learning has demonstrated a powerful capacity in extracting features from complicated time series [29, 55, 56]. Joint learning allows capturing correlated knowledge across detection and localization to exploit commonalities across the two tasks. These two mechanisms are beneficial to troubleshooting by enhancing representation learning.
In brief, Eadro is very effective in anomaly detection of microservice systems and improves _F1_ by 53.82%\(\sim\)92.68% compared to baselines and 3.13%\(\sim\)25.32% compared to derived methods. The detector is of tremendous assistance for next-stage root cause localization by reducing noisy labels inside the localizer's inputs.
### _RQ2: Effectiveness in Root Cause Localization_
To focus on comparing the effectiveness of root cause localization, we provide ground truths of anomaly existence for baselines herein. In contrast, Eadro, MS-LSTM, and MS-DCC use the predicted results of their detectors as they are end-to-end approaches integrating the two tasks. Table III presents the root cause localization comparison, underpinning three observations:
(1) Eadro performs the best, taking all measurements into consideration, achieving _HR@1_ of 0.982, _HR@5_ of 0.990, and _NDCG@5_ of 0.989 on average. With the incorporation of valuable logs and KPIs ignored by previous approaches,
Eadro can depict the system status more accurately. Trace-based approaches have difficulties in troubleshooting resource exhaustion-related anomalies or severe network-related anomalies that block inter-service communications, resulting in few invocations. Besides, Eadro enables eavesdropping across detection and localization via joint learning, which encourages full use of the shared knowledge to enhance status learning. Eadro also leverages powerful techniques to capture meaningful patterns from multi-modal data, including designs of modality-specific models and advanced GAT to exploit graph-structure dependencies. Moreover, Eadro achieves a much higher score in _HR@1_ than derived methods, while its superiority in _HR@5_ and _NDCG@5_ is not particularly prominent. The reason is that Eadro learns the dependency-aware status besides intra-service behaviors, allowing it to catch the anomaly origin by tracing anomaly propagation. Other multi-modal approaches capture dependency-agnostic information, so they can pinpoint the scope of suspicious microservices effectively rather than directly deciding the culprit.
(2) Multi-modal approaches considerably outperform single-modal baselines, similar to the results in anomaly detection. The superiority of multi-source derived methods is more evident since localization is a more complicated task than detection, so the advantage of incorporating diverse data sources to learn the complementarity is fully demonstrated. This situation is more revealing in \(\mathcal{TT}\) because TrainTicket responds more slowly, leading to sparse trace records, and trace-based models get into trouble when few invocations occur in the current observation window. In contrast, derived approaches can accurately locate the culprit microservice in such a system since they leverage various information sources to obtain more clues.
(3) Considering multi-modal approaches, Eadro, MS-LSTM, and MS-DCC deliver better performance (measured by _HR@1_) than MS-RF-RCL and MS-SVM-RCL. The superiority of the former approaches can be attributed to the strong fitting ability of deep learning and the advantages brought by the joint learning mechanism. However, MS-LSTM performs worse in narrowing the suspicious scope, especially in \(\mathcal{SN}\) (measured by _HR@5_ and _NDCG@5_). This may be because LSTMs' training process is considerably more complicated than that of DCCs or simple machine learning techniques. The scale of \(\mathcal{SN}\) is relatively small, so MS-LSTM cannot be thoroughly trained to capture the most meaningful features.
To sum up, the results demonstrate the effectiveness of Eadro in root cause localization. Eadro increases _HR@1_ by 290%\(\sim\)5068% compared to baselines and 26.93%\(\sim\)66.16% compared to derived methods. Our approach shows effectiveness both in anomaly detection and root cause localization, suggesting its potential to automate labor-intensive troubleshooting.
### _RQ3: Contributions of Different Data Sources_
We perform an ablation study to explore how different data sources contribute by conducting source-wise-agnostic experiments, so we derive the following variants:
* Eadro w/o \(\mathcal{L}\): drops logs while keeping traces and KPIs as inputs, by removing the log modeling module in § IV-A1.
* Eadro w/o \(\mathcal{M}\): drops KPIs while keeping traces and logs as inputs, by removing the KPI modeling module in § IV-A2.
* Eadro w/o \(\mathcal{T}\): drops latency extracted from traces by removing the trace modeling module in § IV-A3.
* Eadro w/o \(\mathcal{G}\): replaces GAT by an FC layer to learn dependency-agnostic representations.
The ablation study results are shown in Table IV. Considering that root cause localization, being our major target, is more difficult and that all variants achieve relatively good performance in anomaly detection, we focus on root cause localization. Clearly, each source of information contributes to the effectiveness of Eadro as it performs the best, while the degrees of their contributions are not exactly the same.
Specifically, logs contribute the least as Eadro w/o \(\mathcal{L}\) is second-best. We attribute it to the lack of log semantics and the low logging frequency. As the two benchmark systems were recently proposed without multiple version iterations, only a few events are recorded. We believe that logs will provide greater value as microservice systems mature.
In addition, we observe that the performance of Eadro w/o \(\mathcal{M}\) and Eadro w/o \(\mathcal{T}\) degrades dramatically, especially in _HR@1_, since traces and KPIs are essential information that contributes the most to the identification of the root cause microservice. This observation aligns with our motivating cases, where we show some anomaly cases that can be directly revealed by traces and KPIs.
Moreover, _HR@5_ of Eadro w/o \(\mathcal{G}\) degrades slightly, indicating that dependency-agnostic representations are useful to narrow the suspicious scope. However, _HR@1_ of Eadro w/o \(\mathcal{G}\) decreases by 23.21%, as Eadro uses the readily applicable GAT to model graph-structured inter-service dependencies, while FC layers model the dependencies linearly and thus cannot capture anomaly propagation well, leading to performance degradation in determining the culprit.
To further demonstrate the benefits brought by KPIs and logs, we visualize the latent representations of abnormal data samples learned by Eadro, Eadro w/o \(\mathcal{L}\), and Eadro w/o \(\mathcal{M}\) via t-SNE [57] on the test set of \(\mathcal{SN}\), as shown in Figure 7.
We can see that the representations learned by Eadro are the most discriminative, and those learned by Eadro w/o \(\mathcal{L}\) are second-best, while those learned by Eadro w/o \(\mathcal{M}\) are the worst. Specifically, Eadro distributes representations corresponding to the different root causes into different clusters distant from each other in the hyperspace. In contrast, Eadro w/o \(\mathcal{M}\) learns representations close in space, making it difficult to distinguish them for triage. That is why Eadro w/o \(\mathcal{M}\) delivers poorer performance in localization than Eadro. The visualization intuitively helps us grasp the usefulness of KPIs in helping pinpoint the root cause. The discriminativeness of the representations learned by Eadro w/o \(\mathcal{L}\) is in-between, where some clusters are pure while others seem to be a mixture of representations corresponding to different root causes, in line with the experiment results. We can attribute part of the success of Eadro to incorporating KPIs and logs, which encourages more discriminative representations of the microservice status with extra clues.
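The visualization itself can be reproduced with a few lines, as in the sketch below (assuming scikit-learn and matplotlib, with numeric root-cause labels):

```python
# A sketch of projecting learned representations with t-SNE and coloring points by root cause.
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_latents(latents, root_cause_labels):
    """latents: (N, D) representations of abnormal samples; root_cause_labels: (N,) integer labels."""
    embedded = TSNE(n_components=2).fit_transform(latents)
    plt.scatter(embedded[:, 0], embedded[:, 1], c=root_cause_labels, cmap="tab10", s=8)
    plt.xlabel("t-SNE dim 1")
    plt.ylabel("t-SNE dim 2")
    plt.show()
```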
In conclusion, the involved data sources can all contribute to the effectiveness of Eadro to some degree, and traces contribute the most to the overall effectiveness. This underlines the importance of appropriately modeling multi-source data to troubleshoot microservices effectively.
## VI Discussion
### _Limitations_
We identify three limitations of Eadro: 1) the incapacity to deal with bugs related to program logic; 2) the prerequisites for multi-source data collection; 3) the requirement of annotated data for training.
As Eadro is an entirely data-driven approach targeting the scope of reliability management, it is only applicable to troubleshooting anomalies manifested in the involved data, so logical bugs outside our scope and silent issues that do not incur abnormal patterns in the observed data cannot be detected or located.
Moreover, Eadro is in principle well-suited for all microservice systems where anomalies are reflected in the three types of data we employ. However, some practical systems may lack the ability to collect all three types of data. Though the low-coupled nature of the modal-wise learning module allows the absence of some source of data, it is better to provide all data types to fully leverage Eadro. Since we apply standard open-source monitoring toolkits and these off-the-shelf toolkits can be directly instrumented, enabling microservices with the data collection ability is not difficult.
Fig. 7: Distributions of representations learned by Eadro and its variants.
In addition, the supervised nature of Eadro requires a large amount of labeled training data, which may be time-consuming to obtain in the real world. Nevertheless, our approach outperforms unsupervised approaches by a large margin, indicating that in practice, unsupervised methods may be difficult to use because their accuracy does not reach the required level, especially considering that realistic microservice systems are much larger and more complex. A common solution in companies is to use an unsupervised model to generate coarse-grained pseudo-labels. Afterward, experienced engineers manually review the labels with lower confidence. The hybrid-generated labels are used for training the supervised model, and eventually, the supervised approach performs the troubleshooting work. Hence, Eadro will still play an important role in practice and fulfill its potential.
### _Threat to Validity_
#### VI-B1 Internal Threat
The main internal threat lies in the correctness of baseline implementation. We reproduce the baselines based on our understanding of their papers since most baselines, except DyCause and TraceAnomaly, have not released codes, but the understanding may not be accurate. To mitigate the threat, we carefully follow the original papers and refer to the baseline implementation released by [6].
#### VI-B2 External Threat
The external threats concern the generalizability of our experimental results. We evaluate our approach on two simulated datasets since there is no publicly available dataset containing multi-modal data. It is yet unknown whether the performance of Eadro can be generalized across other datasets. We alleviate this threat from two aspects. First, the benchmark microservice systems are widely used in existing comparable studies, and the injected faults are also typical and broadly applied in previous studies [6, 21, 24], thereby supporting the representativeness of the datasets. Second, our approach is request- and fault-agnostic, so an anomaly incurred by a fault beyond our injections can also be discovered if it causes abnormalities in the observations.
## VII Related Work
Previous anomaly detection approaches are usually based on system logs [58, 59, 60, 61, 62] or KPIs [63, 64, 65, 66, 67], or both [68], targeting traditional distributed systems without complex invocation relationships. Recently, some studies [3, 4, 31] have been presented to automate anomaly detection in microservice systems. [3] proposed to employ a variational autoencoder with a Bayes model to detect anomalies reflected by latency. [4] extracted operation sequence and invocation latency from traces and fed them into a multi-modal LSTM to identify anomalies. These anomaly detection methods rely on single-source data (i.e., traces) and ignore other informative data such as logs and KPIs.
Tremendous efforts [7, 8, 10, 11, 19, 24, 52] have been devoted to root cause localization in microservice or service-oriented systems, most of which rely on traces only and leverage traditional or naive machine learning techniques. For example, [15] conducted manual feature engineering in trace logs to predict latent errors and identify the faulty microservice via a decision tree. [9] proposed a high-efficient approach that dynamically constructs a service call graph and ranks candidate root causes based on correlation analysis. A recent study [6] designed a crowd-sourcing solution to resolve user-space diagnosis for microservice kernel failures. These methods work well when the latent features of microservices are readily comprehensible but may lack scalability to larger-scale microservice systems with more complex features. Deep learning-based approaches explore meaningful features from historical data to avoid manual feature engineering. Though deep learning has not been applied to root cause localization as far as we know, some approaches incorporated it for performance debugging. For example, to handle traces, [69] used convolution networks and LSTM, and [70] leveraged causal Bayesian networks.
However, they rely on traces and ignore other data sources, such as logs and KPIs, that can also reflect the microservice status. Also, they focus on either anomaly detection or root cause localization, leading to a disconnection between the two closely related tasks. The inaccurate results of naive anomaly detectors affect the effectiveness of downstream localization. Moreover, many methods combine manual feature engineering with traditional algorithms, making them insufficiently practical in large-scale systems.
## VIII Conclusion
This paper first identifies two limitations of current troubleshooting approaches for microservices and aims to address them. The motivation is based on two observations: 1) the usefulness of logs and KPIs and the insufficiency of traces; 2) the unsatisfactory results delivered by current anomaly detectors. To this end, we propose an end-to-end troubleshooting approach for microservices, Eadro, the first work to integrate anomaly detection and root cause localization based on multi-source monitoring data. Eadro consists of powerful modality-specific models to learn intra-service behaviors from various data sources and a graph attention network to learn inter-service dependencies. Extensive experiments on two datasets demonstrate the effectiveness of Eadro in both detection and localization. It achieves _F1_ of 0.988 and _HR@1_ of 0.982 on average, vastly outperforming all competitors, including derived multi-modal methods. The ablation study further validates the contributions of the involved data sources. Lastly, we release our code and data to facilitate future research.
## Acknowledgement
The work described in this paper was supported by the National Natural Science Foundation of China (No. 62202511), and the Research Grants Council of the Hong Kong Special Administrative Region, China (No. CUHK 14206921 of the General Research Fund). |
2307.16156 | Variational field theory of macroscopic forces in Coulomb fluids | Based on the variational field theory framework, we extend our previous
mean-field formalism, taking into account the electrostatic correlations of the
ions. We employ a general covariant approach and derive a total stress tensor
that considers the electrostatic correlations of ions. This is accomplished
through an additional term that depends on the autocorrelation function of
local electric field fluctuations. Utilizing the derived total stress tensor
and applying the mechanical equilibrium condition, we establish a general
expression for the disjoining pressure of the Coulomb fluids, confined in a
pore with a slit-like geometry. Using this equation, we derive an asymptotic
expression for the disjoining pressure in a slit-like pore with non-electrified
conductive walls. Present theory is the basis for future modeling of the
mechanical stresses that occur in electrode pores with conductive charged
walls, immersed in liquid phase electrolytes beyond the mean-field theory. | Yury A. Budkov, Petr E. Brandyshev | 2023-07-30T07:54:26Z | http://arxiv.org/abs/2307.16156v4 | # Variational field theory of macroscopic forces in Coulomb fluids
###### Abstract
In this theoretical paper, we present a field-theoretical approach based on variational field theory. This approach allows us to derive the grand thermodynamic potential of an inhomogeneous Coulomb fluid as a functional of the electrostatic potential and a trial electrostatic Green's function for an arbitrary reference fluid system. Through this derivation, we obtain self-consistent field equations for the electrostatic potential and Green's function, serving as the Euler-Lagrange equations for this functional. Extending our previous mean-field formalism [Y. A. Budkov and A. L. Kolesnikov, JStatMech, 2022], we account for the electrostatic correlations of the ions. To achieve this, we employ a general covariant approach and derive a total stress tensor that considers the electrostatic correlations of ions. This is accomplished through an additional term that depends on the autocorrelation function of the local electric field fluctuations. Utilizing the derived total stress tensor and applying the mechanical equilibrium condition, we establish a general expression for the disjoining pressure of the Coulomb fluids, confined in a pore with a slit-like geometry. The formulated theory is the basis for future modeling of the mechanical stresses that occur in electrode pores with conductive charged walls, immersed in liquid-phase electrolytes beyond the mean-field theory.
Introduction
Coulomb fluids, such as plasma, electrolyte solutions, molten salts, and room-temperature ionic liquids, have garnered significant attention from researchers and chemical engineers in recent times. This interest is primarily driven by the utilization of Coulomb fluids in various applications, ranging from lipid and ion exchange membranes to biomacromolecules, colloids, batteries, fuel cells, supercapacitors, etc.. In all of these applications, Coulomb fluids experience interactions with charged surfaces or confinement within nanocapillaries, leading to significant spatial inhomogeneity. This inhomogeneity of the Coulomb fluid leads to a violation of local electrical neutrality, which requires the numerical solution of the self-consistent field equations for the electrostatic potential, accompanied by appropriate boundary conditions [1; 2; 3; 4; 5]. The equations commonly used for this purpose are the classical Poisson-Boltzmann equation (PB) and its modified forms, which are often referred to as modified PB equations [3; 6; 7].
The modified PB equations derived so far incorporate the unique characteristics of individual ionic species and consider the specific chemical properties of the surfaces with which they interact. In this case it is necessary to especially emphasize the consideration of the non-zero excluded volume of ions (steric interactions) [8; 9; 10; 11], their specific and structural interactions [6; 12; 13; 14], polarizability/quadrupolarizability and permanent dipole moment of ionic species and solvent (co-solvent) molecules [5; 7; 13; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25], surface charge regulaton [26], specific ion-electrode interactions (adsorption or depletion) [13; 27], and electrostatic correlations [28; 29; 30; 31; 32; 33; 34; 35; 36].
Recently, Budkov and Kolesnikov [3] introduced a first-principle approach that enables the derivation of the modified PB equations. These equations are derived from a unified standpoint, treating them as the Euler-Lagrange equations of the grand thermodynamic potential (GTP). This functional is defined in terms of the electrostatic potential and considers factors such as the excluded volume of ions and solvent molecules as well as their static and orientation polarizabilities. Additionally, basing on the Noether's theorem the authors derived stress tensors that are consistent with the modified PB equations. This theoretical framework offers the opportunity to calculate the macroscopic forces acting on conductive or dielectric bodies immersed in the Coulomb fluid, including electrodes, colloid particles, membranes, micelles, and dust particles. While the formulation of a systematic mean-field theory for Coulomb fluids has been successful, there still remains a lack of clarity regarding
the incorporation of the electrostatic correlations [2] into the total stress tensor. In other words, it becomes necessary to expand our formalism to encompass the electrostatic correlations of the ions. However, when considering electrostatic correlations, the resulting functionals consistently become nonlocal [31; 32; 33; 34; 36; 37; 38; 28; 31], which presents challenges for the application of Noether's theorem, originally formulated for local functionals [39].
An alternative approach to Noether's theorem for this case could be the general covariant approach recently proposed by us in [40]. Our approach relies on Noether's second theorem, enabling us to derive the symmetric stress tensor for an arbitrary (nonlocal) model of the inhomogeneous liquid as a functional derivative of a GTP with respect to the metric tensor. We have applied this approach to several phenomenological models [6; 30] of inhomogeneous Coulomb fluids considering electrostatic correlations of ions or short-range correlations related to packing effects.
In this paper, we will present a field-theoretical approach based on variational field theory [31; 32; 33; 34; 38; 41]. This approach allows us to derive the GTP of an inhomogeneous Coulomb fluid as a functional of the electrostatic potential and a trial electrostatic Green's function for an arbitrary reference fluid system. Through this derivation, we will obtain self-consistent field equations for the electrostatic potential and the Green's function, serving as the Euler-Lagrange equations for this functional. Thus, we extend our previous mean-field formalism [3] taking into account the electrostatic correlations of ions. Using our general covariant approach, we will derive a total stress tensor that accounts for the electrostatic correlations via the additional term depending on the autocorrelation function of the local electric field fluctuations.
## II Variational field theory of Coulomb gas
In this section, we will formulate the variational field theory for the Coulomb gas, specifically when there are no steric interactions between ions. This presentation will draw heavily from research [31; 32; 34; 38] and thus serves a pedagogical purpose, while also being a reference for the subsequent sections.
Let us consider the case of Coulomb gas with the following total potential energy of
interactions
\[U=\frac{1}{2}\left(\hat{\rho}G_{0}\hat{\rho}\right)+\sum_{\alpha}(\hat{n}_{\alpha }u_{\alpha}), \tag{1}\]
where we have introduced the following short-hand notations
\[(\hat{\rho}G_{0}\hat{\rho})=\int d{\bf r}\int d{\bf r^{\prime}}\hat{\rho}({\bf r })G_{0}({\bf r},{\bf r^{\prime}})\hat{\rho}({\bf r^{\prime}}) \tag{2}\]
and
\[(\hat{n}_{\alpha}u_{\alpha})=\int d{\bf r}\hat{n}_{\alpha}({\bf r})u_{\alpha}( {\bf r}) \tag{3}\]
with
\[\hat{\rho}({\bf r})=\sum_{\alpha}\sum_{j_{\alpha}=1}^{N_{\alpha}}\varrho_{ \alpha}({\bf r}-{\bf r}_{j_{\alpha}}), \tag{4}\]
being the microscopic charge density of the ions with the internal charge densities [34; 38] \(\varrho_{\alpha}({\bf r})\) satisfying the normalization condition
\[\int d{\bf r}\varrho_{\alpha}({\bf r})=q_{\alpha}, \tag{5}\]
where \(q_{\alpha}\) is the total electric charge of the \(\alpha\)th ion. Thus, we account for the electrostatic self-interaction of the ions with nonlocal charge distributions. The second term on the right-hand side of the eq. (1) is the total potential energy of interaction of the ions with external fields with potentials \(u_{\alpha}({\bf r})\); the microscopic concentrations of the ions are
\[\hat{n}_{\alpha}({\bf r})=\sum_{j_{\alpha}=1}^{N_{\alpha}}\delta({\bf r}-{\bf r }_{j_{\alpha}}). \tag{6}\]
The two-point function \(G_{0}({\bf r},{\bf r^{\prime}})\) is the standard Green's function of the Poisson equation satisfying the following equation
\[-\varepsilon\nabla^{2}G_{0}({\bf r},{\bf r^{\prime}})=\delta({\bf r}-{\bf r^{ \prime}}), \tag{7}\]
where \(\varepsilon\) is the permittivity of medium. In the case of an infinite continuous dielectric medium, Green's function has the form of standard Coulomb law, i.e. \(G_{0}({\bf r},{\bf r^{\prime}})=1/4\pi\varepsilon|{\bf r}-{\bf r^{\prime}}|\).
Further, using the standard Hubbard-Stratonovich (HS) transformation, we can represent the grand partition function of the Coulomb gas as the following functional integral over the fluctuating electrostatic potential [1; 2; 28; 37]
\[\Xi=\int\frac{{\cal D}\varphi}{C_{0}}\exp\left[-\frac{\beta}{2}\left(\varphi G _{0}^{-1}\varphi\right)+\sum_{\alpha}z_{\alpha}\int d{\bf r}e^{i\beta\varrho_ {\alpha}\varphi({\bf r})-\beta u_{\alpha}({\bf r})}\right], \tag{8}\]
where
\[C_{0}=\int{\cal D}\varphi\exp\bigg{[}-\frac{\beta}{2}\left(\varphi G_{0}^{-1} \varphi\right)\bigg{]} \tag{9}\]
is the normalization constant of the Gaussian measure with the following short-hand notations
\[\left(\varphi G_{0}^{-1}\varphi\right)=\int d{\bf r}\int d{\bf r}^{ \prime}\varphi({\bf r})G_{0}^{-1}({\bf r},{\bf r}^{\prime})\varphi({\bf r}^{ \prime}), \tag{10}\]
\[G_{0}^{-1}({\bf r},{\bf r}^{\prime})=-\varepsilon\nabla^{2}\delta({\bf r}-{\bf r }^{\prime}), \tag{11}\]
\(z_{\alpha}=\Lambda_{\alpha}^{-3}e^{\beta\mu_{\alpha}}\theta_{\alpha}\) are the fugacities of the ionic species, \(\Lambda_{\alpha}\) is the thermal wavelength of the ions of \(\alpha\)th kind, \(\mu_{\alpha}\) - their chemical potential, \(\theta_{\alpha}\) - their internal partition function, \(\beta=(k_{B}T)^{-1}\), \(k_{B}\) is the Boltzmann constant, \(T\) is the temperature; we have also introduced the notation
\[\varrho_{\alpha}\varphi({\bf r})=\int d{\bf r}^{\prime}\varrho_{\alpha}({\bf r }-{\bf r}^{\prime})\varphi({\bf r}^{\prime}). \tag{12}\]
Performing the shift \(\varphi\rightarrow\varphi+\varphi_{0}\) of the integration variable and going to a trial Green's function, \(G_{0}\to G\), we obtain
\[\Xi=\frac{C}{C_{0}}\int\frac{{\cal D}\varphi}{C}\exp\bigg{[}- \frac{\beta}{2}\left(\varphi G^{-1}\varphi\right)-\frac{\beta}{2}\left( \varphi[G_{0}^{-1}-G^{-1}]\varphi\right)-\\ \beta(\varphi_{0}G_{0}^{-1}\varphi)-\frac{\beta}{2}(\varphi_{0}G _{0}^{-1}\varphi_{0})+\sum_{\alpha}z_{\alpha}\int d{\bf r}e^{i\beta\varrho_{ \alpha}\varphi_{0}({\bf r})+i\beta\varrho_{\alpha}\varphi({\bf r})-\beta u_{ \alpha}({\bf r})}\bigg{]}, \tag{13}\]
where we introduced a notation for normalization constant of new Gaussian measure
\[C=\int{\cal D}\varphi\exp\bigg{[}-\frac{\beta}{2}\left(\varphi G^{-1}\varphi \right)\bigg{]}. \tag{14}\]
Further, using the Gibbs-Bogolyubov inequality
\[\left\langle e^{X}\right\rangle\geq e^{\left\langle X\right\rangle}, \tag{15}\]
where
\[\left\langle(...)\right\rangle=\int\frac{{\cal D}\varphi}{C}\exp\bigg{[}- \frac{\beta}{2}\left(\varphi G^{-1}\varphi\right)\bigg{]}(...), \tag{16}\]
we arrive at
\[\Xi\geq\exp W[G;\varphi_{0}], \tag{17}\]
where we introduced the following auxiliary functional
\[W[G;\varphi_{0}]=\frac{1}{2}\ln\frac{DetG}{DetG_{0}}-\frac{1}{2 }tr\left(G[G_{0}^{-1}-G^{-1}]\right)-\frac{\beta}{2}(\varphi_{0}G_{0}^{-1} \varphi_{0})+\\ \sum_{\alpha}z_{\alpha}\int d{\bf r}e^{i\beta\varrho_{\alpha} \varphi_{0}({\bf r})-\frac{\beta}{2}\varrho_{\alpha}G\varrho_{\alpha}-\beta u _{\alpha}({\bf r})} \tag{18}\]
and took into account that
\[\left\langle e^{i\beta\varrho_{\alpha}\varphi(\mathbf{r})}\right\rangle=e^{-\frac {\beta}{2}\varrho_{\alpha}G\varrho_{\alpha}}=A_{\alpha}(\mathbf{r}), \tag{19}\]
\[\varrho_{\alpha}G\varrho_{\alpha}=\int d\mathbf{r}_{1}\int d\mathbf{r}_{2} \varrho_{\alpha}(\mathbf{r}_{1}-\mathbf{r})G(\mathbf{r}_{1},\mathbf{r}_{2}) \varrho_{\alpha}(\mathbf{r}_{2}-\mathbf{r}), \tag{20}\]
and that \(C/C_{0}=\left(DetG/DetG_{0}\right)^{1/2}\), where the symbol \(Det\) denotes the functional determinant of the operator [42].
The functions \(\varphi_{0}=i\psi\) and \(G(\mathbf{r},\mathbf{r}^{\prime})\) are determined from the self-consistent equations
\[\frac{\delta W}{\delta\varphi_{0}(\mathbf{r})}\bigg{|}_{\varphi_{0}=i\psi}=0, \ \frac{\delta W}{\delta G(\mathbf{r},\mathbf{r}^{\prime})}=0 \tag{21}\]
which were first proposed in papers [31; 34; 38; 41; 43]. These equations guarantee the best variational estimate of the grand partition function.
The first of them is
\[\nabla^{2}\psi(\mathbf{r})=-\frac{1}{\varepsilon}\sum_{\alpha}z_{\alpha}\int d \mathbf{r}^{\prime}\varrho_{\alpha}(\mathbf{r}-\mathbf{r}^{\prime})A_{\alpha }(\mathbf{r}^{\prime})e^{-\beta\varrho_{\alpha}\psi(\mathbf{r}^{\prime})- \beta u_{\alpha}(\mathbf{r}^{\prime})}. \tag{22}\]
To obtain the second equation, we use the identity
\[\frac{1}{2}\ln\frac{DetG}{DetG_{0}}=\frac{1}{2}\left(tr\ln G-tr\ln G_{0} \right), \tag{23}\]
where the trace is
\[trA=\sum_{n}a_{n}, \tag{24}\]
where \(a_{n}\) are the eigenvalues of the operator \(A\) in the orthonormal basis of its eigenfunctions \(\psi_{n}(\mathbf{r})\), so that
\[a_{n}=\int d\mathbf{r}\int d\mathbf{r}^{\prime}\psi_{n}(\mathbf{r})A(\mathbf{ r},\mathbf{r}^{\prime})\psi_{n}(\mathbf{r}^{\prime}), \tag{25}\]
and the kernel of the operator is
\[A(\mathbf{r},\mathbf{r}^{\prime})=\sum_{n}a_{n}\psi_{n}(\mathbf{r})\psi_{n}( \mathbf{r}^{\prime}), \tag{26}\]
so that
\[trA=\int d\mathbf{r}A(\mathbf{r},\mathbf{r}). \tag{27}\]
The kernel of the inverse operator is
\[A^{-1}(\mathbf{r},\mathbf{r}^{\prime})=\sum_{n}\frac{1}{a_{n}}\psi_{n}( \mathbf{r})\psi_{n}(\mathbf{r}^{\prime}). \tag{28}\]
Thus, we have
\[\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}tr\ln G=\frac{\delta}{\delta G({ \bf r},{\bf r}^{\prime})}\sum_{n}\ln g_{n}=\sum_{n}\frac{1}{g_{n}}\frac{\delta g _{n}}{\delta G({\bf r},{\bf r}^{\prime})}, \tag{29}\]
where \(g_{n}\) are the eigenvalues of the operator \(G\).
Further, we have
\[\frac{\delta g_{n}}{\delta G({\bf r},{\bf r}^{\prime})}=\frac{\delta}{\delta G ({\bf r},{\bf r}^{\prime})}\int d{\bf r}_{1}\int d{\bf r}_{2}\psi_{n}({\bf r}_{ 1})G({\bf r}_{1},{\bf r}_{2})\psi_{n}({\bf r}_{2})=\psi_{n}({\bf r})\psi_{n}({ \bf r}^{\prime}), \tag{30}\]
where we took into account that
\[\frac{\delta G({\bf r}_{1},{\bf r}_{2})}{\delta G({\bf r},{\bf r}^{\prime})}= \delta({\bf r}_{1}-{\bf r}^{\prime})\delta({\bf r}_{2}-{\bf r}^{\prime}). \tag{31}\]
Then, we have
\[\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}tr\ln G=\sum_{n}\frac{1}{g_{ n}}\psi_{n}({\bf r})\psi_{n}({\bf r}^{\prime})=G^{-1}({\bf r},{\bf r}^{ \prime}). \tag{32}\]
Further, we have
\[\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}tr\left(G[G^{-1} _{0}-G^{-1}]\right)=\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}tr\left( GG^{-1}_{0}-I\right)=\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}tr(GG^{-1}_{0})=\] \[\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}\int d{\bf r}_{ 1}\int d{\bf r}_{2}G({\bf r}_{1},{\bf r}_{2})G^{-1}_{0}({\bf r}_{1},{\bf r}_{ 2})=G^{-1}_{0}({\bf r},{\bf r}^{\prime}) \tag{33}\]
and
\[\frac{\delta}{\delta G({\bf r},{\bf r}^{\prime})}\left(z_{\alpha} \int d{\bf x}A_{\alpha}({\bf x})e^{-\beta\varrho_{\alpha}\psi({\bf x})-\beta u_{ \alpha}({\bf x})}\right)=-\frac{\beta z_{\alpha}}{2}\int d{\bf x}A_{\alpha}({ \bf x})\times\\ e^{-\beta\varrho_{\alpha}\psi({\bf x})-\beta u_{\alpha}({\bf x})} \varrho_{\alpha}({\bf r}-{\bf x})\varrho_{\alpha}({\bf r}^{\prime}-{\bf x}), \tag{34}\]
where we have used the expression
\[\frac{\delta A_{\alpha}({\bf x})}{\delta G({\bf r},{\bf r}^{\prime})}=-\frac{ \beta}{2}A_{\alpha}({\bf x})\varrho_{\alpha}({\bf r}-{\bf x})\varrho_{\alpha}( {\bf r}^{\prime}-{\bf x}). \tag{35}\]
Therefore, the second equation takes the form
\[G^{-1}({\bf r},{\bf r}^{\prime})-G^{-1}_{0}({\bf r},{\bf r}^{\prime})=\Sigma({ \bf r},{\bf r}^{\prime}), \tag{36}\]
where
\[\Sigma({\bf r},{\bf r}^{\prime})=\beta\sum_{\alpha}z_{\alpha}\int d{\bf x}A_{ \alpha}({\bf x})e^{-\beta\varrho_{\alpha}\psi({\bf x})-\beta u_{\alpha}({\bf x })}\varrho_{\alpha}({\bf r}-{\bf x})\varrho_{\alpha}({\bf r}^{\prime}-{\bf x}). \tag{37}\]
Using the expression for the "empty space" inverse Green's function, expression (36) for the trial Green's function \(G^{-1}\) can be rewritten in the form
\[\left(-\varepsilon\nabla^{2}+\Sigma\right)G({\bf r},{\bf r}^{\prime})=\delta({ \bf r}-{\bf r}^{\prime}), \tag{38}\]
where \(\Sigma\) is the integral operator with the kernel (37).
Thus, using the charging approach (see ref. [34; 38; 41]), the expression for the grand thermodynamic potential (GTP) can be reduced to
\[\Omega=-k_{B}TW[G;i\psi]=-\int d{\bf r}\frac{\varepsilon(\nabla \psi)^{2}}{2}-k_{B}T\int d{\bf r}\sum_{\alpha}\bar{n}_{\alpha}({\bf r})+\\ k_{B}T\int d{\bf r}\int d{\bf r}^{\prime}\Sigma({\bf r},{\bf r }^{\prime})\int\limits_{0}^{1}d\tau\left(G({\bf r},{\bf r}^{\prime};\tau)-G({ \bf r},{\bf r}^{\prime})\right). \tag{39}\]
It should be noted that we have the option to calculate the functional determinants using an alternative approach, as described in the Appendix.
For the case of point-like charges of ions, when \(\varrho_{\alpha}({\bf r})=q_{\alpha}\delta({\bf r})\), eq. (39) simplifies to [34; 38]
\[\Omega=-k_{B}TW[G;i\psi]=-\int d{\bf r}\frac{\varepsilon(\nabla \psi)^{2}}{2}-k_{B}T\int d{\bf r}\sum_{\alpha}\bar{n}_{\alpha}({\bf r})+\\ \int d{\bf r}{\cal I}({\bf r})\int\limits_{0}^{1}d\tau\left(G({ \bf r},{\bf r};\tau)-G({\bf r},{\bf r})\right), \tag{40}\]
where
\[{\cal I}({\bf r})=\frac{1}{2}\sum_{\alpha}q_{\alpha}^{2}\bar{n}_{\alpha}({\bf r}) \tag{41}\]
is the local ionic strength with the local ionic concentrations
\[\bar{n}_{\alpha}({\bf r})=\frac{\delta\Omega}{\delta u_{\alpha}({\bf r})}=z_ {\alpha}A_{\alpha}({\bf r})e^{-\beta q_{\alpha}\psi({\bf r})-\beta u_{\alpha} ({\bf r})}. \tag{42}\]
Eqs. (42) determine the chemical equilibrium conditions for ions, which can be rewritten in terms of the chemical potentials
\[\mu_{\alpha}=\bar{\mu}_{\alpha}+q_{\alpha}\psi+\frac{q_{\alpha}^{2}}{2}G({ \bf r},{\bf r})+u_{\alpha}, \tag{43}\]
where the intrinsic chemical potentials of species are
\[\bar{\mu}_{\alpha}=k_{B}T\ln(\bar{n}_{\alpha}\Lambda_{\alpha}^{3}). \tag{44}\]
The intermediate Green's function \(G({\bf r},{\bf r}^{\prime};\tau)\) is governed by the following equation
\[\left(-\varepsilon\nabla^{2}+\tau\Sigma\right)G({\bf r},{\bf r}^{\prime};\tau)= \delta({\bf r}-{\bf r}^{\prime}). \tag{45}\]
In the bulk phase with \(u_{\alpha}=0\) and in the thermodynamic limit, where \(n_{\alpha}\) becomes constant, the function \(\psi({\bf r})\) tends to zero, leading to a translation-invariant Green's function, i.e.
\[G({\bf r},{\bf r}^{\prime})=G({\bf r}-{\bf r}^{\prime})=\int\frac{d{\bf k}}{(2 \pi)^{3}}G({\bf k})e^{i{\bf k}({\bf r}-{\bf r}^{\prime})}. \tag{46}\]
As it follows from eqs. (38) and (45),
\[G({\bf k})=\frac{1}{\varepsilon(k^{2}+\varkappa^{2}({\bf k}))}, \tag{47}\]
\[G({\bf k};\tau)=\frac{1}{\varepsilon(k^{2}+\tau\varkappa^{2}({\bf k}))}, \tag{48}\]
where we have introduced the screening function [25; 41; 43; 44]
\[\varkappa^{2}({\bf k})=\frac{1}{k_{B}T\varepsilon}\sum_{\alpha}n_{\alpha}| \varrho_{\alpha}({\bf k})|^{2} \tag{49}\]
with the Fourier-images of the inner charge densities (form-factors) of the ions, \(\varrho_{\alpha}({\bf k})\).
Thus, eq. (40) leads to the well-known expression [25; 41; 43] for the bulk osmotic pressure \(P_{b}=-\Omega/V\)
\[P_{b}=k_{B}T\sum_{\alpha}n_{\alpha}-\frac{k_{B}T}{2}\int\frac{d{\bf k}}{(2\pi) ^{3}}\left(\ln\left(1+\frac{\varkappa^{2}({\bf k})}{k^{2}}\right)-\frac{ \varkappa^{2}({\bf k})}{k^{2}+\varkappa^{2}({\bf k})}\right). \tag{50}\]
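As an illustration of eq. (50), the following Python sketch (not part of the original derivation) evaluates the electrostatic correlation term numerically, assuming for illustration that all ionic species carry the same Gaussian form factor, \(\varrho_{\alpha}(\mathbf{k})=q_{\alpha}e^{-k^{2}\sigma^{2}/2}\), so that \(\varkappa^{2}(\mathbf{k})=\kappa^{2}e^{-k^{2}\sigma^{2}}\). The reduced units and the chosen \(\sigma\) values are assumptions; the point of the exercise is only to see the correction approach the point-charge value quoted below as \(\sigma\to 0\).

```python
import numpy as np
from scipy.integrate import quad

kT, kappa = 1.0, 1.0          # reduced units (assumed): k_B T = 1, kappa = 1

def p_corr(sigma):
    """Correlation term of eq. (50) for the assumed Gaussian form factor."""
    def integrand(k):
        kk = kappa**2 * np.exp(-(k * sigma) ** 2)      # kappa^2(k)
        return k**2 * (np.log(1.0 + kk / k**2) - kk / (k**2 + kk))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    # isotropic integrand: int d^3k/(2 pi)^3 -> (1/2 pi^2) int dk k^2
    return -0.5 * kT * val / (2.0 * np.pi**2)

p_point = -kT * kappa**3 / (24.0 * np.pi)              # point-charge limit
for sigma in (1.0, 0.5, 0.1, 0.02):
    print(f"sigma = {sigma:4.2f}:  P_corr = {p_corr(sigma):+.6f}"
          f"   (point-charge value {p_point:+.6f})")
```

The smaller the assumed smearing length, the closer the correction lies to \(-k_{B}T\kappa^{3}/24\pi\), i.e. to the Debye-Huckel result of eq. (52) below.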
In the case of point-like charges, when
\[\varkappa^{2}({\bf k})=\kappa^{2}=\frac{1}{k_{B}T\varepsilon}\sum_{\alpha}q_{ \alpha}^{2}n_{\alpha} \tag{51}\]
eq. (50) results in the well-known Debye-Huckel expression [41]
\[P_{b}=k_{B}T\sum_{\alpha}n_{\alpha}-\frac{k_{B}T\kappa^{3}}{24\pi}. \tag{52}\]
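To attach numbers to eqs. (51) and (52), the short script below evaluates them for an aqueous 1:1 electrolyte; the concentration (0.1 M), temperature (298 K) and dielectric constant (78.5) are assumptions chosen purely for illustration.

```python
import numpy as np

# Illustrative parameters (assumed): 1:1 salt at 0.1 M in water at 298 K.
e   = 1.602e-19            # elementary charge, C
eps = 78.5 * 8.854e-12     # static permittivity of water, F/m
kT  = 1.381e-23 * 298.0    # thermal energy, J
n   = 0.1 * 6.022e26       # number density of each ionic species, m^-3 (0.1 mol/L)

kappa = np.sqrt(e**2 * (n + n) / (kT * eps))     # eq. (51)
P_id  = kT * (n + n)                             # ideal-gas term of eq. (52)
P_el  = -kT * kappa**3 / (24.0 * np.pi)          # electrostatic correction of eq. (52)

print(f"Debye length 1/kappa = {1e9 / kappa:.2f} nm")
print(f"ideal term    = {P_id:.3e} Pa")
print(f"DH correction = {P_el:.3e} Pa  ({100 * P_el / P_id:.1f} % of the ideal term)")
```

With these inputs the Debye length is about 1 nm and the electrostatic correction reduces the ideal osmotic pressure by roughly ten percent.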
## III Extension to arbitrary reference system
Above, we have formulated a variational field theory for the Coulomb gas, i.e. for the case when the ions have zero excluded volume. In this section, based on the paper [3] referenced in the Introduction, we formulate an extension of this variational field theory to the case of
an arbitrary reference fluid system. Note that the first formulation of variational field theory, taking into account the steric interactions between ions within the lattice gas model, has been presented in [35; 43]. The variational field theory of charged rigid particles in electrolyte solutions, where steric interactions are taken into account on the level of the second virial terms, has been formulated in [41].
Let us assume that, apart from the Coulomb interactions, the ions interact with each other solely through repulsive potentials, denoted as \(U_{\alpha\gamma}(\mathbf{r})\), so that the total potential energy of the system is
\[U=\frac{1}{2}\left(\hat{\rho}G_{0}\hat{\rho}\right)+\frac{1}{2}\sum_{\alpha \gamma}\left(\hat{n}_{\alpha}U_{\alpha\gamma}\hat{n}_{\gamma}\right)-\frac{1}{ 2}\sum_{\alpha}N_{\alpha}U_{\alpha\alpha}(0)+\sum_{\alpha}(\hat{n}_{\alpha}u_{ \alpha}), \tag{53}\]
where
\[(\hat{n}_{\alpha}U_{\alpha\gamma}\hat{n}_{\gamma})=\int d\mathbf{r}\int d \mathbf{r}^{\prime}\hat{n}_{\alpha}(\mathbf{r})U_{\alpha\gamma}(\mathbf{r}- \mathbf{r}^{\prime})\hat{n}_{\gamma}(\mathbf{r}^{\prime}). \tag{54}\]
Employing the HS transformations, we obtain the following functional integral [3]:
\[\Xi=\int\frac{\mathcal{D}\varphi}{C_{0}}\exp\left[-\frac{\beta}{2}\left( \varphi G_{0}^{-1}\varphi\right)\right]\Xi_{R}[\varphi] \tag{55}\]
where we have introduced the grand partition function of the reference system with pure repulsive interactions between ions
\[\Xi_{R}=\int\frac{\mathcal{D}\Phi}{\mathcal{N}_{U}}e^{-\frac{\beta}{2}\sum \limits_{\alpha,\gamma}(\Phi_{\alpha}U_{\alpha\gamma}^{-1}\Phi_{\gamma})+W[ \Phi;\chi]} \tag{56}\]
with the auxiliary functional
\[W[\Phi;\chi]=\sum_{\alpha}\bar{z}_{\alpha}\int d\mathbf{r}e^{i\beta\Phi_{ \alpha}+i\beta\chi_{\alpha}}, \tag{57}\]
normalization constant
\[\mathcal{N}_{U}=\int\mathcal{D}\Phi e^{-\frac{\beta}{2}\sum\limits_{\alpha, \gamma}(\Phi_{\alpha}U_{\alpha\gamma}^{-1}\Phi_{\gamma})}, \tag{58}\]
and short-hand notations
\[\chi_{\alpha}=\varrho_{\alpha}\varphi+iu_{\alpha}. \tag{59}\]
The inverse matrix operator, \(U^{-1}\), is determined by the integral relation
\[\int d\mathbf{r}^{\prime\prime}\sum_{\lambda}U_{\alpha\lambda}^{-1}(\mathbf{ r}-\mathbf{r}^{\prime\prime})U_{\lambda\gamma}(\mathbf{r}^{\prime\prime}- \mathbf{r}^{\prime})=\delta_{\alpha\gamma}\delta(\mathbf{r}-\mathbf{r}^{ \prime}). \tag{60}\]
Now, as in the previous section, let us apply the shift \(\varphi\rightarrow\varphi+i\psi\) and pass to the trial Green's function \(G\):
\[\Xi=\int\frac{\mathcal{D}\Phi}{\mathcal{N}_{U}}e^{-\frac{\beta}{2} \sum\limits_{\alpha,\gamma}(\Phi_{\alpha}U_{\alpha\gamma}^{-1}\Phi_{\gamma})} \frac{C}{C_{0}}\int\frac{\mathcal{D}\varphi}{C}e^{-\frac{\beta}{2}\left( \varphi G^{-1}\varphi\right)}\times\\ \exp\bigg{[}-\frac{\beta}{2}\left(\varphi[G_{0}^{-1}-G^{-1}] \varphi\right)-i\beta(\varphi G_{0}^{-1}\psi)+\\ \frac{\beta}{2}(\psi G_{0}^{-1}\psi)+\sum_{\alpha}\bar{z}_{ \alpha}\int d\mathbf{r}e^{i\beta\Phi_{\alpha}+i\beta\varrho_{\alpha}\varphi- \beta\varrho_{\alpha}\psi-\beta u_{\alpha}}\bigg{]}. \tag{61}\]
Keeping only the first cumulant over the new Gaussian measure with the trial Green's function \(G\), we obtain the following
\[\Xi\approx\exp\left[\ln\frac{C}{C_{0}}-\frac{1}{2}tr\left(G[G_{0 }^{-1}-G^{-1}]\right)+\frac{\beta}{2}(\psi G_{0}^{-1}\psi)\right]\times\\ \int\frac{\mathcal{D}\Phi}{\mathcal{N}_{U}}\exp\left[-\frac{\beta }{2}\sum_{\alpha,\gamma}(\Phi_{\alpha}U_{\alpha\gamma}^{-1}\Phi_{\gamma})+ \sum_{\alpha}\bar{z}_{\alpha}\int d\mathbf{r}A_{\alpha}(\mathbf{r})e^{i\beta \Phi_{\alpha}-\beta\varrho_{\alpha}\psi-\beta u_{\alpha}}\right], \tag{62}\]
where \(\bar{z}_{\alpha}=z_{\alpha}e^{\frac{\beta}{2}U_{\alpha\alpha}(0)}\). By employing the approach based on the cluster expansion formulated in a recent paper by one of us[3], and assuming sufficiently small ranges of the potentials \(U_{\alpha\gamma}(\mathbf{r})\), we obtain the following approximation for the grand partition function
\[\Xi\approx\exp W[G;\psi], \tag{63}\]
where we have introduced the following auxiliary functional
\[W[G;\psi]=\frac{1}{2}\ln\frac{DetG}{DetG_{0}}-\frac{1}{2}tr\left(G[G_{0}^{-1}- G^{-1}]\right)+\frac{\beta}{2}(\psi G_{0}^{-1}\psi)+\beta\int d\mathbf{r}P\left( \{\bar{\mu}_{\alpha}\}\right) \tag{64}\]
with the local pressure of the reference fluid system, \(P\), which depends on the intrinsic chemical potentials of the ions
\[\bar{\mu}_{\alpha}=\mu_{\alpha}-\varrho_{\alpha}\psi-\frac{1}{2}\varrho_{ \alpha}G\varrho_{\alpha}-u_{\alpha}. \tag{65}\]
To ensure the self-consistency of the theory, it is necessary to determine appropriate "closures" for the functions \(\psi\) and \(G\). We choose these closures based on the extremum conditions for the functional \(W[G;\psi]\), that is,
\[\frac{\delta W}{\delta\psi(\mathbf{r})}=0,\ \frac{\delta W}{\delta G(\mathbf{r}, \mathbf{r}^{\prime})}=0, \tag{66}\]
which yield
\[\nabla^{2}\psi(\mathbf{r})=-\frac{1}{\varepsilon}\sum_{\gamma}\varrho_{\gamma }\bar{n}_{\gamma}(\mathbf{r}) \tag{67}\]
\[\left(-\varepsilon\nabla^{2}+\Sigma\right)G({\bf r},{\bf r}^{\prime})=\delta({ \bf r}-{\bf r}^{\prime}), \tag{68}\]
where the integral operator \(\Sigma\) in this case possesses the following kernel
\[\Sigma({\bf r},{\bf r}^{\prime})=\beta\sum_{\alpha}\int d{\bf x}\bar{n}_{ \alpha}({\bf x})\varrho_{\alpha}({\bf r}-{\bf x})\varrho_{\alpha}({\bf r}^{ \prime}-{\bf x}). \tag{69}\]
Note that the intrinsic chemical potentials, \(\bar{\mu}_{\alpha}\), can, in principle, be expressed in terms of the local concentrations using the relations
\[\bar{n}_{\alpha}({\bf r})=\frac{\delta\Omega}{\delta u_{\alpha}({\bf r})}= \frac{\partial P}{\partial\bar{\mu}_{\alpha}({\bf r})}. \tag{70}\]
The GTP takes the form
\[\Omega=-k_{B}TW[G;\psi]=-\int d{\bf r}\frac{\varepsilon(\nabla \psi)^{2}}{2}-k_{B}T\int d{\bf r}P_{0}({\bf r})+\\ k_{B}T\int d{\bf r}\int d{\bf r}^{\prime}\Sigma({\bf r},{ \bf r}^{\prime})\int\limits_{0}^{1}d\tau\left(G({\bf r},{\bf r}^{\prime}; \tau)-G({\bf r},{\bf r}^{\prime})\right), \tag{71}\]
where \(P_{0}({\bf r})=P(\{\bar{\mu}_{\alpha}({\bf r})\})\) is the local osmotic pressure.
It is important to note that this approach, unlike the variational method used in the previous section for the Coulomb gas, does not yield the optimal variational estimate for the GTP. However, it can be seen as a logical extension of the mean-field theory based on the saddle-point approximation [3], incorporating both electrostatic correlations and steric interactions between the ions. Similarly to modified PB equations [3], this approach enables the use of different reference systems, such as symmetric and asymmetric lattice gas [6; 14; 45], as well as a hard sphere mixture model [46].
For the symmetric lattice gas model, we have the following ansatz
\[P=\frac{k_{B}T}{v}\ln\left(1+\sum_{\gamma}e^{\beta\bar{\mu}_{\gamma}}\right), \tag{72}\]
so that the local concentrations are
\[\bar{n}_{\alpha}=\frac{1}{v}\frac{e^{\beta\bar{\mu}_{\alpha}}}{1+\sum_{\gamma }e^{\beta\bar{\mu}_{\gamma}}}. \tag{73}\]
The self-consistent field equations for this reference system have been obtained recently in ref. [35] within a different approach. For the case of the ideal gas reference system, for which
\(P=k_{B}T\sum_{\alpha}\Lambda_{\alpha}^{-3}e^{\beta\bar{\mu}_{\alpha}}\), the present theory transforms into the previously discussed variational field theory (see also [34; 38]).
Employing calculations similar to those in the previous section, it becomes apparent that, for the bulk phase of a Coulomb fluid with point-like ions, the theory with a symmetric lattice gas reference system leads to the following equation of state
\[P_{b}=-\frac{k_{B}T}{v}\ln\left(1-v\sum_{\gamma}n_{\gamma}\right)-\frac{k_{B}T \kappa^{3}}{24\pi}, \tag{74}\]
where \(n_{\gamma}\) are the bulk ion concentrations, satisfying the electroneutrality condition \(\sum_{\gamma}q_{\gamma}n_{\gamma}=0\).
Although the approach formulated above accounts for the steric interactions of the ions, it approximates the electrostatic contribution to the total osmotic pressure in the bulk phase by the Debye-Huckel expression (see eq. (74)).
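To make the difference between the two reference systems concrete, the sketch below compares the bulk equations of state (52) and (74) at increasing dimensionless densities \(v\sum_{\gamma}n_{\gamma}\); the reduced units, unit charges and unit cell volume are assumptions made for illustration only.

```python
import numpy as np

# Reduced units (assumed): k_B T = 1, eps = 1, unit charges, cell volume v = 1.
kT, eps, q, v = 1.0, 1.0, 1.0, 1.0

def bulk_pressures(n):
    """n: number density of each species of a 1:1 salt."""
    ntot  = 2.0 * n
    kappa = np.sqrt(q**2 * ntot / (kT * eps))          # eq. (51)
    p_el  = -kT * kappa**3 / (24.0 * np.pi)            # common DH correction
    p_id  = kT * ntot + p_el                           # eq. (52), ideal-gas reference
    p_lg  = -(kT / v) * np.log(1.0 - v * ntot) + p_el  # eq. (74), lattice-gas reference
    return p_id, p_lg

for n in (0.05, 0.1, 0.2, 0.3, 0.4):
    p_id, p_lg = bulk_pressures(n)
    print(f"v*n_tot = {2 * n * v:.2f}:  ideal ref. P = {p_id:.4f}"
          f"   lattice-gas P = {p_lg:.4f}")
```

The two expressions agree at low density, while the lattice-gas pressure grows much faster as \(v\sum_{\gamma}n_{\gamma}\) approaches unity, reflecting the steric repulsion built into the reference system.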
## IV Stress tensor
Now, we would like to discuss how to calculate the stress tensor from the GTP functional derived above. For this purpose, we apply the general covariant approach presented in our recent work [40]. In this approach, the stress tensor can be obtained using the following expression
\[\sigma_{ik}(\mathbf{r})=\frac{2}{\sqrt{g}}\frac{\delta\Omega}{\delta g_{ik}( \mathbf{r})}\bigg{|}_{g_{ik}=\delta_{ik}}, \tag{75}\]
where \(g_{ij}\) is the metric tensor and \(g=\det g_{ij}\) is its determinant.
We consider only the case of point-like charges of the ions. In this case, the self-consistent field equations can be written in the form
\[\nabla^{2}\psi(\mathbf{r})=-\frac{1}{\varepsilon}\sum_{\gamma}q_{\gamma}\bar {n}_{\gamma}(\mathbf{r}), \tag{76}\]
\[\varepsilon\left(-\nabla^{2}+\varkappa^{2}(\mathbf{r})\right)G(\mathbf{r}, \mathbf{r}^{\prime})=\delta(\mathbf{r}-\mathbf{r}^{\prime}), \tag{77}\]
where
\[\varkappa^{2}(\mathbf{r})=\frac{1}{k_{B}T\varepsilon}\sum_{\alpha}q_{\alpha}^{ 2}\bar{n}_{\alpha}(\mathbf{r}). \tag{78}\]
Before applying eq. (75), let us express the GTP, \(\Omega=-k_{B}TW[G;\psi]\), in general covariant form [40, 47]. Thus, we have
\[W[G;\psi]=W_{0}[G;\psi]+W_{1}[G;\psi], \tag{79}\]
where
\[W_{0}[G;\psi]=\beta\int d{\bf r}\sqrt{g}\frac{\varepsilon(\nabla\psi)^{2}}{2}+ \beta\int d{\bf r}\sqrt{g}P(\{\bar{\mu}_{\alpha}\}), \tag{80}\]
\[W_{1}[G;\psi]=\frac{1}{2}\biggl{(}tr\ln G-tr\ln G_{0}\biggr{)}-\frac{1}{2}tr \biggl{(}G\biggl{[}G_{0}^{-1}-G^{-1}\biggr{]}\biggr{)}, \tag{81}\]
with \(\bar{\mu}_{\alpha}({\bf r})=\mu_{\alpha}-q_{\alpha}\psi({\bf r})-q_{\alpha}^{2 }G({\bf r},{\bf r})/2\). Here, we consider only the case of \(u_{\alpha}({\bf r})=0\).
For the composition of two integral operators,
\[C=AB, \tag{82}\]
we have
\[C({\bf r},{\bf r}^{\prime})=\int d{\bf r}^{\prime\prime}\sqrt{g({\bf r}^{ \prime\prime})}A({\bf r},{\bf r}^{\prime\prime})B({\bf r}^{\prime\prime},{\bf r }^{\prime}). \tag{83}\]
The action of the operator \(A\) on a function \(f({\bf r})\) is determined by
\[Af({\bf r})=\int d{\bf r}^{\prime}\sqrt{g({\bf r}^{\prime})}A({\bf r},{\bf r}^ {\prime})f({\bf r}^{\prime}). \tag{84}\]
The trace variation is
\[\delta tr(A)=\int d{\bf r}\sqrt{g({\bf r})}\delta A({\bf r},{\bf r})+\frac{1}{ 2}\int d{\bf r}\sqrt{g({\bf r})}g^{ij}({\bf r})\delta g_{ij}({\bf r})A({\bf r},{\bf r}), \tag{85}\]
which can be rewritten as
\[\delta tr(A)=tr(\bar{\delta}A), \tag{86}\]
where we have introduced the infinitesimal operator \(\bar{\delta}A\) that has the kernel determined by the definition
\[\bar{\delta}A({\bf r},{\bf r}^{\prime})=\delta A({\bf r},{\bf r}^{\prime})+ \frac{1}{2}A({\bf r},{\bf r}^{\prime})g^{ij}({\bf r}^{\prime})\delta g_{ij}({ \bf r}^{\prime}). \tag{87}\]
Thus, we have
\[\delta tr(A)=\int d{\bf r}\sqrt{g({\bf r})}\bar{\delta}A({\bf r},{\bf r}). \tag{88}\]
To calculate the functional derivative with respect to the metric tensor, we take into account that
\[\frac{\delta W}{\delta g_{ik}({\bf r})}=\int d{\bf x}\left(\frac {\delta W}{\delta\psi({\bf x})}\right)_{G,g}\frac{\delta\psi({\bf x})}{\delta g _{ik}({\bf r})}+\\ \int d{\bf x}\int d{\bf x}^{\prime}\left(\frac{\delta W}{\delta G ({\bf x},{\bf x}^{\prime})}\right)_{\psi,g}\frac{\delta G({\bf x},{\bf x}^{ \prime})}{\delta g_{ik}({\bf r})}+\left(\frac{\delta W}{\delta g_{ik}({\bf r} )}\right)_{\psi,G}=\left(\frac{\delta W}{\delta g_{ik}({\bf r})}\right)_{ \psi,G}. \tag{89}\]
Thus, the variation over the metric tensor has to be performed at constant \(\psi\) and \(G\). Bearing this in mind, we can derive the variation of \(W_{0}\):
\[\delta W_{0}[G;\psi]=\beta\int d{\bf r}\sqrt{g}\delta g_{ik}\biggl{(}\frac{ \varepsilon}{2}g^{ik}g^{mn}\partial_{m}\psi\partial_{n}\psi+g^{ik}P\biggr{)}, \tag{90}\]
where we took into account that[48; 49]
\[\delta(\sqrt{g})=\frac{1}{2}\sqrt{g}g^{ij}\delta g_{ij},\quad\delta g^{ij}=-g^ {im}g^{jn}\delta g_{mn} \tag{91}\]
and \(\partial_{i}=\partial/\partial x_{i}\) is the partial coordinate derivative. Note that summation over repeated coordinate indices is implied. Thus, using the definition
\[\sigma^{(0)}_{ik}=-\frac{2}{\beta\sqrt{g}}\frac{\delta W_{0}[G;\psi]}{\delta g _{ik}}\biggr{|}_{g_{ik}=\delta_{ik}}, \tag{92}\]
we can get the first part of the stress tensor
\[\sigma^{(0)}_{ik}=\varepsilon\partial_{i}\psi\partial_{k}\psi-\frac{ \varepsilon}{2}\delta_{ik}\partial_{l}\psi\partial_{l}\psi-P\delta_{ik}. \tag{93}\]
Note that the latter expression coincides with the stress tensor in the mean-field approximation obtained in ref. [3].
The variation of the second part of the functional is
\[\delta W_{1}[G;\psi]=-\frac{1}{2}tr\biggl{(}G_{0}^{-1}\bar{\delta}G_{0}\biggr{)} -\frac{1}{2}tr\biggl{(}G\bar{\delta}G_{0}^{-1}\biggr{)}. \tag{94}\]
Further, we use the relations
\[\bar{\delta}G_{0}^{-1}G_{0}+G_{0}^{-1}\bar{\delta}G_{0}=0, \tag{95}\]
\[G_{0}^{-1}({\bf r^{\prime}},{\bf r})=-\varepsilon\Delta\delta({\bf r}-{\bf r^ {\prime}}),\quad\bar{\delta}G_{0}^{-1}({\bf r^{\prime}},{\bf r})=-\varepsilon \Delta^{\prime}\delta({\bf r}-{\bf r^{\prime}}), \tag{96}\]
where \(\Delta=\nabla^{2}\) is the Laplacian and \(\Delta^{\prime}\) is the differential operator, determined by
\[\Delta^{\prime}f=Df-\frac{1}{2}g^{mn}\delta g_{mn}\Delta f. \tag{97}\]
Thus, we can show that the variation is
\[\delta W_{1}[G;\psi]=\frac{\varepsilon}{2}tr(\Delta^{\prime}\bar{G}), \tag{98}\]
where
\[\bar{G}=G-G_{0}, \tag{99}\]
and the special differential operator
\[Df=\frac{1}{2\sqrt{g}}\partial_{i}\bigg{(}\sqrt{g}g^{mn}\delta g_{mn}g^{ij} \partial_{j}f\bigg{)}-\frac{1}{\sqrt{g}}\partial_{i}\bigg{(}\sqrt{g}\delta g_{mn }g^{im}g^{jn}\partial_{j}f\bigg{)} \tag{100}\]
has been introduced.
After the same calculations as in [47], we obtain
\[\sigma^{(1)}_{ij}(\mathbf{r})=\frac{\varepsilon k_{B}T}{2}\lim_{\mathbf{r}^{ \prime}\rightarrow\mathbf{r}}\hat{D}_{ij}\bar{G}(\mathbf{r},\mathbf{r}^{ \prime}), \tag{101}\]
where the following differential operator
\[\hat{D}_{ij}=\delta_{ij}\partial_{k}\partial_{k}+\delta_{ij}\partial_{k} \partial_{k}^{\prime}-\partial_{i}\partial_{j}^{\prime}-\partial_{j}\partial _{i}^{\prime} \tag{102}\]
has been introduced.
Now, it is necessary to prove that the tensor
\[\sigma_{ik}=\sigma^{(0)}_{ij}+\sigma^{(1)}_{ij} \tag{103}\]
is the stress tensor. For this purpose, we have to show that its divergence is equal to zero, i.e. \(\partial_{i}\sigma_{ik}=0\).
Using eqs. (76) and (77), we obtain the following
\[\partial_{i}\sigma^{(1)}_{ik}(\mathbf{r})=\frac{\varepsilon k_{B}T}{2}G( \mathbf{r},\mathbf{r})\partial_{k}\varkappa^{2}(\mathbf{r}). \tag{104}\]
Using the same equations, we obtain the following results
\[\partial_{i}\sigma^{(0)}_{ik}(\mathbf{r})=\frac{\varepsilon k_{B}T}{2} \varkappa^{2}(\mathbf{r})\partial_{k}G(\mathbf{r},\mathbf{r}). \tag{105}\]
Thus, we get the following
\[\partial_{i}\sigma_{ik}=\frac{\varepsilon k_{B}T}{2}\partial_{k}(\varkappa^{ 2}(\mathbf{r})G(\mathbf{r},\mathbf{r})). \tag{106}\]
It can be observed that the approximate variational functional \(\Omega=-k_{B}TW[G;\psi]\) does not yield a divergenceless stress tensor. This is because the adopted approximation does not guarantee consistency with the mechanical equilibrium, which must be derived from the exact GTP functional. However, it is surprising that equation (106) can still be reformulated in the form of a conservation law
\[\partial_{i}T_{ik}=0, \tag{107}\]
where
\[T_{ik}=\sigma_{ik}-\frac{\varepsilon k_{B}T}{2}\varkappa^{2}({\bf r})G({\bf r},{ \bf r})\delta_{ik}. \tag{108}\]
Bearing in mind that with the use of the self-consistent field equations (76), (77)
\[\sigma_{ik}^{(1)}({\bf r})=\frac{\varepsilon}{2}\biggl{(}k_{B}T\varkappa^{2}({ \bf r})G({\bf r},{\bf r})+{\cal D}_{ll}({\bf r})\biggr{)}\delta_{ik}- \varepsilon{\cal D}_{ik}({\bf r}), \tag{109}\]
the tensor \(T_{ik}\) can be eventually presented as
\[T_{ik}=-P\left(\left\{\bar{\mu}_{\alpha}\right\}\right)\delta_{ik}+\varepsilon \left(\partial_{i}\psi\partial_{k}\psi-\frac{1}{2}\delta_{ik}\partial_{l}\psi \partial_{l}\psi\right)+\varepsilon\left(\frac{1}{2}{\cal D}_{ll}({\bf r}) \delta_{ik}-{\cal D}_{ik}({\bf r})\right), \tag{110}\]
where
\[{\cal D}_{ik}({\bf r})=k_{B}T\lim_{{\bf r}^{\prime}\to{\bf r}}\partial_{i} \partial_{k}^{\prime}\bar{G}({\bf r},{\bf r}^{\prime}) \tag{111}\]
is the renormalized autocorrelation function of the electric field fluctuations. The divergenceless tensor \(T_{ik}\) can be interpreted as the total stress tensor, which is consistent with the self-consistent field equations (76) and (77). In eq. (110) the first term in the right-hand side represents the hydrostatic isotropic stress tensor; the second term represents the standard Maxwell stress tensor; the third term represents the contribution of fluctuations in the local electric field around the mean-field configuration. Eq. (110) is the main result of this paper.
In the presence of the external fields, i.e. when \(\bar{\mu}_{\alpha}({\bf r})=\mu_{\alpha}-q_{\alpha}\psi({\bf r})-q_{\alpha}^{ 2}G({\bf r},{\bf r})/2-u_{\alpha}({\bf r})\), tensor (110) yields
\[\partial_{i}T_{ik}-\sum_{\alpha}\bar{n}_{\alpha}\partial_{k}u_{\alpha}=0. \tag{112}\]
The latter equation represents the mechanical equilibrium condition of the Coulomb fluid in the presence of external fields. We will use this equation below to obtain the expression for the disjoining pressure. Eq. (112) is nothing but the hydrostatic equilibrium equation for Coulomb fluids.
## V Disjoining pressure
Let us consider the case of a Coulomb fluid that is confined in a slit-like pore with infinite charged walls. We assume that the walls create external potentials
\[u_{\alpha}(z)=\phi_{\alpha}(z)+\phi_{\alpha}(H-z), \tag{113}\]
where \(\phi_{\alpha}(z)\) is the single-wall external potential of the ion of the \(\alpha\)th kind at point with coordinate \(z\in[0,H]\). The identical walls are located at \(z=0\) and \(z=H\). The disjoining pressure is determined by
\[\Pi=-\frac{\partial(\Omega/{\cal A})}{\partial H}-P_{b}, \tag{114}\]
where \({\cal A}\) is the total area of the walls. Taking into account that the functional \(\Omega=-k_{B}TW[G;\psi]\) attains its extremum at the functions satisfying the self-consistent field equations (76) and (77), we obtain
\[\Pi=-\sum_{\alpha}\int\limits_{-\infty}^{\infty}dz\bar{n}_{\alpha}(z)\phi^{ \prime}_{\alpha}(z)-P_{b}, \tag{115}\]
where we used the relation \(\bar{n}_{\alpha}(H-z)=\bar{n}_{\alpha}(z)\), which stems from the symmetry imposed by the identical walls. Integrating the mechanical equilibrium condition,
\[\frac{dT_{zz}}{dz}-\sum_{\alpha}\bar{n}_{\alpha}(z)u^{\prime}_{\alpha}(z)=0, \tag{116}\]
from \(z=H/2\) to \(z=\infty\) and taking into account that \(T_{zz}(\infty)=0\), we obtain
\[-T_{zz}(H/2)=\sum_{\alpha}\int\limits_{H/2}^{\infty}dz\bar{n}_{\alpha}(z)u^{ \prime}_{\alpha}(z)=\sum_{\alpha}\int\limits_{H/2}^{\infty}dz\bar{n}_{\alpha}( z)\phi^{\prime}_{\alpha}(z)-\sum_{\alpha}\int\limits_{-\infty}^{H/2}dz\bar{n}_{ \alpha}(z)\phi^{\prime}_{\alpha}(z). \tag{117}\]
Therefore,
\[\Pi=-\sum_{\alpha}\int\limits_{-\infty}^{\infty}dz\bar{n}_{\alpha}(z)\phi^{ \prime}_{\alpha}(z)-P_{b}=-\sum_{\alpha}\int\limits_{-\infty}^{H/2}dz\bar{n}_ {\alpha}(z)\phi^{\prime}_{\alpha}(z)-\sum_{\alpha}\int\limits_{H/2}^{\infty} dz\bar{n}_{\alpha}(z)\phi^{\prime}_{\alpha}(z)-P_{b}=\\ -T_{zz}(H/2)-P_{b}-2\sum_{\alpha}\int\limits_{H/2}^{\infty}dz\bar{ n}_{\alpha}(z)\phi^{\prime}_{\alpha}(z). \tag{118}\]
Thus, taking into account that \(\bar{n}_{\alpha}(z)=0\) at \(z\geq H\) (impermeable wall), we eventually obtain the following result
\[\Pi=P_{n}-P_{b}-2\sum_{\alpha}\int\limits_{H/2}^{H}dz\bar{n}_{\alpha}(z)\phi^{ \prime}_{\alpha}(z), \tag{119}\]
where \(P_{n}=-T_{zz}(H/2)\) is the normal pressure at the middle of the pore. If the pore is sufficiently large and the range of the wall potential is sufficiently small, we can neglect the integral in the right-hand side of eq. (119). Note that the same expression for the disjoining pressure has been obtained within the pure mean-field theory in ref. [3].
Now, we would like to specify the normal stress at the pore midpoint. For a slit-like pore, the Green's function can be represented as
\[G({\bf r},{\bf r}^{\prime})=G(\mathbf{\rho}-\mathbf{\rho}^{ \prime}|z,z^{\prime}), \tag{120}\]
where \(\mathbf{\rho}\) is the two-dimensional vector lying in the plane of the pore walls and the \(z\)-axis is perpendicular to them. Let us consider the two-dimensional Fourier transform [34, 35, 38]
\[G(\mathbf{\rho}-\mathbf{\rho}^{\prime}|z,z^{\prime})=\int \frac{d^{2}{\bf q}}{(2\pi)^{2}}e^{-i{\bf q}(\mathbf{\rho}-\mathbf{\rho}^{\prime})}G(q|z,z^{\prime}), \tag{121}\]
where \(G(q|z,z^{\prime})\) is an even function of \({\bf q}\) depending only on the modulus \(q=|{\bf q}|\). The off-diagonal elements of the tensor \({\cal D}_{ik}(z)\) vanish,
\[{\cal D}_{xy}(z)={\cal D}_{xz}(z)={\cal D}_{yz}(z)=0, \tag{122}\]
whereas its diagonal elements are
\[{\cal D}_{xx}(z)=\int\frac{d^{2}{\bf q}}{(2\pi)^{2}}q_{x}^{2}Q(q,z),\ {\cal D}_{yy}(z)=\int \frac{d^{2}{\bf q}}{(2\pi)^{2}}q_{y}^{2}Q(q,z), \tag{123}\]
\[{\cal D}_{zz}(z)=\int\frac{d^{2}{\bf q}}{(2\pi)^{2}}{\cal D}_{zz}(q,z), \tag{124}\]
where
\[{\cal D}_{zz}(q,z)=k_{B}T\lim_{z^{\prime}\to z}\partial_{z}\partial_{z^{ \prime}}\bar{G}(q|z,z^{\prime}), \tag{125}\]
\[Q(q,z)=k_{B}T\lim_{z^{\prime}\to z}\bar{G}(q|z,z^{\prime}), \tag{126}\]
so that
\[{\cal D}_{ll}(z)=\int\frac{d^{2}{\bf q}}{(2\pi)^{2}}\biggl{(}q^{2}Q(q,z)+{ \cal D}_{zz}(q,z)\biggr{)}. \tag{127}\]
The Fourier-image of the Green's function, \(G(q|z,z^{\prime})\), can be found from the equation
\[\varepsilon\biggl{(}-\partial_{z}^{2}+q^{2}+\varkappa^{2}(z)\biggr{)}G(q|z,z^ {\prime})=\delta(z-z^{\prime}). \tag{128}\]
Therefore, the normal pressure in eq. (119) is
\[P_{n}=P_{m}+\varepsilon\left({\cal D}_{zz}\left(\frac{H}{2}\right)-\frac{1}{2 }{\cal D}_{ll}\left(\frac{H}{2}\right)\right), \tag{129}\]
where \(P_{m}=P_{0}(H/2)\) is the osmotic pressure of the ions at the pore middle and we have used that \(\psi^{\prime}(H/2)=0\).
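Equations (123)-(129) reduce the fluctuation part of the normal pressure to the Fourier-space Green's function \(G(q|z,z^{\prime})\). As a minimal numerical sketch of this last ingredient, the code below solves eq. (128) by a finite-difference discretization, assuming (purely for illustration) a uniform screening parameter \(\varkappa(z)=\kappa\) inside the pore and ideally conducting walls modeled by \(G(q|0,z^{\prime})=G(q|H,z^{\prime})=0\); the equal-point value at the pore middle is then compared with the bulk limit (130).

```python
import numpy as np

# Finite-difference sketch for eq. (128); uniform kappa and perfectly
# conducting walls (G = 0 at z = 0 and z = H) are assumptions for illustration.
eps, kappa, H, q = 1.0, 1.0, 10.0, 0.5          # reduced units (assumed)
N = 1001
z = np.linspace(0.0, H, N)
h = z[1] - z[0]

# Operator eps*(-d^2/dz^2 + q^2 + kappa^2) on the interior grid (Dirichlet walls).
main = eps * (2.0 / h**2 + q**2 + kappa**2) * np.ones(N - 2)
off  = -eps / h**2 * np.ones(N - 3)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

i0  = (N - 1) // 2                               # source point z' = H/2
rhs = np.zeros(N - 2)
rhs[i0 - 1] = 1.0 / h                            # discrete delta(z - z')
g = np.linalg.solve(A, rhs)                      # column G(q|z, H/2)

G_mid  = g[i0 - 1]                                        # G(q|H/2, H/2)
G_bulk = 1.0 / (2.0 * eps * np.sqrt(q**2 + kappa**2))     # eq. (130) at z = z'
print(f"G(q|H/2,H/2) = {G_mid:.5f},  bulk limit = {G_bulk:.5f}")
```

Repeating this for a grid of \(q\) values (and subtracting the corresponding \(G_{0}\)) gives the quantities (123)-(127) and hence the fluctuation contribution to the normal pressure (129).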
To conclude this section, we would like to estimate the asymptotic behavior, as \(H\to\infty\), of the second term on the right-hand side of eq. (129), which corresponds to the electric field fluctuations. Taking into account that in this limit \(\varkappa(z)\simeq\kappa\) and that [35]
\[G(q|z,z^{\prime})\simeq\frac{e^{-\sqrt{q^{2}+\kappa^{2}}|z-z^{\prime}|}}{2 \varepsilon\sqrt{q^{2}+\kappa^{2}}},\ G_{0}(q|z,z^{\prime})\simeq\frac{e^{-q|z -z^{\prime}|}}{2\varepsilon q}, \tag{130}\]
we obtain
\[\varepsilon\left(\mathcal{D}_{zz}\left(\frac{H}{2}\right)-\frac{ 1}{2}\mathcal{D}_{ll}\left(\frac{H}{2}\right)\right)\simeq\\ \frac{k_{B}T}{8\pi}\int\limits_{0}^{\infty}dqq\left(2q-\sqrt{q^{2 }+\kappa^{2}}-\frac{q^{2}}{\sqrt{q^{2}+\kappa^{2}}}\right)=-\frac{k_{B}T \kappa^{3}}{24\pi}, \tag{131}\]
i.e., as it should be, the Debye-Huckel expression.
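The \(q\)-integral in (131) is elementary but easy to mistype, so the following short check (reduced units, \(\kappa=1\) assumed) confirms the stated value numerically; the integrand is rewritten algebraically to avoid cancellation at large \(q\).

```python
import numpy as np
from scipy.integrate import quad

kT, kappa = 1.0, 1.0                     # reduced units (assumed)

def integrand(q):
    s = np.sqrt(q**2 + kappa**2)
    # 2q - s - q^2/s == -kappa^4 / (s*(s+q)^2): the bracket of eq. (131),
    # rewritten to avoid loss of precision at large q.
    return -q * kappa**4 / (s * (s + q)**2)

val, _ = quad(integrand, 0.0, np.inf)
fluct = kT / (8.0 * np.pi) * val
print(f"fluctuation term = {fluct:.6f},  -kT*kappa^3/(24*pi) = {-kT * kappa**3 / (24 * np.pi):.6f}")
```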
## VI Discussion
As explained above, the contribution of electrostatics to the bulk osmotic pressure is described by the Debye-Huckel expression, which is expressed in terms of the Debye screening length, \(r_{D}=\kappa^{-1}\) (see equation (74)). However, if we consider the self-consistent field equation (76) for the electrostatic potential, we can derive another screening length in the linear approximation (weak electrostatics), which depends on the choice of the reference system (see also refs. [10; 50]). This discrepancy arises because we approximate the GTP by using the local osmotic pressure, which depends on the chemical potentials that in turn include the self-energies [31; 32; 33] of the ions. As a result, the current form of the variational field theory predicts the weak electrostatic coupling limit for the bulk osmotic pressure. However, a recent study [51] has shown that, in the case of multivalent ions, a symmetric lattice gas reference system can qualitatively describe the attractive force between like-charged dielectric walls in an electrolyte solution at small interwall distances.
We also note that the formulation of the theory does not treat dielectric discontinuities and thus can be applied to the modeling of electric double layers at metal-electrolyte interfaces. However, the theory can be directly generalized to take into account dielectric heterogeneity (i.e., a coordinate-dependent dielectric permittivity) at interfaces with such macroscopic dielectrics as membranes [52; 53; 1]. Comprehensive examples of such variational field theories that account for dielectric heterogeneity can be found in papers [32; 33; 34; 38]. We also note that the present theory does not take into account the static polarizability and dipole moment of the ions [7; 15; 16] and of the polar solvent molecules [19; 20; 21]. They can be taken into account in the same manner as in the recently presented pure mean-field theories (see Models II and III as classified in [3]).
## VII Conclusions
In conclusion, this paper presents a field-theoretical approach based on variational field theory, allowing us to derive the grand thermodynamic potential of an inhomogeneous Coulomb fluid. Considering the electrostatic potential and a trial electrostatic Green's function, we have obtained self-consistent field equations serving as the Euler-Lagrange equations for the acquired grand thermodynamic potential. Building upon the results of the previous work, formulated within the mean-field framework, we have extended our approach to incorporate the electrostatic correlations of the ions. This is achieved using a general covariant approach [40], resulting in a total stress tensor that accounts for the electrostatic correlations. Notably, an additional term that depends on the autocorrelation function of the local electric field fluctuations is obtained. Utilizing the derived total stress tensor and applying the mechanical equilibrium condition, we have established a general expression for the disjoining pressure of confined Coulomb fluids in a slit-like pore geometry. The derivation of this equation enables us to calculate the disjoining pressure of a Coulomb fluid within a slit-like pore with conductive walls directly, by numerically solving the self-consistent field equations, eliminating the need for a computationally intensive differentiation of the grand thermodynamic potential with respect to the pore thickness. These numerical calculations, for physically relevant reference fluid systems such as hard-sphere mixtures and asymmetric lattice gases, will be the focus of our future publications.
**Data availability statement.** The data supporting the findings of this study are available in the article.
**Acknowledgements.** This work is an output of a research project implemented as part of the Basic Research Program at the National Research University Higher School of Economics (HSE University). We would like to express our gratitude to the anonymous reviewers for their valuable comments, which have helped us make significant improvements to the text.
Appendix A An alternative method of calculating the logarithm of the ratio of functional determinants.
In this appendix, we demonstrate an alternative approach to calculating the logarithm of the ratio of functional determinants, different from the charging method [34; 41]. To this end, we establish the following chain of equalities:
\[\ln\frac{DetG}{DetG_{0}}=tr\ln G-tr\ln G_{0}=\sum_{n}\ln\frac{v_{n} }{u_{n}}=\int\limits_{0}^{\infty}ds\sum_{n}\left(\frac{1}{s+u_{n}}-\frac{1}{s+ v_{n}}\right)\\ =\int d\mathbf{r}\int\limits_{0}^{\infty}ds\sum_{n}\left(\frac{1} {s+u_{n}}\psi_{n}(\mathbf{r})\psi_{n}(\mathbf{r})-\frac{1}{s+v_{n}}\phi_{n}( \mathbf{r})\phi_{n}(\mathbf{r})\right)\\ =\int d\mathbf{r}\int\limits_{0}^{\infty}ds\left(\left(s+G^{-1} \right)^{-1}-\left(s+G_{0}^{-1}\right)^{-1}\right)\delta(\mathbf{r}-\mathbf{r} ^{\prime})\bigg{|}_{\mathbf{r}=\mathbf{r}^{\prime}}\\ =\int d\mathbf{r}\int\limits_{0}^{\infty}ds\left(\mathcal{G}( \mathbf{r},\mathbf{r}|s)-\mathcal{G}_{0}(\mathbf{r},\mathbf{r}|s)\right)=\int \limits_{0}^{\infty}ds\left(tr\mathcal{G}(s)-tr\mathcal{G}_{0}(s)\right), \tag{10}\]
where we have introduced the auxiliary Green's functions, \(\mathcal{G}\) and \(\mathcal{G}_{0}\), determined by the following equations
\[(G^{-1}+s)\mathcal{G}(\mathbf{r},\mathbf{r}^{\prime}|s)=\delta(\mathbf{r}- \mathbf{r}^{\prime}) \tag{11}\]
and
\[(G_{0}^{-1}+s)\mathcal{G}_{0}(\mathbf{r},\mathbf{r}^{\prime}|s)=\delta( \mathbf{r}-\mathbf{r}^{\prime}) \tag{12}\]
as well as the eigenvalues \(u_{n}\) and \(v_{n}\) of the operators \(G^{-1}\) and \(G_{0}^{-1}\), respectively, and the corresponding orthonormal eigenfunctions \(\psi_{n}(\mathbf{r})\) and \(\phi_{n}(\mathbf{r})\). We have also used the functional completeness conditions for the eigenfunctions, that is
\[\sum_{n}\phi_{n}(\mathbf{r})\phi_{n}(\mathbf{r}^{\prime})=\sum_{n}\psi_{n}( \mathbf{r})\psi_{n}(\mathbf{r}^{\prime})=\delta(\mathbf{r}-\mathbf{r}^{\prime }). \tag{13}\]
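A finite-dimensional analogue may help to see the structure of this chain of equalities: for positive definite matrices \(G\) and \(G_{0}\) (a toy setting assumed only for illustration) one has \(\ln(\det G/\det G_{0})=\int_{0}^{\infty}ds\,\mathrm{tr}\big{[}(s+G^{-1})^{-1}-(s+G_{0}^{-1})^{-1}\big{]}\), which the snippet below verifies numerically.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix (toy stand-in for G, G0)."""
    M = rng.normal(size=(n, n))
    return M @ M.T + n * np.eye(n)

n = 5
G, G0 = random_spd(n), random_spd(n)
Ginv, G0inv, I = np.linalg.inv(G), np.linalg.inv(G0), np.eye(n)

lhs = np.linalg.slogdet(G)[1] - np.linalg.slogdet(G0)[1]
rhs, _ = quad(lambda s: np.trace(np.linalg.inv(s * I + Ginv)
                                 - np.linalg.inv(s * I + G0inv)),
              0.0, np.inf, limit=200)
print(f"ln det G - ln det G0 = {lhs:.6f},  integral = {rhs:.6f}")
```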
|
2305.08511 | SAT-Based PAC Learning of Description Logic Concepts | We propose bounded fitting as a scheme for learning description logic
concepts in the presence of ontologies. A main advantage is that the resulting
learning algorithms come with theoretical guarantees regarding their
generalization to unseen examples in the sense of PAC learning. We prove that,
in contrast, several other natural learning algorithms fail to provide such
guarantees. As a further contribution, we present the system SPELL which
efficiently implements bounded fitting for the description logic
$\mathcal{ELH}^r$ based on a SAT solver, and compare its performance to a
state-of-the-art learner. | Balder ten Cate, Maurice Funk, Jean Christoph Jung, Carsten Lutz | 2023-05-15T10:20:31Z | http://arxiv.org/abs/2305.08511v1 | # SAT-Based PAC Learning of Description Logic Concepts
###### Abstract
We propose _bounded fitting_ as a scheme for learning description logic concepts in the presence of ontologies. A main advantage is that the resulting learning algorithms come with theoretical guarantees regarding their generalization to unseen examples in the sense of PAC learning. We prove that, in contrast, several other natural learning algorithms fail to provide such guarantees. As a further contribution, we present the system SPELL which efficiently implements bounded fitting for the description logic \(\mathcal{ELH}^{r}\) based on a SAT solver, and compare its performance to a state-of-the-art learner.
## 1 Introduction
In knowledge representation, the manual curation of knowledge bases (KBs) is time consuming and expensive, making learning-based approaches to knowledge acquisition an attractive alternative. We are interested in description logics (DLs) where _concepts_ are an important class of expressions, used for querying KBs and also as central building blocks for ontologies. The subject of learning DL concepts from labeled data examples has received great interest, resulting in various implemented systems such as DL-Learner, DL-Foil, and YINYANG [1, 1, 12]. These systems take a set of positively and negatively labeled examples and an ontology \(\mathcal{O}\), and try to construct a concept that fits the examples w.r.t. \(\mathcal{O}\). The related _fitting problem_, which asks to decide the existence of a fitting concept, has also been studied intensely [1, 13, 14].
The purpose of this paper is to propose a new approach to concept learning in DLs that we call _bounded fitting_, inspired by both bounded model checking as known from systems verification [1] and by Occam algorithms from computational learning theory [1]. The idea of bounded fitting is to search for a fitting concept of bounded size, iteratively increasing the size bound until a fitting is found. This approach has two main advantages, which we discuss in the following.
First, it comes with formal guarantees regarding the generalization of the returned concept from the training data to previously unseen data. This is formalized by Valiant's framework of _probably approximately correct (PAC) learning_[23]. Given sufficiently many data examples sampled from an unknown distribution, bounded fitting returns a concept that with high probability \(\delta\) has a classification error bounded by some small \(\epsilon\). It is well-known that PAC learning is intimately linked to Occam algorithms which guarantee to find a hypothesis of small size [1, 1]. By design, algorithms following the bounded fitting paradigm are Occam, and as a consequence the number of examples needed for generalization depends only linearly on \(1/\delta\), \(1/\epsilon\), and the size of the target concept to be learned. This generalization guarantee holds independently of the DL used to formulate concepts and ontologies. In contrast, no formal generalization guarantees have been established for DL concept learning approaches.
The second advantage is that, in important cases, bounded fitting enables learning based on SAT solvers and thus leverages the practical efficiency of these systems. We consider ontologies formulated in the description logic \(\mathcal{ELH}^{r}\) and concepts formulated in \(\mathcal{EL}\), which may be viewed as a core of the ontology language OWL 2 EL. In this case, the _size-restricted fitting problem_, which is defined like the fitting problem except that the maximum size of fitting concepts to be considered is given as an additional input (in unary), is NP-complete; it is thus natural to implement bounded fitting using a SAT solver. For comparison, we mention that the unbounded fitting problem is ExpTime-complete in this case [13].
As a further contribution of the paper, we analyze the generalization ability of other relevant approaches to constructing fitting \(\mathcal{EL}\)-concepts. We start with algorithms that return fittings that are 'prominent' from a logical perspective in that they are most specific or most general or of minimum quantifier depth among all fittings. Algorithms with such characteristics and their applications are discussed in [13]. Notably, constructing fittings via direct products of positive examples yields most specific fittings [1, 14]. Our result is that, even without ontologies, these types of algorithms are not _sample-efficient_, that is, no polynomial amount of positive and negative examples is sufficient to achieve generalization in the PAC sense.
We next turn to algorithms based on so-called downward refinement operators which underlie all implemented DL learning systems that we are aware of. We consider two natural such
operators that are rather similar to one another and combine them with a breadth-first search strategy. The first operator can be described as exploring 'most-general specializations' of the current hypotheses and the second one does the same, but is made 'artificially Occam' (with, most likely, a negative impact on practicality). We prove that while the first operator does not lead to a sample-efficient algorithm (even without ontologies), the second one does. This leaves open whether or not implemented systems based on refinement operators admit generalization guarantees, as they implement complex heuristics and optimizations.
As our final contribution we present SPELL, a SAT-based system that implements bounded fitting of \(\mathcal{EL}\)-concepts under \(\mathcal{ELH}^{r}\)-ontologies. We evaluate SPELL on several datasets and compare it to the only other available learning system for \(\mathcal{EL}\) that we are aware of, the _\(\mathcal{EL}\) tree learner (ELTL)_ incarnation of the _DL-Learner_ system [1]. We find that the running time of SPELL is almost always significantly lower than that of ELTL. Since, as we also show, it is the size of the target concept that has the most impact on the running time, this means that SPELL can learn larger target queries than ELTL. We also analyze the relative strengths and weaknesses of the two approaches, identifying classes of inputs on which one of the systems performs significantly better than the other one. Finally, we report initial experiments regarding generalization, where both systems generalize well to unseen data, even on very small samples. While this is expected for SPELL, for ELTL it may be due to the fact that some of the heuristics prefer fittings of small size, which might make ELTL an Occam algorithm.
Proof details are provided in the appendix.
Related WorkCohen and Hirsh identified a fragment of the early DL CLASSIC that admits sample-efficient PAC learning, even in polynomial time [1]. For several DLs such as \(\mathcal{EL}\) and \(\mathsf{CLASSIC}\), concepts are learnable in polynomial time in Angluin's framework of exact learning with membership and equivalence queries [14, 15, 16, 17]. The algorithms can be transformed in a standard way into sample-efficient polynomial time PAC learning algorithms that, however, additionally use membership queries to an oracle [1]. It is known that sample-efficient PAC learning under certain assumptions implies the existence of Occam algorithms [1]. These assumptions, however, do not apply to the learning tasks studied here.
## 2 Preliminaries
Concepts, Ontologies, Queries.Let \(\mathsf{N}_{\mathsf{C}}\), \(\mathsf{N}_{\mathsf{R}}\), and \(\mathsf{N}_{\mathsf{I}}\) be countably infinite sets of _concept names_, _role names_, and _individual names_, respectively. An _\(\mathcal{EL}\)-concept_ is formed according to the syntax rule
\[C,D::=\top\mid A\mid C\sqcap D\mid\exists r.C\]
where \(A\) ranges over \(\mathsf{N}_{\mathsf{C}}\) and \(r\) over \(\mathsf{N}_{\mathsf{R}}\). A concept of the form \(\exists r.C\) is called an _existential restriction_ and the _quantifier depth_ of a concept is the maximum nesting depth of existential restrictions in it. An _\(\mathcal{ELH}^{r}\)-ontology_\(\mathcal{O}\) is a finite set of _concept inclusions (CIs)_\(C\sqsubseteq D\), _role inclusions_\(r\sqsubseteq s\), and _range assertions_\(\mathsf{ran}(r)\sqsubseteq C\) where \(C\) and \(D\) range over \(\mathcal{EL}\)-concepts and \(r,s\) over role names. An _\(\mathcal{EL}\)-ontology_ is an \(\mathcal{ELH}^{r}\)-ontology that uses neither role inclusions nor range assertions. We also sometimes mention _\(\mathcal{ELI}\)-concepts_ and _\(\mathcal{ELI}\)-ontologies_, which extend their \(\mathcal{EL}\)-counterparts with inverse roles \(r^{-}\) that can be used in place of role names. See [1] for more information. A _database_\(\mathcal{D}\) (also called _ABox_ in a DL context) is a finite set of _concept assertions_\(A(a)\) and _role assertions_\(r(a,b)\) where \(A\in\mathsf{N}_{\mathsf{C}}\), \(r\in\mathsf{N}_{\mathsf{R}}\), and \(a,b\in\mathsf{N}_{\mathsf{I}}\). We use \(\mathsf{adom}(\mathcal{D})\) to denote the set of individual names that are used in \(\mathcal{D}\). A _signature_ is a set of concept and role names, in this context uniformly referred to as _symbols_. For any syntactic object \(O\), such as a concept or an ontology, we use \(\mathsf{sig}(O)\) to denote the set of symbols used in \(O\) and \(||O||\) to denote the _size_ of \(O\), that is, the number of symbols used to write \(O\) encoded as a word over a finite alphabet, with each occurrence of a concept or role name contributing a single symbol.
The semantics is defined in terms of _interpretations_\(\mathcal{I}=(\Delta^{\mathcal{I}},\,^{\mathcal{I}})\) where \(\Delta^{\mathcal{I}}\) is the _domain_ of \(\mathcal{I}\) and \(\cdot^{\mathcal{I}}\) assigns a set \(A^{\mathcal{I}}\subseteq\Delta^{\mathcal{I}}\) to every \(A\in\mathsf{N}_{\mathsf{C}}\) and a binary relation \(r^{\mathcal{I}}\subseteq\Delta^{\mathcal{I}}\times\Delta^{\mathcal{I}}\) to every \(r\in\mathsf{N}_{\mathsf{R}}\). The _extension_\(C^{\mathcal{I}}\) of \(\mathcal{EL}\)-concepts \(C\) is then defined as usual [1]. An interpretation \(\mathcal{I}\)_satisfies_ a concept or role inclusion \(\alpha\sqsubseteq\beta\) if \(\alpha^{\mathcal{I}}\subseteq\beta^{\mathcal{I}}\), a range assertion \(\mathsf{ran}(r)\sqsubseteq C\) if the projection of \(r^{\mathcal{I}}\) to the second component is contained in \(C^{\mathcal{I}}\), a concept assertion \(A(a)\) if \(a\in A^{\mathcal{I}}\), and a role assertion \(r(a,b)\) if \((a,b)\in r^{\mathcal{I}}\). We say that \(\mathcal{I}\) is a _model_ of an ontology/database if it satisfies all inclusions/assertions in it.
An \(\mathcal{EL}\)-concept \(C\) can be viewed as an _\(\mathcal{EL}\)-query (ELQ)_\(q\), as follows. Let \(\mathcal{D}\) be a database and \(\mathcal{O}\) an \(\mathcal{ELH}^{r}\)-ontology. Then \(a\in\mathsf{adom}(\mathcal{D})\) is an _answer_ to \(q\) on \(\mathcal{D}\) w.r.t. \(\mathcal{O}\) if \(a\in C^{\mathcal{I}}\) for all models \(\mathcal{I}\) of \(\mathcal{D}\) and \(\mathcal{O}\). In a similar way, we may view \(\mathcal{ELI}\)-concepts as _\(\mathcal{ELI}\)-queries (ELIQs)_. We will from now on mostly view \(\mathcal{EL}\)-concepts as ELQs. This does not, however, restrict their use, which may be as actual queries or as concepts used as building blocks for ontologies.
An _ontology-mediated query (OMQ) language_ is a pair \((\mathcal{L},\mathcal{Q})\) with \(\mathcal{L}\) an ontology language and \(\mathcal{Q}\) a query language, such as \((\mathcal{ELH}^{r},\mathsf{ELQ})\) and \((\mathcal{ELI},\mathsf{ELIQ})\). For a query language \(\mathcal{Q}\) and signature \(\Sigma\), we use \(\mathcal{Q}_{\Sigma}\) to denote the set of all queries \(q\in\mathcal{Q}\) with \(\mathsf{sig}(q)\subseteq\Sigma\). All query languages considered in this paper are unary, that is, they return a subset of \(\mathsf{adom}(\mathcal{D})\) as answers. We use \(q(\mathcal{D}\cup\mathcal{O})\) to denote the set of answers to \(q\) on \(\mathcal{D}\) w.r.t. \(\mathcal{O}\). For an \(\mathcal{L}\)-ontology \(\mathcal{O}\) and queries \(q_{1},q_{2}\), we write \(\mathcal{O}\models q_{1}\sqsubseteq q_{2}\) if for all databases \(\mathcal{D}\), \(q_{1}(\mathcal{D}\cup\mathcal{O})\subseteq q_{2}(\mathcal{D}\cup\mathcal{O})\). We say that \(q_{1}\) and \(q_{2}\) are _equivalent_ w.r.t. \(\mathcal{O}\), written \(\mathcal{O}\models q_{1}\equiv q_{2}\), if \(\mathcal{O}\models q_{1}\sqsubseteq q_{2}\) and \(\mathcal{O}\models q_{2}\sqsubseteq q_{1}\). When \(\mathcal{O}=\emptyset\), we write \(q_{1}\sqsubseteq q_{2}\) and \(q_{1}\equiv q_{2}\).
Every ELQ \(q\) may be viewed as a database \(\mathcal{D}_{q}\) in an obvious way, e.g. \(q=\exists r.\exists s.A\) as \(\mathcal{D}_{q}=\{r(a_{q},a_{1}),s(a_{1},a_{2}),A(a_{2})\}\). Let \(\mathcal{D}_{1},\mathcal{D}_{2}\) be databases and \(\Sigma\) a signature. A _\(\Sigma\)-simulation_ from \(\mathcal{D}_{1}\) to \(\mathcal{D}_{2}\) is a relation \(S\subseteq\mathsf{adom}(\mathcal{D}_{1})\times\mathsf{adom}(\mathcal{D}_{2})\) such that for all \((a_{1},a_{2})\in S\):
1. if \(A(a_{1})\in\mathcal{D}_{1}\) with \(A\in\Sigma\), then \(A(a_{2})\in\mathcal{D}_{2}\);
2. if \(r(a_{1},b_{1})\in\mathcal{D}_{1}\) with \(r\in\Sigma\), there is \(r(a_{2},b_{2})\in\mathcal{D}_{2}\) such that \((b_{1},b_{2})\in S\).
For \(a_{1}\in\mathsf{adom}(\mathcal{D}_{1})\) and \(a_{2}\in\mathsf{adom}(\mathcal{D}_{2})\), we write
\((\mathcal{D}_{1},a_{1})\preceq_{\Sigma}(\mathcal{D}_{2},a_{2})\) if there is a \(\Sigma\)-simulation \(S\) from \(\mathcal{D}_{1}\) to \(\mathcal{D}_{2}\) with \((a_{1},a_{2})\in S\). We generally drop the mention of \(\Sigma\) in case that \(\Sigma=\mathsf{N_{C}}\cup\mathsf{N_{R}}\). The following well-known lemma links simulations to ELQs.
**Lemma 1**.: _For all ELQs \(q\), databases \(\mathcal{D}\), and \(a\in\mathsf{adom}(\mathcal{D})\): \(a\in q(\mathcal{D})\) iff \((\mathcal{D}_{q},a_{q})\preceq(\mathcal{D},a)\). Consequently, for all ELQs \(q\),\(p\): \(q\sqsubseteq p\) iff \((\mathcal{D}_{p},a_{p})\preceq(\mathcal{D}_{q},a_{q})\)._
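Lemma 1 immediately suggests a simple way of evaluating ELQs: compute the maximal \(\Sigma\)-simulation by the standard greatest-fixpoint refinement and test whether the distinguished individuals are related. The following Python sketch (an illustration only, not the implementation discussed later) spells this out for databases given as sets of concept and role assertions.

```python
# Illustration only: deciding a in q(D) for an ELQ q via Lemma 1, by computing
# the maximal simulation from D_q to D with greatest-fixpoint refinement.
def max_simulation(db1, db2):
    """db = (concepts, roles): concepts is a set of (A, a), roles a set of (r, a, b)."""
    c1, r1 = db1
    c2, r2 = db2
    dom1 = {a for _, a in c1} | {x for _, a, b in r1 for x in (a, b)}
    dom2 = {a for _, a in c2} | {x for _, a, b in r2 for x in (a, b)}
    # Start from all pairs satisfying condition (1), then refine w.r.t. condition (2).
    S = {(a1, a2) for a1 in dom1 for a2 in dom2
         if all((A, a2) in c2 for (A, x) in c1 if x == a1)}
    changed = True
    while changed:
        changed = False
        for (a1, a2) in list(S):
            for (r, x, b1) in r1:
                if x == a1 and not any((b1, b2) in S
                                       for (rr, y, b2) in r2
                                       if rr == r and y == a2):
                    S.discard((a1, a2))
                    changed = True
                    break
    return S

# D_q for q = A ⊓ ∃r.B (cf. Example 1 below) and the first positive example database.
Dq = ({("A", "x"), ("B", "y")}, {("r", "x", "y")})
D1 = ({("A", "a"), ("B", "a")}, {("r", "a", "a")})
print(("x", "a") in max_simulation(Dq, D1))   # True, so a is an answer by Lemma 1
```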
**Fitting.** A _pointed database_ is a pair \((\mathcal{D},a)\) with \(\mathcal{D}\) a database and \(a\in\mathsf{adom}(\mathcal{D})\). A _labeled data example_ takes the form \((\mathcal{D},a,+)\) or \((\mathcal{D},a,-)\), the former being a _positive example_ and the latter a _negative example_.
Let \(\mathcal{O}\) be an ontology, \(\mathcal{Q}\) a query language, and \(E\) a collection of labeled data examples. A query \(q\in\mathcal{Q}\)_fits_\(E\) w.r.t. \(\mathcal{O}\) if \(a\in q(\mathcal{D}\cup\mathcal{O})\) for all \((\mathcal{D},a,+)\in E\) and \(a\notin q(\mathcal{D}\cup\mathcal{O})\) for all \((\mathcal{D},a,-)\in E\). We then call \(E\) a \(q\)_-labeled data example w.r.t. \(\mathcal{O}\)_. We say that \(q\) is a _most specific fitting_ if \(\mathcal{O}\models q\sqsubseteq q^{\prime}\) for every \(q^{\prime}\in\mathcal{Q}\) that fits \(E\), and that it is _most general_ if \(\mathcal{O}\models q^{\prime}\sqsubseteq q\) for every \(q^{\prime}\in\mathcal{Q}\) that fits \(E\).
**Example 1**.: _Consider the collection \(E_{0}\) of examples \((\{r(a,a),A(a),B(a)\},a,+),(\{A(a),r(a,b),B(b)\},a,+),\)\((\{r(a,b)\},b,-)\). It has several ELQ fittings, the most specific one being \(A\sqcap\exists r.B\). There is no most general fitting ELQ as both \(A\) and \(\exists r.B\) fit, but no common generalization does._
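As noted earlier, most specific fittings can be obtained via direct products of the positive examples. The sketch below (again an illustration only) builds the part of the product of the two positive examples of Example 1 that is reachable from the pair of distinguished individuals; unraveling this pointed database into a concept yields \(A\sqcap\exists r.B\). In general the unraveling has to be cut off at a suitable depth, and the resulting query must still be checked against the negative examples.

```python
from itertools import product

def pointed_product(dbs):
    """Reachable direct product of pointed databases (illustration only).
    dbs: list of (concepts, roles, root) with concepts a set of (A, a) and
    roles a set of (r, a, b).  Product individuals are tuples."""
    root = tuple(a for _, _, a in dbs)
    concept_names = {A for cs, _, _ in dbs for (A, _) in cs}
    role_names = {r for _, rs, _ in dbs for (r, _, _) in rs}
    concepts, roles = set(), set()
    todo, seen = [root], {root}
    while todo:
        v = todo.pop()
        for A in concept_names:
            if all((A, v[i]) in dbs[i][0] for i in range(len(dbs))):
                concepts.add((A, v))
        for r in role_names:
            succ = [[b for (rr, a, b) in dbs[i][1] if rr == r and a == v[i]]
                    for i in range(len(dbs))]
            for w in product(*succ):
                roles.add((r, v, w))
                if w not in seen:
                    seen.add(w)
                    todo.append(w)
    return concepts, roles, root

E1 = ({("A", "a"), ("B", "a")}, {("r", "a", "a")}, "a")   # first positive example
E2 = ({("A", "a"), ("B", "b")}, {("r", "a", "b")}, "a")   # second positive example
print(pointed_product([E1, E2]))   # unravels to the most specific fitting A ⊓ ∃r.B
```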
A _fitting algorithm_ for an OMQ language \((\mathcal{L},\mathcal{Q})\) is an algorithm that takes as input an \(\mathcal{L}\)-ontology \(\mathcal{O}\) and a collection of labeled data examples \(E\) and returns a query \(q\in\mathcal{Q}\) that fits \(E\) w.r.t. \(\mathcal{O}\), if such a \(q\) exists, and otherwise reports non-existence or does not terminate. The _size-restricted fitting problem_ for \((\mathcal{L},\mathcal{Q})\) means to decide, given a collection of labeled data examples \(E\), an \(\mathcal{L}\)-ontology \(\mathcal{O}\), and an \(s\geq 1\) in unary, whether there is a query \(q\in\mathcal{Q}\) with \(||q||\leq s\) that fits \(E\) w.r.t. \(\mathcal{O}\).
It is well-known that for every database \(\mathcal{D}\) and \(\mathcal{ELH}^{r}\)-ontology \(\mathcal{O}\), we can compute in polynomial time a database \(\mathcal{U}_{\mathcal{D},\mathcal{O}}\) that is _universal for ELQs_ in the sense that \(a\in q(\mathcal{D}\cup\mathcal{O})\) iff \(a\in q(\mathcal{U}_{\mathcal{D},\mathcal{O}})\) for all ELQs \(q\) and \(a\in\mathsf{adom}(\mathcal{D})\)[13]. Given a collection of labeled data examples \(E\) and an \(\mathcal{ELH}^{r}\)-ontology \(\mathcal{O}\), we denote with \(E_{O}\) the collection obtained from \(E\) by replacing each (positive or negative) example \((\mathcal{D},a,\cdot)\) with \((\mathcal{U}_{\mathcal{D},\mathcal{O}},a,\cdot)\). The following proposition shows that a fitting algorithm for ELQ without ontologies also gives rise to a fitting algorithm for \((\mathcal{ELH}^{r}\), ELQ) with at most a polynomial increase in running time. It is immediate from the definition of universality.
**Proposition 1**.: _An ELQ \(q\) fits a collection of labeled examples \(E\) w.r.t. an \(\mathcal{ELH}^{r}\)-ontology \(\mathcal{O}\) iff \(q\) fits \(E_{\mathcal{O}}\) w.r.t. \(\emptyset\)._
We remark that in contrast to ELQs, finite databases that are universal for ELIQs need not exist [10].
PAC Learning.We recall the definition of PAC learning, in a formulation that is tailored towards OMQ languages. Let \(P\) be a probability distribution over pointed databases and let \(q_{T}\) and \(q_{H}\) be queries, the target and the hypothesis. The error of \(q_{H}\) relative to \(q_{T}\) and \(P\) is
\[\mathsf{error}_{P,q_{T}}(q_{H})=\Pr_{(\mathcal{D},a)\sim P}(a\in q_{H}( \mathcal{D}\cup\mathcal{O})\ \Delta\ q_{T}(\mathcal{D}\cup\mathcal{O}))\]
where \(\Delta\) denotes symmetric difference and \(\mathsf{Pr}_{(\mathcal{D},a)\sim P}\,X\) is the probability of \(X\) when drawing \((\mathcal{D},a)\) randomly according to \(P\).
**Definition 1**.: _A PAC learning algorithm for an OMQ language \((\mathcal{L},\mathcal{Q})\) is a (potentially randomized) algorithm \(\mathfrak{A}\) associated with a function \(m:\mathbb{R}^{2}\times\mathbb{N}^{4}\to\mathbb{N}\) such that_
* \(\mathfrak{A}\) _takes as input an_ \(\mathcal{L}\)_-ontology_ \(\mathcal{O}\) _and a collection of labeled data examples_ \(E\)_;_
* _for all_ \(\epsilon,\delta\in(0,1)\)_, all_ \(\mathcal{L}\)_-ontologies_ \(\mathcal{O}\)_, all finite signatures_ \(\Sigma\)_, all_ \(s_{Q},s_{E}\geq 0\)_, all probability distributions_ \(P\) _over pointed databases_ \((\mathcal{D},c)\) _with_ \(\mathsf{sig}(\mathcal{D})\subseteq\Sigma\) _and_ \(||D||\leq s_{E}\)_, and all_ \(q_{T}\in\mathcal{Q}_{\Sigma}\) _with_ \(||q_{T}||\leq s_{Q}\)_, the following holds: when running_ \(\mathfrak{A}\) _on_ \(\mathcal{O}\) _and a collection_ \(E\) _of at least_ \(m(1/\delta,1/\epsilon,||\mathcal{O}||,|\Sigma|,s_{Q},s_{E})\) _labeled data examples that are_ \(q_{T}\)_-labeled w.r.t._ \(\mathcal{O}\) _and drawn according to_ \(P\)_, it returns a hypothesis_ \(q_{H}\) _such that with probability at least_ \(1-\delta\) _(over the choice of_ \(E\)_), we have_ \(\mathsf{error}_{P,q_{T}}(q_{H})\leq\epsilon\)_._
_We say that \(\mathfrak{A}\) has sample size \(m\) and call \(\mathfrak{A}\) sample-efficient if \(m\) is a polynomial._
Note that a PAC learning algorithm is not required to terminate if no fitting query exists. It would be desirable to even attain _efficient_ PAC learning which additionally requires \(\mathfrak{A}\) to be a polynomial time algorithm. However, ELQs are known to not be efficiently PAC learnable even without ontologies, unless \(\mathsf{RP}=\mathsf{NP}\)[13, 14, 15]. The same is true for ELIQs and any other class of conjunctive queries that contains all ELQs.
## 3 Bounded Fitting and Generalization
We introduce bounded fitting and analyze when fitting algorithms are PAC learning algorithms.
**Definition 2**.: _Let \((\mathcal{L},\mathcal{Q})\) be an OMQ language and let \(\mathfrak{A}\) be an algorithm for the size-restricted fitting problem for \((\mathcal{L},\mathcal{Q})\). Then Bounded-Fitting\({}_{\mathfrak{A}}\) is the algorithm that, given a collection of labeled data examples \(E\) and an \(\mathcal{L}\)-ontology \(\mathcal{O}\), runs \(\mathfrak{A}\) with input \((E,\mathcal{O},s)\) to decide whether there is a \(q\in\mathcal{Q}\) with \(||q||\leq s\) that fits \(E\) w.r.t. \(\mathcal{O}\), for \(s=1,2,3\ldots\), returning a fitting query as soon as it finds one._
**Example 2**.: _Consider again Example 1. For \(s=1\), bounded fitting tries the candidates \(\top,A,B,\exists r.\top\) and returns the fitting \(A\). If started on \(E_{0}\) extended with \((\{A(a)\},a,-)\), it finds one of the fitting ELQs \(A\sqcap\exists r.\top\) and \(\exists r.B\) in Round \(2\)._
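The loop structure of bounded fitting is simple; the following Python sketch assumes a decision procedure `size_restricted_fitting(E, O, s)` that returns a fitting query of size at most \(s\) or `None`. This interface is hypothetical and used here only for illustration.

```python
from itertools import count

def bounded_fitting(examples, ontology, size_restricted_fitting):
    """Try s = 1, 2, 3, ... and return the first fitting query found.
    Does not terminate if no fitting query exists (see the discussion below)."""
    for s in count(1):
        q = size_restricted_fitting(examples, ontology, s)
        if q is not None:
            return q  # found in the smallest round s, hence of minimal size
```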
In spirit, bounded fitting focusses on finding fitting queries when they exist, and not on deciding the existence of a fitting query. This is in analogy with bounded model checking, which focusses on finding counterexamples rather than on proving that no such examples exist. If an upper bound on the size of fitting queries is known, however, we can make bounded fitting terminate by reporting non-existence of a fitting query once the bound is exceeded. This is more of theoretical than of practical interest since the size bounds tend to be large. For ELQs without ontologies and for \((\mathcal{EL},\text{ELQ})\), for instance, it is double exponential [10]. It thus seems more realistic to run an algorithm that decides the existence of a fitting in parallel to bounded fitting and to report the result as soon as one of the algorithms terminates. There are also important cases where fitting existence is undecidable, such as for the OMQ language \((\mathcal{ELI},\text{ELIQ})\)[10].
Bounded fitting may be used also in such cases as long as the size-restricted fitting problem is still decidable. This is the case for \((\mathcal{ELI},\text{ELIQ})\), as a direct consequence of query evaluation being decidable in this OMQ language [1]; see Appendix H.
A major advantage of bounded fitting is that it yields a sample-efficient PAC learning algorithm with sample size linear in the size of the target query. This is because bounded fitting is an Occam algorithm which essentially means that it produces a fitting query that is at most polynomially larger than the fitting query of minimal size [1].1
Footnote 1: A precise definition of Occam algorithms is based on the notion of VC-dimension; it is not crucial to the main part of the paper, details can be found in the appendix.
**Theorem 1**.: _Let \((\mathcal{L},\mathcal{Q})\) be an OMQ language. Every bounded fitting algorithm for \((\mathcal{L},\mathcal{Q})\) is a (sample-efficient) PAC learning algorithm with sample size \(O\big{(}\frac{1}{\epsilon}\cdot\log\big{(}\frac{1}{\epsilon}\big{)}\cdot\log \big{(}\frac{1}{\delta}\big{)}\cdot\log|\Sigma|\cdot||q_{T}||\big{)}\)._
We remark that bounded fitting is _robust_ in that other natural measures of query size (such as the number of existential restrictions) and enumeration sequences such as \(s=1,2,4,8,\dots\) also lead to sample-efficient PAC learning algorithms. This results in some flexibility in implementations.
We next show that many other fitting algorithms are not sample-efficient when used as PAC learning algorithms. We start with algorithms that return fittings which are most specific or most general or of minimum quantifier depth. No such algorithm is a sample-efficient PAC learning algorithm, even without ontologies.
**Theorem 2**.: _If \(\mathfrak{A}\) is a fitting algorithm for ELQs that satisfies one of the conditions below, then \(\mathfrak{A}\) is not a sample-efficient PAC learning algorithm._
1. \(\mathfrak{A}\) _always produces a most specific fitting, if it exists;_
2. \(\mathfrak{A}\) _always produces a most general fitting, if it exists;_
3. \(\mathfrak{A}\) _produces a fitting of minimal quantifier depth, if a fitting exists._
The proof of Theorem 2 relies on duals of finite relational structures, which are widely known in the form of homomorphism duals [10]. Here, we introduce the new notion of _simulation_ duals.
Let \((\mathcal{D},a)\) be a pointed database and \(\Sigma\) a signature. A set \(M\) of pointed databases is a _\(\Sigma\)-simulation dual_ of \((\mathcal{D},a)\) if for all pointed databases \((\mathcal{D}^{\prime},a^{\prime})\), the following holds:
\[(\mathcal{D},a)\preceq_{\Sigma}(\mathcal{D}^{\prime},a^{\prime})\quad\text{iff}\quad(\mathcal{D}^{\prime},a^{\prime})\not\preceq_{\Sigma}(\mathcal{D}^{\prime\prime},a^{\prime\prime})\ \text{for all}\ (\mathcal{D}^{\prime\prime},a^{\prime\prime})\in M.\]
For illustration, consider the simulation dual \(M\) of \((\mathcal{D}_{q},a_{q})\) for an ELQ \(q\). Then every negative example for \(q\) has a simulation into an element of \(M\) and \(q\) is the most general ELQ that fits \(\{(\mathcal{D},a,-)\mid(\mathcal{D},a)\in M\}\). We exploit this in the proof of Theorem 2. Moreover, we rely on the fact that ELQs have simulation duals of polynomial size. In contrast, (non-pointed) homomorphism duals of tree-shaped databases may become exponentially large [10].
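Simulations themselves can be computed by the standard greatest-fixpoint algorithm; the sketch below is a generic implementation of \(\preceq_{\Sigma}\) for databases given as (individuals, concept facts, role facts). It is included only to make the notion concrete and is not code from the paper.

```python
def largest_simulation(db1, db2, sigma_concepts, sigma_roles):
    """Greatest-fixpoint computation of the largest Sigma-simulation from db1 to db2.
    Each database is a triple (individuals, concept_facts, role_facts) with
    concept_facts a set of (A, a) pairs and role_facts a set of (r, a, b) triples."""
    ind1, conc1, role1 = db1
    ind2, conc2, role2 = db2
    # start from all pairs whose Sigma-concept labels are compatible
    sim = {(a, b) for a in ind1 for b in ind2
           if all((A, b) in conc2
                  for (A, a0) in conc1 if a0 == a and A in sigma_concepts)}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(sim):
            for (r, a1, a2) in role1:
                if a1 != a or r not in sigma_roles:
                    continue
                # every Sigma-successor of a must be matched by some successor of b
                if not any(r2 == r and b1 == b and (a2, b2) in sim
                           for (r2, b1, b2) in role2):
                    sim.discard((a, b))
                    changed = True
                    break
    return sim  # (D1, a) is Sigma-simulated by (D2, b) iff (a, b) is in the result
```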
**Theorem 3**.: _Given an ELQ \(q\) and a finite signature \(\Sigma\), a \(\Sigma\)-simulation dual \(M\) of \((\mathcal{D}_{q},a_{q})\) of size \(||M||\leq 3\cdot|\Sigma|\cdot||q||^{2}\) can be computed in polynomial time. Moreover, if \(\mathcal{D}_{q}\) contains only a single \(\Sigma\)-assertion that mentions \(a_{q}\), then \(M\) is a singleton._
The notion of simulation duals is of independent interest and we develop it further in the appendix. We show that Theorem 3 generalizes from databases \(\mathcal{D}_{q}\) to all pointed databases \((\mathcal{D},a)\) such that the directed graph induced by the restriction of \(\mathcal{D}\) to the individuals reachable (in a directed sense) from \(a\) is a DAG. Conversely, databases that are not of this form do not have finite simulation duals. We find it interesting to recall that DAG-shaped databases do in general not have finite homomorphism duals [10].
Using Theorem 3, we now prove Point 2 of Theorem 2. Points 1 and 3 are proved in the appendix.
Proof.: To highlight the intuitions, we leave out some minor technical details that are provided in the appendix. Assume to the contrary of what we aim to show that there is a sample-efficient PAC learning algorithm that produces a most general fitting ELQ, if it exists, with associated polynomial function \(m\colon\mathbb{R}^{2}\times\mathbb{N}^{4}\) as in Definition 1. As target ELQs \(q_{T}\), we use concepts \(C_{i}\) where \(C_{0}=\top\) and \(C_{i}=\exists r.(A\sqcap B\sqcap C_{i-1})\). Thus, \(C_{i}\) is an \(r\)-path of length \(i\) in which every non-root node is labeled with \(A\) and \(B\).
Choose \(\Sigma=\{A,B,r\}\), \(\delta=\epsilon=0.5\), and \(n\) large enough so that \(2^{n}>2m(1/\delta,1/\epsilon,0,|\Sigma|,3n,3\cdot|\Sigma|\cdot||C_{n}||^{2})\). Further choose \(q_{T}=C_{n}\).
We next construct negative examples; positive examples are not used. Define a set of ELQs \(S=S_{n}\) where
\[S_{0}=\{\top\}\quad S_{i}=\{\exists r.(\alpha\sqcap C)\mid C\in S_{i-1}, \alpha\in\{A,B\}\}.\]
Note that the ELQs in \(S\) resemble \(q_{T}\) except that every node is labeled with only one of the concept names \(A,B\). Now consider any \(q\in S\). Clearly, \(q_{T}\sqsubseteq q\). Moreover, the pointed database \((\mathcal{D}_{q},a_{q})\) contains a single assertion that mentions \(a_{q}\). By Theorem 3, \(q\) has a singleton \(\Sigma\)-simulation dual \(\{(\mathcal{D}_{q}^{\prime},a_{q}^{\prime})\}\) with \(||\mathcal{D}_{q}^{\prime}||\leq 3\cdot|\Sigma|\cdot||C_{n}||^{2}\). We shall use these duals as negative examples.
The two crucial properties of \(S\) are that for all \(q\in S\),
1. \(q\) is the most general ELQ that fits \((\mathcal{D}_{q}^{\prime},a_{q}^{\prime},-)\);
2. for all \(T\subseteq S\), \(q\notin T\) implies \(\bigsqcap_{p\in T}p\not\sqsubseteq q\).
By Point 1 and since \(q_{T}\sqsubseteq q\), each \((\mathcal{D}_{q}^{\prime},a_{q}^{\prime})\) is also a negative example for \(q_{T}\).
Let the probability distribution \(P\) assign probability \(\frac{1}{2^{n}}\) to all \((\mathcal{D}_{q}^{\prime},a_{q}^{\prime})\) with \(q\in S\) and probability \(0\) to all other pointed databases. Now assume that the algorithm is started on a collection of \(m(1/\delta,1/\epsilon,0,|\Sigma|,3n,3\cdot|\Sigma|\cdot||C_{n}||^{2})\) labeled data examples \(E\) drawn according to \(P\). It follows from Point 1 that \(q_{H}=\bigsqcap_{(\mathcal{D}_{q}^{\prime},a_{q}^{\prime})\in E}q\) is the most general ELQ that fits \(E\). Thus, (an ELQ equivalent to) \(q_{H}\) is output by the algorithm.
To obtain a contradiction, it suffices to show that with probability \(1-\delta\), we have \(\mathsf{error}_{P,q_{T}}(q_{H})>\epsilon\). We argue that, in fact, \(q_{H}\) violates all (negative) data examples that are not in the sample \(E\), that is, \(a_{q}\in q_{H}(\mathcal{D}_{p})\) for all \(p\in S\) with \((\mathcal{D}_{p}^{\prime},a_{p}^{\prime})\notin E\). The definition of \(P\) and choice of \(n\) then yield that with probability 1, \(\mathsf{error}_{P,q_{T}}(q_{H})=\frac{|S|-|E|}{|S|}>\frac{1}{2}\).
Thus consider any \(p\in S\) such that \((\mathcal{D}^{\prime}_{p},a^{\prime}_{p})\notin E\). It follows from Point 2 that \(q_{H}\not\sqsubseteq p\) and the definition of duals may now be used to derive \(a^{\prime}_{p}\in q_{H}(\mathcal{D}^{\prime}_{p})\) as desired.
## 4 Refinement Operators
We discuss fitting algorithms based on refinement operators, used in implemented systems such as ELTL, and show that the generalization abilities of such algorithms subtly depend on the exact operator (and strategy) used.
Let \((\mathcal{L},\mathcal{Q})\) be an OMQ language. A _(downward) refinement_ of a query \(q\in\mathcal{Q}\) w.r.t. an \(\mathcal{L}\)-ontology \(\mathcal{O}\) is any \(p\in\mathcal{Q}\) such that \(\mathcal{O}\models p\sqsubseteq q\) and \(\mathcal{O}\not\models q\sqsubseteq p\). A _(downward) refinement operator_ for \((\mathcal{L},\mathcal{Q})\) is a function \(\rho\) that associates every \(q\in Q_{\Sigma}\), \(\mathcal{L}\)-ontology \(\mathcal{O}\), and finite signature \(\Sigma\) with a set \(\rho(q,\mathcal{O},\Sigma)\) of downward refinements \(p\in\mathcal{Q}_{\Sigma}\) of \(q\) w.r.t. \(\mathcal{O}\). The operator \(\rho\) is _ideal_ if it is finite and complete where \(\rho\) is
1. _finite_ if \(\rho(q,\mathcal{O},\Sigma)\) is finite for all \(q\), \(\mathcal{O}\), and finite \(\Sigma\), and
2. _complete_ if for all finite signatures \(\Sigma\) and all \(q,p\in\mathcal{Q}_{\Sigma}\), \(\mathcal{O}\models p\sqsubseteq q\) implies that there is a finite \(\rho,\mathcal{O},\Sigma\)-_refinement sequence_ from \(q\) to \(p\), that is, a sequence of queries \(q_{1},\ldots,q_{n}\) such that \(q=q_{1}\), \(q_{i+1}\in\rho(q_{i},\mathcal{O},\Sigma)\) for \(1\leq i<n\), and \(\mathcal{O}\models q_{n}\equiv p\).
When \(\mathcal{O}\) is empty, we write \(\rho(q,\Sigma)\) in place of \(\rho(q,\mathcal{O},\Sigma)\).
For \((\mathcal{EL},\text{ELQ})\) and thus also for \((\mathcal{ELH}^{r},\text{ELQ})\), it is known that no ideal refinement operator exists [10]. This problem can be overcome by making use of Proposition 1 and employing an ideal refinement operator for ELQs without ontologies, which does exist [13]. But also these refinement operators are not without problems. It was observed in [10] that for any such operator, non-elementarily long refinement sequences exist, potentially impairing the practical use of such operators. We somewhat relativize this by the following observation. A refinement operator \(\rho\) for \((\mathcal{L},\mathcal{Q})\) is _\(f\)-depth bounded_, for \(f:\mathbb{N}\rightarrow\mathbb{N}\), if for all \(q,p\in\mathcal{Q}\) and all \(\mathcal{L}\)-ontologies \(\mathcal{O}\) with \(\mathcal{O}\models p\sqsubseteq q\), there exists a \(\rho,\mathcal{O},\Sigma\)-refinement sequence from \(q\) to \(p\) that is of length at most \(f(||p||)\).
**Theorem 4**.: _Let \((\mathcal{L},\mathcal{Q})\) be an OMQ-language. If \((\mathcal{L},\mathcal{Q})\) has an ideal refinement operator, then it has a \(2^{O(n)}\)-depth bounded ideal refinement operator._
The depth bounded operator in Theorem 4 is obtained by starting with some operator \(\rho\) and adding to each \(\rho(q,\mathcal{O},\Sigma)\) all \(p\in\mathcal{Q}_{\Sigma}\) such that \(\mathcal{O}\models p\sqsubseteq q\), \(\mathcal{O}\not\models q\sqsubseteq p\), and \(||p||\leq||q||\). Note that the size of queries is used in an essential way, as in Occam algorithms.
A refinement operator by itself is not a fitting algorithm as one also needs a strategy for applying the operator. We use breadth-first search as a simple yet natural such strategy.
We consider two related refinement operators \(\rho_{1}\) and \(\rho_{2}\) for ELQs. The definition of both operators refers to (small) query size, inspired by Occam algorithms. Let \(q\) be an ELQ. Then \(\rho_{1}(q,\Sigma)\) is the set of all \(p\in\text{ELQ}_{\Sigma}\) such that \(p\sqsubseteq q\), \(q\not\sqsubseteq p\), and \(||p||\leq 2||q||+1\). The operator \(\rho_{2}\) is defined like \(\rho_{1}\) except that we include in \(\rho_{2}(q,\Sigma)\) only ELQs \(p\) that are a _(downward) neighbor_ of \(q\), that is, for all ELQs \(p^{\prime}\), \(p\sqsubseteq p^{\prime}\sqsubseteq q\) implies \(p^{\prime}\sqsubseteq p\) or \(q\sqsubseteq p^{\prime}\). The following lemma shows that \(\rho_{2}(q,\Sigma)\) actually contains _all_ neighbors of \(q\) with \(\mathsf{sig}(q)\subseteq\Sigma\), up to equivalence. An ELQ \(q\) is _minimal_ if there is no ELQ \(p\) such that \(||p||<||q||\) and \(p\equiv q\).
**Lemma 2**.: _For every ELQ \(q\) and minimal downward neighbor \(p\) of \(q\), we have \(||p||\leq 2||q||+1\)._
Both \(\rho_{1}\) and \(\rho_{2}\) can be computed by brute force. For more elaborate approaches to computing \(\rho_{2}\), see [10] where downward neighbors of ELQs are studied in detail.
**Lemma 3**.: \(\rho_{1}\) _and \(\rho_{2}\) are ideal refinement operators for ELQ._
We next give more details on what we mean by breadth-first search. Started on a collection of labeled data examples \(E\), the algorithm maintains a set \(M\) of candidate ELQs that fit all positive examples \(E^{+}\) in \(E\), beginning with \(M=\{\top\}\) and proceeding in rounds. If any ELQ \(q\) in \(M\) fits \(E\), then we return such a fitting \(q\) with \(||q||\) smallest. Otherwise, the current set \(M\) is replaced with the set of all ELQs from \(\bigcup_{q\in M}\rho(q,\mathsf{sig}(E))\) that fit \(E^{+}\), and the next round begins. For \(i\in\{1,2\}\), let \(\mathfrak{A}_{i}\) be the version of this algorithm that uses refinement operator \(\rho_{i}\). Although \(\rho_{1}\) and \(\rho_{2}\) are defined quite similarly, the behavior of the algorithms \(\mathfrak{A}_{1}\) and \(\mathfrak{A}_{2}\) differs.
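A compact Python rendering of this breadth-first strategy is given below; the refinement operator `rho`, the fitting check `fits`, the size measure `size`, and the top query `top` are abstract parameters rather than concrete implementations.

```python
def refinement_fitting(examples, top, rho, fits, size, signature):
    """Breadth-first refinement search, sketching the strategy described above.
    Assumptions (none of these are library functions): `top` is the query ⊤,
    `rho(q, sigma)` returns the refinements of q over signature sigma,
    `fits(q, examples)` checks whether q fits the labeled examples, and
    `size(q)` is ||q||. Examples are (database, individual, label) triples."""
    positives = [e for e in examples if e[2] == '+']
    frontier = [top]
    while frontier:
        fitting = [q for q in frontier if fits(q, examples)]
        if fitting:
            return min(fitting, key=size)  # smallest fitting query in this round
        # replace the frontier by all refinements that still fit the positive examples
        frontier = [p for q in frontier for p in rho(q, signature) if fits(p, positives)]
    return None  # search space exhausted: no fitting query found
```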
**Theorem 5**.: \(\mathfrak{A}_{1}\) _is a sample-efficient PAC learning algorithm, but \(\mathfrak{A}_{2}\) is not._
To prove Theorem 5, we show that \(\mathfrak{A}_{1}\) is an Occam algorithm while \(\mathfrak{A}_{2}\) produces a most general fitting (if it exists), which allows us to apply Theorem 2.
The above is intended to provide a case study of refinement operators and their generalization abilities. Implemented systems use refinement operators and strategies that are more complex and include heuristics and optimizations. This makes it difficult to analyze whether implemented refinement-based systems constitute a sample-efficient PAC learner.
We comment on the ELTL system that we use in our experiments. ELTL is based on the refinement operator for \((\mathcal{ELH}^{r},\text{ELQ})\) presented in [13]. That operator, however, admits only \(\mathcal{ELH}^{r}\) ontologies of a rather restricted form: all CIs must be of the form \(A\sqsubseteq B\) with \(A,B\) concept _names_. Since no ideal refinement operators for unrestricted \((\mathcal{EL},\text{ELQ})\) exist and ELTL does not eliminate ontologies in the spirit of Proposition 1, it remains unclear whether and how ELTL achieves completeness (i.e., finding a fitting whenever there is one).
## 5 The SPELL System
We implemented bounded fitting for the OMQ language \((\mathcal{ELH}^{r},\text{ELQ})\) in the system SPELL (for _SAT-based PAC \(\mathcal{EL}\) concept Learner_).2 SPELL takes as input a knowledge base in OWL RDF/XML format that contains both an \(\mathcal{ELH}^{r}\) ontology \(\mathcal{O}\) and a collection \(E\) of positive and negative examples, and it outputs an ELQ represented as a SPARQL query. SPELL is implemented in Python 3 and uses the PySat library to interact with the Glucose SAT solver. It provides integration into the SML-Bench benchmark framework [20].
Footnote 2: Available at [https://github.com/spell-system/SPELL](https://github.com/spell-system/SPELL).
In the first step, SPELL removes the ontology \(\mathcal{O}\) by replacing the given examples \(E\) with \(E_{\mathcal{O}}\) as per Proposition 1.
It then runs bounded fitting in the variant where in each round \(n\), fitting ELQs with at most \(n-1\) existential restrictions are considered (rather than fitting ELQs \(q\) with \(||q||\leq n\)). The existence of such a fitting is checked using the SAT solver. Also this variant of bounded fitting results in a sample-efficient PAC learning algorithm, with sample size \(O\big{(}\frac{1}{\epsilon}\cdot\log\big{(}\frac{1}{\epsilon}\big{)}\cdot\log\big{(}\frac{1}{\delta}\big{)}\cdot|\Sigma|\cdot||q_{T}||\big{)}\), see the appendix. We prefer this variant for implementation because it admits a more natural reduction to SAT, described next.
From \(E_{\mathcal{O}}\) and the bound \(n\), we construct a propositional formula \(\varphi=\varphi_{1}\wedge\varphi_{2}\) that is satisfiable if and only if there is an ELQ \(q\) over \(\Sigma=\text{sig}(E_{\mathcal{O}})\) with at most \(n-1\) existential restrictions that fits \(E_{\mathcal{O}}\). Indeed, any model of \(\varphi\) returned by the SAT solver uniquely represents a fitting ELQ \(q\). More precisely, \(\varphi_{1}\) ensures that such a model represents \(\mathcal{EL}\)-concepts \(C_{1},\ldots,C_{n}\) where each \(C_{i}\) only contains existential restrictions of the form \(\exists r.C_{j}\) with \(j>i\), and we take \(q\) to be \(C_{1}\). We use variables of the form \(c_{i,A}\) to express that the concept name \(A\) is a conjunct of \(C_{i}\), and variables \(x_{j,r}\) and \(y_{i,j}\) to express that \(\exists r.C_{j}\) is a conjunct of \(C_{i}\). Then \(\varphi_{2}\) enforces that the represented ELQ fits \(E_{\mathcal{O}}\). Let \(\mathcal{D}\) be the disjoint union of all databases that occur in an example in \(E_{\mathcal{O}}\). We use variables \(s_{i,a}\), with \(1\leq i\leq n\) and \(a\in\text{adom}(\mathcal{D})\), to express that \(a\in C_{i}(\mathcal{D})\); the exact definition of \(\varphi_{2}\) uses simulations and relies on Lemma 1. The number of variables in \(\varphi\) is \(O\big{(}n^{2}\cdot|\mathcal{D}|\big{)}\), thus linear in \(|\mathcal{D}|\).
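The following sketch illustrates only the variable layout of this reduction in PySat; the clause family shown is a schematic stand-in (the \(x\)- and \(y\)-variables of the text are merged into a single family here), so it should not be read as SPELL's actual encoding.

```python
# A schematic of the variable layout only, not SPELL's actual clause set.
# Assumptions: n template concepts C_1..C_n, concept names `cnames`, role
# names `rnames`, and the disjoint union of all example databases given by its
# individuals `adom`, concept facts `conc` (pairs (A, a)) and role facts
# `roles` (triples (r, a, b)).
from itertools import count
from pysat.solvers import Glucose4

def encode_and_solve(n, cnames, rnames, adom, conc, roles):
    var = count(1)
    c = {(i, A): next(var) for i in range(1, n + 1) for A in cnames}    # A is a conjunct of C_i
    e = {(i, j, r): next(var) for i in range(1, n + 1)
         for j in range(i + 1, n + 1) for r in rnames}                  # ∃r.C_j is a conjunct of C_i
    s = {(i, a): next(var) for i in range(1, n + 1) for a in adom}      # a ∈ C_i(D)

    clauses = []
    # One illustrative clause family: if A is a conjunct of C_i, then a can only
    # satisfy C_i when A(a) holds in the data, i.e. add ¬c_{i,A} ∨ ¬s_{i,a} otherwise.
    for (i, A), v in c.items():
        for a in adom:
            if (A, a) not in conc:
                clauses.append([-v, -s[(i, a)]])
    # The clause families for existential restrictions (via the e-variables and
    # the role facts in `roles`) and for fitting the example labels are analogous.

    with Glucose4(bootstrap_with=clauses) as solver:
        return solver.get_model() if solver.solve() else None
```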
We have implemented several improvements over this basic reduction of which we describe two. The first improvement is based on the simple observation that for computing a fitting ELQ with \(n-1\) existential restrictions, for every example \((\mathcal{D}^{\prime},a,\pm)\in E_{\mathcal{O}}\) it suffices to consider individuals that can be reached via at most \(n-1\) role assertions from \(a\). Moreover, we may restrict \(\Sigma\) to symbols that occur in all \(n-1\)-reachable parts of the positive examples. The second improvement is based on the observation that the search space for satisfying assignments of \(\varphi\) contains significant _symmetries_ as the same ELQ \(q\) may be encoded by many different arrangements of concepts \(C_{1},\ldots C_{n}\). We add constraints to \(\varphi\) so that the number of possible arrangements is reduced, breaking many symmetries. For details see the appendix.
## 6 Experimental Evaluation
We evaluate SPELL on several benchmarks3 and compare it to the ELTL component of the DL-Learner system [1]. Existing benchmarks do not suit our purpose as they aim at learning concepts that are formulated in more expressive DLs of the \(\mathcal{ALC}\) family. As a consequence, a fitting \(\mathcal{EL}\) concept almost never exists. This is the case, for example, in the often used Structured Machine Learning Benchmark [16]. We thus designed several new benchmarks leveraging various existing knowledge bases, making sure that a fitting \(\mathcal{EL}\) concept always exists. We hope that our benchmarks will provide a basis also for future experimental evaluations of \(\mathcal{EL}\) learning systems.
Footnote 3: Available at [https://github.com/spell-system/benchmarks](https://github.com/spell-system/benchmarks).
**Performance Evaluation.** We carried out two experiments that aim at evaluating the performance of SPELL. The main questions are: Which parameters have the most impact on the running time? And how does the running time compare to that of ELTL?
The first experiment uses the Yago 4 knowledge base which combines the concept classes of schema.org with data from Wikidata [15]. The smallest version of Yago 4 is still huge and contains over 40 million assertions. We extracted a fragment of 12 million assertions that focusses on movies and famous persons. We then systematically vary the number of labeled examples and the size of the target ELQs. The latter take the form \(C_{n}=\exists\text{actor}.\bigsqcap_{i=1}^{n}\exists r_{i}.\top\) where each \(r_{i}\) is a role name that represents a property of actors in Yago and \(n\) is increased to obtain larger queries. The positive examples are selected by querying Yago with \(C_{n}\) and the negative examples by querying Yago with generalizations of \(C_{n}\). The results are presented in Figure 1. They show that the size of the target query has a strong impact on the running time whereas the impact of the number of positive and negative examples is much more modest. We also find that SPELL performs \(\sim\)1.5 orders of magnitude better than ELTL, meaning in particular that it can handle larger target queries.
Figure 1: Yago experiment, dark red area indicates timeout (60min)
Since Yago has only a very restricted ontology that essentially consists of inclusions \(A\sqsubseteq B\) with \(A,B\) concept names, we complement the above experiment with a second one based on OWL2Bench. OWL2Bench is a benchmark for ontology-mediated querying that combines a database generator with a hand-crafted ontology which extends the University Ontology Benchmark [23, 24]. The ontology is formulated in OWL 2 EL and we extracted its \(\mathcal{ELH}^{r}\) fragment which uses all aspects of this DL and comprises 142 concept names, 83 role names, and 173 concept inclusions. We use datasets that contain 2500-2600 individuals and 100-200 examples, generated as in the Yago case. We designed 6 ELQs with 3-5 occurrences of concept and role names and varying topology. The results are shown in Table 2. The difference in running time is even more pronounced in this experiment, with SPELL returning a fitting ELQ almost instantaneously in all cases.4
Footnote 4: ELTL crashes on this benchmark unless one option (‘useMinimizer’) is switched off. We thus ran ELTL without useMinimizer.
**Strengths and Weaknesses.** In this experiment, we aim to highlight the respective strengths and weaknesses of SPELL and ELTL or, more generally, of bounded fitting versus refinement-operator based approaches. We anticipated that the performance of bounded fitting would be most affected by the number of existential restrictions in the target query whereas the performance of refinement would be most affected by the (unique) length of the sequence \(C_{1},\ldots,C_{k}\) such that \(C_{1}=\top\), \(C_{i+1}\) is a downward neighbor of \(C_{i}\) for \(1\leq i<k\), and \(C_{k}\) is the target query. Let us call this the _depth_ of \(C_{k}\). The number of existential restrictions and depth are orthogonal parameters. In the \(k\)_-path_ benchmark, we use target ELQs of the form \(\exists r^{k}.\top\), \(k\geq 1\). These should be difficult for bounded fitting when the number \(k\) of existential restrictions gets large, but easy for refinement as the depth of \(\exists r^{k}.\top\) is only \(k\). In the \(k\)_-1-conj_ benchmark, we use ELQs of the form \(\exists r.\top\sqcap\cdots\)
## Acknowledgements
Balder ten Cate is supported by the European Union's Horizon 2020 research and innovation programme under grant MSCA-101031081 and Carsten Lutz by the DFG Collaborative Research Center 1320 EASE and by BMBF in DAAD project 57616814 (SECAI).
|
2305.07622 | PALR: Personalization Aware LLMs for Recommendation | Large language models (LLMs) have recently received significant attention for
their exceptional capabilities. Despite extensive efforts in developing
general-purpose LLMs that can be utilized in various natural language
processing (NLP) tasks, there has been less research exploring their potential
in recommender systems. In this paper, we propose a novel framework, named
PALR, which aims to combine user history behaviors (such as clicks,
purchases, ratings, etc.) with LLMs to generate user preferred items.
Specifically, we first use user/item interactions as guidance for candidate
retrieval. Then we adopt a LLM-based ranking model to generate recommended
items. Unlike existing approaches that typically adopt general-purpose LLMs for
zero/few-shot recommendation testing or training on small-sized language models
(with less than 1 billion parameters), which cannot fully elicit LLMs'
reasoning abilities and leverage rich item side parametric knowledge, we
fine-tune a 7-billion-parameter LLM for the ranking purpose. This model takes
retrieval candidates in natural language format as input, with instructions
explicitly asking it to select results from the input candidates during
inference. Our experimental results demonstrate that our solution outperforms
state-of-the-art models on various sequential recommendation tasks. | Fan Yang, Zheng Chen, Ziyan Jiang, Eunah Cho, Xiaojiang Huang, Yanbin Lu | 2023-05-12T17:21:33Z | http://arxiv.org/abs/2305.07622v3 | # PALR: Personalization Aware LLMs for Recommendation
###### Abstract.
Large language models (LLMs) have recently received significant attention for their exceptional capabilities. Despite extensive efforts in developing general-purpose LLMs that can be utilized in various natural language processing (NLP) tasks, there has been less research exploring their potential in recommender systems. In this paper, we propose a novel framework, named **PALR** (Personalization Aware LLMs for **R**ecommendion), aimed at integrating user history behaviors (such as clicks, purchases, ratings, etc.) with LLMs to generate user preferred items. Specifically, we first use user/item interactions as guidance for candidate retrieval, and then adopt an LLM-based ranking model to generate recommended items. Unlike existing approaches that typically adopt general-purpose LLMs for zero/few-shot recommendation testing or training on small-sized language models (with less than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities and leverage rich item side parametric knowledge, we fine-tune an LLM of 7 billion parameters for the ranking purpose. This model takes retrieval candidates in natural language format as input, with instructions explicitly asking to select results from input candidates during inference. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks.
Generative Recommender Model, User Preference Learning, Large Language Models
Footnote †: Authors contributed equally to this research. Names are ordered alphabetically.
In this paper, we present PALR, which is a general framework for personalized recommendation tasks that combines user behaviors with LLMs. Given the challenges mentioned above, we break down the task into several stages. Initially, we use an LLM and user behavior as input to generate user profile keywords. Then, we employ a retrieval module to pre-filter some candidates from the item pool based on the user profile. Importantly, our framework is not reliant on any specific retrieval algorithms. Finally, we use the LLM to provide recommendations from those candidates based on user history behaviors. To better adapt these general-purpose LLMs to fit the recommendation scenarios, we convert user behavior data into natural language prompts and fine-tune a LLaMa[(22)] 7B model. Our goal is to teach the model to learn the co-occurrence of user-engaged item patterns. This approach enables us to incorporate user behavior data into the LLM's reasoning process and better generalize to new users and unseen items. In summary, our contributions are:
1. We propose PALR, a flexible personalized recommendation framework, which incorporates user behaviors with LLMs to generate recommended items.
2. We break down the recommendation task into three sub-tasks, namely user profile generation, candidate retrieval, and item ranking, and tune instruction prompts to better elicit the LLMs' reasoning ability.
3. We fine-tune a recommendation-oriented LLM based on LLaMa 7B. Evaluation under the PALR framework on two public datasets demonstrates its competitive performance against state-of-the-art methods.
4. We experimented with two datasets, MovieLens-1M[(5)], and Amazon Beauty[(14)] and demonstrated the strong potential of an LLM for recommendation in comparison to SOTA.
## 2. Methodology
### PALR Framework
Our proposed method, PALR (Personalization Aware LLM for Recommendation), is illustrated in Figure 1. It utilizes a multi-step approach to harness the potential of LLMs for recommendation.
* **Natural Language user profile generation**. When a user interacts with a large number of items and exhibits mixed affinities, it can be challenging for the model to provide accurate recommendations based solely on user behaviors. In such situations, having a high-level summarization of the user's preferences can be beneficial. An LLM can be leveraged to generate a summary of a user's preferences. For example, by analyzing a user's music and TV viewing history, we can generate a summary of their preferences such as "pop music" and "fantasy movies."
* **Candidates retrieval**. To address the issues of hallucination and incompleteness in the generated results, a retrieval module is utilized to ground knowledge and filter out results that are not relevant to the task at hand, resulting in a much smaller candidate pool to feed into the LLM for further processing. This framework can accommodate various retrieval models, such as a sequential recommendation model trained on user behaviors, which can serve this purpose effectively.
* **Item recommendation**. By combining the interaction history, natural language user profile and retrieved candidates, we can create a natural language prompt that can be fed into the LLM for recommendation. The model will utilize its reasoning ability to select the items from the candidate pool that align best with the user profile.
The steps of "user profile generation" and "item recommendation" require dedicated prompt design to effectively leverage the reasoning ability of LLMs. An example of related prompt design in the movie recommendation task is shown in Figure 2.
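As an illustration of how such a prompt can be assembled from the three components, consider the following Python sketch; the wording of the template is invented for illustration and is not the exact prompt shown in Figure 2.

```python
def build_recommendation_prompt(history, user_profile, candidates):
    """Sketch of the natural-language prompt of Figure 1: interaction history,
    natural language user profile, and retrieved candidates, concatenated into
    one instruction. Template wording is illustrative only."""
    return (
        "The user has watched the following movies: " + ", ".join(history) + ".\n"
        + "User profile: " + user_profile + ".\n"
        + "Candidates: " + ", ".join(candidates) + ".\n"
        + "Recommend 10 movies from the candidates that best match the user profile."
    )

# Example usage with made-up data
prompt = build_recommendation_prompt(
    history=["Toy Story", "The Matrix"],
    user_profile="animated family films and science fiction",
    candidates=["Shrek", "Blade Runner", "Titanic"],
)
```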
### Fine-Tuning
Through our investigation, we find fine-tuning is necessary to make the model 1) obtain reasonably strong performance, and 2) recognize the retrieval layer and perform the retrieval as expected. We employ instruction-based fine-tuning, a technique proven effective in recent LLM development [(21; 23; 24)].
Figure 1. Here is an overview of our proposed PALR architecture. The “Natural Language Prompt” for the “LLM for recommendation” comprises three components: the “Interaction History Sequence,” the “Natural Language User Profile,” and the “Candidates for Recommendation”. The “Interaction History Sequence” is created by simply concatenating the items that the user has interacted with. The “Natural Language User Profile” is a high-level summarization of the user’s preferences, generated using an LLM based on user-item interactions, item attributes, or even user information if possible. The “Candidates for Recommendation” are the output of a retrieval model, and in our design, we have the flexibility to use any retrieval model for this purpose. We have included an example in Figure 2.
We have created two types of instruction tasks called "Recommend" and "Recommend_Retrieval". The "Recommend" task involves a list of items that the user has interacted with in the past (with a maximum limit of 20 items), and the objective of the model is to generate a list of "future" items that the user may interact with. Here's an example of such an instruction for the Movielens dataset. We refer to a model fine-tuned by this instruction as \(PALR_{01}\).
The "Recommend_Retrieval" task asks the model to retrieve the target "future" items from a list of candidate items. The candidate list contains all target items, plus a few negative items similar to the target items (e.g. movies with the same genres, co-watched by many users). The following are two examples of such instructions used in our fine-tuning for the Movielens dataset and the Amazon Beauty dataset. For the Amazon Beauty dataset, we include item IDs for evaluation. We refer to a model fine-tuned with both "Recommend" and "Recommend_Retrieval" instructions as \(PALR_{02}\).
It is worth noting that the fine-tuning is retrieval-layer agnostic. Despite our objective being to train the model to select from a list of candidates, the construction of this list for fine-tuning is not bound to the retrieval layer in our framework.
Furthermore, we have found that the fine-tuning process is enhanced by a couple of techniques: 1) enriching shorter lists in the datasets with items from the user's 3-hop affinity; 2) randomly swapping items between the instruction and the generation label.
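A possible reading of the second technique is sketched below; the swap probability and the exact swapping scheme are assumptions made for illustration, as the paper does not specify them.

```python
import random

def swap_augment(history, targets, p=0.2, rng=random):
    """Sketch of the swapping trick: randomly exchange items between the
    instruction (interaction history) and the generation label (target items).
    The probability p and the pairwise scheme are illustrative assumptions."""
    history, targets = list(history), list(targets)
    for i in range(min(len(history), len(targets))):
        if rng.random() < p:
            history[i], targets[i] = targets[i], history[i]
    return history, targets
```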
Last but not least, we fine-tune on only 20% of users. We intend to demonstrate the strong inductive learning capabilities of LLMs. This is not possible for item-embedding based models such as (Han et al., 2017; Wang et al., 2018), which must be trained on the full data to function effectively.
## 3. Experiments
### Experiments Settings
#### 3.1.1. Datasets
The two public datasets are collected from the real-world platforms and have been widely used for sequential recommendation. _Amazon Beauty1_ is one category of Amazon review datasets, which contains a collection of user-item interactions
Figure 2. Here is an example for movie recommendation. (Above) The “Natural Language User Profile” leverages an LLM to summarize the user’s preferences by taking into account the movies the user has watched and the movie keywords. (Below) The “LLM for Recommendation” takes the “Natural Language Prompt” as its input, which is composed of three parts: the “Interaction History Sequence,” the “Natural Language User Profile,” and the “Candidates for Recommendation.”
on Amazon spanning from May 1996 to July 2014. _Movielens-\(1M^{2}\)_ is a common benchmark dataset that includes one million movie ratings.
For dataset preprocessing, we follow the common practice(Krizhevsky et al., 2014; Krizhevsky et al., 2014). We convert all numeric ratings or presence of a review to "1" and others to "0". Then, for each user, we discard duplicated interactions and then sort their historical items by the interacted time step chronologically to obtain the user interacted sequence. It is worth mentioning that to guarantee each user/item with enough interactions, we follow the preprocessing procedure in(Krizhevsky et al., 2014; Krizhevsky et al., 2014), which only keeps the "5-core" datasets. We discard users and items with fewer than 5 interaction records iteratively. The statistics of these datasets are reported in Table 1.
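The "5-core" filtering can be implemented as a simple iterative procedure; the sketch below is a generic version of this standard preprocessing step, not the authors' script.

```python
from collections import Counter

def five_core(interactions, k=5):
    """Iteratively drop users and items with fewer than k interactions.
    `interactions` is a list of (user, item) pairs; iteration is needed because
    removing one side can push the other side below the threshold again."""
    while True:
        users = Counter(u for u, _ in interactions)
        items = Counter(i for _, i in interactions)
        kept = [(u, i) for u, i in interactions if users[u] >= k and items[i] >= k]
        if len(kept) == len(interactions):
            return kept
        interactions = kept
```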
#### 3.1.2. Evaluation
We adopt the leave-one-out strategy to evaluate the performance of each method, which is widely employed in many related works. For each user, we hold out the last interacted item as the test data and utilize the item just before the last as the validation data. The remaining items are used for training. We evaluate each method on the whole item set without sampling as suggested in previous study (Krizhevsky et al., 2014). We employ Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG) to evaluate the performance. HR focuses on the presence of the positive item, while NDCG further takes the rank position information into account.
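For a single held-out item per user, both metrics reduce to simple expressions of the target's rank; the following sketch shows one way to compute the per-user values, which are then averaged over users.

```python
import math

def hit_and_ndcg(ranked_items, target, k=10):
    """HR@k and NDCG@k for leave-one-out evaluation with one held-out target
    item per user; `ranked_items` is the model's ranking over the item set."""
    top_k = ranked_items[:k]
    if target not in top_k:
        return 0.0, 0.0
    rank = top_k.index(target)              # 0-based position of the target
    return 1.0, 1.0 / math.log2(rank + 2)   # single relevant item: NDCG = 1/log2(rank+2)
```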
#### 3.1.3. Baselines
To verify the effectiveness of our method, we compare it with the following representative baselines.
* **BPR-MF**(Krizhevsky et al., 2014). It utilizes matrix factorization to model users and items with the pairwise Bayesian Personalized Ranking (BPR) loss.
* **NCF**(Krizhevsky et al., 2014). It employs a neural network architecture to model non-sequential user-item interactions instead of the inner product used by matrix factorization.
* **GRU4Rec**(Krizhevsky et al., 2014). It utilizes GRU to model the sequential behavior of users for recommendation.
* **Caser**(Krizhevsky et al., 2014). It devises horizontal and vertical CNN to exploit user's recent sub-sequence behaviors for recommendation.
* **SASRec**(Krizhevsky et al., 2014). It models user sequences through self-attention modules to capture users' dynamic interests, and it is a competitive benchmark in sequential recommendation.
### Overall Performance Comparison
Table 2 summarizes the best results of all models on two benchmark datasets. As shown in Table 2, our \(PALR_{02}\) outperforms multiple baselines by a large margin on two datasets. A comparison between \(PALR_{01}\) and \(PALR_{02}\) reveals the crucial role of candidate retrieval in improving performance. As we mentioned before, our framework does not depend on any particular retrieval algorithms. Ideally, PALR can function as an effective ranking model in conjunction with various retrieval methods. In this paper, we utilize SASRec as our retrieval layer and consider its top 50 recommendations. By comparing \(PALR_{02}\) and SASRec, it's obvious that the top-10 recommendations re-ranked by our PALR are superior to the original recommendations provided by SASRec. We also evaluate our framework using different recommendation algorithms, including BERT4Rec and LightGCN, and observe a similar trend.
By conducting various experiments, we are able to gain a deeper understanding of the significance of fine-tuning. We could observe \(PALR_{01}\) has shown some ability to connect historical interacted items with possible future interacted items. Prior to fine-tuning, the model tends to only recommend popular movies in movie recommendation tasks. However, \(PALR_{01}\) isn't able to retrieve the target item from a list of candidate items. We have tried to use \(PALR_{01}\) for retrieval and observe that it could only randomly select from the candidates. The performance from \(PALR_{02}\) has demonstrated the effectiveness of incorporating an additional instruction during the fine-tuning stage.
## 4. Conclusion
The paper introduces PALR, a novel generative framework for producing personalized recommendations, which utilizes a multi-step paradigm to better leverage the knowledge in LLMs' parameters and reasoning abilities for sequential recommendation tasks. Additionally, the paper discusses the recent advances in LLMs and how they can be leveraged for recommendation tasks. Besides the competitive experimental results mentioned in the paper, LLMs have some other unique benefits in the recommendation task. The first advantage of using LLMs in recommendation tasks is the ease with which external knowledge from different sources can be incorporated into the framework. The second advantage is that LLMs offer an easier pathway to more complex recommendation scenarios, including explainable recommendations and conversational recommendations. Moving forward, our research will focus on further leveraging LLMs in recommendation tasks while ensuring a balance between their
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Dataset** & **\# Users** & **\# Items** & **\# Interactions** \\ \hline Beauty & 22,363 & 12,101 & 198,502 \\ Movielens-\(1\)M & 6,040 & 3,416 & 999,611 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of datasets after preprocessing.
\begin{table}
\begin{tabular}{l l c c} \hline \hline Dataset & Model & HR@10 & NDCG@10 \\ \hline \multirow{8}{*}{Beauty} & BPR-MF & 0.0299 & 0.0122 \\ \cline{2-5} & NCF & 0.0293 & 0.0130 \\ \cline{2-5} & GRU4Rec & 0.0194 & 0.0091 \\ \cline{2-5} & Caser & 0.0282 & 0.0136 \\ \cline{2-5} & SASRec & 0.0617 & 0.0283 \\ \cline{2-5} & \(PALR_{01}\) & 0.0181 & 0.0101 \\ \cline{2-5} & \(PALR_{02}\) & **0.0721** & **0.0446** \\ \hline \multirow{8}{*}{ML-\(1\)M} & BPR-MF & 0.0354 & 0.0158 \\ \cline{2-5} & NCF & 0.0313 & 0.0143 \\ \cline{1-1} \cline{2-5} & GRU4Rec & 0.1017 & 0.0468 \\ \cline{1-1} \cline{2-5} & Caser & 0.1338 & 0.0614 \\ \cline{1-1} \cline{2-5} & SASRec & 0.1978 & 0.1192 \\ \cline{1-1} \cline{2-5} & \(PALR_{01}\) & 0.1216 & 0.0569 \\ \cline{1-1} \cline{2-5} & \(PALR_{02}\) & **0.2110** & **0.1276** \\ \hline \hline \end{tabular}
\end{table}
Table 2. Experimental results on the two datasets. The best results are in boldface.
powerful capabilities and latency. As LLMs can be computationally intensive, we will explore ways to optimize their performance and reduce latency without sacrificing accuracy or personalization.
## Acknowledgement
We thank the LLaMA team for giving us access to their models.
|
2301.04804 | A Generalized Estimating Equation Approach to Network Regression | Regression models applied to network data where node attributes are the
dependent variables pose a methodological challenge. As has been well studied,
naive regression neither properly accounts for community structure, nor does it
account for the dependent variable acting as both model outcome and covariate.
To address this methodological gap, we propose a network regression model
motivated by the important observation that controlling for community structure
can, when a network is modular, significantly account for meaningful
correlation between observations induced by network connections. We propose a
generalized estimating equation (GEE) approach to learn model parameters based
on clusters defined through any single-membership community detection algorithm
applied to the observed network. We provide a necessary condition on the
network size and edge formation probabilities to establish the asymptotic
normality of the model parameters under the assumption that the graph structure
is a stochastic block model. We evaluate the performance of our approach
through simulations and apply it to estimate the joint impact of baseline
covariates and network effects on COVID-19 incidence rate among countries
connected by a network of commercial airline traffic. We find that the network
effect has some influence during the beginning of the pandemic, while the
percentage of urban population has more influence on the incidence rate than
the network effect after the travel ban was in effect. | Riddhi Pratim Ghosh, Jukka-Pekka Onnela, Ian Barnett | 2023-01-12T04:26:10Z | http://arxiv.org/abs/2301.04804v2 | # A Generalized Estimating Equation Approach to Network Regression
###### Abstract
Regression models applied to network data where node attributes are the dependent variables poses a methodological challenge. As has been well studied, naive regression neither properly accounts for community structure, nor does it account for the dependent variable acting as both model outcome and covariate. To address this methodological gap, we propose a network regression model motivated by the important observation that controlling for community structure can, when a network is modular, significantly account for meaningful correlation between observations induced by network connections. We propose a generalized estimating equation (GEE) approach to learn model parameters based on clusters defined through any single-membership community detection algorithm applied to the observed network. We provide a necessary condition on the network size and edge formation probabilities to establish the asymptotic normality of the model parameters under the assumption that the graph structure is a stochastic block model. We evaluate the performance of our approach through simulations and apply it to estimate the joint impact of baseline covariates
and network effects on COVID-19 incidence rate among countries connected by a network of commercial airline traffic. We find that the network effect has some influence during the beginning of the pandemic, while the percentage of urban population has more influence on the incidence rate than the network effect after the travel ban was in effect.
_Keywords:_ network regression; transportation networks; generalized estimating equations; COVID-19
## 1 Introduction
Network data provide quantitative information to study and unveil the pattern of interactions among various objects, from individuals to countries. The inferential task in scientific applications requires learning about the influence that an individual has on others, which is further complicated as the individuals are often nested in communities. Two important methodological challenges of network regression are (1) accounting for correlation induced by network structure, and (2) allowing for node attributes to appear in the data both as outcomes and as covariates in the design matrix. For example, network regression was applied in the study of the nature and extent of the person-to-person spread of obesity: Christakis & Fowler (2007) found that obesity appears to spread through social ties, a phenomenon that may in part be explained by the notion of homophily - that birds of a feather flock together (Shrum et al., 1988; Igarashi et al., 2005; Tifferet, 2019). Sensitivity analyses suggest that contagion effects for obesity and smoking cessation are reasonably robust to possible latent homophily or environmental confounding; those for happiness and loneliness are somewhat less so (VanderWeele, 2011). To further investigate the causal relationship, Shalizi & Thomas (2011) identified three factors underlying such interactions: homophily, or the formation of social ties due to matching individual traits; social contagion, or the causal influence of others' responses on an individual's response; and the causal effect of an individual's covariates on his or her measurable response. This has led to the development of two types of models: (1) community detection models that aim to find distinct communities or clusters of similar individuals (Holland et al., 1983; Newman, 2018), and (2) a more general framework of network regression that can model such phenomena using a direct link between observed individual attributes or covariates and the network interactions (Holland & Leinhardt, 1981; P. D. Hoff et al., 2002; P. Hoff, 2021). Some approaches that aim at merging the above two models exploit distributional assumptions to incorporate covariate information in detecting communities in the network (Binkiewicz et al., 2017; Mu et al., 2022). These examples highlight the need for regression techniques that are robust to non-trivial network structure because in all these
studies community labels are known which makes the evaluation of covariate effects easy. However, in a more realistic situation, community labels are unknown (unobserved) with differences in covariates existing through the unobserved communities making the inference challenging.
In this article, we aim to develop a network regression model motivated by the important observation that the differences between the communities in a network can be attributed to the differences in the influences of covariates responsible for an edge formation. Our model is based on a flexible network effect assumption, and it allows us to perform tests for the model parameters in addition to the estimation. Aside from detecting communities solely based on degree, the degree-corrected stochastic block model was one of the first models to allow within-community heterogeneity (Qin & Rohe, 2013). There has been a surge of interest among psychologists and social scientists to study such behavior: Aukett et al. (1988) show that gender differences play an important role in friendship patterns, with women showing a preference for a few, closer, intimate same-sex friendships based on sharing emotions, whereas men build up friendships based on the activities they do together. Staber (1993) studies how women and men form entrepreneurial relationships, concluding that women's networks are wider with more strangers and a higher proportion of cross-sex ties. In each of these network regression examples, community labels are known, which alleviates the inferential task of finding the effects of covariates and learning community structures. However, in many other scenarios, community labels are unknown, which makes the inferential task more challenging. For example, here we will consider the network regression problem of estimating the impact of air travel network flow between countries on COVID-19 incidence rates, where network structure is unknown and must be estimated. The current literature lacks powerful methods that can address this important issue. Our focus is to bridge this gap by leveraging covariate-assisted information to propose an effective tool that also accounts for the network structure through the adjacency matrix.
Generalized estimating equations (GEE) (Liang & Zeger, 1986; Zeger, 1986) provide a
popular approach that is often used to analyze longitudinal and other types of correlated data (Burton et al., 1998; Diggle et al., 2002; Mandel et al., 2021). We propose first performing community detection in order to estimate community membership before then using a GEE regression model to account for the resulting estimated community structure. This approach is general and agnostic to the community detection algorithm used so long as each node can belong to at most one community. Given that network regression needs to account for correlation between attributes from nodes that are connected, a GEE allows for arbitrary correlation between nodes within the same community. Of note, this approach is best suited for highly modular networks so that the GEE assumption of independence between different communities is more accurate, though we explore its performance in less modular networks with more between-community mixing.
The rest of this article is organized as follows. In Section 2 we introduce our network regression model along with our GEE extension, and we describe the transportation network we use to model country-specific COVID-19 incidence rates. In Section 3 we present the theoretical results followed by extensive simulation results and real data analysis. Finally, Section 4 concludes the article with a discussion.
## 2 Methods
In this section, we introduce our network regression model and describe the air travel networks for each month of the first quarter of 2020 (the beginning of the COVID-19 pandemic) with country-specific information such as COVID-19 incidence rate, GDP, and population size. Next, we present a generalized estimating equation (GEE) approach to network regression that accounts for network-induced correlation between observations.
### Model and notation
We consider a directed network of \(n\) nodes and an \(n\times n\) adjacency matrix \(\mathbf{A}=(a_{ij})\) with 0's on the diagonal (i.e. no self-loops). We denote the feature variable of the \(i\)th node by \(y_{i}\), the \(l\)-dimensional vector of covariates of node \(i\) by \(\mathbf{x}_{i}\), and the corresponding vector of covariate coefficients by \(\mathbf{\alpha}\). Denoting the network effect coefficient of interest by \(\beta\), the network regression model for node attribute \(y_{i}\) we consider is:
\[y_{i}=\mathbf{\alpha}^{\top}\mathbf{x}_{i}+\beta\cdot\sum_{j\neq i}A_{ji}y_{j}/(n-1)+ \epsilon_{i}, \tag{1}\]
where \(\epsilon_{i}\overset{iid}{\sim}N(0,\sigma^{2})\) and \(\mathbf{\alpha}\in\mathbb{R}^{l}\). One can note that the above model is reminiscent of the first-order autoregressive spatial model of Kelejian & Prucha (1998) which frequently contains a spatial lag of the dependent variable as a covariate that is spatially autoregressive.
Using vector and matrix notation, the above model can also be written as
\[\mathbf{y}=\mathbf{X}^{\top}\mathbf{\alpha}+\beta\cdot\mathbf{A}\mathbf{y}/(n-1)+\mathbf{\epsilon},\]
where \(\mathbf{y}\) is the concatenation of the \(y_{i}\)s in a vector of length \(n\), and \(\mathbf{X}=[\mathbf{x}_{1}:\mathbf{x}_{2}:...:\mathbf{x}_{n}]\), is the matrix of covariates. Note, model (1) can be trivially adapted to generalized linear models with non-linear link functions and non-Gaussian data in the scope of GEE models.
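For intuition, data from this model can be generated by solving the linear system implied by the matrix form above; the following NumPy sketch does so, assuming \(\mathbf{I}-\beta\mathbf{A}/(n-1)\) is invertible. It is an illustration only, not code from the paper.

```python
import numpy as np

def simulate_network_regression(A, X, alpha, beta, sigma, seed=None):
    """Draw one realization of the node attributes y from the matrix form of
    model (1): (I - beta*A/(n-1)) y = X^T alpha + eps, with eps ~ N(0, sigma^2).
    A is the n x n adjacency matrix, X the l x n covariate matrix."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    eps = rng.normal(0.0, sigma, size=n)
    M = np.eye(n) - beta * A / (n - 1)
    return np.linalg.solve(M, X.T @ alpha + eps)
```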
### Data description
With 622 million confirmed cases and 6.5 million deaths globally as of October 01, 2022, the COVID-19 pandemic has had a tremendous impact on the world, shrinking the global economy by 5.2%, the largest recession in post-World War II history (Bank, 2020). The travel bans in place worldwide have severely affected the tourism industry, with estimated losses of 900 billion to 1.2 trillion USD and tourism down 58%-78% (Le et al., 2022). The airline industry has also suffered heavily, with 43 airlines declaring bankruptcy and 193 of 740 European airlines at risk of closing. Here we focus on the start of the pandemic, covering the transition to travel bans across the world, to study how air travel contributed to COVID-19 incidence rates and how effective travel bans were in controlling them.
We use pandemic data from the Johns Hopkins University coronavirus data repository through April 30, 2020 (COVID, 19). Flight data are from the Official Airline Guide (OAG) (Strohmeier et al., 2021). Because only data for January and February 2020 are available from OAG, we estimated flight data for other time periods using the OpenSky Network database (Schafer et al., 2014; Strohmeier et al., 2021). This database tracks the number of flights from one country to another over time, which we use to estimate country-to-country flight data for other months. We include as covariates the GDP, total population, and percentage of the urban population for each country in our network. Constructing a network based on the flight data and incorporating the above country-specific attributes such as GDP and population as covariates with coefficients \(\mathbf{\alpha}\) in (1), we aim to estimate the effectiveness of travel bans through \(\beta\) in model (1).
### Generalized estimation equation (GEE) approach
Our network contains \(K\) communities where community \(k\) is defined by
\[E_{k}=\{i:g_{i}=k\},\]
where index \(g_{i}\) represents the community membership of node \(i\), \(|E_{k}|=n_{k}\) (the number of nodes in community \(k\)) and \(\sum_{k} n_{k}=n\). Let \(\mathbf{y}_{k}\) and \(\mathbf{\epsilon}_{k}\) denote the vectors obtained by concatenating the \(y_{i}\)s and \(\epsilon_{i}\)s for \(i\in E_{k}\), respectively, and let \(\mathbf{X}_{k}=[\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n_{k}}]\) denote the \(l\times n_{k}\) sub-matrix of covariates corresponding to cluster \(k\).
To fit our network regression model in the GEE framework, we use communities as clusters and use the following equation to model the node attributes of members of the \(k\)th
cluster:
\[\mathbf{y}_{k}=\mathbf{X}_{k}^{\top}\mathbf{\alpha}+\beta\mathbf{Z}_{k}+\mathbf{\epsilon}_{k}, \tag{2}\]
where \(\mathbf{A}_{k}\) is the \(n_{k}\times n_{k}\) sub-matrix of \(\mathbf{A}\) pertaining to the cluster \(k\), and \(\mathbf{Z}_{k}=\mathbf{A}_{k}\mathbf{y}_{k}/(n-1)\).
The marginal mean \(\mathbf{\mu}_{k}\) of \(\mathbf{y}_{k}\) has the form:
\[\mathbf{\mu}_{k}=E(\mathbf{y}_{k}|\mathbf{X}_{k})=(\mathbf{I}_{n_{k}}-\beta\mathbf{A}_{k}/(n-1))^{ -1}\mathbf{X}_{k}^{\top}\mathbf{\alpha},\quad k=1,2,...,K,\]
where \(\mathbf{I}_{n_{k}}\) is the identity matrix of order \(n_{k}\).
Adopting a GEE approach (Liang & Zeger, 1986), the resulting estimating equation is given by
\[\sum_{k=1}^{K}\mathbf{D}_{k}^{\top}\mathbf{V}_{k}^{-1}(\mathbf{y}_{k}-\mathbf{\mu}_{k})=\mathbf{0 },\quad k=1,2,...,K, \tag{3}\]
where \(\mathbf{D}_{k}=\frac{\partial\mathbf{\mu}_{k}}{\partial(\beta,\mathbf{\alpha})^{\top}}\) is of dimension \(n_{k}\times(l+1)\) and \(\mathbf{V}_{k}\) is the \(n_{k}\times n_{k}\) working covariance matrix of \(\mathbf{y}_{k}\). The explicit form for \(\mathbf{D}_{k}\) is
\[\mathbf{D}_{k}=[\underbrace{\tfrac{1}{n-1}(\mathbf{I}_{n_{k}}-(\beta/(n-1))\mathbf{A}_{k})^{-1}\mathbf{A}_{k}(\mathbf{I}_{n_{k}}-(\beta/(n-1))\mathbf{A}_{k})^{-1}\mathbf{X}_{k}^{\top}\mathbf{\alpha}}_{n_{k}\times 1}\ :\ \underbrace{(\mathbf{I}_{n_{k}}-(\beta/(n-1))\mathbf{A}_{k})^{-1}\mathbf{X}_{k}^{\top}}_{n_{k}\times l}].\]
One can note that \(\mathbf{D}_{k}\) consists of two partitioned matrices where the first one corresponds to the network parameter \(\beta\) and the second one is due to the covariate \(\mathbf{\alpha}\). We can solve for \(\hat{\mathbf{\alpha}}\) and \(\hat{\beta}\) in equation (3) through iterative reweighted least squares, and use the robust sandwich covariance estimator to perform inference on \(\mathbf{\alpha}\) and \(\beta\).
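To make the per-cluster ingredients of (3) concrete, the sketch below computes \(\mathbf{\mu}_{k}\) and \(\mathbf{D}_{k}\) directly from their definitions for a single community. It is an illustrative NumPy version only (the analysis in this paper was carried out with the _geepack_ R package), and all names are ours.

```python
import numpy as np

def cluster_mean_and_derivative(A_k, X_k, alpha, beta, n):
    """Marginal mean mu_k and derivative matrix D_k for one community.

    A_k : (n_k, n_k) sub-adjacency matrix; X_k : (l, n_k) covariates for the cluster;
    alpha : (l,) coefficients; beta : network effect; n : total number of nodes.
    """
    n_k = A_k.shape[0]
    B = np.linalg.inv(np.eye(n_k) - (beta / (n - 1)) * A_k)  # (I - beta A_k/(n-1))^{-1}
    mu_k = B @ (X_k.T @ alpha)                               # marginal mean of y_k
    d_beta = (B @ A_k @ B @ (X_k.T @ alpha)) / (n - 1)       # d mu_k / d beta
    d_alpha = B @ X_k.T                                      # d mu_k / d alpha, (n_k, l)
    D_k = np.column_stack([d_beta, d_alpha])                 # (n_k, l + 1)
    return mu_k, D_k
```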
Results
### Theoretical results
In this section, we prove the asymptotic normality of the resulting GEE estimator for \(\beta\) and \(\boldsymbol{\alpha}\) jointly. Towards this, we assume constant probabilities of edge formation within and between communities, denoted by \(p\) and \(q\) respectively, as in a stochastic block model (Holland et al., 1983). Our proof of asymptotic normality hinges on Theorem 2 of Liang & Zeger (1986), which establishes the asymptotic normality of the regression parameter in the classical GEE approach under the assumption that the correlation parameter, appropriately scaled by the number of communities, is consistently estimated. Our primary distinction from this approach is that we must account for probabilities of edge formation instead of correlation between observations. Therefore, we first establish the consistency of \(\hat{p}\) and \(\hat{q}\) following Proposition 1 of Chen et al. (2021) and subsequently show asymptotic normality of \((\hat{\beta},\hat{\boldsymbol{\alpha}})\) from the estimating equation (3).
#### 3.1.1 Consistency of \(\hat{p}\) and \(\hat{q}\)
**Proposition 3.1**.: Consider a network generated from a stochastic block model of \(K\) communities of size \(m\) so that total number of nodes is \(n=Km\). Assume that \(m^{\gamma}p\to p^{*}\) and \(m^{\gamma}q\to q^{*}\) as \(n\to\infty\), where \(p^{*}\) and \(q^{*}\) are positive fixed constants, and \(\gamma\in[0,2)\), then \(m^{1+\gamma/2}(\hat{p}-p)\) and \(m^{1+\gamma/2}(\hat{q}-q)\) are both \(o_{P}(1)\).
Proof.: See the appendix in Section 5.1.
Guided by the above proposition, we establish the asymptotic normality of the model parameters in the following theorem. The key difference with the classical formula is the inclusion of the community size in the covariance coming from the consistency of \(p\) and \(q\).
#### 3.1.2 Asymptotic normality of \(\hat{\beta}\)
**Theorem 3.2**.: Under the conditions of Proposition 3.1, \(K^{1/2}m^{1+\gamma/2}\big{(}(\hat{\beta},\hat{\mathbf{\alpha}})-(\beta,\mathbf{\alpha})\big{)}^{\top}\) is asymptotically multivariate normal with zero mean and covariance given by
\[V=\lim_{K\to\infty}Km^{2+\gamma}\Big{(}\sum_{k=1}^{K}\mathbf{D}_{k}^{\top}\mathbf{V}_{k} ^{-1}\mathbf{D}_{k}\Big{)}^{-1}\Big{(}\sum_{k=1}^{K}\mathbf{D}_{k}^{\top}\mathbf{V}_{k}^{- 1}cov(\mathbf{Y}_{k})\mathbf{V}_{k}^{-1}\mathbf{D}_{k}\Big{)}\Big{(}\sum_{k=1}^{K}\mathbf{D}_{ k}^{\top}\mathbf{V}_{k}^{-1}\mathbf{D}_{k}\Big{)}^{-1}\]
Proof.: See the appendix in Section 5.2.
_Remark:_ The variance formula involves the term \(m^{2+\gamma}\), where \(m\) is the community size. If \(m\) is large, then the covariance will increase at a rate \(m^{2+\gamma}\). This is reminiscent of the well-known fact that the sandwich estimator \(\hat{V}\) of the covariance of \((\beta,\mathbf{\alpha})^{\top}\) is not stable if \(m\) is large relative to \(K\). In essence, if \(m\) grows at a similar rate to \(K\), the sandwich estimator becomes an unstable estimator of the covariance. In practice this implies that the GEE approach works best when the network contains many smaller communities rather than only a few larger communities.
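As a sketch of how the robust sandwich covariance referred to above can be assembled from the per-cluster quantities, the following illustrative NumPy code (not from the paper) replaces \(\mathrm{cov}(\mathbf{Y}_{k})\) in \(V\) by the outer product of the cluster residuals evaluated at the estimates, as is standard practice.

```python
import numpy as np

def sandwich_covariance(D_list, V_list, resid_list):
    """Empirical sandwich covariance for (beta, alpha).

    D_list, V_list and resid_list contain D_k, the working covariance V_k, and the
    residuals S_k = y_k - mu_k for each community, evaluated at the GEE estimates.
    """
    dim = D_list[0].shape[1]
    bread = np.zeros((dim, dim))
    meat = np.zeros((dim, dim))
    for D_k, V_k, S_k in zip(D_list, V_list, resid_list):
        V_inv = np.linalg.inv(V_k)
        bread += D_k.T @ V_inv @ D_k
        meat += D_k.T @ V_inv @ np.outer(S_k, S_k) @ V_inv @ D_k
    bread_inv = np.linalg.inv(bread)
    return bread_inv @ meat @ bread_inv
```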
### Simulations
We simulate a network of \(n\) nodes having balanced communities of size \(m=10\) via a stochastic block model and vary \(n\) in \(\{200,400\}\), with the number of communities \(K\) being \(20\) and \(40\) for the two values of \(n\), respectively. Let \(p\) and \(q\) denote the within-community and between-community probabilities of edge formation, as in a stochastic block model. In each setting, we vary \((p,q)\in\{(0.8,0),(0.7,0.1),(0.6,0.2),(0.5,0.3)\}\).
#### 3.2.1 Estimation of \(\beta\) and \(\alpha\)
The true model parameters and the corresponding data-generating process are chosen to span networks with varying degrees of modularity. We set \(\beta_{0}=0.5\) and \(\mathbf{\alpha}_{0}=(1,1,1,1,0.5,0.5,0.5,-0.5,-0.5,2)^{\top}\) (\(l=10\) in equation (1)). We simulate the \(l\times n\) matrix \(\mathbf{X}\) from a multivariate normal distribution in the following manner. The \(j\)th column of \(\mathbf{X}\), if node \(j\) belongs to community \(k\) (\(k=1,2,...,K\)), follows MVN(\((k/10)\mathbf{1}_{l},0.0001\mathbf{I}_{l}\)), where \(\mathbf{1}_{l}\) and \(\mathbf{I}_{l}\) are the vector of 1s and the identity matrix of dimension \(l\), respectively. In each setting, the adjacency matrix \(\mathbf{A}\) of dimension \(n\) is simulated from the stochastic block model with \(K\) communities of the aforementioned size such that the within- and between-community edge probabilities are \(p\) and \(q\), respectively. Finally, the response variable \(y_{i}\) is generated according to equation (1) with \(\sigma\) = 0.01.
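A compact sketch of one replication of this data-generating process is given below (illustrative NumPy code, not the implementation used in the paper; the fixed coefficient vector corresponds to \(l=10\)).

```python
import numpy as np

def simulate_replication(K=20, m=10, l=10, p=0.8, q=0.0, beta=0.5, sigma=0.01, seed=0):
    """Simulate (A, X, y, labels) for one replication of the Section 3.2 design."""
    rng = np.random.default_rng(seed)
    n = K * m
    labels = np.repeat(np.arange(1, K + 1), m)          # balanced communities 1..K
    alpha = np.array([1, 1, 1, 1, 0.5, 0.5, 0.5, -0.5, -0.5, 2.0])  # l = 10
    # Directed stochastic block model: within prob p, between prob q, no self-loops.
    same = labels[:, None] == labels[None, :]
    A = (rng.random((n, n)) < np.where(same, p, q)).astype(float)
    np.fill_diagonal(A, 0.0)
    # Covariates: column j ~ MVN((k/10) 1_l, 0.0001 I_l) when node j is in community k.
    means = np.tile(labels / 10.0, (l, 1))
    X = rng.normal(means, 0.01)
    # Response from the reduced form of model (1).
    eps = rng.normal(0.0, sigma, size=n)
    y = np.linalg.solve(np.eye(n) - beta * A / (n - 1), X.T @ alpha + eps)
    return A, X, y, labels
```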
To estimate \(\beta\) and \(\mathbf{\alpha}\), we first perform community detection on the directed graph obtained from the adjacency matrix \(\mathbf{A}\) as in Rosvall & Bergstrom (2007). Next, with the resulting communities we fit a GEE using the _geepack_ R package (Halekoh et al., 2006) and report the estimates of bias and variance by averaging over \(B=1000\) replications.
The squared bias and variance of the estimated \(\beta\) increase as the networks become less modular (i.e. as \(q\) increases) for both our GEE approach and naive least squares (see Figure 1). The naive least squares method does not assume any community structure and reports the parameter estimates obtained by assuming independence between observations in equation (1) directly.
From Figure 1 we see that less modular networks are more difficult to fit in general, based on the increase in bias for both methods. Despite this, our GEE approach is uniformly less biased than naive least squares, which demonstrates that GEE effectively controls for correlation within communities. This also exposes the weakness of our GEE approach: if the network is not modular, with a high degree of mixing between communities (i.e. large \(q\)), the GEE framework cannot accommodate this due to its assumption of independence between communities. However, even when \(q\) is large we still observe that GEE at least partially mitigates the impact of correlation induced by the network, which explains why its bias is smaller than that of naive least squares. Essentially, even when the network structure suggests the GEE assumptions are incorrect, one is still generally better off partially accounting for network structure using our GEE approach rather than ignoring community structure altogether.
The smaller standard errors demonstrated by least squares in Figure 1 reflect the inaccuracy of the method. By ignoring community structure and assuming independence between observations in network regression settings, we expect to see anti-conservative hypothesis tests, too-narrow confidence intervals, and general overconfidence. This overconfidence is reflected in the too-small standard errors for naive least squares as well as in the anti-conservative Type I errors we demonstrate next.
#### 3.2.2 Hypothesis testing of \(\beta\)
We consider an approach to testing the hypothesis \(H_{0}:\beta=0\) against the alternative \(H_{A}:\beta\neq 0\). We perform a simulation study to obtain the Type I error in a variety of network
Figure 1: **Bias squared and standard error of estimates of \(\beta\) with varying degrees of network modularity.** In order to estimate \(\beta\), \(B=1000\) replications were performed for \(n=200,400\), with average degree \(7\) and \(16\), respectively. The values of \((p,q)\) are varied over \(\{(.8,0),(.7,.1),(.6,.2),(.5,.3)\}\).
structures. We consider two working correlation structures for our GEE model: independence and exchangeable. First, we simulate our data as in Section 3.2 with \(\beta=0\). We obtain an empirical null distribution by replicating this procedure \(B=1000\) times to obtain \(\hat{\beta}^{(1)},\hat{\beta}^{(2)},...,\hat{\beta}^{(B)}\). We estimate the P-value by \(\sum_{b=1}^{B}I(|\hat{\beta}^{(b)}|\geq|\hat{\beta}|)/B\). We summarize our Type I error results at the \(0.05\) significance level in Table 1, which demonstrates that as network modularity decreases (\(q\) gets larger), the hypothesis test for \(H_{0}\) becomes anti-conservative. This is true for both least squares and GEE, although least squares is more adversely impacted. In addition, while GEE attains the nominal level in the highly modular setting, as expected, LS is still anti-conservative even there. This demonstrates the efficacy of GEE as a better inferential tool than naive least squares in network regression settings, even when the GEE assumption of between-community independence does not hold. It is also instructive to note that although the P-values for the independent correlation structure are generally smaller than those for the exchangeable structure, the differences are not overwhelming.
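The empirical-null calculation described above amounts to the following (an illustrative sketch, not the paper's code; here the Type I error is estimated by computing each null estimate's p-value against the remaining draws).

```python
import numpy as np

def empirical_type1_error(null_betas, level=0.05):
    """Type I error of the test of H0: beta = 0 from an empirical null distribution.

    null_betas holds B estimates of beta obtained from data simulated with beta = 0.
    """
    null_betas = np.asarray(null_betas)
    abs_null = np.abs(null_betas)
    rejections = 0
    for b, beta_hat in enumerate(null_betas):
        others = np.delete(abs_null, b)                 # leave the current draw out
        p_value = (others >= abs(beta_hat)).mean()
        rejections += p_value < level
    return rejections / len(null_betas)
```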
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \((n,K)\) & \((p,q)\) & GEE (independent) & GEE (exchangeable) & LS \\ \hline \multirow{3}{*}{\((200,20)\)} & (0.8, 0) & 0.049 & 0.055 & 0.062 \\ & (0.7, 0.1) & 0.050 & 0.055 & 0.079 \\ & (0.6, 0.2) & 0.058 & 0.061 & 0.091 \\ & (0.5, 0.3) & 0.072 & 0.075 & 0.095 \\ \hline \multirow{3}{*}{\((400,40)\)} & (0.8, 0) & 0.050 & 0.050 & 0.060 \\ & (0.7, 0.1) & 0.051 & 0.056 & 0.075 \\ \cline{1-1} & (0.6, 0.2) & 0.059 & 0.062 & 0.090 \\ \cline{1-1} & (0.5, 0.3) & 0.070 & 0.075 & 0.096 \\ \hline \end{tabular}
\end{table}
Table 1: **Type I error for the test of \(H_{0}:\beta=0\).** Comparison of Type-I error at the \(0.05\) significance level for a network of \(n\) nodes with \(K\) balanced communities for different choices of within community edge probability (\(p\)) and between community edge probability (\(q\)) among GEE with independent and exchangeable correlation structure and naive least squares.
### Real data analysis
Air travel networks were constructed from flight data retrieved from the Official Airline Guide (OAG), with node attribute outcomes being the country-specific COVID-19 incidence rates by month, available from the Johns Hopkins University coronavirus data repository through April 30, 2020 (COVID, 19). In addition, we included country-specific GDP (WBG, 2020a), total population (WBG, 2020b), and percentage of the urban population (WBG, 2020c) from the website of the World Bank as covariates. We study the effectiveness of travel bans on the COVID-19 incidence rate using our network regression model by performing a month-by-month analysis from January to April 2020, which spans the period from before the pandemic until one month after the travel restrictions came into effect. In addition, we study the importance of baseline covariate effects such as GDP, percentage of the urban population, and population size versus the network effect. The GDP and the population of the largest countries are on the order of \(10^{12}\) and \(10^{6}\), respectively, so we scale GDP, population, and percentage of the urban population by \(10^{12}\), \(10^{6}\), and \(10^{2}\), respectively, in order to stabilize coefficient estimation.
With these data retrieved across different continents, to model the network effect through our proposed model in (1) we assume a directed stochastic block model framework, where nodes correspond to countries and each block contains countries having a large number of commercial flights traveling among them compared to the others. Edge formation is determined by thresholding the population-normalized count of flights arriving at the destination country.
We let the incidence rate \(y_{i}\) of the \(i\)th country be the number of cases per 1,000 population, and the covariate matrix \(\mathbf{x}_{n\times 4}=(\mathbf{1}_{n\times 1}:\mathbf{x}_{2,n\times 1}:\mathbf{x}_{3,n\times 1}:\mathbf{x}_{4,n\times 1})\) be a matrix whose first column contains 1s for the intercept and whose second, third and fourth columns contain the population size, GDP, and percentage of the urban population of each country, scaled by \(10^{6}\), \(10^{12}\), and \(10^{2}\), respectively. For constructing the adjacency matrix \(\mathbf{A}\), we consider two scenarios: unweighted and weighted. For unweighted adjacency matrices, from the directed
graph dictated by the number of flights in a particular month, we first construct a count matrix \(\mathbf{C}\) whose entries count the number of flights from one country to another. Then we construct an unweighted adjacency matrix \(\mathbf{A}\) of 0s and 1s, where an entry of \(\mathbf{A}\) is 1 if the corresponding entry of \(\mathbf{C}\) exceeds the third quartile of the elements of \(\mathbf{C}\), and 0 otherwise. For weighted adjacency matrices, we divide the elements of \(\mathbf{C}\) by the total population of the destination country in millions. The main rationale behind this scaling is that one would expect more flights traveling to a populous country than to a less populous one; dividing by the population of the destination country therefore puts the elements of the weighted adjacency matrix on a comparable scale.
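The construction of the two adjacency matrices can be sketched as follows (illustrative NumPy code; the orientation of \(\mathbf{C}\), with rows as origin countries and columns as destinations, is our assumption).

```python
import numpy as np

def build_adjacency(C, destination_pop_millions):
    """Unweighted and weighted adjacency matrices from a monthly flight-count matrix.

    C[i, j] counts flights from country i to country j; destination_pop_millions[j]
    is the population of destination country j in millions.
    """
    C = np.asarray(C, dtype=float)
    q3 = np.quantile(C, 0.75)                              # third quartile of the counts
    A_unweighted = (C > q3).astype(float)                  # 1 where the count exceeds Q3
    A_weighted = C / np.asarray(destination_pop_millions)[None, :]
    np.fill_diagonal(A_unweighted, 0.0)
    np.fill_diagonal(A_weighted, 0.0)
    return A_unweighted, A_weighted
```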
With the two adjacency matrices constructed in this way, we fit the model via the estimating equation (3) and summarize the results in Table 2. The estimates of \(\beta\) increase from February to April for both the weighted and unweighted cases, suggesting an increasing association between travel and the spread of the pandemic. To further investigate this behavior, we plot the average number of flights per million population in Figure 2, which shows that although the average
number of flights is decreasing from January to April, the estimates of \(\beta\) are increasing. This reflects the fact that while travel bans led to a decrease in the total number of flights in March and April, the increasing \(\beta\) in these months implies that each flight had an increased likelihood of transmitting COVID-19 to the destination country, increasing the correlation between the incidence rates in the two countries.
Table 2 also demonstrates that while the overall population of a country has a negligible effect on the incidence rate, leading to small values of \(\alpha_{2}\), the percentage of the urban population plays a crucial role, especially once the travel ban was in effect; for example, the value of \(\alpha_{4}\) is 62 (55 in the weighted case) in April compared to 0 in February. Moreover, the corresponding values are consistent across the weighted and unweighted networks.
Table 2 also demonstrates the utility of including baseline covariates alongside the network effect to study the effectiveness of travel bans in mitigating the spread of COVID-19. By comparing the values of \(\beta\) and \(\alpha_{4}\), one can note that in the initial months of the pandemic the network effect is larger than the effect of the urban population. However, once the travel bans had taken place during March (for most countries), the effect of the urban population supersedes the network effect, as its value increases drastically from 9.88 to 62.33, suggesting that urbanity is the next most important factor to consider for controlling the spread of the pandemic after travel bans.
We also compare our results with naive least squares in Table 3, which shows coefficient estimates for both models. Of note, the April estimate of \(\beta\) is dramatically larger in the naive least squares model, which may be a manifestation of the increased bias we expect to see for naive least squares. Standard errors for the coefficient estimates are uniformly smaller for least squares, as we expect, likely reflecting the overconfidence of the least squares model that comes from ignoring network-induced correlation. This case demonstrates that using least squares would incorrectly and dramatically inflate both the magnitude and the apparent significance of the network effects on incidence rates, particularly in the months post-lockdown when travel bans were in place. In contrast, the GEE model provides a more
realistic view.
## 4 Discussion
We have proposed a generalized estimating equation (GEE) approach to the network regression model. By assuming independence between communities and using the estimated community memberships, the GEE approach accounts for within-community correlation when estimating the covariate coefficients, and thereby provides a flexible and efficient solution to the network regression model. Moreover, it allows us to perform hypothesis tests on the network regression parameter \(\beta\), which helps us decide whether including such a term in the analysis is important. We provided a relevant real data example of COVID-19 cases along with baseline covariates such as GDP, population size, and percentage of urban population of countries across
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Month of 2020 & \(\beta\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\alpha_{3}\) & \(\alpha_{4}\) \\ \hline Jan & 1.8759 (0.0038) & -0.0002 (-0.0001) & 0.0000 (0.0000) & 0.0013 (0.0013) & 0.0003 (0.0000 ) \\ Feb & 2.1946 (0.0014) & -0.0048 (-0.0037) & 0.0000 (0.0000) & 0.0555 (0.0575) & -0.0094 (-0.0072 ) \\ Mar & 2.7186 (0.0030) & -0.7846 (-0.7490) & -0.0005 (-0.0007) & -0.1446 (-0.0233) & 9.8844 (9.8862 ) \\ Apr & 4.6574 (0.0328) & -4.7307 (-3.5655) & -0.0085 (-0.0125) & 0.0626 (0.4685) & 62.3300 (54.9767) \\ \hline \end{tabular}
\end{table}
Table 2: **Estimates of the parameters for unweighted and (weighted) networks under the GEE model. \(\beta,\alpha_{1},\alpha_{2},\alpha_{3}\) and \(\alpha_{4}\) correspond to the coefficients of the adjacency matrix (network effect), intercept, population, GDP, and percentage of the urban population, respectively.**
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline Month of 2020 & \(\beta\) & \(\alpha_{1}\) & \(\alpha_{2}\) & \(\alpha_{3}\) & \(\alpha_{4}\) \\ \hline Jan & 0.0424 (0.0038) & -0.0001 (-0.0001) & 0.0000 (0.0000) & 0.0013 (0.0013) & 0.0000 (0.0000 ) \\ Feb & 0.0105 (0.0014) & -0.0037 (-0.0037) & 0.0000 (0.0000) & 0.0576 (0.0575) & -0.0073 (-0.0072 ) \\ Mar & 0.0458 (0.0030) & -0.7060 (-0.7490) & -0.0009 (-0.0007) & 0.0017 (-0.0233) & 9.5519 (9.8862 ) \\ Apr & 1.2655 (0.0328) & -5.5415 (-3.5655) & -0.0073 (-0.0125) & -0.2041 (0.4685) & 67.5900 (54.9767) \\ \hline \end{tabular}
\end{table}
Table 3: **Comparison of the naive linear regression with GEE for the weighted networks. The numbers reported in the parenthesis correspond to our GEE method while the others correspond to least squares. \(\beta,\alpha_{1},\alpha_{2},\alpha_{3}\) and \(\alpha_{4}\) correspond to the coefficients of the adjacency matrix, intercept, population, GDP, and percentage of the urban population respectively.**
different continents, and the number of commercial flights traveling between them, to study the importance of travel bans in mitigating the spread of the COVID-19 pandemic. We constructed the adjacency matrix from the flight counts, viewing the resulting network of countries through a stochastic block model in which each block contains countries with a similar number of flights among them. Our proposed model has helped us understand the importance of the baseline covariates versus the network effect. Since we have dealt with longitudinal data, it is also worth noting that our proposed model offers the flexibility of clustering both in the network space and over time.
One limitation of our results is the balanced community assumption made in Proposition 3.1 which may not reflect some unbalanced networks. This assumption can be relaxed somewhat with small departures from the balanced design. Further, under the stochastic block model, extremely unbalanced networks still allow for consistent estimation of \(p\) and \(q\), however for a more flexible network model that allows for community-specific edge probabilities, consistent estimation of edge probabilities requires each community to grow asymptotically with \(n\).
## 5 Appendix
### Proof of Proposition 3.1
From the construction of the adjacency matrices, one can note that the entries \(A_{ij}\) are independent Bernoulli random variables with
\[E(A_{ij})=\begin{cases}p,&\text{if $i,j$ belong to the same community}\\ q,&\text{otherwise}\end{cases}\]
\[Var(A_{ij})=\begin{cases}p(1-p),&\text{if $i,j$ belong to the same community}\\ q(1-q),&\text{otherwise}\end{cases}\]
Denote by \(S\) the set \(\{(i,j):1\leq i<j\leq n,\ i,j\text{ belong to the same community}\}\), and its complement by \(S^{\prime}\). Therefore, for a directed graph, one can write
\[E\Big{(}\sum_{i,j\in S}A_{ij}\Big{)}=2K\binom{m}{2}p,\] \[Var\Big{(}\sum_{i,j\in S}A_{ij}\Big{)}=2K\binom{m}{2}p(1-p)=s_{m_ {p}}^{2},\] \[E\Big{(}\sum_{i,j\in S^{\prime}}A_{ij}\Big{)}=K(K-1)m^{2}q,\text{ and }\] \[Var\Big{(}\sum_{i,j\in S^{\prime}}A_{ij}\Big{)}=K(K-1)m^{2}q(1-q )=s_{m_{q}}^{2}.\]
Next, one can verify the Lindeberg condition to establish the central limit theorem:
\[\frac{\sum_{i,j\in S}A_{ij}-2K\binom{m}{2}p}{\sqrt{2K\binom{m}{2}p(1-p)}}\quad \text{ and }\quad\frac{\sum_{i,j\in S^{\prime}}A_{ij}-K(K-1)m^{2}q}{\sqrt{K(K-1)m^{2}q(1 -q)}}\stackrel{{ d}}{{\rightarrow}}N(0,1) \tag{4}\]
as \(Km^{2}p(1-p)\) and \(Km^{2}q(1-q)\rightarrow\infty\).
Lindeberg's condition requires us to verify
\[\frac{1}{s_{m_{p}}^{2}}\sum_{i,j\in S}E\Big{(}B_{ij}^{2}I_{\{|B_{ij}|\geq \varepsilon s_{m_{p}}\}}\Big{)}\to 0\quad\text{ and }\frac{1}{s_{m_{q}}^{2}}\sum_{i,j\in S^{\prime}}E\Big{(}B_{ij}^{2}I_{\{|B_{ij }|\geq\varepsilon s_{m_{q}}\}}\Big{)}\to 0,\]
where \(B_{ij}=A_{ij}-E(A_{ij})\). Since \(|B_{ij}|\leq 1\), the above condition is satisfied when \(Km^{2}p(1-p)\) and \(Km^{2}q(1-q)\) tend to \(\infty\), since then \(\epsilon s_{m_{p}}\) and \(\epsilon s_{m_{q}}\) eventually exceed 1 and the indicators vanish.
Dividing the numerator and denominator of (4) by \(2K{m\choose 2}\) and \(K(K-1)m^{2}\) respectively, one obtains both
\(m^{\gamma/2}\sqrt{2K{m\choose 2}}\frac{\sum_{i,j\in S}A_{ij}/\left(2K{m\choose 2} \right)-p}{\sqrt{m^{\gamma}p(1-p)}}\) and \(m^{\gamma/2}\sqrt{K(K-1)m^{2}}\frac{\sum_{i,j\in S^{\prime}}A_{ij}/\left(K(K- 1)m^{2}\right)-q}{\sqrt{m^{\gamma}q(1-q)}}\)
both converge to the \(N(0,1)\) distribution. Since \(\hat{p}=\sum_{i,j\in S}A_{ij}/\left(2K{m\choose 2}\right)\) and \(\hat{q}=\sum_{i,j\in S^{\prime}}A_{ij}/\left(K(K-1)m^{2}\right)\), both \(\hat{p}\) and \(\hat{q}\) are \(m^{1+\gamma/2}\)-consistent.
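A quick numerical sanity check of these estimators is straightforward (illustrative sketch, not from the paper): given a directed stochastic block model adjacency matrix with known labels, average the off-diagonal entries over within- and between-community pairs.

```python
import numpy as np

def estimate_block_probabilities(A, labels):
    """Plug-in estimates of the within (p-hat) and between (q-hat) edge probabilities."""
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    n = A.shape[0]
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(n, dtype=bool)
    p_hat = A[same & off_diag].mean()    # ordered within-community pairs, no self-loops
    q_hat = A[~same].mean()              # ordered between-community pairs
    return p_hat, q_hat
```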
### Proof of Theorem 3.2
Here we present the sketch of the proof of Theorem 3.2. One can write (3) as
\[\sum_{k=1}^{K}U_{k}(\alpha,\boldsymbol{\beta},p,q)=\sum_{k=1}^{K}\boldsymbol{ D}_{k}^{\top}\boldsymbol{V}_{k}^{-1}\boldsymbol{S}_{k}=0, \tag{5}\]
where \(\boldsymbol{S}_{k}=\boldsymbol{y}_{k}-\boldsymbol{\mu}_{k}\), and \(U\) is a function of the model parameters.
Let \(\boldsymbol{b}=(\beta,\boldsymbol{\alpha})^{\top}\) denote the vector of model parameters, and let \(\boldsymbol{\pi}=(p,q)^{\top}\) denote the vector of within- and between-community edge probabilities. Keeping \(\boldsymbol{b}\) fixed, a Taylor expansion yields
\[\frac{\sum_{k=1}^{K}U_{k}(\boldsymbol{b},\boldsymbol{\pi}^{*})}{ K^{1/2}m^{1+\gamma/2}} =\frac{\sum_{k=1}^{K}U_{k}(\boldsymbol{b},\boldsymbol{\pi})}{K^{1/2}m ^{1+\gamma/2}}+\frac{\sum_{k=1}^{K}\frac{\partial U_{k}(\boldsymbol{b}, \boldsymbol{\pi})}{\partial\pi}}{K^{1/2}m^{1+\gamma/2}}m^{1+\gamma/2}( \boldsymbol{\pi}^{*}-\boldsymbol{\pi})+o_{P}(1) \tag{6}\] \[=\tilde{A}+\tilde{B}\tilde{C}+o_{P}(1),\]
One can note that \(\tilde{B}=o_{P}(1)\) as \(\partial U_{k}(\boldsymbol{b},\boldsymbol{\pi})/\partial\boldsymbol{\pi}\) are linear functions of \(\boldsymbol{S}_{k}\)'s defined in (5) whose means are zero, and \(\tilde{C}=O_{P}(1)\) thanks to Proposition 3.1. Therefore, \(\frac{\sum_{k=1}^{K}U_{k}(\boldsymbol{b},\boldsymbol{\pi}^{*})}{K^{1/2}m^{1+ \gamma/2}}\) is asymptotically equivalent to \(\frac{\sum_{k=1}^{K}U_{k}(\boldsymbol{b},\boldsymbol{\pi})}{K^{1/2}m^{1+\gamma /2}}\) whose asymptotic distribution is multivariate
normal with zero mean and covariance equal to \(V\) as defined in Theorem 3.2. The proof is thus complete following Liang & Zeger (1986).
## 6 Acknowledgement
RG would like to thank Anupam Kundu, postdoctoral associate at Yale School of Public Health and Thien Le, postdoctoral fellow at Harvard for many useful discussions regarding real data processing. JP acknowledges support from R01AI138901, and IB acknowledges support from R01MH116884.
|
2304.08638 | Deep Continuum Deformation Coordination and Optimization with Safety
Guarantees | In this paper, we develop and present a novel strategy for safe coordination
of a large-scale multi-agent team with ``\textit{local deformation}"
capabilities. Multi-agent coordination is defined by our proposed method as a
multi-layer deformation problem specified as a Deep Neural Network (DNN)
optimization problem. The proposed DNN consists of $p$ hidden layers, each of
which contains artificial neurons representing unique agents. Furthermore,
based on the desired positions of the agents of hidden layer $k$
($k=1,\cdots,p-1$), the desired deformation of the agents of hidden layer $k +
1$ is planned. In contrast to the available neural network learning problems,
our proposed neural network optimization receives time-invariant reference
positions of the boundary agents as inputs and trains the weights based on the
desired trajectory of the agent team configuration, where the weights are
constrained by certain lower and upper bounds to ensure inter-agent collision
avoidance. We simulate and provide the results of a large-scale quadcopter team
coordination tracking a desired elliptical trajectory to validate the proposed
approach. | Harshvardhan Uppaluru, Hossein Rastgoftar | 2023-04-17T22:15:55Z | http://arxiv.org/abs/2304.08638v1 | # Deep Continuum Deformation Coordination and Optimization with Safety Guarantees
###### Abstract
In this paper, we develop and present a novel strategy for safe coordination of a large-scale multi-agent team with _"local deformation"_ capabilities. Multi-agent coordination is defined by our proposed method as a multi-layer deformation problem specified as a Deep Neural Network (DNN) optimization problem. The proposed DNN consists of \(p\) hidden layers, each of which contains artificial neurons representing unique agents. Furthermore, based on the desired positions of the agents of hidden layer \(k\) (\(k=1,\cdots,p-1\)), the desired deformation of the agents of hidden layer \(k+1\) is planned. In contrast to the available neural network learning problems, our proposed neural network optimization receives time-invariant reference positions of the boundary agents as inputs and trains the weights based on the desired trajectory of the agent team configuration, where the weights are constrained by certain lower and upper bounds to ensure inter-agent collision avoidance. We simulate and provide the results of a large-scale quadcopter team coordination tracking a desired elliptical trajectory to validate the proposed approach.
## I Introduction
First inspired by natural phenomena, formation flight and cooperative control in Multi-Agent Systems (MAS) have been fascinating and important areas of study for the past 20 years. Research into MAS has led to interesting theoretical problems and potential practical uses in a wide range of situations. MAS formation flight and cooperative control are achieved either in a centralized or decentralized manner. The centralized technique makes use of a central computer that controls every agent in the MAS. However, the decentralized technique, also known as distributed control, allows for computation onboard each agent, and information is shared across neighboring agents [1]. For cooperative multi-agent control, the decentralized method has many benefits, such as low operational costs, fewer system requirements, great robustness, strong adaptability, and flexible scalability.
### _Related Work_
With applications ranging from surveillance [2, 3] to formation flying [3, 4], rescue missions [5], wildlife monitoring and exploration [6], precision agriculture [7], cooperative payload delivery [8, 9], and hazardous environment sensing [10], several multi-agent coordination techniques have been researched and presented.
A group of agents, acting as particles of a single virtual rigid body, use the centralized technique of Virtual Structure (VS) [11, 12]. VS is capable of maintaining the rigid geometric relationship between the agents and evolving as a rigid body in a given direction and orientation. Consensus [13, 14] is among the most exhaustively researched cooperative control approaches. In this approach, a team of agents reaches an agreement or consensus regarding some quantities of interest only by communicating with their neighbors. It is a decentralized coordination and control approach and is broadly divided into two categories: leaderless consensus (i.e., consensus without a leader) [15, 16] and leader-follower consensus (i.e., consensus with a leader) [17, 18].
Another decentralized leader-follower method is called Containment Control [19, 20, 21, 22], where the collective motion of all agents is achieved with multiple leaders. The follower agents obtain the desired positions through local communication with in-neighbor agents, and all agents are contained within a particular area defined by geometric constraints. A recent multi-agent coordination approach known as Homogeneous (or Affine) Transformation, is based on the principles of continuum mechanics, where the agents in the system are treated as particles of a deformable body undergoing a homogeneous transformation [23, 24, 25, 26, 27, 28, 29]. This technique ensures that all agents in the system remain inside a bounding envelope and allows for translation, rotation, and shearing of the bounding envelope while ensuring collision avoidance. Homogeneous transformation advances containment control by ensuring inter-agent collision avoidance. To achieve the desired homogeneous transformation in \(n\)-D (\(n=1,2,3\)), \(n+1\) leaders in \(\mathbb{R}^{n}\) communicate with the follower agents via local communication.
### _Contributions_
Although homogeneous transformation coordination can allow for aggressive and safe changes to the inter-agent distances, it has "deformation uniformity" problems. This is due to the fact that at any moment \(t\), the deformation of the complete agent team configuration is given by a single Jacobian matrix that can be specified based on unique rotation and shear deformation angles, as well as axial deformations [30]. Therefore, the rotation, axial, and shear deformations must be consistent over the whole MAS arrangement. To overcome this deformation uniformity issue, the existing homogeneous transformation coordination, specified based on a single Jacobian matrix, is advanced in this paper to Deep Continuum Deformation Coordination (DCDC), which allows us to plan safe _"local deformation"_ of an agent team without having to change the inter-agent distances between |
2305.11054 | Ising systems, measures on the sphere, and zonoids | We give an interpretation of a class of discrete-to-continuum results for
Ising systems using the theory of zonoids. We define the classes of rational
zonotopes and zonoids, as those of the Wulff shapes of perimeters obtained as
limits of finite-range homogeneous Ising systems and of general homogeneous
Ising systems, respectively. Thanks to the characterization of zonoids in terms
of measures on the sphere, rational zonotopes, identified as finite sums of
Dirac masses, are dense in the class of all zonoids. Moreover, we show that a
rational zonoid can be obtained from a coercive Ising system if and only if the
corresponding measure satisfies some connectedness properties, while it is
always a continuum limit of discrete Wulff shapes under the only condition that
the support of the measure spans the whole space. Finally, we highlight the
connection with the homogenization of periodic Ising systems and propose a
generalized definition of rational zonotope of order N, which coincides with
the definition of rational zonotope if N=1 | Andrea Braides, Antonin Chambolle | 2023-05-18T15:47:06Z | http://arxiv.org/abs/2305.11054v1 | # Ising systems, measures on the sphere, and zonoids
###### Abstract
We give an interpretation of a class of discrete-to-continuum results for Ising systems using the theory of zonoids. We define the classes of _rational zonotopes and zonoids_, as those of the Wulff shapes of perimeters obtained as limits of finite-range homogeneous Ising systems and of general homogeneous Ising systems, respectively. Thanks to the characterization of zonoids in terms of measures on the sphere, rational zonotopes, identified as finite sums of Dirac masses, are dense in the class of all zonoids. Moreover, we show that a rational zonoid can be obtained from a coercive Ising system if and only if the corresponding measure satisfies some 'connectedness' properties, while it is always a continuum limit of 'discrete Wulff shapes' under the only condition that the support of the measure spans the whole space. Finally, we highlight the connection with the homogenization of periodic Ising systems and propose a generalized definition of rational zonotope of order \(N\), which coincides with the definition of rational zonotope if \(N=1\).
## 1 Introduction
We consider Ising systems; that is, energies depending on a _spin parameter_, formally written as
\[-\sum_{i\neq j}\alpha_{i-j}u_{i}u_{j}. \tag{1.1}\]
In this notation the spin function \(u\colon\Omega\cap\mathbb{Z}^{d}\to\{-1,1\}\) is defined on the portion of the standard cubic lattice contained in the (bounded) open set \(\Omega\), and we write \(u_{i}\) in the place of \(u(i)\). The systems are supposed to be _ferromagnetic_; that is, \(\alpha_{k}\geq 0\) for all \(k\in\mathbb{Z}^{d}\). This condition implies that the interactions between nodes at distance \(k\) such that \(\alpha_{k}>0\) are minimal if \(u_{i}=u_{j}\). This condition in turn implies that the only _ground states_ are the constant states, provided that the network of interaction is _connected_, in the sense that for every \(j\in\mathbb{Z}^{d}\backslash\{0\}\) there exist \(K\in\mathbb{N}\) and \(k_{1},\ldots,k_{K}\) such that \(j=k_{1}+\cdots+k_{K}\) and
\(\alpha_{k_{\ell}}>0\) for all \(\ell\in\{1,\ldots,K\}\), and also \(\Omega\) is likewise connected. While the form (1.1) is quite descriptive, it is convenient for our purposes to consider an equivalent energy of the form
\[\sum_{i\neq j}\alpha_{i-j}(u_{i}-u_{j})^{2}. \tag{1.2}\]
Indeed, since \(u_{j}^{2}=u_{i}^{2}=1\) developing the square, the terms in (1.2) can be rewritten as
\[\sum_{i\neq j}\alpha_{i-j}(u_{i}-u_{j})^{2}=2\sum_{i\neq j}\alpha_{i-j}-2\sum_ {i\neq j}\alpha_{i-j}u_{i}u_{j},\]
and \(2\sum_{i\neq j}\alpha_{i-j}\) is a constant depending only on \(\Omega\). In the form (1.2), ground states always have zero energy and we can also consider \(\Omega\) unbounded, since we thus avoid annoying \(+\infty-\infty\) indeterminate forms. In this paper \(\Omega\) plays no role, and we take \(\Omega=\mathbb{R}^{d}\) for simplicity. Furthermore, the lattice \(\mathbb{Z}^{d}\) can be substituted with a Bravais lattice, at the expense only of a heavier notation in some proofs.
The overall behaviour of systems (1.2) can be described by introducing an _effective surface tension_, which takes the form
\[\varphi(\nu)=4\sum_{k\in\mathbb{Z}^{d}}\alpha_{k}|\langle k,\nu\rangle|, \tag{1.3}\]
which describes the energy density of a minimal interface macroscopically oriented as an hypersurface with normal \(\nu\). This effective surface tension can be obtained in various ways, which describe different ways of looking at the problem. One way is to compute the average limit behaviour of minimum problems on large cubes with two faces orthogonal to \(\nu\) and boundary conditions jumping in correspondence of the mid-plane of the cube; a more complete analysis is obtained by looking at minimizers of problems in the whole space with the volume constraint \(\#\{u_{i}=1\}=N\), and prove that, upon suitably scaling and translating them, they converge (after suitable interpolation) to minimizers of the perimeter energy related to \(\varphi\). A relatively recent way to explain this convergence is by a _discrete-to-continuum_ approach, as a result of a limit analysis for the scaled energies
\[E_{\varepsilon}(u)=\sum_{i\neq j}\varepsilon^{d-1}\alpha_{i-j}(u_{i}-u_{j})^{ 2}, \tag{1.4}\]
where the scaled _spin parameter_\(u\colon\Omega\cap\varepsilon\mathbb{Z}^{d}\to\{-1,1\}\) is now defined on the scaled standard cubic lattice, and we write \(u_{i}\) in the place of \(u(\varepsilon i)\). Each function \(u\) is extended as a piecewise-constant function, so that the domain of each \(E_{\varepsilon}\) can be identified as a subset of \(L^{1}_{\rm loc}(\Omega;\{-1,1\})\), and, if the lattice \(\mathbb{Z}^{d}\) is connected with respect to \(\{\alpha_{k}\}\) in the sense above, the family \(E_{\varepsilon}\) is equicoercive with respect to the strong convergence in \(L^{1}_{\rm loc}(\Omega;\{-1,1\})\), so that a family \(u^{\varepsilon}\) with equibounded energy converges, up to subsequences, to a continuum parameter \(u\) with \(u(x)\in\{-1,1\}\) almost everywhere. If we write \(u=2\chi_{A}-1\), this defines
a discrete-to-continuum convergence of spin functions \(u^{\varepsilon}\) to sets \(A\), which are indeed sets of finite perimeter. With respect to this convergence, the \(\Gamma\)-limit of the functionals \(E_{\varepsilon}\) is the (anisotropic) perimeter functional
\[F(A)=\int_{\Omega\cap\partial^{*}A}\varphi(\nu)d{\cal H}^{d-1}, \tag{1.5}\]
where \(\partial^{*}A\) and \(\nu_{A}\) are the _reduced boundary_ and the measure-theoretical _internal normal_ to \(\partial^{*}A\), respectively, and \(\varphi\) is defined by (1.3). In this form, the result has been proved in various versions, first in the context of free-discontinuity problems by Chambolle [9] and by Braides and Gelli [5], whose proof is then reset in terms of Ising systems in [1, Section 3.1] (for a simplified exposition in a two-dimensional context see also Section 3.2.4 in the book by Braides and Solci [8]). This analysis can be seen as a homogenization problem with a 1-periodic system of interactions [7] and as such \(\varphi\) can be defined via an asymptotic homogenization formula, reducing to the limit analysis of minimum problems on cubes as described above. Conversely, the convergence of minimum problems with volume constraints to a _Wulff problem_ for the perimeter \(F\) is a consequence of the property of convergence of minima of \(\Gamma\)-convergence.
The scope of this paper is to connect these variational descriptions of Ising systems with the concept of _zonoid_ from Convex Geometry. A zonoid is defined as a limit in the Hausdorff metric of _zonotopes_, which are simply defined to be vector sums of a finite number of line segments. As such, their support functions can be written in the form
\[f(z)=\sum_{j=1}^{N}m_{j}|\langle\nu_{j},z\rangle|, \tag{1.6}\]
where \(\nu_{j}\in S^{d-1}\) and \(m_{j}>0\). Comparing (1.6) with (1.3) we note that the latter requires some restrictions on \(\nu_{j}\). With this observation in mind, we then define the subclass of _rational zonotopes_ as those for which all \(\nu_{j}\) in (1.6) are rational directions; i.e., \(\nu_{j}=\frac{k_{j}}{\|k_{j}\|}\) for some \(k_{j}\in{\mathbb{Z}}^{d}\backslash\{0\}\). Hence, an effective surface tension \(\varphi\) in (1.3) for a system \(\{\alpha_{k}\}\) with \(\alpha_{k}>0\) only for a finite set of \(k\in{\mathbb{Z}}^{d}\) can be interpreted as the support function of a rational zonotope. Note that the family of rational zonotopes is still a dense class in the family of zonoids.
The fundamental property for the analysis of zonoids is that they can be identified with (symmetric) positive bounded measures \(\mu\) on \(S^{d-1}\) such that the support function \(f\) of the zonoid can be written as
\[f(z)=\int_{S^{d-1}}|\langle\nu,z\rangle|\,d\mu(\nu). \tag{1.7}\]
In the case of zonotopes this measure is a finite sum of Dirac deltas, which, in case of rational zonotopes, are concentrated on a set of rational directions. We then define the class of _rational zonoids_ as that of the zonoids corresponding to possibly infinite sums of
Dirac deltas concentrated on rational directions. This is the class corresponding to general \(\varphi\) in (1.3).
We then have the following characterizations.
_Exact reachability_. The functionals \(F\) obtained as limits of Ising systems are all functionals whose Wulff shapes are rational zonoids.
_Approximate reachability_. For each zonoid there exists a family \(F_{n}\) of functionals obtained as limits of Ising systems such that the corresponding Wulff shapes converge to the zonoid.
_Convergence of discrete Wulff shapes_. If a rational zonoid has positive Lebesgue measure then there exists an Ising system with constrained minimizers, suitably identified with sets, that approximate the zonoid. For lower-dimensional rational zonoids the same holds in a lower-dimensional subspace.
_Coercive Ising systems_. We highlight a connectedness property of the generating measure of a rational zonoid which is necessary and sufficient for the existence of a corresponding coercive Ising system.
We further note that we do not have uniqueness of generating Ising systems, in the sense that the same rational zonoid corresponds to infinitely many equivalent Ising systems, for some of which we may not have the property of convergence of discrete minimizers.
The concept of zonoid has been generalized in various ways (see [13, Chapter 9]). The variational interpretation of rational zonoids makes it possible to view them as a particular case of homogenization of periodic Ising systems when the period is \(1\). With this observation in mind, we finally propose a generalization of rational zonotopes and zonoids as those obtained by homogenization of periodic Ising systems. If such a system of period \(N\) is of finite range, the Wulff shape of the corresponding perimeter is a polytope, due to the results by Chambolle and Kreutz [10], which makes it possible to define rational zonotopes of order \(N\). The closure of all zonotopes of order \(N\) with varying \(N\) is proved to be the set of all convex centered sets using the results of Braides and Kreutz [6].
## 2 Zonotopes, zonoids and their support functions
In the following two sections we recall some definitions and properties from the theory of zonoids, for which we refer to the monograph by Schneider [13]. In Section 3.1 we introduce the subclass of rational zonoids.
### Zonoids
A (centered) _zonotope_ in \(\mathbb{R}^{d}\) is a polytope that is obtained as a Minkowski sum of a finite number of centered segments \([-w_{i},w_{i}]\) with \(w_{i}\in\mathbb{R}^{d}\), \(i\in\{1,\ldots,N\}\), and \(N\in\mathbb{N}\); that is, a
set of the form
\[W=\Big{\{}w\in\mathbb{R}^{d}\colon\text{ there exist }s_{i}\in[-1,1]\text{ such that }w=\sum_{i=1}^{N}s_{i}w_{i}\Big{\}}. \tag{2.1}\]
The usual definition of zonoid (see [13]) does not require that the segments be centered. However, any (general) zonoid is the translation of a centered zonoid. Since we will mainly deal with symmetric sets in \(\mathbb{R}^{d}\) we directly use centered zonoids in order to simplify the notation and terminology.
We say that \(W\) is _non-degenerate_ if the vectors \(w_{1},\ldots,w_{N}\) span the whole \(\mathbb{R}^{d}\), so that \(W\) is a convex set symmetric with respect to the origin and of non-zero Lebesgue measure. Otherwise, a degenerate zonotope can be identified with a non-degenerate zonotope in a lower-dimensional space.
It is worth noting that zonotopes are particular centered symmetric polytopes, characterized by the fact that their faces are themselves (congruent to \((d-1)\)-dimensional) zonotopes. This property rules out a number of polytopes in dimension \(d\geq 3\); e.g., octahedra.
Using (2.1), the _support function_ of a zonotope is then given by
\[f_{W}(z)=\sup\{\langle z,w\rangle:w\in W\}=\sum_{i=1}^{N}|\langle z,w_{i} \rangle|.\]
Conversely, given \(f\) of this form, the set \(W\) in (2.1) coincides with the _Wulff shape_ of \(f\), given by
\[W_{f}=\Big{\{}w\in\mathbb{R}^{d}:\langle z,w\rangle\leq 1\text{ for all }z\in\mathbb{R}^{d}\text{ such that }f(z)\leq 1\Big{\}}. \tag{2.2}\]
The family of (centered) _zonoids_ in \(\mathbb{R}^{d}\) is the family of all convex symmetric sets that can be obtained as limits of zonotopes in the Hausdorff metric. We say that a zonoid is non-degenerate if it has a non empty interior, in which case it is the limit of non-degenerate zonotopes. Note that in dimension \(d=2\) all convex symmetric sets are zonoids, while the symmetry restrictions on the faces of zonotopes imply that zonoids are nowhere dense in the family of all convex symmetric sets if \(d\geq 3\).
### Generating measures and support functions of zonoids
For a zonotope \(W\) as in (2.1), after setting \(\nu_{i}=\frac{w_{i}}{\|w_{i}\|}\), we can write
\[f_{W}(z) = \sum_{i=1}^{N}|\langle z,w_{i}\rangle|\] \[= \sum_{i=1}^{N}|\langle z,\nu_{i}\rangle|\,\|w_{i}\|=\sum_{i=1}^{N}\int_{S^{d-1}}|\langle z,\nu\rangle|\,\|w_{i}\|\,d\Big(\frac{\delta_{\nu_{i}}+\delta_{-\nu_{i}}}{2}\Big)(\nu)\] \[= \int_{S^{d-1}}|\langle z,\nu\rangle|\,d\mu_{W}(\nu),\]
where
\[\mu_{W}=\sum_{i=1}^{N}\frac{\|w_{i}\|}{2}\big{(}\delta_{\nu_{i}}+\delta_{-\nu_{i} }\big{)}.\]
Conversely, given a positive measure of the form \(\mu=\sum_{i=1}^{N}\lambda_{i}(\delta_{\nu_{i}}+\delta_{-\nu_{i}})\), with \(\nu_{i}\in S^{d-1}\), setting
\[f_{\mu}(z)=\int_{S^{d-1}}|\langle z,\nu\rangle|\,d\mu(\nu)=\sum_{i=1}^{N}2 \lambda_{i}|\langle z,\nu_{i}\rangle|,\]
we have that \(f_{\mu}=f_{W}\), where \(W\) is given by (2.1) with \(w_{i}=2\lambda_{i}\nu_{i}\). Hence, zonotopes correspond to (symmetric) linear combinations of Dirac deltas on \(S^{d-1}\) with positive coefficients. Note that the Hausdorff convergence of zonoids corresponds to the weak* convergence of the related measures. By the weak* density of sums of Dirac deltas this shows that positive symmetric measures on \(S^{d-1}\) are in bijection with zonoids.
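For illustration, this correspondence between generators, support function and generating measure can be checked numerically. The following sketch (Python with NumPy; the generators are hypothetical and chosen only for illustration) evaluates \(f_{W}\) both directly from the segments and through the atomic measure \(\mu_{W}\):

```python
import numpy as np

# Hypothetical generators of a zonotope in R^2 (illustrative choice only).
gens = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]])

def support_from_generators(z, gens):
    # f_W(z) = sum_i |<z, w_i>|
    return float(np.sum(np.abs(gens @ z)))

def generating_measure(gens):
    # mu_W = sum_i (||w_i||/2) (delta_{nu_i} + delta_{-nu_i}),  nu_i = w_i / ||w_i||
    norms = np.linalg.norm(gens, axis=1)
    nus = gens / norms[:, None]
    atoms = np.concatenate([nus, -nus])           # atoms at +/- nu_i
    weights = np.concatenate([norms, norms]) / 2  # mass ||w_i||/2 on each atom
    return atoms, weights

def support_from_measure(z, atoms, weights):
    # f_W(z) = int_{S^{d-1}} |<z, nu>| d mu_W(nu)  (a finite sum of Dirac masses)
    return float(np.sum(weights * np.abs(atoms @ z)))

atoms, weights = generating_measure(gens)
for z in np.random.default_rng(0).normal(size=(5, 2)):
    print(support_from_generators(z, gens), support_from_measure(z, atoms, weights))
```

The two evaluations agree for every \(z\), reflecting the bijection between zonotopes and finite symmetric positive combinations of Dirac deltas described above.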
The support functions of (centered) zonoids in \(\mathbb{R}^{d}\) are characterized by elements of the cone of positive symmetric measures on \(S^{d-1}\) as in the following proposition.
**Proposition 2.1**.: _For every (centered) zonoid \(W\) in \(\mathbb{R}^{d}\) there exists a unique symmetric positive measure \(\mu_{W}\) on \(S^{d-1}\) such that the support function \(f_{W}\) can be written as_
\[f_{W}(z)=\int_{S^{d-1}}|\langle z,\nu\rangle|\,d\mu_{W}(\nu). \tag{2.3}\]
Such a measure is called the _generating measure_ of \(W\).
## 3 Ising systems and a variational interpretation of rational zonotopes and zonoids
We consider homogeneous systems of discrete interactions governed by energies of the form
\[E_{\varepsilon}(u)=\sum_{i,j\in\mathbb{Z}^{d}}\varepsilon^{d-1}\,\alpha_{i-j }(u_{i}-u_{j})^{2}, \tag{3.1}\]
defined on functions \(u\colon\varepsilon\mathbb{Z}^{d}\to\{-1,1\}\), where we use the notation \(u_{i}=u(\varepsilon i)\). Note that we can assume, and we will, that \(\alpha_{-k}=\alpha_{k}\) since otherwise we can replace both coefficients by \(\frac{\alpha_{k}+\alpha_{-k}}{2}\), and this change does not influence the value of the energy. We assume that the system is _ferromagnetic_; that is, \(\alpha_{k}\geq 0\) for any \(k\in\mathbb{Z}^{d}\). We further assume the _decay condition_
\[\sum_{k\in\mathbb{Z}^{d}}\alpha_{k}\|k\|<+\infty. \tag{3.2}\]
This condition is necessary to have non-trivial energies, in the sense that if this condition fails then the limit of \(E_{\varepsilon}\) as defined below is finite only if \(u\) is identically \(1\) or \(-1\). The
convex function \(\varphi\colon\mathbb{R}^{d}\to[0,+\infty)\) given by
\[\varphi(z)=4\sum_{k\in\mathbb{Z}^{d}}\alpha_{k}|\langle z,k\rangle|, \tag{3.3}\]
is well defined and finite thanks to (3.2).
### Rational zonoids
The particular form of the functions \(\varphi\) in (3.3) suggests a definition of a class of zonoids, of which such types of functions are support functions.
**Definition 3.1**.: _We say that \(\nu\in S^{d-1}\) is a rational direction if there exists \(w\in\mathbb{Z}^{d}\backslash\{0\}\) such that_
\[\nu=\frac{w}{\|w\|}.\]
_A set \(W\) is a_ (centered) rational zonotope _if its generating measure is of the form_
\[\mu_{W}=\sum_{i=1}^{N}\lambda_{i}(\delta_{\nu_{i}}+\delta_{-\nu_{i}}), \tag{3.4}\]
_where \(N\in\mathbb{N}\), \(\nu_{i}\) are rational directions and \(\lambda_{i}>0\). A set is a_ (centered) rational zonoid _if there exists a sequence \(\{\nu_{i}\}\) of rational directions and a summable sequence \(\{\lambda_{i}\}\) of positive numbers such that_
\[\mu_{W}=\sum_{i=1}^{+\infty}\lambda_{i}(\delta_{\nu_{i}}+\delta_{-\nu_{i}}). \tag{3.5}\]
**Remark 3.2**.: By the density of rational directions in \(S^{d-1}\), rational zonotopes (and hence also rational zonoids) are dense in the class of zonoids.
### Sets of finite perimeter and their energies
To each Ising system we will associate a _perimeter energy_. To that end we recall that a subset \(A\) in \(\mathbb{R}^{d}\) is a _set of finite perimeter_ if the distributional gradient of its characteristic function \(\chi_{A}\) is a bounded measure. We refer to [2, 4, 12] for an introduction to the topic. Here we only recall that if \(A\) is a set of finite perimeter there exists a Borel set \(\partial^{*}A\), the _reduced boundary of \(A\)_, and a function \(\nu=\nu_{A}\colon\partial^{*}A\to S^{d-1}\), the _inner normal to \(A\)_, such that \(D\chi_{A}=\nu\,\mathcal{H}^{d-1}\llcorner\partial^{*}A\). Furthermore, if \(\varphi\colon\mathbb{R}^{d}\to[0,+\infty)\) is a convex function positively homogeneous of degree one, then the _perimeter energy_ \(F=F_{\varphi}\) defined by
\[F_{\varphi}(A)=\int_{\partial^{*}A}\varphi(\nu(x))d\mathcal{H}^{d-1}(x) \tag{3.6}\]
is weakly lower semicontinuous with respect to the convergence of \(\chi_{A_{\varepsilon}}\) to \(\chi_{A}\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{d})\) on families such that the total variations \(\|D\chi_{A_{\varepsilon}}\|\) are equibounded.
We finally recall that families of sets of finite perimeter such that \(\|D\chi_{A_{\varepsilon}}\|\) are equibounded are precompact with respect to the convergence \(\chi_{A_{\varepsilon}}\to\chi_{A}\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{d})\), and that if \(A\) is a set of finite perimeter then there exists a family of polyhedral sets \(A_{\varepsilon}\) converging in the sense above to \(A\) as \(\varepsilon\to 0\), and such that \(F_{\varphi}(A_{\varepsilon})\) tends to \(F_{\varphi}(A)\). Note that for polyhedral sets we have \(\partial^{*}A=\partial A\), up to an \(\mathcal{H}^{d-1}\)-negligible set.
### Convergence of the scaled energies of an Ising system
We say that a sequence \(u^{\varepsilon}\colon\varepsilon\mathbb{Z}^{d}\to\{-1,1\}\)_converges to a set of finite perimeter \(A\)_ if the piecewise-constant interpolations \(u_{\varepsilon}\) of \(u^{\varepsilon}\) on \(\varepsilon\mathbb{Z}^{d}\), defined by \(u_{\varepsilon}(x)=u_{i}^{\varepsilon}\) (\(=u^{\varepsilon}(\varepsilon i)\)) if \(x\in\varepsilon i+[0,\varepsilon)^{d}\), locally converge in \(L^{1}(\mathbb{R}^{d})\) to the function \(u=2\chi_{A}-1\) and there exists \(C\) such that \(\|Du_{\varepsilon}\|=\|Du_{\varepsilon}\|(\mathbb{R}^{d})\leqslant C\), where \(\|Du_{\varepsilon}\|\) denotes the total variation of the measure \(Du_{\varepsilon}\). In other words, the sequence \(u_{\varepsilon}\) converges weakly in \(BV_{\mathrm{loc}}(\mathbb{R}^{d})\).
The condition that \(\|Du_{\varepsilon}\|(\mathbb{R}^{d})\leqslant C\) is a consequence of the boundedness of the energies \(E_{\varepsilon}(u^{\varepsilon})\) if there holds a condition of the type
\[\alpha_{k}\geqslant c>0\ \text{ if }\|k\|=1\qquad\text{(coerciveness of nearest-neighbour interactions)}. \tag{3.7}\]
In this case, we have that there exists \(A\) such that \(u^{\varepsilon}\to A\) up to subsequences. A thorough description of the limit of Ising systems in terms of perimeter functionals when condition (3.7) is satisfied is given in [1, Section 3.1].
A general necessary and sufficient condition for coerciveness will be given below. Note however that we do not make any such assumption in our definition of convergence of \(E_{\varepsilon}\), in order to include also degenerate cases in our treatment.
**Theorem 3.3** (limits of homogeneous Ising systems as rational zonoids).: _A functional of the form (3.6) is a \(\Gamma\)-limit with respect to the convergence \(u^{\varepsilon}\to A\) of energies \(E_{\varepsilon}\) of the form (3.1) for some Ising system \(\{\alpha_{k}\}\) with \(\alpha_{k}\geqslant 0\) satisfying (3.2) if and only if the Wulff shape of \(\varphi\) is a rational zonoid. Furthermore, if the range of \(\{\alpha_{k}\}\) is finite the Wulff shape of \(\varphi\) is a rational zonotope._
Proof.: Given non-negative coefficients \(\{\alpha_{k}\}\) satisfying (3.2), the \(\Gamma\)-limit, with respect to the convergence \(u^{\varepsilon}\to A\), of the sequence of energies \(E_{\varepsilon}\) defined by (3.1) is given by an energy (3.6) with \(\varphi\) given by (3.3). We briefly give a proof. This can also be achieved with a perturbation argument from the analogous result for coercive functionals, for which we refer e.g. to Section 3.1 in [1], where the interested reader can find further details in common with the proof presented below.
In order to provide a lower bound, we examine separately energies with only the contribution for a fixed \(k\in\mathbb{Z}^{d}\backslash\{0\}\). We can suppose that the last component \(k_{d}\) is strictly
positive, and define the lattice \(\mathcal{L}=\mathcal{L}_{k}\) as the Bravais lattice generated by \(\{e_{1},\ldots,e_{d-1},k\}\), which is a sub-lattice of \(\mathbb{Z}^{d}\). We consider the functionals
\[E_{\varepsilon}^{k}(u)=\sum_{i\in\mathcal{L}}\varepsilon^{d-1}(u_{i+k}-u_{i})^{ 2}.\]
These can be seen as a system of nearest-neighbour interactions on the lattice \(\mathcal{L}\) with \(\alpha_{e_{j}}=0\) for \(j\in\{1,\ldots,d-1\}\).
Let \(u^{\varepsilon}\to A\) with \(\sup_{\varepsilon}\|Du_{\varepsilon}\|<+\infty\). For every \(u^{\varepsilon}\) we can define its interpolation \(u^{\mathcal{L}}_{\varepsilon}\) on the lattice \(\varepsilon\mathcal{L}\) defined by \(u^{\mathcal{L}}_{\varepsilon}(x)=u^{\varepsilon}_{i}\) if \(x\in\varepsilon i+\varepsilon U\), where \(U\) is the \(d\)-dimensional parallelogram with sides \(e_{1},\ldots,e_{d-1},k\) and \(i\in\mathcal{L}\). Note that \(\sup_{\varepsilon}\|Du^{\mathcal{L}}_{\varepsilon}\|<+\infty\), so that we can suppose that \(u^{\mathcal{L}}_{\varepsilon}\to 2\chi_{A^{\mathcal{L}}}-1\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{d})\), for some set of finite perimeter \(A^{\mathcal{L}}\). Since \(u_{\varepsilon}(x)=u^{\mathcal{L}}_{\varepsilon}(x)\) on \(\varepsilon\mathcal{L}+\varepsilon(U\cap[0,1)^{d})\) we then deduce that \(A^{\mathcal{L}}=A\). If we define \(A^{\mathcal{L}}_{\varepsilon}=\{x\in\mathbb{R}^{d}:u^{\mathcal{L}}_{\varepsilon}(x)=1\}\) then we have
\[E_{\varepsilon}^{k}(u^{\varepsilon})=\frac{\|k\|}{k_{d}}4\int_{\partial A^{ \mathcal{L}}_{\varepsilon}}\Bigl{|}\Bigl{\langle}\nu,\frac{k}{\|k\|}\Bigr{\rangle} \Bigl{|}d\mathcal{H}^{d-1}=\frac{1}{k_{d}}4\int_{\partial A^{\mathcal{L}}_{ \varepsilon}}|\langle\nu,k\rangle|d\mathcal{H}^{d-1},\]
where we have taken into account that the parts of the boundary of \(A^{\mathcal{L}}_{\varepsilon}\) with normal \(\nu\in\{e_{1},\ldots,e_{d-1}\}\) do not contribute to the energy, and the projection of \(U\) on the hyperplane orthogonal to \(k\) has \(d-1\)-measure equal to \(\frac{k_{d}}{\|k\|}\). Note that in this case \(E_{\varepsilon}^{k}(u^{\varepsilon})\) equals the total variation \(\|D_{k}u^{\mathcal{L}}_{\varepsilon}\|\), where \(D_{k}\) denotes the distributional directional derivative in the direction \(k\).
Taking into account the lower semicontinuity of this perimeter functional, we then have
\[\liminf_{\varepsilon\to 0}E_{\varepsilon}^{k}(u^{\varepsilon})\geq\frac{1}{k_{d}}4 \int_{\partial^{*}A}|\langle\nu,k\rangle|d\mathcal{H}^{d-1}.\]
Since \(|U|=k_{d}\) the number of equivalence classes of \(\mathbb{Z}^{d}\) modulo \(\mathcal{L}\) is \(k_{d}\), from which (proceeding as above in each of these equivalence classes) we have
\[\liminf_{\varepsilon\to 0}\sum_{i\in\mathbb{Z}^{d}}\varepsilon^{d-1}(u_{i+k}-u_{i} )^{2}\geq k_{d}\liminf_{\varepsilon\to 0}E_{\varepsilon}^{k}(u^{ \varepsilon})\geq 4\int_{\partial^{*}A}|\langle\nu,k\rangle|d\mathcal{H}^{d-1}.\]
From these inequalities, valid for all \(k\in\mathbb{Z}^{d}\), the inequality \(\liminf_{\varepsilon\to 0}E_{\varepsilon}(u^{\varepsilon})\geq F(A)\) follows. To prove the upper bound it suffices to note that if \(A\) is a polyhedron then the restriction of \(u^{\varepsilon}=2\chi_{A}-1\) to \(\varepsilon\mathbb{Z}^{d}\) is a recovery sequence satisfying \(\|Du_{\varepsilon}\|\leq C<+\infty\). The proof of the \(\Gamma\)-convergence is then completed by an approximation argument.
Since the function \(\varphi\) is the (locally uniform) limit of the functions
\[\varphi_{n}(z)=4\sum_{k\in\mathbb{Z}^{d},\ \|k\|\leq n}\alpha_{k}|\langle k,z \rangle|,\]
whose Wulff shapes are rational zonotopes corresponding to the measures on \(S^{d-1}\)
\[\mu_{n}=\sum_{k\in\mathbb{Z}^{d},\ \|k\|\leq n}4\alpha_{k}\|k\|\delta_{\frac{k}{ \|k\|}},\]
\(\varphi\) is the support function of a rational zonoid corresponding to
\[\mu=\sum_{k\in\mathbb{Z}^{d}}4\alpha_{k}\|k\|\delta_{\frac{k}{\|k\|}}.\]
All these measures are symmetric since we assume \(\alpha_{k}=\alpha_{-k}\). Note that if \(\{\alpha_{k}\}\) is of finite range then \(\varphi\) is the support function of a rational zonotope.
Conversely, suppose that we have a rational zonoid \(W\) corresponding to a finite symmetric positive measure
\[\mu_{W}=\sum_{i}\beta_{i}\big{(}\delta_{\nu_{i}}+\delta_{-\nu_{i}}\big{)},\]
with \(\nu_{i}\in S^{d-1}\) rational directions. Note that
\[\int_{S^{d-1}}|\langle z,\nu\rangle|d(\delta_{\nu_{i}}+\delta_{-\nu_{i}})(\nu )=2|\langle z,\nu_{i}\rangle|,\]
so that
\[f_{W}(z)=\int_{S^{d-1}}|\langle z,\nu\rangle|d\mu_{W}(\nu)=2\sum_{i}\beta_{i}| \langle z,\nu_{i}\rangle|.\]
Then for all \(i\) we can fix \(k_{i}\in\mathbb{Z}^{d}\) such that \(\nu_{i}=\frac{k_{i}}{\|k_{i}\|}\) and define
\[\alpha_{k}=\begin{cases}\frac{\beta_{i}}{4\|k_{i}\|}&\text{if $k=k_{i}$ or $k=-k_{i}$ for some $i$}\\ 0&\text{otherwise},\end{cases}\]
so that we have
\[\varphi(z)=4\sum_{k\in\mathbb{Z}^{d}}\alpha_{k}|\langle k,z\rangle|=2\sum_{i} \beta_{i}\Big{|}\Big{\langle}z,\frac{k_{i}}{\|k_{i}\|}\Big{\rangle}\Big{|},\]
and \(\sum_{k}\alpha_{k}\|k\|=\frac{1}{2}\sum_{i}\beta_{i}<+\infty\), so that (3.2) is satisfied.
**Definition 3.4**.: _We say that an Ising system \(\{\alpha_{k}\}\) generates a rational zonoid \(W\), or equivalently it generates a measure \(\mu_{W}\) (the generating measure of \(W\)), or equivalently it generates an energy density \(f_{W}\), the support function of \(W\), if we have that \(E_{\varepsilon}\)\(\Gamma\)-converges to \(F\) in the sense of Theorem 3.3 with \(\varphi=f_{W}\). We say that two Ising systems as above are equivalent if they generate the same zonoid._
**Remark 3.5** (equivalent Ising systems).: For every rational direction \(\nu\in S^{d-1}\) let \(\mathcal{I}(\nu)=\big{\{}k\in\mathbb{Z}^{d}\backslash\{0\}:\frac{k}{\|k\|}=\nu\big{\}}\). We can rewrite formula (3.3) as
\[\varphi(z)=4\sum_{\nu}\sum_{k\in\mathcal{I}(\nu)}\alpha_{k}|\langle z,k\rangle| =4\sum_{\nu}\Big{(}\sum_{k\in\mathcal{I}(\nu)}\alpha_{k}\|k\|\Big{)}|\langle z,\nu\rangle|. \tag{3.8}\]
From (3.8) and taking the symmetry of \(\alpha_{k}\) into account we note that two Ising systems satisfying (3.2) generate the same rational zonoid if and only if for every rational direction \(\nu\in S^{d-1}\) we have
\[\sum_{k\in\mathcal{I}(\nu)}\alpha_{k}\|k\|=\sum_{k\in\mathcal{I}(\nu)}\alpha_ {k}^{\prime}\|k\|. \tag{3.9}\]
As an example, we may take the systems (parameterized on sequences \(\{\lambda_{n}\}\))
\[\alpha_{k}=\begin{cases}\lambda_{|n|}&\text{ if }k=ne_{\ell},n\in\mathbb{Z}, \ \ell\in\{1\ldots,d\}\\ 0&\text{ otherwise},\end{cases} \tag{3.10}\]
with \(\sum_{n=1}^{\infty}n\lambda_{n}=\lambda\). Then \(\varphi(z)=4\lambda\sum_{n=1}^{d}|z_{n}|=:4\|z\|_{1}\), and the corresponding \(W\) is the same coordinate square depending only on \(\lambda\) and not on the particular sequence.
Let \(\mu\) be a measure generated by the system \(\{\alpha_{k}\}\). Note that if \(\nu\in S^{d-1}\) is such that \(\mu(\{\nu\})>0\) then the set of indices \(k\in\mathcal{I}(\nu)\) such that \(\alpha_{k}>0\) may be infinite even though \(\{\alpha_{k}\}\) generates a rational zonotope. Conversely, for every rational zonoid \(W\) there exists an Ising system \(\{\alpha_{k}\}\) generating \(W\) such that \(\alpha_{k}>0\) for all \(k\neq 0\) such that \(\mu_{W}\big(\frac{k}{\|k\|}\big)>0\). Indeed, it suffices to note that, if \(\alpha(\nu)=\sum_{k\in\mathcal{I}(\nu)}\alpha_{k}\|k\|\), then an equivalent Ising system is \(\{\alpha_{k}^{\prime}\}\) given by \(\alpha_{k}^{\prime}=2^{-n}\frac{\alpha(\nu)}{n\|k_{0}(\nu)\|}\) if \(k\in\mathcal{I}(\nu)\) and \(k=nk_{0}(\nu)\), where \(k_{0}(\nu)\) is the element of least norm in \(\mathcal{I}(\nu)\).
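As a numerical illustration of the equivalence criterion (3.9), the sketch below (Python/NumPy; the coefficient values are hypothetical) evaluates (3.3) for two systems of the form (3.10) with the same value of \(\lambda=\sum_{n}n\lambda_{n}\), one supported on nearest neighbours and one on interactions at distance \(2\), and checks that they produce the same function \(\varphi\):

```python
import numpy as np

def phi(z, coeffs):
    # Limit density (3.3): phi(z) = 4 * sum_k alpha_k |<k, z>|,
    # with coeffs a dict {k (tuple): alpha_k} listing both k and -k.
    return 4.0 * sum(a * abs(np.dot(k, z)) for k, a in coeffs.items())

# Two systems of the form (3.10) in d = 2 with the same lambda = sum_n n*lambda_n = 1
# (hypothetical coefficient values, chosen only to illustrate (3.9)).
nearest = {(1, 0): 1.0, (-1, 0): 1.0, (0, 1): 1.0, (0, -1): 1.0}        # lambda_1 = 1
distance_two = {(2, 0): 0.5, (-2, 0): 0.5, (0, 2): 0.5, (0, -2): 0.5}   # lambda_2 = 1/2

rng = np.random.default_rng(1)
for z in rng.normal(size=(5, 2)):
    print(f"phi_nearest = {phi(z, nearest):.6f},  phi_distance2 = {phi(z, distance_two):.6f}")
```

Both evaluations coincide for every \(z\), since for each rational direction \(\nu\) the two systems have the same value of \(\sum_{k\in\mathcal{I}(\nu)}\alpha_{k}\|k\|\).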
The following definition generalizes condition (3.7).
**Definition 3.6**.: _We say that an Ising system \(\{\alpha_{k}\}\) is a coercive system if there exists a constant \(M>0\) such that_
\[\|Du\|\leqslant ME(u)\text{, where }\quad E(u)=\sum_{i,j\in\mathbb{Z}^{d}} \alpha_{i-j}(u_{i}-u_{j})^{2}, \tag{3.11}\]
_where in the left-hand side we have identified \(u\colon\mathbb{Z}^{d}\to\{-1,1\}\) with its piecewise-constant extension from \(\mathbb{Z}^{d}\)._
**Remark 3.7**.: Note that in (3.11)
\[\|Du\|=4\mathcal{H}^{d-1}(\partial\{x:u(x)=1\})=2\#\{(i,j):u_{i}\neq u_{j}\}, \tag{3.12}\]
the factor \(4\) coming from the fact that \((u_{i}-u_{j})^{2}=4\) if \(u_{i}\neq u_{j}\), and the factor \(2\) coming from counting both \((i,j)\) and \((j,i)\) if \(u_{i}\neq u_{j}\). From the last equality we have that (3.7) implies that \(\{\alpha_{k}\}\) is coercive.
From the definition of \(E_{\varepsilon}\) we obtain that (3.11) is equivalent to \(\|Du_{\varepsilon}\|\leqslant ME_{\varepsilon}(u^{\varepsilon})\) with \(M\) independent of \(\varepsilon\), so that if \(\sup_{\varepsilon}E_{\varepsilon}(u^{\varepsilon})<+\infty\) then also \(\sup_{\varepsilon}\|Du_{\varepsilon}\|<+\infty\) and then, up to subsequences, \(u^{\varepsilon}\to A\) for some set of finite perimeter \(A\).
**Remark 3.8** (equivalent coercive and non-coercive systems).: In the assumptions of Theorem 3.3, in general the sequence \(E_{\varepsilon}\) is not coercive; that is, we cannot deduce that there exists \(A\) such that \(u^{\varepsilon}\to A\) up to subsequences from the boundedness of the energies \(E_{\varepsilon}(u^{\varepsilon})\). In the case that (3.7) holds we have a subclass of \(\varphi\), for which \(\alpha_{k}>0\) if \(k\in\{e_{1}\ldots,e_{d}\}\), and the construction in the proof of the theorem gives coercive approximating \(E_{\varepsilon}\). Note however that from the form of \(\varphi\) we cannot deduce the equicoerciveness of \(E_{\varepsilon}\). Indeed, let \(\varphi(z)=4\|z\|_{1}\), for which we may take (see (3.10) in Remark 3.5)
\[\alpha_{k}=\begin{cases}\frac{1}{2}&\text{ if }k=2e_{\ell},\ \ell\in\{1 \ldots,d\}\\ 0&\text{ otherwise;}\end{cases} \tag{3.13}\]
that is, the only non-zero interactions are those at distance \(2\). The corresponding energies are not coercive. Indeed, they have additional ground states, e.g., those given by the checkerboard functions \(v\) and \(-v\), where \(v_{i}=(-1)^{\|i\|_{1}}\). The interpolations \(v_{\varepsilon}\) of the corresponding scaled functions \(v^{\varepsilon}\) do not converge strongly locally in \(L^{1}(\mathbb{R}^{d})\).
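The degeneracy of (3.13) can be verified directly: the checkerboard function \(v_{i}=(-1)^{\|i\|_{1}}\) breaks no interaction at distance \(2\), although it is far from constant. A minimal numerical check on a finite window (Python; only bonds with both endpoints in the window are counted):

```python
import itertools

L = 10  # half-width of the finite window (illustrative choice)
sites = list(itertools.product(range(-L, L + 1), repeat=2))
v = {i: (-1) ** (abs(i[0]) + abs(i[1])) for i in sites}   # v_i = (-1)^{||i||_1}

# Interactions alpha_k = 1/2 only for k = 2 e_l (and -2 e_l by symmetry), as in (3.13).
ks = [(2, 0), (-2, 0), (0, 2), (0, -2)]
energy = 0.0
broken_nn = 0
for i in sites:
    for k in ks:
        j = (i[0] + k[0], i[1] + k[1])
        if j in v:
            energy += 0.5 * (v[i] - v[j]) ** 2
    for k in [(1, 0), (0, 1)]:                 # nearest-neighbour bonds, for comparison
        j = (i[0] + k[0], i[1] + k[1])
        if j in v and v[i] != v[j]:
            broken_nn += 1

print("distance-2 energy of the checkerboard:", energy)   # 0.0
print("broken nearest-neighbour bonds:", broken_nn)       # many: v is far from constant
```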
**Example 3.9** (non-exact reachability of rational zonotopes by coercive systems).: Let \(d=2\) and let \(\varphi(z)=8(|\langle e_{1}+e_{2},z\rangle|+|\langle e_{1}-e_{2},z\rangle|)\), corresponding to \(\alpha_{k}=1\) for \(k\in\{e_{1}+e_{2},e_{1}-e_{2},-e_{1}+e_{2},-e_{1}-e_{2}\}\), and \(\alpha_{k}=0\) elsewhere, which is not coercive, again with ground states \(v\) and \(-v\) as in Remark 3.8.
We give a definition of connectedness related to an Ising system \(\{\alpha_{k}\}\).
**Definition 3.10**.: _We say that \(i\) and \(j\in\mathbb{Z}^{d}\) are connected with respect to \(\{\alpha_{k}\}\), or that \(i\) and \(j\) are \(\{\alpha_{k}\}\)-connected, if there exist \(N\in\mathbb{N}\) and \(\{k_{1},\ldots,k_{N}\}\) such that \(\sum_{\ell=1}^{N}k_{\ell}=j-i\), and \(\alpha_{k_{\ell}}>0\) for all \(\ell\in\{1,\ldots,N\}\). We say that the system \(\{\alpha_{k}\}\) is connected if all \(i\) and \(j\in\mathbb{Z}^{d}\) are connected with respect to \(\{\alpha_{k}\}\)._
**Remark 3.11**.: Note that the following statements are equivalent
(i) \(\{\alpha_{k}\}\) is connected;
(ii) \(0\) and \(j\) are connected for all \(j\in\mathbb{Z}^{d}\);
(iii) \(0\) and \(e_{n}\) are connected for all \(n\in\{1,\ldots,d\}\).
The only non-trivial implication is that (iii) implies (ii). This is proved e.g. by induction on \(n=\|j\|_{1}\). If \(n=1\) the two statements are the same. If \(\|j\|_{1}=n>1\) then we can write \(j=j^{\prime}\pm e_{m}\) for some \(j^{\prime}\) with \(\|j^{\prime}\|_{1}=n-1\) and some \(m\); since \(\alpha_{-k}=\alpha_{k}\), the connectedness of \(0\) and \(e_{m}\) also gives that of \(0\) and \(-e_{m}\). By the inductive hypothesis there exist \(N\in\mathbb{N}\) and \(\{k_{1},\ldots,k_{N}\}\) such that \(\sum_{\ell=1}^{N}k_{\ell}=j^{\prime}\), and \(\alpha_{k_{\ell}}>0\) for all \(\ell\in\{1,\ldots,N\}\), and there exist \(N_{m}\in\mathbb{N}\) and \(\{k_{1}^{m},\ldots,k_{N_{m}}^{m}\}\) such that \(\sum_{\ell=1}^{N_{m}}k_{\ell}^{m}=\pm e_{m}\), and \(\alpha_{k_{\ell}^{m}}>0\). Then the claim is proven by writing \(j=\sum_{\ell=1}^{N}k_{\ell}+\sum_{\ell=1}^{N_{m}}k_{\ell}^{m}\).
We now give a necessary and sufficient condition for a rational zonoid to be obtained from a coercive Ising system.
**Theorem 3.12**.: _Let \(\mu\) be a symmetric positive measure on \(S^{d-1}\) generating a rational zonoid. Then there exists a coercive Ising system \(\{\alpha_{k}\}\) generating \(\mu\) if and only if the set \(\{k\in\mathbb{Z}^{d}:\mu\big(\frac{k}{\|k\|}\big)>0\}\) spans the whole \(\mathbb{Z}^{d}\) over \(\mathbb{Z}\)._
Proof.: Let \(\{\alpha_{k}\}\) be an Ising system generating \(\mu\). Note that we may assume that \(\alpha_{k}>0\) for all points in \(k\in\mathbb{Z}^{d}\) such that \(\mu\big{(}\frac{k}{\|k\|}\big{)}>0\) since this assumption does not influence \(\mu\) and increases the connectedness of \(\{\alpha_{k}\}\). We then have
\[\Big{\{}k\in\mathbb{Z}^{d}:\mu\Big{(}\frac{k}{\|k\|}\Big{)}>0\Big{\}}=\{k\in \mathbb{Z}^{d}:\alpha_{k}>0\},\]
and note that the span of this set is just the set of all finite sums of points \(k_{\ell}\) with \(\alpha_{k_{\ell}}>0\); that is, the set of all points \(\{\alpha_{k}\}\)-connected with \(0\).
If this set is not the whole \(\mathbb{Z}^{d}\), then we consider the function defined by
\[u_{i}=\begin{cases}1&\text{ if $i$ is $\{\alpha_{k}\}$-connected with $0$}\\ -1&\text{ otherwise.}\end{cases}\]
Note that we have \(E(u)=0\), but \(u\) is not a constant, so that \(\|Du\|>0\) and hence the Ising system generating \(\mu\) is not coercive.
Conversely, if the set is the whole \(\mathbb{Z}^{d}\), in particular it contains \(\{e_{1},\ldots,e_{d}\}\). Then if \(i,j\in\mathbb{Z}^{d}\) are such that \(\|i-j\|=1\) and \(u_{i}\neq u_{j}\), using the \(\{\alpha_{k}\}\)-connectedness there exist \(\{k_{\ell}\}\) with \(\sum_{\ell=1}^{N}k_{\ell}=j-i\) and \(\alpha_{k_{\ell}}>0\). Writing \(i_{n}=i+\sum_{\ell=1}^{n}k_{\ell}\) and \(i_{0}=i\), we have \(\sum_{n=1}^{N}(u_{i_{n}}-u_{i_{n-1}})=u_{j}-u_{i}\neq 0\), and there exists \(n\in\{1,\ldots,N\}\) such that \(u_{i_{n}}-u_{i_{n-1}}\neq 0\). Since \(k_{n}=i_{n}-i_{n-1}\) is such that \(\alpha_{k_{n}}>0\) and the family of all such \(\{k_{\ell}\}\) is finite, we deduce that there exists a constant \(C>0\) such that \(\alpha_{k_{n}}(u_{i_{n}}-u_{i_{n-1}})^{2}\geq C\). These indices \(i_{n}\) and \(i_{n-1}\) may be shared by a number of pairs \((i,j)\) bounded by \((\sum_{\ell}\|k_{\ell}\|)^{d}\), so that we can bound \(\#\{(i,j):u_{i}\neq u_{j}\}\) by the energy, and the coerciveness of \(\{\alpha_{k}\}\) follows.
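The spanning condition of Theorem 3.12 can also be tested algorithmically. An equivalent formulation (a standard consequence of the Smith normal form, not used in the text) is that a finite family of integer vectors generates \(\mathbb{Z}^{d}\) over \(\mathbb{Z}\) if and only if the greatest common divisor of all \(d\times d\) minors of the matrix having those vectors as columns equals \(1\). A Python sketch, with the systems discussed above as test cases:

```python
from itertools import combinations
from math import gcd
from functools import reduce

def int_det(M):
    # Exact integer determinant by cofactor expansion (fine for small d).
    n = len(M)
    if n == 1:
        return M[0][0]
    det = 0
    for c in range(n):
        minor = [row[:c] + row[c + 1:] for row in M[1:]]
        det += (-1) ** c * M[0][c] * int_det(minor)
    return det

def generates_Zd(vectors, d):
    # vectors: integer d-tuples k with alpha_k > 0 (equivalently mu(k/||k||) > 0).
    # They generate Z^d over Z iff the gcd of all d x d minors equals 1.
    if len(vectors) < d:
        return False
    minors = []
    for cols in combinations(vectors, d):
        M = [[cols[j][i] for j in range(d)] for i in range(d)]
        minors.append(abs(int_det(M)))
    return reduce(gcd, minors, 0) == 1

print(generates_Zd([(1, 0), (0, 1)], 2))     # True: nearest neighbours, coercive
print(generates_Zd([(2, 0), (0, 2)], 2))     # False: the system (3.13)
print(generates_Zd([(1, 1), (1, -1)], 2))    # False: Example 3.9, checkerboard ground states
print(generates_Zd([(1, 1), (1, 0)], 2))     # True
```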
We now examine non-degenerate non-coercive Ising systems and show that they can be seen as a superposition of a finite number of coercive Ising systems.
**Remark 3.13** (discrete-to-continuum convergence to multiple parameters).: Let \(\{\alpha_{k}\}\) be an Ising system with symmetric \(\alpha_{k}\geq 0\) satisfying (3.2). If the system \(\{\alpha_{k}\}\) is non-degenerate, the set
\[\mathcal{L}=\{i\in\mathbb{Z}^{d}:i\text{ is $\{\alpha_{k}\}$-connected to $0$}\} \tag{3.14}\]
is a \(d\)-dimensional Bravais sublattice of \(\mathbb{Z}^{d}\). If the system is not coercive, we can consider the equivalence classes \(\mathbb{Z}^{d}/\mathcal{L}\), which are a finite number \(M\), that we can represent as \(\mathcal{L}_{\ell}=m_{\ell}+\mathcal{L}\) for \(\ell\in\{1,\ldots,M\}\). If \(v^{\varepsilon}:\varepsilon\mathcal{L}_{\ell}\to\{-1,1\}\) then we can define the convergence \(v^{\varepsilon}\to A_{\ell}\) on the
(translated) lattice \(\varepsilon\mathcal{L}_{\ell}\) as the convergence of the piecewise-constant interpolations \(v^{\ell}_{\varepsilon}\) on \(\varepsilon\mathcal{L}_{\ell}\) to \(2\chi_{A_{\ell}}-1\) in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{d})\). Thanks to the \(\{\alpha_{k}\}\)-connectedness of \(\mathcal{L}_{\ell}\) the functionals \(E_{\varepsilon}\) are coercive with respect to this convergence, so that if \(u^{\varepsilon}\) is a sequence with \(\sup_{\varepsilon}E_{\varepsilon}(u^{\varepsilon})<+\infty\), up to subsequences, we can suppose that, denoting by \(u^{\varepsilon,\ell}\) the restrictions of \(u^{\varepsilon}\) to \(\varepsilon\mathcal{L}_{\ell}\), the corresponding piecewise-constant interpolations \(u^{\ell}_{\varepsilon}\) on \(\varepsilon\mathcal{L}_{\ell}\) converge to \(2\chi_{A_{\ell}}-1\). This defines a convergence \(u^{\varepsilon}\to(A_{1},\ldots,A_{M})\) with respect to which the functionals \(E_{\varepsilon}\) are coercive.
Note that at the same time the piecewise-constant interpolations \(u_{\varepsilon}\) of \(u^{\varepsilon}\) on \(\varepsilon\mathbb{Z}^{d}\) converge weakly in \(L^{1}_{\mathrm{loc}}(\mathbb{R}^{d})\) to \(u=\frac{1}{M}\sum_{\ell=1}^{M}(2\chi_{A_{\ell}}-1)\), while the stronger convergence in Theorem 3.3 implies that \(A_{1}=\cdots=A_{M}\).
The convergence in the previous remark allows to generalize the \(\Gamma\)-convergence result as follows.
**Theorem 3.14** (\(\Gamma\)-convergence to multiple parameters).: _Let \(\{\alpha_{k}\}\) be an Ising system with symmetric \(\alpha_{k}\geqslant 0\) satisfying (3.2), and such that the lattice \(\mathcal{L}\) defined in (3.14) be a \(d\)-dimensional Bravais sublattice of \(\mathbb{Z}^{d}\). Then the family \(E_{\varepsilon}\) is equicoercive with respect to the convergence \(u^{\varepsilon}\to(A_{1},\ldots,A_{M})\) in Remark 3.13 and the \(\Gamma\)-limit with respect to that convergence is_
\[F_{\mathcal{L}}(A_{1},\ldots,A_{M})=\frac{1}{M}\sum_{\ell=1}^{M}F(A_{\ell}),\]
_where \(F=F_{\varphi}\) is given by (3.6) with \(\varphi\) as in (3.3)._
Proof.: The functional \(E_{\varepsilon}\) can be written as a sum of functionals \(E_{\varepsilon}^{\ell}\) defined on \(v\colon\varepsilon\mathcal{L}_{\ell}\to\{-1,1\}\) by
\[E_{\varepsilon}^{\ell}(v)=\sum_{i,j\in\mathcal{L}_{\ell}}\varepsilon^{d-1} \alpha_{j-i}(v_{j}-v_{i})^{2}, \tag{3.15}\]
with the usual notation \(v_{i}=v(\varepsilon i)\). Since \(E_{\varepsilon}\) converge in the sense of Theorem 3.3 to the functional \(F\) therein, we note that each of these functionals \(E_{\varepsilon}^{\ell}\)\(\Gamma\)-converges to \(\frac{1}{M}F\) with respect to the corresponding convergence. Hence, the claim of the theorem follows.
The next theorem is an immediate consequence of the characterization of support functions of zonoids through their generating measures. Note however that optimal approximation of zonoids is a delicate problem (see e.g. [3]).
**Theorem 3.15** (approximate reachability of zonoids).: _Let \(\varphi\) be a support function of a zonoid. Then for every \(\eta>0\) there exists a coercive Ising system with a limit energy density \(\varphi_{\eta}\) and Wulff shape a rational zonotope such that_
\[\max\{|\varphi_{\eta}(\nu)-\varphi(\nu)|:\nu\in S^{d-1}\}<\eta. \tag{3.16}\]
Proof.: Let \(\{\alpha_{k}^{\eta}\}\) be an Ising system parameterized by \(\eta\), and \(\varphi_{\eta}\) the corresponding energy function. Note that we can write
\[\varphi_{\eta}(z)=\int_{S^{d-1}}|\langle z,\nu\rangle|\,d\mu_{\eta}(\nu),\]
where
\[\mu_{\eta}=\sum_{k\in\mathbb{Z}^{d}}4\alpha_{k}^{\eta}\|k\|\delta_{\frac{k}{\|k\|}}\,.\]
The convergence
\[\lim_{\eta\to 0}\max\{|\varphi_{\eta}(\nu)-\varphi(\nu)|:\nu\in S^{d-1}\}=0\]
is then implied by the weak* convergence of \(\mu_{\eta}\to\mu\), where \(\mu\) is a generating measure for \(\varphi\). The existence of such \(\mu_{\eta}\) is then ensured by the weak* density of finite sums of Dirac deltas. We may also suppose that \(\alpha_{k}\geqslant\eta\) for \(k\in\{e_{1},\ldots,e_{d}\}\), up to adding a term \(\sum_{n}\eta\delta_{e_{n}}\) whose weak* limit is the null measure. The claim then follows after reparameterizing the measures \(\mu_{\eta}\).
### Convergence of Wulff shapes to rational zonoids
If \(\varphi(\nu)>0\) for all \(\nu\in S^{d-1}\), the Wulff shape \(W_{\varphi}\) of \(\varphi\) as defined in (2.2) admits a variational characterization, as the minimizer symmetric with respect to \(0\) of
\[\min\{F(A):|A|=|W_{\varphi}|\}, \tag{3.17}\]
where the functional \(F\) is as in (3.6). The condition \(\varphi>0\) is necessary and sufficient in order that \(|W_{\varphi}|>0\). If this condition is not satisfied on the whole \(S^{d-1}\), then by the convexity of \(\varphi\) either \(\varphi\) is identically \(0\) or \(\varphi>0\) on a \(d^{\prime}\)-dimensional space with \(d^{\prime}<d\), and we can consider it as defined on \(\mathbb{R}^{d^{\prime}}\). We will then restrict to the case that \(\varphi(\nu)>0\) for all \(\nu\in S^{d-1}\).
With this variational characterization in mind, given a homogeneous Ising system \(\{\alpha_{k}\}\) generating a function \(\varphi\) we can define a _discrete Wulff shape for \(\{\alpha_{k}\}\)_ as any \(u^{\varepsilon}\) solution of the minimum problem
\[\min\Bigl{\{}E_{\varepsilon}(u):\#\{i:u_{i}=1\}=N_{\varepsilon}\Bigr{\}}, \tag{3.18}\]
where \(N_{\varepsilon}\in\mathbb{N}\) is such that \(\varepsilon^{d}N_{\varepsilon}\) tends to \(|W_{\varphi}|\); e.g., \(N_{\varepsilon}=\left\lfloor\frac{1}{\varepsilon}|W_{\varphi}|^{\frac{1}{d}} \right\rfloor^{d}\).
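Solving the constrained problem (3.18) exactly is a combinatorial task; a cheap numerical illustration (Python/NumPy), which performs no minimization, is to compare the number of broken nearest-neighbour bonds, and hence the energy \(E_{\varepsilon}\) for the nearest-neighbour system (to which it is proportional), of two candidate shapes with almost the same number of occupied sites. Consistently with the Wulff shape for this system being the coordinate square (Remark 3.5, in \(d=2\)), the discretized square has fewer broken bonds than a discretized disc of the same area:

```python
import numpy as np

def broken_bonds(mask):
    # Number of nearest-neighbour pairs (i, j) with u_i != u_j (unordered),
    # for the spin field u = 2*mask - 1 on a grid with background -1.
    m = np.pad(mask, 1).astype(int)
    return np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()

N = 401
x, y = np.meshgrid(np.arange(N) - N // 2, np.arange(N) - N // 2, indexing="ij")

side = 160                                  # square of side ~160 sites (illustrative)
square = (np.abs(x) <= side // 2) & (np.abs(y) <= side // 2)
r = np.sqrt(square.sum() / np.pi)           # disc with (almost) the same number of sites
disc = x**2 + y**2 <= r**2

print("sites:", square.sum(), disc.sum())
print("broken bonds, square:", broken_bonds(square))
print("broken bonds, disc:  ", broken_bonds(disc))   # larger: the square is preferred
```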
The following result shows the relation between rational zonoids and discrete Wulff shapes.
**Theorem 3.16** (Rational zonoids as limits of discrete Wulff shapes).: _Let \(\varphi\) be the energy density of the limit \(F\) of the Ising system \(\{\alpha_{k}\}\) as in Theorem 3.3, and let \(W\) be the corresponding related rational zonoid. Suppose that \(\varphi(\nu)>0\) for all \(\nu\in S^{d-1}\); then there exists a family \(u^{\varepsilon}\) of discrete Wulff shapes such that \(u^{\varepsilon}\to W\)._
Proof.: If the system \(\{\alpha_{k}\}\) is coercive then this result is a consequence of the \(\Gamma\)-convergence of \(E_{\varepsilon}\) to \(F\), the fact that, by a translation argument, we can suppose that discrete Wulff shapes are bounded and with barycenter at distance at most of order \(\varepsilon\) from \(0\), and that recovery sequences \(u^{\varepsilon}\) for a set \(A\) with \(|A|=|W_{\varphi}|\) can be taken with \(\#\{i:u_{i}^{\varepsilon}=1\}=N_{\varepsilon}\).
If the system is not coercive, by the condition \(\varphi>0\) it must nevertheless be non-degenerate. As in the proof of Theorem 3.14, the functional \(E_{\varepsilon}\) can be written as the sum of the functionals \(E_{\varepsilon}^{\ell}\) in (3.15), using the same notation as in Remark 3.13 for the sets \(\mathcal{L}_{\ell}\). Since each \(\mathcal{L}_{\ell}\) is disconnected from \(\mathcal{L}_{\ell^{\prime}}\) if \(\ell\neq\ell^{\prime}\), we can decompose the minimum problem as
\[\min\Bigl{\{}E_{\varepsilon}(u):\#\{i:u_{i}=1\}=N_{\varepsilon} \Bigr{\}}\] \[=\min\bigg{\{}\sum_{\ell=1}^{M}\min\Bigl{\{}E_{\varepsilon}^{ \ell}(u):\#\{i\in\mathcal{L}_{\ell}:u_{i}=1\}=N_{\varepsilon}^{\ell}\Bigr{\}} :\sum_{\ell=1}^{M}N_{\varepsilon}^{\ell}=N_{\varepsilon}\biggr{\}}.\]
We can suppose that \(\varepsilon^{d}N_{\varepsilon}^{\ell}\to\lambda_{\ell}\) for each \(\ell\in\{1,\ldots,M\}\), with \(\sum_{\ell=1}^{M}\lambda_{\ell}=|W_{\varphi}|\). Since the unit cell of \(\mathcal{L}\) has measure \(M\), we have
\[\lim_{\varepsilon\to 0}\min\Bigl{\{}E_{\varepsilon}^{\ell}(u):\#\{i \in\mathcal{L}_{\ell}:u_{i}=1\}=N_{\varepsilon}^{\ell}\Bigr{\}} = \frac{1}{M}F\Bigl{(}M\frac{\lambda_{\ell}}{|W_{\varphi}|}W_{ \varphi}\Bigr{)}\] \[= M^{d-1}\frac{\lambda_{\ell}^{d-1}}{|W_{\varphi}|^{d-1}}F(W_{ \varphi}),\]
and
\[\lim_{\varepsilon\to 0}\min\Bigl{\{}E_{\varepsilon}(u):\#\{i:u_{i}=1\}=N_{ \varepsilon}\Bigr{\}}\] \[=\min\bigg{\{}\sum_{\ell=1}^{M}M^{d-2}\frac{\lambda_{\ell}^{d-1} }{|W_{\varphi}|^{d-1}}F(W_{\varphi}):\sum_{\ell=1}^{M}\frac{\lambda_{\ell}}{|W _{\varphi}|}=1\biggr{\}}=F(W_{\varphi}).\]
In the last equality we have used the convexity of the \((d-1)\)-th power, which implies that \(\lambda_{\ell}=\frac{1}{M}|W_{\varphi}|\) for all \(\ell\). This also implies that, using the arguments for coercive systems, we can take all interpolations of minimizers \(u_{\ell}^{\varepsilon}\) in each \(\varepsilon\mathcal{L}_{\ell}\) converging to \(W_{\varphi}\). The corresponding \(u^{\varepsilon}\) give discrete Wulff shapes converging to \(W_{\varphi}\).
### Asymptotic surface tension of an Ising system
The simplest variational way to associate an energy density to an Ising system is by computing the average limit surface energy; that is, the energy necessary to have a transition from the state \(1\) to the state \(-1\) across a hyperplane with normal \(\nu\). This can be done for more general non-homogeneous Ising systems. To that end, given non-negative coefficients \(\{a_{ij}\}\) we define a localized energy on a cube \(TQ^{\nu}\), where \(T>0\) and \(Q^{\nu}\) is a
unit cube centered in \(0\) with two faces orthogonal to \(\nu\), as follows:
\[E(u,TQ^{\nu})=\sum_{i\text{ or }j\in\mathbb{Z}^{d}\cap TQ^{\nu}}a_{ij}(u_{i}-u_{j})^{2}.\]
Note that, if we set \(\varepsilon=\frac{1}{T}\), then \(\varepsilon^{d-1}E(u,TQ^{\nu})\) can be interpreted as the part of the energy (3.1) 'contained in the cube \(Q^{\nu}\)'.
In order to impose boundary conditions, due to the non-local nature of the energies we have to fix the values of functions outside \(TQ^{\nu}\). To that end, we consider minimum problems of the form
\[m_{T}(\nu)=\min\Bigl{\{}E(u,TQ^{\nu}):u_{i}=\pm 1\text{ if }\pm\langle i,\nu \rangle>0\text{ for all }i\notin TQ^{\nu}\Bigr{\}}. \tag{3.19}\]
This value can be considered as the minimum value of the transition from \(-1\) to \(1\) around the hyperplane \(\Pi^{\nu}=\{z\in\mathbb{R}^{d}:\langle z,\nu\rangle=0\}\).
**Definition 3.17** (surface tension of an Ising system).: _The surface tension of the Ising system \(\{a_{ij}\}\) is defined as_
\[\varphi(\nu)=\liminf_{T\to+\infty}\frac{1}{T^{d-1}}m_{T}(\nu). \tag{3.20}\]
We note that the definition of surface tension does not require any condition on the coefficients \(a_{ij}\) except their non-negativity. In the case of coefficients \(a_{ij}=\alpha_{i-j}\) a straightforward computation gives the formula for \(\varphi\).
**Proposition 3.18** (surface tension of a homogeneous Ising system).: _If \(\alpha_{k}\geqslant 0\) for all \(k\in\mathbb{Z}^{d}\) and \(a_{ij}=\alpha_{i-j}\) then the surface tension \(\varphi\) of the Ising system is given by_
\[\varphi(\nu)=4\sum_{k\in\mathbb{Z}^{d}\setminus\{0\}}\alpha_{k}|\langle k,\nu \rangle|. \tag{3.21}\]
_Furthermore, the \(\liminf\) in (3.20) is a limit._
Proof.: With fixed \(k\in\mathbb{Z}^{d}\) such that \(\langle\nu,k\rangle\neq 0\), we note that for any test function \(u\) and any line \(L_{i,k}=\{i+tk:t\in\mathbb{R}\}\) with \(i\in\mathbb{Z}^{d}\) and such that \(L_{i,k}\cap TQ^{\nu}\cap\Pi^{\nu}\neq\emptyset\), there exists at least one index \(n\in\mathbb{Z}\) such that \(u_{i+nk}\neq u_{i+(n+1)k}\). This implies that
\[m_{T}(\nu)\geqslant 4T^{d-1}\sum_{\left\lvert k\right\rvert\leqslant K}\alpha_{ k}|\langle k,\nu\rangle|+O(T^{d-2})\]
for every fixed \(K\), and the lower bound follows by letting \(K\to+\infty\). An upper bound is simply obtained by taking \(u_{i}=1\) if \(\langle i,\nu\rangle\geqslant 0\) and \(u_{i}=-1\) if \(\langle i,\nu\rangle<0\) for \(i\in\mathbb{Z}^{d}\). A direct computation shows that this is a minimizing sequence and proves the existence of the limit in (3.20).
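The upper bound can be reproduced numerically: take the flat interface \(u_{i}=1\) if \(\langle i,\nu\rangle\geq 0\) and \(u_{i}=-1\) otherwise, count the bonds it breaks, normalize by the length of the interface, and compare with (3.21). In the sketch below (Python/NumPy; \(d=2\), nearest-neighbour system with \(\alpha_{k}=1\) for \(\|k\|=1\), a hypothetical choice for which \(\varphi(\nu)=8(|\nu_{1}|+|\nu_{2}|)\)) a disc-shaped window of radius \(R\) replaces the cube \(TQ^{\nu}\) purely for coding convenience, since only the interface length (here \(2R\)) matters at leading order; each unordered broken bond contributes \(2(u_{i}-u_{j})^{2}=8\) to the sum over ordered pairs:

```python
import numpy as np

def flat_interface_energy_density(nu, R=300):
    # Normalized nearest-neighbour energy of the flat interface u_i = sign(<i, nu>),
    # with broken bonds counted inside a disc of radius R (interface length ~ 2R).
    nu = np.asarray(nu, float); nu = nu / np.linalg.norm(nu)
    g = np.arange(-R - 1, R + 2)
    X, Y = np.meshgrid(g, g, indexing="ij")
    U = np.where(X * nu[0] + Y * nu[1] >= 0, 1, -1)
    broken = 0
    for dx, dy in ((1, 0), (0, 1)):
        a, b = U.shape[0] - dx, U.shape[1] - dy
        diff = U[:a, :b] != U[dx:, dy:]
        midx = X[:a, :b] + dx / 2
        midy = Y[:a, :b] + dy / 2
        broken += int(np.sum(diff & (midx**2 + midy**2 <= R**2)))
    return 8 * broken / (2 * R)

for nu in [(1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]:
    n = np.asarray(nu); n = n / np.linalg.norm(n)
    print(f"nu ~ {nu}: estimate {flat_interface_energy_density(nu):.3f}, "
          f"phi(nu) = {8 * (abs(n[0]) + abs(n[1])):.3f}")
```

Up to boundary effects of order \(1/R\), the estimate matches \(\varphi(\nu)\) for every direction tested.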
### Directed Ising systems and non-centered rational zonoids
We conclude this section with a generalization of Ising systems, where the energies take the form
\[E_{\varepsilon}(u)=\sum_{i,j\in\mathbb{Z}^{d}}\varepsilon^{d-1}\,\alpha_{i-j}((u _{i}-u_{j})^{+})^{2}, \tag{3.22}\]
where \(t^{+}\) indicates the positive part of \(t\in\mathbb{R}\). In this case the interaction between two points \(i\) and \(j\) such that \(u_{i}=1\) and \(u_{j}=-1\) is taken into account with the coefficient \(\alpha_{i-j}\), while if \(u_{j}=1\) and \(u_{i}=-1\) with the coefficient \(\alpha_{j-i}\). This is a particular case of the inhomogeneous directed Ising systems studied in [10].
For energies (3.22) we do not assume that \(\alpha_{-k}=\alpha_{k}\), in order not to lose generality. Nevertheless, the proof of the convergence in Theorem 3.3 works essentially unchanged, with the limit energy density given by
\[\varphi(z)=4\sum_{k\in\mathbb{Z}^{d}}\alpha_{k}\langle k,z\rangle^{+}. \tag{3.23}\]
Note that in the perimeter functional (3.6) the integration is done with \(\nu\) the inner normal to the set \(A\), which may reflect the asymmetry of the Ising system.
If the range of \(\alpha_{k}\) is finite, then the Wulff shape of the function \(\varphi\) is
\[W_{\varphi}=\Big{\{}w\in\mathbb{R}^{d}\colon\text{ there exist }s_{k}\in[0,1] \text{ such that }w=4\sum_{k\in\mathbb{Z}^{d}}s_{k}\alpha_{k}k\Big{\}}; \tag{3.24}\]
that is, \(W_{\varphi}\) is the finite sum of the segments \([0,w_{\ell}]\) in \(\mathbb{R}^{d}\), where the set \(\{w_{\ell}\}\) coincides with the set of \(4\alpha_{k}k\) such that \(\alpha_{k}>0\). This is the translation of a (centered) rational zonotope by the vector \(\frac{1}{2}\sum_{\ell}w_{\ell}\). Proceeding as in Theorem 3.3 we deduce that directed Ising systems correspond to all translations of (centered) rational zonoids, and then, using Theorem 3.15, that all (non-centered) zonoids are reached by sequences of zonotopes generated by directed Ising systems.
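The translation statement can be checked through the elementary identity \(t^{+}=\frac{1}{2}(|t|+t)\): the directed density (3.23) splits as \(\varphi(z)=2\sum_{k}\alpha_{k}|\langle k,z\rangle|+\langle c,z\rangle\) with \(c=2\sum_{k}\alpha_{k}k=\frac{1}{2}\sum_{\ell}w_{\ell}\), i.e., the support function of a centered rational zonotope plus the support function of the singleton \(\{c\}\). A short numerical check (Python/NumPy; the coefficients are hypothetical):

```python
import numpy as np

# Hypothetical directed system in d = 2 (no symmetry alpha_{-k} = alpha_k assumed).
alphas = {(1, 0): 1.0, (0, 1): 0.5, (1, 1): 0.25}

def phi_directed(z):
    # (3.23): phi(z) = 4 * sum_k alpha_k <k, z>^+
    return 4.0 * sum(a * max(np.dot(k, z), 0.0) for k, a in alphas.items())

def centered_plus_linear(z):
    # t^+ = (|t| + t)/2 gives phi(z) = 2 sum_k alpha_k |<k,z>| + <c, z>, c = 2 sum_k alpha_k k
    c = 2.0 * sum(a * np.asarray(k, float) for k, a in alphas.items())
    return 2.0 * sum(a * abs(np.dot(k, z)) for k, a in alphas.items()) + float(np.dot(c, z))

for z in np.random.default_rng(2).normal(size=(4, 2)):
    print(f"{phi_directed(z):.6f}  ==  {centered_plus_linear(z):.6f}")
```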
## 4 Connections with discrete-to-continuum homogenization
We now highlight the connection between the definitions given until now and general results on periodic Ising systems. This will allow us to define a generalization of rational zonoids.
The limit in Theorem 3.3 can be interpreted as a particular case of homogenization of periodic Ising systems. We say that an Ising system \(\{a_{ij}\}\) is _periodic with period \(N\)_ if we have
\[a_{i+Ne_{n}\,j+Ne_{n}}=a_{ij} \tag{4.1}\]
for all \(i,j\in\mathbb{Z}^{d}\) and \(n\in\{1,\ldots,d\}\), which in turn is equivalent to
\[a_{i+k\,j+k}=a_{ij} \tag{4.2}\]
for all \(i,j\in\mathbb{Z}^{d}\) and \(k\in N\mathbb{Z}^{d}\). If \(N=1\) then this condition is equivalent to requiring that \(a_{ij}=\alpha_{i-j}\) for some \(\alpha_{k}\), so that homogeneous Ising systems coincide with Ising systems periodic with period \(1\).
For periodic systems we have the following result.
**Theorem 4.1** (homogenization and crystallinity of periodic Ising systems [7, 10]).: _Let \(\{a_{ij}\}\) be an \(N\)-periodic Ising system satisfying_
\[\max_{i\in\mathbb{Z}^{d}}\sum_{j\in\mathbb{Z}^{d}}a_{ij}\|j-i\|<+\infty. \tag{4.3}\]
_Then the \(\Gamma\)-limit in the sense of Theorem 3.3 exists. If in addition the system has finite range, that is, there exists a constant \(K\) such that \(a_{ij}=0\) if \(\|i-j\|>K\), then the Wulff shape of the limit functional \(F\) is a polytope._
### Zonotopes generated by homogenized Ising systems
In this section we observe that the definition of rational zonotope can be generalized, in view of Theorem 4.1, by means of the following definitions.
**Definition 4.2**.: _We say that \(W\) is a rational zonotope of order \(N\) if the corresponding \(\varphi\) is the limit of an \(N\)-periodic Ising system with finite range. We say that \(W\) is a rational zonoid of order \(N\) if the corresponding \(\varphi\) is the limit of an \(N\)-periodic Ising system satisfying (4.3). We say that \(W\) is a zonoid of order \(N\) if it is the limit in the Hausdorff distance of rational zonotopes of order \(N\)._
**Remark 4.3**.: Rational zonotopes and zonoids of order \(1\) are rational zonotopes and zonoids as defined above. Note that condition (4.3) corresponds to (3.2) if \(N=1\).
The analysis in [10, Proposition 2.9] implies that the energy density \(\varphi\) of a rational zonotope of order \(N\) is differentiable outside rational directions, which suggests that \((d-1)\)-dimensional faces of Wulff shapes should have normals in rational directions.
**Proposition 4.4**.: _If \(W\) is a rational zonoid of order \(N\), then it is the limit in the Hausdorff distance of rational zonotopes of order \(N\)._
Proof.: If \(W\) is a rational zonoid of order \(N\) generated by \(\{a_{ij}\}\) it suffices to consider the rational zonotopes \(W_{n}\) of order \(N\) generated by \(\{a_{ij}^{n}\}\), where \(a_{ij}^{n}=a_{ij}\) if \(\|i-j\|\leqslant n\) and \(a_{ij}^{n}=0\) if \(\|i-j\|>n\).
We finally show that the union of all zonoids of order \(N\) is dense in the class of all symmetric convex sets.
**Theorem 4.5** (density of zonotopes of order \(N\) as \(N\to+\infty\)).: _For every convex bounded open set \(W\) symmetric with respect to the origin there exist zonotopes \(W_{k}\) of order \(N_{k}\) converging to \(W\)._
Proof.: Let \(\varphi\) be the support function of \(W\). In [6] the following result is proved: if \(0<\alpha\leqslant\beta\) are such that
\[\alpha\|z\|_{1}\leqslant\varphi(z)\leqslant\beta\|z\|_{1},\]
then there exist periodic systems \(\{a_{ij}^{k}\}\) of period \(N_{k}\) with \(a_{ij}^{k}\in\{\alpha,\beta\}\) such that the related homogenized energy densities \(\varphi_{k}\) converge to \(\varphi\). By Theorem 4.1 the Wulff shapes \(W_{k}\) are zonotopes of order \(N_{k}\) that converge to \(W\).
### Further possible generalizations
While homogeneous directed Ising systems as in Section 3.6 only involve a translation in the resulting generated rational zonoid, the class of homogenized periodic directed Ising systems could be strictly larger than that of translations of the non-directed analog. We can therefore consider periodic Ising systems with coefficients \(a_{ij}\) satisfying (4.1), (4.2), and (4.3), and the corresponding energies
\[E_{\varepsilon}(u)=\sum_{i,j\in\mathbb{Z}^{d}}\varepsilon^{d-1}\,a_{ij}((u_{i }-u_{j})^{+})^{2}. \tag{4.4}\]
Note again that we do not suppose that \(a_{ij}=a_{ji}\), a condition that would not be restrictive for un-directed Ising systems. The results in [10] ensure the validity of the claims of Theorem 4.1. We can therefore give definitions of directed rational zonotope and zonoid of order \(N\) as in Definition 4.2. It would be interesting to know if such zonoids still possess a center of symmetry, in which case it is likely that a result as Theorem 4.5 holds for all \(W\) with a center of symmetry.
Finally, we note that another possible class of perimeter functionals are those generated by perturbed periodic Ising systems of the form
\[E_{\varepsilon}(u)=\sum_{i,j\in\mathbb{Z}^{d}}\varepsilon^{d-1}\alpha_{i-j}(u _{i}-u_{j})^{2}+\sum_{i\in\mathbb{Z}^{d}}\varepsilon^{d}u_{i}g_{i}, \tag{4.5}\]
where \(g\) is a periodic function with zero average, and small enough so that \(E_{\varepsilon}(u)\) remains non-negative on bounded configurations. This corresponds to adding a volume term with zero average. These energies still converge to a perimeter functional, whose form may depend on the perturbation. A link with the homogenization of directed Ising systems can be obtained following the results in [10] as done for the continuous analog in [11, Sec. 4].
**Acknowledgements.** This paper is based on work supported by the National Research Project (PRIN 2017BTM7SN) "Variational Methods for Stationary and Evolution Problems with Singularities and Interfaces", funded by the Italian Ministry of University and Research. Andrea Braides is a member of GNAMPA, INdAM. |
2304.06601 | U-Statistics Based Jackknife Empirical Likelihood Tests for the
Generalized Lorenz Curves | A Lorenz curve is a graphical representation of the distribution of income or
wealth within a population. The generalized Lorenz curve can be created by
scaling the values on the vertical axis of a Lorenz curve by the average output
of the distribution. In this paper, we propose two non-parametric methods for
testing the equality of two generalized Lorenz curves. Both methods are based
on empirical likelihood and utilize a U-statistic. We derive the limiting
distribution of the likelihood ratio, which is shown to follow a chi-squared
distribution with one degree of freedom. We performed simulations to evaluate
how well the proposed methods perform compared to an existing method, by
examining their Type I error rates and power across different sample sizes and
distribution assumptions. Our results show that the proposed methods exhibit
superior performance in finite samples, particularly in small sample sizes, and
are robust across various scenarios. Finally, we use real-world data to
illustrate the methods of testing two generalized Lorenz curves. | Suthakaran Ratnasingam, Anton Butenko | 2023-04-06T17:04:49Z | http://arxiv.org/abs/2304.06601v2 | # \(U\)-Statistics Based Jackknife Empirical Likelihood Tests for the Generalized Lorenz Curves
###### Abstract
A Lorenz curve is a graphical representation of the distribution of income or wealth within a population. The generalized Lorenz curve can be created by scaling the values on the vertical axis of a Lorenz curve by the average output of the distribution. In this paper, we propose two non-parametric methods for testing the equality of two generalized Lorenz curves. Both methods are based on empirical likelihood and utilize a \(U\)-statistic. We derive the limiting distribution of the likelihood ratio, which is shown to follow a chi-squared distribution with one degree of freedom. We performed simulations to evaluate how well the proposed methods perform compared to an existing method, by examining their Type I error rates and power across different sample sizes and distribution assumptions. Our results show that the proposed methods exhibit superior performance in finite samples, particularly in small sample sizes, and are robust across various scenarios. Finally, we use real-world data to illustrate the methods of testing two generalized Lorenz curves.
## 1 Introduction
A Lorenz curve is a visual representation of an income or wealth distribution within a population. It is named after American economist Max Lorenz (Lorenz (1905)) who developed it in 1905. The Lorenz curve is constructed by plotting the cumulative percentage of the population on the \(x\)-axis against the cumulative percentage of the variable (such as income or wealth) on the \(y-\)axis. The resulting curve represents the distribution of the variable in the population. A Lorenz curve that is close to the line of equality (which is a straight line that represents perfect equality) indicates that the distribution of the variable is relatively equal across the population. On the other hand, a Lorenz curve that is farther away from the line of equality indicates a higher degree of inequality in the distribution of the variable. Following Gastwirth (1971), a general definition of the Lorenz curve is given as
\[\xi(t)=\frac{1}{\mu}\int_{0}^{\psi_{t}}x\,dF(x),\quad t\in[0,1] \tag{1}\]
where \(\mu\) denotes the mean of \(F\), and \(\psi_{t}=F^{-1}(t)=\inf\{x:F(x)\geq t\}\) is the \(t-\)th quantile of \(F\). For a fixed \(t\in[0,1]\), the Lorenz ordinate \(\xi(t)\) is the proportion of the cumulative income of the lowest \(t\)-th quantile of households. The generalized Lorenz curve can be constructed
from a Lorenz curve by scaling the values on the vertical axis by the average output of the distribution. Similarly, the generalized Lorenz curve is defined by
\[\eta(t)=\int_{0}^{\psi_{t}}x\,dF(x),\quad t\in[0,1] \tag{2}\]
where \(\psi_{t}\) is the \(t\)-th quantile of \(F\) as defined above. For a fixed \(t\in[0,1]\), the generalized Lorenz ordinate \(\eta(t)\) is the average income of the lowest \(t\)-th quantile of households. While Lorenz curves are frequently used in economics to represent financial inequality, they can also be used in other fields of study to visualize the inequality of the distribution within any system. For example, the Lorenz curve has been used by several researchers to analyze physician distributions. Chang & Halfon (1997) examined variations in the distribution of pediatricians among the states between 1982 and 1992 using Lorenz curves and Gini indices. Kobayashi & Takaki (1992) used the Lorenz curve and the Gini coefficient to study the disparity in physician distribution in Japan.
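For concreteness, plug-in empirical versions of (1) and (2) can be computed directly from a sample by estimating \(\psi_{t}\) with the sample \(t\)-th quantile and replacing \(F\) with the empirical distribution. The sketch below (Python/NumPy; the unit-exponential model and the sample size are hypothetical choices made only for illustration) also compares the estimate of \(\eta(t)\) with the closed form \(\eta(t)=1-(1-t)(1-\ln(1-t))\), which a direct integration gives for the unit exponential distribution:

```python
import numpy as np

def empirical_glc(sample, t):
    # Plug-in estimates of the Lorenz ordinate xi(t) and the generalized Lorenz
    # ordinate eta(t): psi_t is the sample t-th quantile, F the empirical cdf.
    x = np.sort(np.asarray(sample, float))
    n = len(x)
    psi_t = x[min(int(np.ceil(n * t)) - 1, n - 1)] if t > 0 else x[0]
    eta_hat = np.mean(x * (x <= psi_t))          # estimates E[X 1{X <= psi_t}]
    xi_hat = eta_hat / np.mean(x)
    return xi_hat, eta_hat

rng = np.random.default_rng(0)
sample = rng.exponential(scale=1.0, size=50_000)   # unit exponential, illustrative
for t in (0.25, 0.5, 0.75, 0.9):
    xi_hat, eta_hat = empirical_glc(sample, t)
    eta_true = 1 - (1 - t) * (1 - np.log(1 - t))   # closed form for Exp(1)
    print(f"t={t:.2f}: eta_hat={eta_hat:.4f}, eta(t)={eta_true:.4f}, xi_hat={xi_hat:.4f}")
```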
The empirical likelihood (EL) method, introduced by Owen (1988), is a powerful nonparametric approach that offers numerous advantages over traditional methods. Unlike traditional methods, EL does not require strong assumptions to utilize the likelihood ratio approach, yet still preserves many of its desirable features, including Wilk's theorem, asymmetric confidence intervals, and better coverage for small sample sizes. However, the EL method has some computational difficulties when using nonlinear statistics, as demonstrated by Jing et al. (2009), and when constraints' solutions do not exist, as shown by Chen et al. (2008). Specifically, Jing et al. (2009) showed that the EL-based approach loses its appeal when using nonlinear \(U\)-statistics with a degree of \(m\geq 2\) because of the computational difficulty of solving a system of nonlinear equations simultaneously using Lagrange multipliers. To address this issue, Jing et al. (2009) proposed the jackknife empirical likelihood (JEL) approach. The JEL method turns the statistic of interest into a sample mean based on jackknife pseudo-values (Quenouille (1956)), which are asymptotically independent under mild conditions (Shi (1984)). Then, Owen's EL method can be applied consecutively, resulting in a simpler system of equations. On the other hand, Chen et al. (2008) pointed out that under certain conditions, it can be challenging to determine the parameter region over which the likelihood ratio function is well-defined. This makes it difficult to identify the maximum likelihood ratio or find a proper initial value. To tackle this challenge, Chen proposed the adjusted empirical likelihood (AEL) approach, which extends the convex hull to include the origin by adding a pseudo-value. With this adjustment, the empirical likelihood is well-defined for all parameter values, making it easier to find the maximum.
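The JEL recipe just described can be sketched generically: turn the \(U\)-statistic into a sample mean of jackknife pseudo-values and then apply Owen's empirical likelihood for a mean. In the sketch below (Python with NumPy and SciPy), the degree-2 kernel \(h(x_{1},x_{2})=(x_{1}-x_{2})^{2}/2\), which estimates the variance, and all numerical choices are hypothetical illustrations of the mechanics in Jing et al. (2009); this is not the specific kernel introduced later for the generalized Lorenz ordinates.

```python
import numpy as np
from scipy.optimize import brentq

def u_stat(x):
    # Degree-2 U-statistic with kernel h(x1, x2) = (x1 - x2)^2 / 2 (sample variance).
    n = len(x)
    d = x[:, None] - x[None, :]
    return np.sum(d**2) / (2 * n * (n - 1))

def jackknife_pseudo_values(x):
    # V_i = n*U_n - (n-1)*U_{n-1}^{(-i)}
    n = len(x)
    U_n = u_stat(x)
    return np.array([n * U_n - (n - 1) * u_stat(np.delete(x, i)) for i in range(n)])

def neg2_log_el_ratio(z, theta):
    # Owen's EL for a mean applied to the pseudo-values z at hypothesized value theta:
    # solve sum_i (z_i - theta)/(1 + lam*(z_i - theta)) = 0 for lam, then
    # -2 log R(theta) = 2 * sum_i log(1 + lam*(z_i - theta)).
    w = z - theta
    if w.min() >= 0 or w.max() <= 0:
        return np.inf                       # theta outside the convex hull
    n = len(z)
    lo = (1.0 / n - 1.0) / w.max() + 1e-10  # keeps all weights positive
    hi = (1.0 / n - 1.0) / w.min() - 1e-10
    lam = brentq(lambda l: np.sum(w / (1.0 + l * w)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * w))

rng = np.random.default_rng(1)
x = rng.normal(loc=0.0, scale=2.0, size=200)        # true variance = 4
z = jackknife_pseudo_values(x)
for theta in (3.0, 4.0, 5.0):
    print(f"theta={theta}: -2 log R = {neg2_log_el_ratio(z, theta):.3f}")
# Under the null, -2 log R is compared with a chi-squared quantile with 1 d.f.
```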
Multiple studies have been conducted on EL for the Lorenz curve by various researchers. For instance, Belinga-Hall (2007) and Yang et al. (2012) developed plug-in empirical likelihood-based inferences to construct confidence intervals for the generalized Lorenz curve. Most recently, Ratnasingam et al. (2023) developed three nonparametric EL-based methods to construct confidence intervals for the generalized Lorenz curve using adjusted empirical likelihood (AEL), transformed empirical likelihood (TEL), and transformed adjusted empirical likelihood (TAEL). Moreover, several studies have focused on comparing two Lorenz curves. For example, Arora & Jain (2006) investigated the generalized Lorenz dominance and proposed tests for the equality of two generalized Lorenz curves over a specified interval. Li & Wei (2018) noted that normal approximation-based methods may have poor performance, especially for the skewed income data, or the limiting distributions are nonstandard
and bootstrap calibrations are needed; hence, more effective inferences for Lorenz curves are desirable. All of these tests are parametric and involve assumptions about the underlying distribution of the data. Xu (1997) proposed an asymptotically distribution-free (ADF) statistical test to evaluate the equality of two generalized Lorenz curves and showed that the test statistic follows a weighted sum of \(\chi^{2}\) variables with different degrees of freedom.
As far as we know, there have been no previous studies examining the testing of the equality of two generalized Lorenz curves using EL methods. Therefore, this is the first study to investigate the equality of two generalized Lorenz curves using a nonparametric approach. We propose two novel nonparametric methods that employ a \(U\)-statistic based on the jackknife empirical likelihood (JEL) method and its extension to the adjusted jackknife empirical likelihood (AJEL). These methods combine two EL-based approaches, namely the JEL and AEL, previously discussed in Jing et al. (2009) and Chen et al. (2008), respectively.
The remainder of the paper is structured as follows. In Section 2, we present two novel nonparametric techniques for testing the equality of two generalized Lorenz curves. Section 3 describes the simulation studies carried out to evaluate the effectiveness of the proposed methods in different scenarios and to compare their performance with an existing method. In Section 4, we demonstrate the application of these methods to real datasets. Our findings are discussed in Section 5. All proofs are provided in Appendix A.
## 2 Main Results
In this section we develop two new testing procedures using jackknife EL methods. Let \(X_{1},X_{2},\cdots,X_{n_{1}}\) and \(Y_{1},Y_{2},\cdots,Y_{n_{2}}\) be two random samples from two independent populations. The generalized Lorenz curves for these two populations are
\[\eta_{1}(t)=\int_{0}^{\psi_{t}}xdF(x),\quad t\in[0,1] \tag{3}\]
and
\[\eta_{2}(t)=\int_{0}^{\psi_{t}}ydF(y),\quad t\in[0,1] \tag{4}\]
where \(\psi_{t}=F^{-1}(t)=\inf\{x:F(x)\geq t\}\) is the \(t-\)th quantile of \(F\). We are interested in testing the following hypotheses.
\[H_{0}:\eta_{1}(t)=\eta_{2}(t)\quad vs\quad H_{1}:\eta_{1}(t)\neq\eta_{2}(t) \tag{5}\]
From the definition of the generalized Lorenz curve, it can be clearly seen that
\[E[X\,I(X\leq\psi_{t})]-\eta_{1}(t)=0.\]
and
\[E[Y\,I(Y\leq\psi_{t})]-\eta_{2}(t)=0.\]
As a result, the generalized Lorenz ordinates \(\eta_{1}(t)\) and \(\eta_{2}(t)\) are the means of the random variables \(X\) and \(Y\) truncated at \(\psi_{t}\), respectively. Consider the kernel function
\[h(X,Y)=X\,I(X\leq\psi_{t})I(Y\leq\psi_{t})-Y\,I(X\leq\psi_{t})I(Y\leq\psi_{t}) \tag{6}\]
We can easily show that \(\theta(t)\equiv E\big{[}h(X_{i},Y_{j})\big{]}=(\eta_{1}(t)-\eta_{2}(t))P(X\leq\psi_{t})P(Y\leq\psi_{t})\). Thus, we are interested in testing
\[H_{0}:\theta(t)=0\quad vs\quad H_{1}:\theta(t)\neq 0. \tag{7}\]
Now consider the two-sample \(U\)-statistic of degree (1,1) with the kernel \(h\), given by
\[\begin{split} U_{n_{1},n_{2}}&=\frac{1}{n_{1}} \frac{1}{n_{2}}\sum_{1\leq i\leq n_{1}}\sum_{1\leq j\leq n_{2}}h(X_{i},Y_{j})\\ &=\frac{1}{n_{1}}\frac{1}{n_{2}}\sum_{1\leq i\leq n_{1}}\sum_{1 \leq j\leq n_{2}}X_{i}\,I(X_{i}\leq\psi_{t})I(Y_{j}\leq\psi_{t})-Y_{j}\,I(X_{i }\leq\psi_{t})I(Y_{j}\leq\psi_{t})\end{split} \tag{8}\]
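As an illustration of Eq. (8), the following sketch evaluates the two-sample \(U\)-statistic directly; the helper name `u_statistic` and the pooled-sample estimate of \(\psi_{t}\) in the example are our own illustrative choices and are not prescribed by the test.

```python
import numpy as np

def u_statistic(x, y, psi_t):
    """Two-sample U-statistic of Eq. (8) with kernel
    h(X, Y) = (X - Y) * 1(X <= psi_t) * 1(Y <= psi_t)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    ix = (x <= psi_t)
    iy = (y <= psi_t)
    # the double sum over all (i, j) pairs factorizes for this kernel
    return np.mean(x * ix) * np.mean(iy) - np.mean(ix) * np.mean(y * iy)

rng = np.random.default_rng(1)
x = rng.exponential(scale=4.0, size=40)
y = rng.exponential(scale=4.0, size=50)
t0 = 0.5
psi_t = np.quantile(np.concatenate([x, y]), t0)   # pooled-sample quantile, an illustrative choice
print(u_statistic(x, y, psi_t))
```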
Let \(n=n_{1}+n_{2}\). We can write the \(U\)-statistic as
\[U_{n_{1},n_{2}}(X_{1},\ldots,X_{n_{1}},Y_{1},\ldots,Y_{n_{2}})=U_{n}(Z_{1},Z_{ 2},\ldots,Z_{n}) \tag{9}\]
where
\[Z_{k}=\begin{cases}X_{k}&k=1,2,\ldots,n_{1}\\ Y_{k-n_{1}}&k=n_{1}+1,\ldots,n\end{cases}\]
We define the corresponding jackknife pseudo-values by
\[\widetilde{V}_{k}=nU_{n}-(n-1)U_{n-1}^{-k},\quad k=1,2,\cdots n, \tag{10}\]
where \(U_{n-1}^{-k}=U_{n-1}(Z_{1},Z_{2},\ldots,Z_{k-1},Z_{k+1},\ldots,Z_{n})\). Further, the jackknife estimator of \(\theta\) is \(n^{-1}\sum_{i=1}^{n}\widetilde{V}_{i}\). In particular, under mild conditions, the \(\widetilde{V}_{k}\)'s are asymptotically independent; for more details, readers are referred to Shi (1984). Thus, we can apply the EL approach to the \(\widetilde{V}_{k}\)'s. It should be noted that \(\widetilde{V}_{k}(t)\) is a function of \(t\) and can be calculated at a fixed value \(t_{0}\in[0,1]\). For simplicity of notation, we write \(\widetilde{V}_{k}\) instead of \(\widetilde{V}_{k}(t)\). The JEL for \(\theta(t)\) is defined as follows:
\[L(\theta(t))=\sup_{\mathbf{p}}\Big{\{}\prod_{k=1}^{n}p_{k}:\sum_{k=1}^{n}p_{k} =1,\sum_{k=1}^{n}p_{k}(\widetilde{V}_{k}-\mathbf{E}\widetilde{V}_{k})=0\Big{\}}, \tag{11}\]
where \(\mathbf{p}=(p_{1},p_{2},\ldots,p_{n})\) is a probability vector satisfying \(\sum_{k=1}^{n}p_{k}=1\) and \(p_{k}\geq 0\) for all \(k\), and \(\mathbf{E}\widetilde{V}_{k}\) can be determined using equation (14) in Jing et al. (2017). Note that \(\prod_{k=1}^{n}p_{k}\), subject to \(\sum_{k=1}^{n}p_{k}=1\), attains its maximum \(n^{-n}\) at \(p_{k}=n^{-1}\). Thus, the JEL ratio for \(\theta(t)\) is given as
\[\mathcal{R}(\theta(t))=\sup\Big{\{}\prod_{k=1}^{n}np_{k}:\sum_{k=1}^{n}p_{k}=1,\sum_{k=1}^{n}p_{k}(\widetilde{V}_{k}-\mathbf{E}\widetilde{V}_{k})=0\Big{\}} \tag{12}\]
Further, under the null hypothesis \(H_{0}:\theta(t)=0\), the JEL ratio becomes
\[\mathcal{R}(0)=\sup\Big{\{}\prod_{k=1}^{n}np_{k}:\sum_{k=1}^{n}p_{k}=1,\sum_{k =1}^{n}p_{k}\widetilde{V}_{k}=0\Big{\}}. \tag{13}\]
Using the Lagrange multiplier method, we have
\[p_{k}=\frac{1}{n}\Big{\{}1+\lambda\widetilde{V}_{k}\Big{\}}^{-1},\quad k=1, \ldots,n.\]
where \(\lambda\) is the solution to
\[\frac{1}{n}\sum_{k=1}^{n}\frac{\widetilde{V}_{k}}{1+\lambda\widetilde{V}_{k}}=0.\]
Hence, the profile jackknife empirical log-likelihood ratio for \(\theta(t)\) becomes
\[\ell(\theta(t))=-2\log\mathcal{R}(\theta(t))=2\sum_{k=1}^{n}\log\{1+\lambda\widetilde{V}_{k}\}. \tag{14}\]
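A minimal computational sketch of the JEL procedure under \(H_{0}\) is given below: the pseudo-values of Eq. (10) are formed by deleting one observation at a time from the pooled sample, the Lagrange multiplier \(\lambda\) is found by a plain Newton iteration, and the statistic of Eq. (14) is calibrated against a \(\chi_{1}^{2}\) limit as justified by Theorem 2.1 below. The sketch reuses `u_statistic`, `x`, `y`, and `psi_t` from the previous example; the solver and the function names are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.stats import chi2

def jackknife_pseudovalues(x, y, psi_t):
    """Pseudo-values V_k = n U_n - (n-1) U_{n-1}^{(-k)} of Eq. (10), deleting one
    observation at a time from the pooled sample (the x's first, then the y's)."""
    n1, n2 = len(x), len(y)
    n = n1 + n2
    u_full = u_statistic(x, y, psi_t)
    v = np.empty(n)
    for k in range(n):
        if k < n1:
            u_del = u_statistic(np.delete(x, k), y, psi_t)
        else:
            u_del = u_statistic(x, np.delete(y, k - n1), psi_t)
        v[k] = n * u_full - (n - 1) * u_del
    return v

def el_log_ratio(g):
    """-2 log R for the constraint sum_k p_k g_k = 0 (Eqs. (13)-(14)), with the
    Lagrange multiplier found by Newton iteration; assumes 0 lies inside the
    convex hull of the g_k, so that 1 + lambda * g_k stays positive."""
    lam = 0.0
    for _ in range(100):
        denom = 1.0 + lam * g
        score = np.mean(g / denom)
        curv = -np.mean((g / denom) ** 2)
        step = score / curv
        lam -= step
        if abs(step) < 1e-10:
            break
    return 2.0 * np.sum(np.log1p(lam * g))

v = jackknife_pseudovalues(x, y, psi_t)   # under H_0, E V_k = 0
ell = el_log_ratio(v)                     # Eq. (14)
p_value = chi2.sf(ell, df=1)              # chi-square calibration (Theorem 2.1 below)
print(ell, p_value)
```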
Let \(h_{1,0}(x)=\mathbf{E}h(x,Y_{1})\), \(\sigma_{1,0}^{2}=Var(h_{1,0}(X_{1}))\), \(h_{0,1}(y)=\mathbf{E}h(X_{1},y)\), and \(\sigma_{0,1}^{2}=Var(h_{0,1}(Y_{1}))\). We have the following theorem for the JEL.
**Theorem 2.1**.: _Assume that_
1. \(E(X^{2})<\infty\)_, and_ \(E(Y^{2})<\infty\)__
2. \(\mathbf{E}h^{2}(X_{1},Y_{1})<\infty\)_,_ \(\sigma_{1,0}^{2}>0\)_, and_ \(\sigma_{0,1}^{2}>0\)__
3. \(n_{1}/n_{2}\longrightarrow r\)_, where_ \(0<r<\infty\)__
_For any given \(t=t_{0}\in(0,1)\), the limiting distribution of \(\ell(\theta(t_{0}))\) defined by (14) is a chi-square distribution with one degree of freedom,_
\[\ell(\theta(t_{0}))\longrightarrow\chi_{1}^{2},\quad\text{as}\ \min(n_{1},n_{2}) \longrightarrow\infty. \tag{15}\]
Proof.: The proof of Theorem 2.1 is given in Appendix A.
Further, Chen et al. (2008) proposed the AEL method by adding a pseudo-observation to the data set. This method bypasses the convex hull constraint and ensures a solution at any parameter point. By adopting Chen et al. (2008)'s idea, we extend the proposed JEL method by employing the adjusted jackknife empirical likelihood (AJEL) to examine the equality of two Lorenz curves. The AJEL for \(\theta(t)\) is defined as follows:
\[L^{\text{Adj}}(\theta(t))=\sup_{\mathbf{p}}\bigg{\{}\prod_{k=1}^{n+1}p_{k}^{ \text{Adj}}:\sum_{k=1}^{n+1}p_{k}^{\text{Adj}}=1,\sum_{k=1}^{n+1}p_{k}^{\text {Adj}}g_{k}^{\text{Adj}}(t)=0\bigg{\}}, \tag{16}\]
where \(g_{k}^{\text{Adj}}(t)=\widetilde{V}_{k}-\mathbf{E}\widetilde{V}_{k}\), \(k=1,\ldots,n\), and \(g_{n+1}^{\text{Adj}}(t)=-a_{n}\bar{g}_{n}(t)=-\frac{a_{n}}{n}\sum_{i=1}^{n}g_{i}^{\text{Adj}}(t)\). As recommended by Chen et al. (2008), \(a_{n}=\max\{1,\log(n)/2\}\). Using the Lagrange multiplier method, we can determine \(L^{\text{Adj}}(\theta(t))\) as follows.
\[p_{k}^{\text{Adj}}=\frac{1}{n+1}\bigg{\{}1+\lambda^{\text{Adj}}(t)g_{k}^{ \text{Adj}}(t)\bigg{\}}^{-1},\quad k=1,\ldots,n+1.\]
where \(\lambda^{\text{Adj}}\) is the solution to
\[\frac{1}{n+1}\sum_{k=1}^{n+1}\frac{g_{k}^{\text{Adj}}(t)}{1+\lambda^{\text{Adj }}(t)g_{k}^{\text{Adj}}(t)}=0.\]
Note that \(\prod_{k=1}^{n+1}p_{k}^{\text{Adj}}\), subject to \(\sum_{k=1}^{n+1}p_{k}^{\text{Adj}}=1\), attains its maximum \((n+1)^{-n-1}\) at \(p_{k}=(n+1)^{-1}\). Thus, the AJEL ratio for \(\theta(t)\) is given as
\[\mathcal{R}^{\text{Adj}}(\theta(t))=\prod_{k=1}^{n+1}(n+1)p_{k}^{\text{Adj}}= \prod_{k=1}^{n+1}\big{\{}1+\lambda^{\text{Adj}}(t)g_{k}^{\text{Adj}}(t)\big{\}} ^{-1} \tag{17}\]
Hence, the profile-adjusted jackknife empirical log-likelihood ratio for \(\theta(t)\) is
\[\ell^{\mathrm{Adj}}(\theta(t))=-2\log\mathcal{R}^{\mathrm{Adj}}(\theta(t))=2 \sum_{k=1}^{n+1}\log\{1+\lambda^{\mathrm{Adj}}(t)g_{k}^{\mathrm{Adj}}(t)\}. \tag{18}\]
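In code, the AJEL version only augments the pseudo-values with the extra point \(g_{n+1}^{\mathrm{Adj}}\) of Eq. (16) before rerunning the same empirical-likelihood computation; the sketch below reuses the hypothetical helper `el_log_ratio` and the pseudo-values `v` from the JEL sketch above.

```python
def ajel_log_ratio(g):
    """AJEL of Eqs. (16)-(18): append the pseudo-point g_{n+1} = -a_n * mean(g),
    with a_n = max(1, log(n)/2), then run the same EL computation."""
    n = len(g)
    a_n = max(1.0, np.log(n) / 2.0)
    g_adj = np.append(g, -a_n * np.mean(g))
    return el_log_ratio(g_adj)

ell_adj = ajel_log_ratio(v)               # compare with chi^2_1 (Theorem 2.2 below)
```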
**Theorem 2.2**.: _Under the same conditions of Theorem 2.1 and for any given \(t=t_{0}\in(0,1)\), the limiting distribution of \(\ell^{\mathrm{Adj}}(\theta(t_{0}))\) defined by (18) is a chi-square distribution with one degree of freedom,_
\[\ell^{\mathrm{Adj}}(\theta(t_{0}))\longrightarrow\chi_{1}^{2},\quad\text{as }n \longrightarrow\infty. \tag{19}\]
Proof.: The proof of Theorem 2.2 is given in Appendix A.
## 3 Simulation Study
In this section, we conduct a simulation study to evaluate the performance of the proposed testing methods. In our simulation analysis, we take into account Chi-Square, Exponential, and Half-Normal distributions as the overall distribution function \(F(x)\) because the majority of income distributions are positively skewed. Under the null hypothesis, we examine the distributions \(\chi_{4}^{2},\ Exp(4)\), and \(HN(1)\) using different sample sizes \((n_{1},n_{2})\) such as \((20,30),(40,50),(75,75)\), and \((100,100)\). We first assess the Type I error probabilities of the ADF, JEL and AJEL methods with a nominal level of \(\alpha=0.05\). Tables 1-3 provide a summary of the findings, including the probabilities of Type I errors (TE) and their corresponding standard errors (SE), whereas Figure 1 depicts the outcomes graphically. The JEL method appears to perform slightly better than or similarly to the AJEL method. For instance, when using the \(\chi_{4}^{2}\) distribution with sample sizes of (20, 30) at \(t=0.1\), the Type I error probability for the ADF method is 0.007 with a standard error of 0.0027, that for the JEL method is 0.027 with a standard error of 0.0051, and the AJEL method has a probability of 0.075 and a standard error of 0.0083. When using the \(\chi_{4}^{2}\) distribution with sample sizes of (20, 30), the AJEL method produces a Type I error rate that is slightly higher than the nominal level. The ADF method comes next, with the JEL method following. However, when testing for the \(Exp(4)\) distribution, the ADF method results in a Type I error rate that is much lower than the nominal level, and the test becomes more conservative for \(t>0.2\). When using the \(HN(1)\) distribution, the JEL method performs the best among the three methods, while the ADF method performs the worst for all sample sizes. The Type I error probabilities are slightly above the nominal level for small sample sizes, but improve for larger sample sizes and remain within an acceptable range.
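A compact Monte Carlo loop of the kind used to produce Tables 1-3 is sketched below. It reuses the hypothetical helpers `jackknife_pseudovalues` and `el_log_ratio` defined in the sketches of Section 2; the number of replications and the pooled-sample quantile estimate of \(\psi_{t}\) are illustrative choices rather than the exact settings behind the tables.

```python
import numpy as np
from scipy.stats import chi2

def type_one_error(sampler, n1, n2, t0, reps=1000, alpha=0.05, seed=0):
    """Fraction of replications in which the JEL test rejects H_0 at level alpha,
    with both samples drawn from the same distribution (so H_0 holds)."""
    rng = np.random.default_rng(seed)
    crit = chi2.ppf(1.0 - alpha, df=1)
    rejections = 0
    for _ in range(reps):
        x = sampler(rng, n1)
        y = sampler(rng, n2)
        psi_t = np.quantile(np.concatenate([x, y]), t0)  # illustrative quantile estimate
        v = jackknife_pseudovalues(x, y, psi_t)
        rejections += int(el_log_ratio(v) > crit)
    return rejections / reps

chi4 = lambda rng, n: rng.chisquare(df=4, size=n)        # both samples chi^2_4, as in Table 1
print(type_one_error(chi4, n1=20, n2=30, t0=0.5))
```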
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{ADF} & \multicolumn{3}{c}{JEL} & \multicolumn{3}{c}{AJEL} \\ \hline \((n_{1},n_{2})\) & \(t\) & TE & SE & TE & SE & TE & SE \\ \hline (20, 30) & 0.1 & 0.007 & 0.0027 & 0.027 & 0.0051 & 0.075 & 0.0083 \\ & 0.2 & 0.011 & 0.0033 & 0.019 & 0.0043 & 0.058 & 0.0074 \\ & 0.3 & 0.027 & 0.0051 & 0.018 & 0.0042 & 0.062 & 0.0076 \\ & 0.4 & 0.037 & 0.0060 & 0.017 & 0.0041 & 0.065 & 0.0078 \\ & 0.5 & 0.069 & 0.0080 & 0.017 & 0.0041 & 0.065 & 0.0078 \\ & 0.6 & 0.062 & 0.0076 & 0.059 & 0.0075 & 0.086 & 0.0089 \\ & 0.7 & 0.069 & 0.0080 & 0.052 & 0.0070 & 0.070 & 0.0081 \\ & 0.8 & 0.071 & 0.0081 & 0.046 & 0.0066 & 0.065 & 0.0078 \\ & 0.9 & 0.070 & 0.0081 & 0.038 & 0.0060 & 0.052 & 0.0075 \\ \hline (40,50) & 0.1 & 0.005 & 0.0023 & 0.016 & 0.0040 & 0.055 & 0.0072 \\ & 0.2 & 0.009 & 0.0030 & 0.019 & 0.0042 & 0.039 & 0.0061 \\ & 0.3 & 0.014 & 0.0037 & 0.010 & 0.0031 & 0.044 & 0.0065 \\ & 0.4 & 0.027 & 0.0051 & 0.013 & 0.0036 & 0.054 & 0.0071 \\ & 0.5 & 0.047 & 0.0067 & 0.022 & 0.0046 & 0.061 & 0.0076 \\ & 0.6 & 0.044 & 0.0065 & 0.049 & 0.0068 & 0.072 & 0.0082 \\ & 0.7 & 0.051 & 0.0070 & 0.050 & 0.0069 & 0.065 & 0.0078 \\ & 0.8 & 0.059 & 0.0075 & 0.042 & 0.0063 & 0.065 & 0.0078 \\ & 0.9 & 0.060 & 0.0075 & 0.043 & 0.0063 & 0.061 & 0.0076 \\ \hline (75,75) & 0.1 & 0.001 & 0.0012 & 0.010 & 0.0029 & 0.012 & 0.0034 \\ & 0.2 & 0.002 & 0.0014 & 0.011 & 0.0033 & 0.013 & 0.0039 \\ & 0.3 & 0.015 & 0.0038 & 0.019 & 0.0042 & 0.016 & 0.0040 \\ & 0.4 & 0.016 & 0.0040 & 0.012 & 0.0034 & 0.018 & 0.0048 \\ & 0.5 & 0.022 & 0.0046 & 0.023 & 0.0047 & 0.024 & 0.0051 \\ & 0.6 & 0.024 & 0.0048 & 0.037 & 0.0060 & 0.046 & 0.0073 \\ & 0.7 & 0.025 & 0.0049 & 0.031 & 0.0056 & 0.031 & 0.0056 \\ & 0.8 & 0.028 & 0.0052 & 0.030 & 0.0054 & 0.035 & 0.0071 \\ & 0.9 & 0.036 & 0.0059 & 0.032 & 0.0056 & 0.031 & 0.0056 \\ \hline (100, 100) & 0.1 & 0.002 & 0.0016 & 0.011 & 0.0030 & 0.038 & 0.0060 \\ & 0.2 & 0.006 & 0.0024 & 0.012 & 0.0034 & 0.040 & 0.0062 \\ & 0.3 & 0.014 & 0.0037 & 0.020 & 0.0044 & 0.048 & 0.0068 \\ & 0.4 & 0.024 & 0.0048 & 0.015 & 0.0038 & 0.046 & 0.0066 \\ & 0.5 & 0.033 & 0.0056 & 0.010 & 0.0031 & 0.044 & 0.0065 \\ & 0.6 & 0.036 & 0.0059 & 0.031 & 0.0055 & 0.046 & 0.0066 \\ & 0.7 & 0.038 & 0.0060 & 0.035 & 0.0058 & 0.046 & 0.0066 \\ & 0.8 & 0.047 & 0.0067 & 0.035 & 0.0058 & 0.046 & 0.0066 \\ & 0.9 & 0.045 & 0.0066 & 0.033 & 0.0057 & 0.045 & 0.0065 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Type I error (TE) and standard error (SE) comparison of ADF, JEL, and AJEL tests with nominal level \(\alpha=0.05\) when \(X,Y\sim\chi_{4}^{2}\)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{ADF} & \multicolumn{3}{c}{JEL} & \multicolumn{3}{c}{AJEL} \\ \hline \((n_{1},n_{2})\) & \(t\) & TE & SE & TE & SE & TE & SE \\ \hline (20,30) & 0.1 & 0.037 & 0.0060 & 0.081 & 0.0086 & 0.091 & 0.0091 \\ & 0.2 & 0.022 & 0.0046 & 0.047 & 0.0067 & 0.060 & 0.0075 \\ & 0.3 & 0.010 & 0.0031 & 0.074 & 0.0083 & 0.075 & 0.0083 \\ & 0.4 & 0.006 & 0.0024 & 0.061 & 0.0076 & 0.074 & 0.0083 \\ & 0.5 & 0.006 & 0.0024 & 0.060 & 0.0075 & 0.061 & 0.0076 \\ & 0.6 & 0.001 & 0.0010 & 0.096 & 0.0093 & 0.101 & 0.0095 \\ & 0.7 & 0.001 & 0.0010 & 0.088 & 0.0090 & 0.087 & 0.0089 \\ & 0.8 & 0.001 & 0.0010 & 0.065 & 0.0078 & 0.075 & 0.0083 \\ & 0.9 & 0.000 & 0.0000 & 0.062 & 0.0076 & 0.064 & 0.0078 \\ \hline (40,50) & 0.1 & 0.024 & 0.0049 & 0.060 & 0.0075 & 0.076 & 0.0084 \\ & 0.2 & 0.010 & 0.0031 & 0.043 & 0.0064 & 0.052 & 0.0070 \\ & 0.3 & 0.000 & 0.0000 & 0.048 & 0.0068 & 0.049 & 0.0068 \\ & 0.4 & 0.000 & 0.0000 & 0.045 & 0.0066 & 0.052 & 0.0070 \\ & 0.5 & 0.000 & 0.0000 & 0.057 & 0.0073 & 0.056 & 0.0073 \\ & 0.6 & 0.000 & 0.0000 & 0.068 & 0.0080 & 0.070 & 0.0081 \\ & 0.7 & 0.000 & 0.0000 & 0.065 & 0.0078 & 0.064 & 0.0077 \\ & 0.8 & 0.000 & 0.0000 & 0.067 & 0.0079 & 0.072 & 0.0082 \\ & 0.9 & 0.000 & 0.0000 & 0.059 & 0.0074 & 0.061 & 0.0075 \\ \hline (75,75) & 0.1 & 0.018 & 0.0043 & 0.062 & 0.0076 & 0.071 & 0.0081 \\ & 0.2 & 0.005 & 0.0022 & 0.050 & 0.0069 & 0.055 & 0.0072 \\ & 0.3 & 0.000 & 0.0000 & 0.049 & 0.0058 & 0.050 & 0.0059 \\ & 0.4 & 0.000 & 0.0000 & 0.051 & 0.0070 & 0.056 & 0.0073 \\ & 0.5 & 0.000 & 0.0000 & 0.056 & 0.0074 & 0.060 & 0.0076 \\ & 0.6 & 0.000 & 0.0000 & 0.069 & 0.0080 & 0.069 & 0.0080 \\ & 0.7 & 0.000 & 0.0000 & 0.052 & 0.0072 & 0.052 & 0.0072 \\ & 0.8 & 0.000 & 0.0000 & 0.054 & 0.0071 & 0.056 & 0.0073 \\ & 0.9 & 0.000 & 0.0000 & 0.052 & 0.0072 & 0.055 & 0.0074 \\ \hline (100, 100) & 0.1 & 0.012 & 0.0035 & 0.062 & 0.0076 & 0.071 & 0.0081 \\ & 0.2 & 0.006 & 0.0024 & 0.049 & 0.0068 & 0.055 & 0.0072 \\ & 0.3 & 0.001 & 0.0010 & 0.063 & 0.0077 & 0.061 & 0.0076 \\ & 0.4 & 0.001 & 0.0010 & 0.054 & 0.0071 & 0.059 & 0.0075 \\ & 0.5 & 0.000 & 0.0000 & 0.054 & 0.0071 & 0.051 & 0.0070 \\ & 0.6 & 0.000 & 0.0000 & 0.051 & 0.0070 & 0.051 & 0.0070 \\ & 0.7 & 0.000 & 0.0000 & 0.057 & 0.0073 & 0.057 & 0.0073 \\ & 0.8 & 0.000 & 0.0000 & 0.057 & 0.0073 & 0.057 & 0.0073 \\ & 0.9 & 0.000 & 0.0000 & 0.052 & 0.0072 & 0.052 & 0.0072 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Type I error and standard error comparison of ADF, JEL, and AJEL tests with nominal level \(\alpha=0.05\) when \(X,Y\sim Exp(4)\)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{ADF} & \multicolumn{3}{c}{JEL} & \multicolumn{3}{c}{AJEL} \\ \hline \((n_{1},n_{2})\) & \(t\) & TE & SE & TE & SE & TE & SE \\ \hline (20,30) & 0.1 & 0.068 & 0.0080 & 0.074 & 0.0083 & 0.094 & 0.0092 \\ & 0.2 & 0.076 & 0.0084 & 0.063 & 0.0077 & 0.065 & 0.0078 \\ & 0.3 & 0.084 & 0.0088 & 0.050 & 0.0069 & 0.047 & 0.0067 \\ & 0.4 & 0.087 & 0.0089 & 0.051 & 0.0070 & 0.052 & 0.0070 \\ & 0.5 & 0.129 & 0.0106 & 0.050 & 0.0069 & 0.056 & 0.0073 \\ & 0.6 & 0.115 & 0.0101 & 0.077 & 0.0084 & 0.090 & 0.0090 \\ & 0.7 & 0.099 & 0.0094 & 0.063 & 0.0077 & 0.069 & 0.0080 \\ & 0.8 & 0.108 & 0.0098 & 0.062 & 0.0076 & 0.078 & 0.0085 \\ & 0.9 & 0.103 & 0.0096 & 0.041 & 0.0063 & 0.071 & 0.0081 \\ \hline (40,50) & 0.1 & 0.062 & 0.0076 & 0.056 & 0.0073 & 0.066 & 0.0079 \\ & 0.2 & 0.066 & 0.0079 & 0.037 & 0.0060 & 0.042 & 0.0063 \\ & 0.3 & 0.068 & 0.0080 & 0.049 & 0.0068 & 0.054 & 0.0071 \\ & 0.4 & 0.081 & 0.0086 & 0.065 & 0.0078 & 0.058 & 0.0074 \\ & 0.5 & 0.101 & 0.0095 & 0.049 & 0.0068 & 0.059 & 0.0075 \\ & 0.6 & 0.094 & 0.0092 & 0.080 & 0.0086 & 0.087 & 0.0089 \\ & 0.7 & 0.096 & 0.0093 & 0.066 & 0.0079 & 0.064 & 0.0077 \\ & 0.8 & 0.108 & 0.0098 & 0.053 & 0.0071 & 0.070 & 0.0081 \\ & 0.9 & 0.098 & 0.0094 & 0.034 & 0.0057 & 0.055 & 0.0072 \\ \hline (75,75) & 0.1 & 0.038 & 0.0061 & 0.051 & 0.0070 & 0.053 & 0.0071 \\ & 0.2 & 0.045 & 0.0066 & 0.055 & 0.0072 & 0.054 & 0.0071 \\ & 0.3 & 0.059 & 0.0075 & 0.051 & 0.0058 & 0.058 & 0.0063 \\ & 0.4 & 0.058 & 0.0074 & 0.053 & 0.0071 & 0.053 & 0.0071 \\ & 0.5 & 0.077 & 0.0084 & 0.044 & 0.0048 & 0.048 & 0.0052 \\ & 0.6 & 0.079 & 0.0085 & 0.057 & 0.0073 & 0.065 & 0.0078 \\ & 0.7 & 0.081 & 0.0086 & 0.045 & 0.0050 & 0.048 & 0.0052 \\ & 0.8 & 0.091 & 0.0091 & 0.055 & 0.0072 & 0.061 & 0.0076 \\ & 0.9 & 0.098 & 0.0094 & 0.036 & 0.0059 & 0.062 & 0.0076 \\ \hline (100, 100) & 0.1 & 0.049 & 0.0069 & 0.046 & 0.0066 & 0.047 & 0.0067 \\ & 0.2 & 0.055 & 0.0072 & 0.071 & 0.0081 & 0.064 & 0.0077 \\ & 0.3 & 0.060 & 0.0075 & 0.054 & 0.0071 & 0.056 & 0.0073 \\ & 0.4 & 0.067 & 0.0079 & 0.060 & 0.0075 & 0.059 & 0.0075 \\ & 0.5 & 0.072 & 0.0082 & 0.048 & 0.0068 & 0.053 & 0.0071 \\ & 0.6 & 0.078 & 0.0085 & 0.065 & 0.0078 & 0.071 & 0.0081 \\ & 0.7 & 0.082 & 0.0087 & 0.066 & 0.0079 & 0.066 & 0.0079 \\ & 0.8 & 0.087 & 0.0089 & 0.063 & 0.0077 & 0.072 & 0.0082 \\ & 0.9 & 0.086 & 0.0089 & 0.039 & 0.0061 & 0.057 & 0.0073 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Type I error and standard error comparison of ADF, JEL, and AJEL tests with nominal level \(\alpha=0.05\) when \(X,Y\sim HN(1)\)
Figure 1: Type I error comparison for ADF, JEL, and AJEL methods for different distributions, sample sizes, and values of \(t\)
Next, we conduct a power analysis for the ADF, JEL, and AJEL methods. Figure 2 displays the generalized Lorenz curves for the Chi-Square, Exponential, and Half-Normal distributions under two sets of parameters. The difference between the generalized Lorenz curves of \(\chi^{2}(4)\) and \(\chi^{2}(5.5)\) increases as \(t\) changes from 0 to 0.5, then the difference decreases as \(t\) changes from 0.5 to 1. The difference between the generalized Lorenz curves for \(Exp(2)\) and \(Exp(4)\) increases significantly as \(t\) changes from 0 to 1, while the difference between the generalized Lorenz curves for \(HN(1)\) and \(HN(1.5)\) also increases, but not as significantly as in the case of the Exponential distributions. Tables 4-6 present a summary of the outcomes, depicting the power and standard errors, whereas Figure 3 illustrates the results graphically. As expected, the power for the Chi-Square distributions tends to increase as the value of \(t\) ranges from 0.1 to 0.5, followed by a slight drop as \(t\) ranges from 0.5 to 0.9. The AJEL method exhibits the best power among the three methods, while the ADF method is the weakest. Moreover, for the Exponential distributions, the power of the JEL and AJEL methods tends to increase as \(t\) ranges from 0 to 0.9, while the power of the ADF method tends to decrease as the value of \(t\) changes from 0 to 0.9. When considering the Half-Normal distributions, all three methods show a similar pattern, with the ADF method being the most effective when \(t\leq 0.5\), and the JEL and AJEL methods being superior when \(t>0.5\). The increase in power is more pronounced for the Exponential distributions than for the Half-Normal distributions in the case of the JEL and AJEL methods. In all three cases, the AJEL method outperforms the JEL method slightly.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & \multicolumn{3}{c}{ADF} & \multicolumn{3}{c}{JEL} & \multicolumn{3}{c}{AJEL} \\ \hline \((n_{1},n_{2})\) & \(t\) & Power & SE & Power & SE & Power & SE \\ \hline (20,30) & 0.1 & 0.128 & 0.0106 & 0.357 & 0.0152 & 0.372 & 0.0153 \\ & 0.2 & 0.190 & 0.0124 & 0.400 & 0.0155 & 0.415 & 0.0156 \\ & 0.3 & 0.330 & 0.0149 & 0.423 & 0.0156 & 0.441 & 0.0157 \\ & 0.4 & 0.360 & 0.0152 & 0.454 & 0.0157 & 0.471 & 0.0158 \\ & 0.5 & 0.454 & 0.0157 & 0.443 & 0.0157 & 0.461 & 0.0158 \\ & 0.6 & 0.475 & 0.0158 & 0.834 & 0.0118 & 0.840 & 0.0116 \\ & 0.7 & 0.439 & 0.0157 & 0.771 & 0.0133 & 0.781 & 0.0131 \\ & 0.8 & 0.450 & 0.0157 & 0.717 & 0.0142 & 0.725 & 0.0141 \\ & 0.9 & 0.488 & 0.0158 & 0.594 & 0.0155 & 0.610 & 0.0154 \\ \hline (40,50) & 0.1 & 0.182 & 0.0122 & 0.483 & 0.0158 & 0.497 & 0.0158 \\ & 0.2 & 0.369 & 0.0153 & 0.595 & 0.0155 & 0.607 & 0.0154 \\ & 0.3 & 0.508 & 0.0158 & 0.660 & 0.0150 & 0.673 & 0.0148 \\ & 0.4 & 0.578 & 0.0156 & 0.664 & 0.0149 & 0.676 & 0.0148 \\ & 0.5 & 0.657 & 0.0150 & 0.644 & 0.0151 & 0.654 & 0.0150 \\ & 0.6 & 0.646 & 0.0151 & 0.935 & 0.0078 & 0.936 & 0.0077 \\ & 0.7 & 0.655 & 0.0150 & 0.914 & 0.0089 & 0.915 & 0.0088 \\ & 0.8 & 0.670 & 0.0149 & 0.878 & 0.0103 & 0.883 & 0.0102 \\ & 0.9 & 0.694 & 0.0146 & 0.783 & 0.0130 & 0.793 & 0.0128 \\ \hline (75,75) & 0.1 & 0.438 & 0.0157 & 0.688 & 0.0147 & 0.695 & 0.0146 \\ & 0.2 & 0.595 & 0.0155 & 0.811 & 0.0124 & 0.820 & 0.0036 \\ & 0.3 & 0.757 & 0.0136 & 0.986 & 0.0037 & 0.987 & 0.0036 \\ & 0.4 & 0.790 & 0.0129 & 0.869 & 0.0107 & 0.872 & 0.0106 \\ & 0.5 & 0.841 & 0.0116 & 0.983 & 0.0041 & 0.983 & 0.0041 \\ & 0.6 & 0.852 & 0.0112 & 0.979 & 0.0045 & 0.980 & 0.0044 \\ & 0.7 & 0.856 & 0.0111 & 0.791 & 0.0129 & 0.797 & 0.0127 \\ & 0.8 & 0.872 & 0.0106 & 0.954 & 0.0066 & 0.957 & 0.0064 \\ & 0.9 & 0.875 & 0.0105 & 0.927 & 0.0082 & 0.928 & 0.0082 \\ \hline (100,100) & 0.1 & 0.625 & 0.0153 & 0.809 & 0.0124 & 0.813 & 0.0123 \\ & 0.2 & 0.786 & 0.0130 & 0.899 & 0.0095 & 0.901 & 0.0094 \\ & 0.3 & 0.871 & 0.0106 & 0.931 & 0.0080 & 0.933 & 0.0079 \\ & 0.4 & 0.923 & 0.0084 & 0.932 & 0.0080 & 0.933 & 0.0079 \\ & 0.5 & 0.932 & 0.0080 & 0.936 & 0.0077 & 0.94 & 0.0075 \\ & 0.6 & 0.933 & 0.0079 & 0.993 & 0.0026 & 0.993 & 0.0026 \\ & 0.7 & 0.942 & 0.0074 & 0.991 & 0.0030 & 0.991 & 0.0030 \\ & 0.8 & 0.951 & 0.0068 & 0.984 & 0.0040 & 0.986 & 0.0037 \\ & 0.9 & 0.953 & 0.0067 & 0.972 & 0.0052 & 0.973 & 0.0051 \\ \hline \end{tabular}
\end{table}
Table 4: Power comparison of ADF, JEL, and AJEL tests with nominal level \(\alpha=0.05\) when \(X\sim\chi_{4}^{2}\) and \(Y\sim\chi_{5.5}^{2}\)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & \multicolumn{3}{c}{ADF} & \multicolumn{3}{c}{JEL} & \multicolumn{3}{c}{AJEL} \\ \hline \((n_{1},n_{2})\) & \(t\) & Power & SE & Power & SE & Power & SE \\ \hline \((20,30)\) & 0.1 & 0.695 & 0.0146 & 0.363 & 0.0152 & 0.382 & 0.0154 \\ & 0.2 & 0.678 & 0.0148 & 0.463 & 0.0158 & 0.478 & 0.0158 \\ & 0.3 & 0.520 & 0.0158 & 0.551 & 0.0157 & 0.566 & 0.0157 \\ & 0.4 & 0.417 & 0.0156 & 0.613 & 0.0154 & 0.626 & 0.0153 \\ & 0.5 & 0.418 & 0.0156 & 0.664 & 0.0149 & 0.682 & 0.0147 \\ & 0.6 & 0.367 & 0.0152 & 0.893 & 0.0098 & 0.901 & 0.0094 \\ & 0.7 & 0.295 & 0.0144 & 0.884 & 0.0101 & 0.894 & 0.0097 \\ & 0.8 & 0.284 & 0.0143 & 0.868 & 0.0107 & 0.875 & 0.0105 \\ & 0.9 & 0.238 & 0.0135 & 0.834 & 0.0118 & 0.846 & 0.0114 \\ \hline \((40,50)\) & 0.1 & 0.835 & 0.0117 & 0.471 & 0.0158 & 0.485 & 0.0158 \\ & 0.2 & 0.812 & 0.0124 & 0.606 & 0.0155 & 0.615 & 0.0154 \\ & 0.3 & 0.582 & 0.0156 & 0.724 & 0.0141 & 0.730 & 0.0140 \\ & 0.4 & 0.492 & 0.0158 & 0.792 & 0.0128 & 0.803 & 0.0126 \\ & 0.5 & 0.470 & 0.0158 & 0.837 & 0.0117 & 0.851 & 0.0113 \\ & 0.6 & 0.368 & 0.0153 & 0.969 & 0.0055 & 0.971 & 0.0053 \\ & 0.7 & 0.324 & 0.0148 & 0.971 & 0.0053 & 0.972 & 0.0052 \\ & 0.8 & 0.317 & 0.0147 & 0.961 & 0.0061 & 0.966 & 0.0057 \\ & 0.9 & 0.290 & 0.0143 & 0.958 & 0.0063 & 0.960 & 0.0062 \\ \hline \((75,75)\) & 0.1 & 0.901 & 0.0094 & 0.616 & 0.0154 & 0.623 & 0.0153 \\ & 0.2 & 0.886 & 0.0101 & 0.787 & 0.0129 & 0.791 & 0.0129 \\ & 0.3 & 0.719 & 0.0142 & 0.975 & 0.0049 & 0.976 & 0.0048 \\ & 0.4 & 0.586 & 0.0156 & 0.930 & 0.0081 & 0.930 & 0.0081 \\ & 0.5 & 0.546 & 0.0157 & 0.993 & 0.0026 & 0.993 & 0.0026 \\ & 0.6 & 0.440 & 0.0157 & 0.995 & 0.0022 & 0.995 & 0.0022 \\ & 0.7 & 0.408 & 0.0155 & 0.987 & 0.0036 & 0.988 & 0.0034 \\ & 0.8 & 0.371 & 0.0153 & 0.995 & 0.0022 & 0.995 & 0.0022 \\ & 0.9 & 0.348 & 0.0151 & 0.994 & 0.0024 & 0.995 & 0.0022 \\ \hline \((100,100)\) & 0.1 & 0.946 & 0.0071 & 0.661 & 0.0150 & 0.667 & 0.0149 \\ & 0.2 & 0.930 & 0.0081 & 0.851 & 0.0113 & 0.856 & 0.0111 \\ & 0.3 & 0.717 & 0.0142 & 0.927 & 0.0082 & 0.928 & 0.0082 \\ & 0.4 & 0.629 & 0.0153 & 0.965 & 0.0058 & 0.968 & 0.0056 \\ & 0.5 & 0.551 & 0.0157 & 0.984 & 0.0040 & 0.985 & 0.0038 \\ & 0.6 & 0.483 & 0.0158 & 1.000 & 0.0000 & 1.000 & 0.0000 \\ & 0.7 & 0.429 & 0.0157 & 1.000 & 0.0000 & 1.000 & 0.0010 \\ & 0.8 & 0.419 & 0.0156 & 0.999 & 0.0010 & 0.999 & 0.0010 \\ & 0.9 & 0.355 & 0.0151 & 0.999 & 0.0010 & 0.999 & 0.0010 \\ \hline \end{tabular}
\end{table}
Table 5: Power comparison of ADF, JEL, and AJEL tests with nominal level \(\alpha=0.05\) when \(X\sim Exp(4)\) and \(Y\sim Exp(2)\)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & \multicolumn{3}{c}{ADF} & \multicolumn{3}{c}{JEL} & \multicolumn{3}{c}{AJEL} \\ \hline \((n_{1},n_{2})\) & \(t\) & Power & SE & Power & SE & Power & SE \\ \hline \((20,30)\) & 0.1 & 0.415 & 0.0156 & 0.260 & 0.0139 & 0.274 & 0.0141 \\ & 0.2 & 0.438 & 0.0157 & 0.292 & 0.0144 & 0.303 & 0.0145 \\ & 0.3 & 0.479 & 0.0158 & 0.309 & 0.0146 & 0.327 & 0.0148 \\ & 0.4 & 0.485 & 0.0158 & 0.373 & 0.0153 & 0.390 & 0.0154 \\ & 0.5 & 0.541 & 0.0158 & 0.377 & 0.0153 & 0.395 & 0.0155 \\ & 0.6 & 0.559 & 0.0157 & 0.721 & 0.0142 & 0.737 & 0.0139 \\ & 0.7 & 0.520 & 0.0158 & 0.694 & 0.0146 & 0.716 & 0.0143 \\ & 0.8 & 0.537 & 0.0158 & 0.670 & 0.0149 & 0.688 & 0.0147 \\ & 0.9 & 0.548 & 0.0157 & 0.597 & 0.0155 & 0.616 & 0.0154 \\ \hline \((40,50)\) & 0.1 & 0.654 & 0.0150 & 0.312 & 0.0147 & 0.330 & 0.0149 \\ & 0.2 & 0.661 & 0.0150 & 0.393 & 0.0154 & 0.400 & 0.0155 \\ & 0.3 & 0.676 & 0.0148 & 0.455 & 0.0157 & 0.468 & 0.0158 \\ & 0.4 & 0.690 & 0.0146 & 0.544 & 0.0158 & 0.561 & 0.0157 \\ & 0.5 & 0.725 & 0.0141 & 0.577 & 0.0156 & 0.589 & 0.0156 \\ & 0.6 & 0.713 & 0.0143 & 0.838 & 0.0117 & 0.845 & 0.0114 \\ & 0.7 & 0.723 & 0.0142 & 0.840 & 0.0116 & 0.847 & 0.0114 \\ & 0.8 & 0.733 & 0.0140 & 0.841 & 0.0116 & 0.847 & 0.0114 \\ & 0.9 & 0.746 & 0.0138 & 0.809 & 0.0124 & 0.816 & 0.0123 \\ \hline \((75,75)\) & 0.1 & 0.839 & 0.0116 & 0.381 & 0.0154 & 0.390 & 0.0154 \\ & 0.2 & 0.843 & 0.0115 & 0.508 & 0.0158 & 0.515 & 0.0158 \\ & 0.3 & 0.853 & 0.0112 & 0.896 & 0.0097 & 0.897 & 0.0096 \\ & 0.4 & 0.866 & 0.0108 & 0.689 & 0.0146 & 0.700 & 0.0145 \\ & 0.5 & 0.877 & 0.0104 & 0.912 & 0.0090 & 0.913 & 0.0089 \\ & 0.6 & 0.876 & 0.0104 & 0.933 & 0.0079 & 0.936 & 0.0077 \\ & 0.7 & 0.882 & 0.0102 & 0.846 & 0.0114 & 0.851 & 0.0113 \\ & 0.8 & 0.891 & 0.0099 & 0.932 & 0.0080 & 0.934 & 0.0079 \\ & 0.9 & 0.892 & 0.0098 & 0.929 & 0.0081 & 0.931 & 0.0080 \\ \hline \((100,100)\) & 0.1 & 0.938 & 0.0076 & 0.390 & 0.0155 & 0.400 & 0.0155 \\ & 0.2 & 0.940 & 0.0075 & 0.554 & 0.0157 & 0.561 & 0.0157 \\ & 0.3 & 0.941 & 0.0075 & 0.691 & 0.0146 & 0.695 & 0.0146 \\ & 0.4 & 0.944 & 0.0073 & 0.787 & 0.0129 & 0.792 & 0.0128 \\ & 0.5 & 0.951 & 0.0068 & 0.837 & 0.0117 & 0.839 & 0.0116 \\ & 0.6 & 0.953 & 0.0067 & 0.958 & 0.0063 & 0.958 & 0.0063 \\ & 0.7 & 0.952 & 0.0068 & 0.969 & 0.0055 & 0.970 & 0.0054 \\ & 0.8 & 0.957 & 0.0064 & 0.973 & 0.0051 & 0.973 & 0.0051 \\ & 0.9 & 0.958 & 0.0063 & 0.972 & 0.0052 & 0.975 & 0.0049 \\ \hline \end{tabular}
\end{table}
Table 6: Power comparison of ADF, JEL, and AJEL tests with nominal level \(\alpha=0.05\) when \(X\sim HN(1)\) and \(Y\sim HN(1.5)\)
Figure 3: Power comparison for ADF, JEL, and AJEL methods for different distributions, sample sizes, and values of \(t\)
## 4 Applications
In this section, we use the proposed methods to evaluate the equality of the generalized Lorenz curves for various subgroups of employees of California State University (CSU) and University of California (UC) systems in 2021.1 The 2021 data comprises 105,414 records of salaries for CSU and 299,448 records of salaries for UC. The data is anonymous but grouped by employer name and type of position.2 For the purpose of this analysis, we considered testing the hypothesis defined in equation (7) at a significance level of 5% for the following three scenarios. In each scenario, we apply the proposed testing procedures for \(t=0.0,0.2,0.4,0.6,0.8,1.0\), and obtain the corresponding test statistics and p-values.
Footnote 1: The most recent data was obtained and is available on the State Controller’s Office website at [https://publicpay.ca.gov/Reports/Explore.aspx](https://publicpay.ca.gov/Reports/Explore.aspx).
Footnote 2: To identify CSU instructional faculty, we filtered the data set based on the following keywords: “Instructional Faculty”, “Teaching Associate”, “Visiting Faculty”, “Lecturer”, “Academic-Related”, “Department Chair”. To identify UC instructional faculty, we filtered the data set based on the following keywords: “Assoc Prof”, “Assoc Adj”, “Asst Adj”, “Asst Prof”, “Prof In”, “VIS Prof”, “Adj Instr”, “Lect”, “Grad”, “Adj”. All other employees that didn’t possess the listed keywords in the description of their position were identified as non-teaching staff. Further, log-transformed data was used to avoid the unnecessary computational burden.
### Using complete data to compare income distributions of faculty at CSU Monterey Bay and CSU San Bernardino
In this scenario, we examine the salaries of all instructional faculty from CSU Monterey Bay and CSU San Bernardino. Filtering data based on the employer name and the type of position resulted in obtaining a total of 546 records for CSU Monterey Bay faculty salaries and 1,265 records for CSU San Bernardino faculty. These two institutions were chosen because their Lorenz and generalized Lorenz curves appear to be very similar as shown in Figure 4.
Table 7 shows the computed test statistics and p-values for the ADF, JEL, and AJEL methods at values of \(t=0.0,0.2,0.4,0.6,0.8,1.0\). It is observed that both proposed methods reject the null hypothesis at a 5% significance level for \(t=0.2\) and \(t=0.4\), indicating sufficient evidence to conclude that the generalized Lorenz curves considered are significantly different for the 20th and 40th percentiles. However, for \(t=0.0\), \(t=0.6\), \(t=0.8\), and \(t=1.0\), the p-values obtained by both proposed methods are greater than 0.05, implying that we do not have enough evidence to infer significant differences in the considered generalized Lorenz curves for the 0th, 60th, 80th, and 100th percentiles. These mixed results can be attributed to the multiple intersections of the considered generalized Lorenz curves.
### Using complete data to compare income distributions of faculty at CSU San Bernardino and CSU San Francisco
In this scenario, we examine the salaries of all instructional faculty from CSU San Bernardino and CSU San Francisco. Filtering data based on the employer name and the type of position resulted in obtaining a total of 1,265 records for CSU San Bernardino faculty salaries and 2,294 records for CSU San Francisco faculty. These two institutions were chosen because their Lorenz and generalized Lorenz curves appear to be very distinct as shown in Figure 5.
Table 8 presents the computed test statistics and p-values for the ADF, JEL, and AJEL methods at \(t=0.0,0.2,0.4,0.6,0.8,1.0\). The results indicate that both proposed methods reject the null hypothesis for all values of \(t\) except \(t=0\), implying sufficient evidence to conclude that the two generalized Lorenz curves are significantly different for the 20th, 40th, 60th, 80th, and 100th
Figure 5: (a) Lorenz Curves and (b) Generalized Lorenz Curves for salaries of CSU San Bernardino and CSU San Francisco faculty
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \multicolumn{8}{c}{\(t\)} \\ \hline Method & Value & 0.0 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\ \hline ADF & Test statistic & - & 17.3485 & 20.1831 & 23.0600 & 24.9908 & 28.8444 \\ & p-value & - & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline JEL & Test statistic & 1.5821 & 217.8804 & 942.6197 & 250.0000 & 30.2795 & 20.0987 \\ & p-value & 0.2085 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline AJEL & Test statistic & 1.5860 & 218.2452 & 943.7490 & 250.0000 & 30.3477 & 20.1438 \\ & p-value & 0.2079 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline \end{tabular}
\end{table}
Table 8: Test statistic and p-value
percentiles at a 5% significance level. However, for \(t=0.0\), both proposed methods yield p-values greater than 0.05, suggesting insufficient evidence to infer significant differences in the considered generalized Lorenz curves for the 0th percentile. These findings can be explained by the fact that the minimum salaries for both institutions are nearly identical, but the difference between the institutions increases as \(t\) increases.
### Using incomplete data to compare income distributions of all faculty at CSU and UC
In this scenario, we examine the instructional faculty salaries across all CSU and UC institutions. Filtering data based on the type of position resulted in obtaining a total of 34,927 records for CSU faculty salaries and 18,104 records for UC faculty salaries. The Lorenz and generalized Lorenz curves are shown in Figure 6.
Since both data sets are quite large, we decided to apply the proposed procedures to random samples drawn from them. As the empirical likelihood approach is known to be effective for relatively small samples, we selected sample sizes of \(n_{1}=n_{2}=100\). Given that the difference between the generalized Lorenz curves is noticeable, we anticipate that both procedures will be capable of detecting the difference even with such modest sample sizes.
Figure 6: (a) Lorenz curves and (b) generalized Lorenz curves for salaries of CSU and UC faculty
Table 9 displays the test statistics and p-values computed for ADF, JEL, and AJEL methods at \(t=0.0,0.2,0.4,0.6,0.8\), and \(1.0\). The results indicate that all three methods reject the null hypothesis for all values of \(t\) except \(t=0\) at a 5% significance level, providing sufficient evidence to conclude that the generalized Lorenz curves are significantly different for the 20th, 40th, 60th, 80th, and 100th percentiles. Conversely, for \(t=0.0\), the p-values exceed 0.05, indicating insufficient evidence to conclude that the curves are significantly different at the 0th percentile. Similarly to the previous scenario, these findings can be attributed to the fact that the minimum salaries are nearly identical for both institutions, but the difference between them increases as \(t\) increases. However, these findings are noteworthy because they were obtained using relatively small samples.
### Using complete data to compare income distributions of all faculty at CSUSB in Years 2009 and 2020
In this scenario, we examine the salaries of all instructional faculty at CSUSB in the years 2009 and 2020. Filtering data based on the employer name and the type of position resulted in obtaining a total of 1,111 records for 2009 and 1,331 records for 2020. The Lorenz and generalized Lorenz curves are graphed in Figure 7.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & & & & & & & \\ \hline Method & Value & 0.0 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\ \hline ADF & Test statistic & - & 14.9673 & 17.3378 & 18.9842 & 19.8941 & 24.9073 \\ & p-value & - & 0.0001 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline JEL & Test statistic & 0.6586 & 48.6775 & 20.5069 & 66.4861 & 38.4036 & 25.2639 \\ & p-value & 0.4171 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline AJEL & Test statistic & 0.6833 & 49.7646 & 20.9485 & 67.5089 & 39.0850 & 25.7541 \\ & p-value & 0.4084 & 0.0000 & 0.0000 & 0.0000 & 0.0000 & 0.0000 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Test statistic and p-value
## 5 Conclusions
In this paper, we developed two non-parametric JEL-based methods using a \(U\)-statistic to test the equality of two generalized Lorenz curves. The limiting distribution of the likelihood ratios is shown to be a chi-squared distribution with one degree of freedom. Simulation studies with different distribution types and sample sizes illustrate that both methods show improved Type I error probability and power as the sample size increases. In general, the AJEL resulted in higher test powers in comparison to JEL across all distributions, sample sizes, and values of \(t\), except for a few cases. However, the AJEL method has a higher Type I error rate than JEL, though still within an acceptable range. Moreover, these methods exhibit robustness across a range of scenarios, outperforming the existing ADF method. The proposed testing methods are applied to four distinct scenarios using CSU and UC salary data. The results are found to be acceptable for both complete and incomplete data.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline & & & & \(t\) & & & \\ \hline Method & Value & 0.0 & 0.2 & 0.4 & 0.6 & 0.8 & 1.0 \\ \hline ADF & Test statistic & - & 1.5990 & 2.2190 & 2.2614 & 2.5820 & 2.5776 \\ & p-value & - & 0.2060 & 0.1363 & 0.1326 & 0.1081 & 0.1084 \\ \hline JEL & Test statistic & 0.1719 & 52.4042 & 29.2928 & 0.1161 & 0.0624 & 0.1486 \\ & p-value & 0.6785 & 0.0000 & 0.0000 & 0.7333 & 0.8028 & 0.6999 \\ \hline AJEL & Test statistic & 0.1725 & 52.5651 & 29.3841 & 0.1165 & 0.0626 & 0.1490 \\ & p-value & 0.6779 & 0.0000 & 0.0000 & 0.7329 & 0.8025 & 0.6994 \\ \hline \end{tabular}
\end{table}
Table 10: Test statistic and p-value
Figure 7: (a) Lorenz curves and (b) generalized Lorenz curves for salaries of CSU faculty in years 2009 and 2020
This suggests that the proposed testing methods can be applied to a variety of data types, providing satisfactory results regardless of data completeness.
## Appendix A Proofs of Theorems
Proof.: **Theorem 2.1**
Let \(n_{1}\leq n_{2}\). Following Arvesen (1969), applying the jackknife procedure to the two-sample \(U\)-statistic \(U_{n_{1},n_{2}}\), we have
\[\begin{split}& V_{i,0}=n_{1}U_{n_{1},n_{2}}-(n_{1}-1)U_{n_{1}-1,n_{2}}^{-i,0},\quad i=1,\cdots,n_{1}\\ & V_{0,j}=n_{2}U_{n_{1},n_{2}}-(n_{2}-1)U_{n_{1},n_{2}-1}^{0,-j},\quad j=1,\cdots,n_{2}\end{split} \tag{20}\]
Further, they proposed a consistent estimator of \(Var(U_{n_{1},n_{2}})\) given as
\[\widehat{Var}_{\text{Jack}}(U_{n_{1},n_{2}})=\frac{1}{n_{1}(n_{1}-1)}\sum_{i=1 }^{n_{1}}\Big{(}V_{i,0}-\bar{V}_{.0}\Big{)}^{2}+\frac{1}{n_{2}(n_{2}-1)}\sum_{ j=1}^{n_{2}}\Big{(}V_{0,j}-\bar{V}_{0.}\Big{)}^{2},\]
where \(\bar{V}_{.0}\) and \(\bar{V}_{0.}\) are the means of \(V_{i,0}\) and \(V_{0,j}\) respectively.
**Lemma A.1**.: _(See Arvesen (1969))_
1. _Assume that_ \(E|h(X,Y)|<\infty\)_, then_ \(U_{n_{1},n_{2}}\xrightarrow{a.s.}\theta\) _as_ \(n\longrightarrow\infty\)_._
2. _Assume that_ \(Eh^{2}(X,Y)<\infty,\sigma_{1,0}^{2}>0\) _and_ \(\sigma_{0,1}^{2}>0\)_, let_ \(S_{n_{1},n_{2}}^{2}=\frac{1}{n_{1}}\sigma_{1,0}^{2}+\frac{1}{n_{2}}\sigma_{0, 1}^{2}\)_, then_ \[\frac{U_{n_{1},n_{2}}-\theta}{S_{n_{1},n_{2}}}\xrightarrow{d}N(0,1)\quad\text {and}\quad\widehat{Var}_{\text{Jack}}(U_{n_{1},n_{2}})-S_{n_{1},n_{2}}^{2}=o_{ p}(n_{1}^{-1})\quad\text{as}\quad n_{1}\longrightarrow\infty.\]
In order to apply JEL, using (20), we can determine
\[V_{i,0}=\frac{1}{n_{2}}I(X_{i}\leq\psi_{t})\sum_{r=1}^{n_{2}}\big{(}X_{i}-Y_{r }\big{)}\,I(Y_{r}\leq\psi_{t}),\quad i=1,\ldots,n_{1}\]
and
\[V_{0,j}=\frac{1}{n_{1}}I(Y_{j}\leq\psi_{t})\sum_{s=1}^{n_{1}}\big{(}X_{s}-Y_{j }\big{)}\,I(X_{s}\leq\psi_{t}),\quad j=1,\ldots,n_{2}\]
Let \(n=n_{1}+n_{2}\). Consider
\[U_{n}=\frac{1}{n_{1}n_{2}}\sum_{1\leq i\leq n_{1}<j\leq n}\big{(}X_{i}-Y_{j-n_{1}}\big{)}I(X_{i}\leq\psi_{t})I(Y_{j-n_{1}}\leq\psi_{t})\]
and
\[\begin{split} U_{n-1}^{-i}&=U_{n-1}(Z_{1},Z_{2},\ldots,Z_{i-1},Z_{i+1},\ldots,Z_{n})\\ &=\binom{n-1}{2}^{-1}\frac{1}{n_{1}n_{2}}\sum_{\begin{subarray}{c}1\leq r<s\leq n\\ r,s\neq i\end{subarray}}\big{(}X_{r}-Y_{s-n_{1}}\big{)}I(X_{r}\leq\psi_{t})I(Y_{s-n_{1}}\leq\psi_{t})\\ &=\begin{cases}\frac{n}{(n-2)}\bigg{[}U_{n}-\frac{1}{n_{1}n_{2}}\sum_{i\leq n_{1}<j}\big{(}X_{i}-Y_{j-n_{1}}\big{)}I(X_{i}\leq\psi_{t})I(Y_{j-n_{1}}\leq\psi_{t})\bigg{]},&1\leq i\leq n_{1}\\ \frac{n}{(n-2)}\bigg{[}U_{n}-\frac{1}{n_{1}n_{2}}\sum_{j\leq n_{1}<i}\big{(}X_{j}-Y_{i-n_{1}}\big{)}I(X_{j}\leq\psi_{t})I(Y_{i-n_{1}}\leq\psi_{t})\bigg{]},&n_{1}<i\leq n\end{cases}\end{split}\]
It can be seen that
\[\frac{1}{n_{1}n_{2}}\sum_{i\leq n_{1}<j}(X_{i}-Y_{j-n_{1}})I(X_{i}\leq\psi_{t})I(Y_{j-n_{1}}\leq\psi_{t})=\frac{1}{n_{1}}V_{i,0}\,\quad 1\leq i\leq n_{1}\]
and
\[\frac{1}{n_{1}n_{2}}\sum_{j\leq n_{1}<i}(X_{j}-Y_{i-n_{1}})I(X_{j}\leq\psi_{t})I(Y_{i-n_{1}}\leq\psi_{t})=\frac{1}{n_{2}}V_{0,i-n_{1}}\,\quad n_{1}<i\leq n\]
Now, considering the jackknife pseudo-values given in (10), for \(1\leq k\leq n\), we have
\[\widehat{V}_{k} =nU_{n}-(n-1)U_{n-1}^{-k}\] \[=\frac{n(n-1)}{n-2}\left[\left(\frac{V_{k,0}}{n_{1}}\right)I(1 \leq k\leq n_{1})+\left(\frac{V_{0,k-n_{1}}}{n_{2}}\right)I(n_{1}<k\leq n) \right]-\frac{n}{n-2}U_{n_{1},n_{2}}\]
Thus,
\[E\widehat{V}_{k}=\frac{n\theta}{n-2}\left[\left(\frac{n_{2}-1}{n_{1}}\right) I(1\leq k\leq n_{1})+\left(\frac{n_{1}-1}{n_{2}}\right)I(n_{1}<k\leq n)\right]\]
Under \(H_{0}\), \(E\widehat{V}_{k}=0\). Next, following arguments similar to those given in Jing et al. (2009), for fixed \(t=t_{0}\in(0,1)\), it can be proven that \(\ell(\theta(t_{0}))\longrightarrow\chi_{1}^{2}\) as \(n_{1}\longrightarrow\infty\); the details are omitted here.
Proof.: **Theorem 2.2**
The proof of this theorem is similar to that of Theorem 1 in Chen et al. (2008). Let \(\lambda^{\text{Adj}}(t)\) be the solution to
\[\sum_{k=1}^{n+1}\frac{g_{k}^{\text{Adj}}(t)}{1+\lambda^{\text{Adj}}(t)g_{k}^{ \text{Adj}}(t)}=0. \tag{21}\]
The first step is to show that \(\lambda^{\text{Adj}}(t)=O_{p}(n^{-1/2})\). By using Lemma 3 of Owen (1990) and the fact that \(E(\hat{V}_{1}^{2}(t))<\infty\), we can establish that \(g^{*}=\max_{1\leq k\leq n}\left\|\hat{V}_{k}\right\|=o_{p}(n^{1/2})\) and \(\bar{g}_{n}(t)=O_{p}(n^{-1/2})\). Let \(\rho=\left\|\lambda^{\text{Adj}}(t)\right\|\), \(a_{n}=o_{p}(n)\) and \(\hat{\lambda}^{\text{Adj}}(t)=\lambda^{\text{Adj}}(t)/\rho\). Multiplying both sides by \(\hat{\lambda}^{\text{Adj}}(t)/n\) gives
\[\begin{split} 0&=\frac{\hat{\lambda}^{\text{Adj}}(t)}{n} \sum_{k=1}^{n+1}\frac{g_{k}^{\text{Adj}}(t)}{1+\lambda^{\text{Adj}}(t)g_{k}^{ \text{Adj}}(t)}\\ &=\frac{\hat{\lambda}^{\text{Adj}}(t)}{n}\sum_{k=1}^{n+1}g_{k}^ {\text{Adj}}(t)-\frac{\rho}{n}\sum_{k=1}^{n+1}\frac{(\hat{\lambda}^{\text{Adj }}(t)g_{k}^{\text{Adj}}(t))^{2}}{1+\rho\hat{\lambda}^{\text{Adj}}(t)g_{k}^{ \text{Adj}}(t)}\\ &\leq\hat{\lambda}^{\text{Adj}}(t)\bar{g}_{n}(t)(1-a_{n}/n)- \frac{\rho}{n(1+\rho g^{*}(t))}\sum_{k=1}^{n}(\hat{\lambda}^{\text{Adj}}(t)g_{ k}^{\text{Adj}}(t))^{2}\\ &=\hat{\lambda}^{\text{Adj}}(t)\bar{g}_{n}(t)-\frac{\rho}{n(1+ \rho g^{*}(t))}\sum_{k=1}^{n}\left(\hat{\lambda}^{\text{Adj}}(t)g_{k}^{\text{ Adj}}(t)\right)^{2}+O_{p}(n^{-3/2}a_{n}).\end{split} \tag{22}\]
The inequality stated above is valid due to the non-negativity of the \((n+1)\)th term in the second summation. According to Chen et al. (2008), for any given \(\epsilon>0\), we have
\[\frac{1}{n}\sum_{k=1}^{n}\left(\lambda^{\text{Adj}}(t)g_{k}^{\text{Adj}}(t) \right)^{2}\geq 1-\epsilon. \tag{23}\]
Therefore, as long as \(a_{n}=o_{p}(n)\), equation (22) implies that
\[\frac{\rho}{(1+\rho g^{*}(t))}\leq\frac{\hat{\lambda}^{\text{Adj}}(t)\bar{g}_{n}(t)}{(1-\epsilon)}=O_{p}(n^{-1/2}). \tag{24}\]
Thus, we get \(\rho=O_{p}(n^{-1/2})\) and hence \(\lambda^{\text{Adj}}(t)=O_{p}(n^{-1/2})\). Now, consider
\[\begin{split} 0&=\frac{1}{n}\sum_{k=1}^{n+1}\frac{g_{k}^{\text{Adj}}(t)}{1+\lambda^{\text{Adj}}(t)g_{k}^{\text{Adj}}(t)}\\ &=\bar{g}_{n}(t)-\lambda^{\text{Adj}}(t)\hat{V}_{n}(t)+o_{p}(n^{-1/2}),\end{split} \tag{25}\]
where \(\hat{V}_{n}=(1/n)\sum_{k=1}^{n}g_{k}^{\text{Adj}}(t)^{2}\). Hence, as \(n\longrightarrow\infty\), \(\lambda^{\text{Adj}}(t)=\hat{V}_{n}^{-1}\bar{g}_{n}(t)+o_{p}(n^{-1/2})\). Now, we expand \(\ell^{\mathrm{Adj}}(\theta(t))\) as follows
\[\begin{split}\ell^{\mathrm{Adj}}(\theta(t))&=2\sum_{k=1}^{n+1}\log\left(1+\lambda^{\text{Adj}}(t)g_{k}^{\text{Adj}}(t)\right)\\ &=2\sum_{k=1}^{n+1}\left\{\lambda^{\text{Adj}}(t)g_{k}^{\text{Adj}}(t)-\frac{\left(\lambda^{\text{Adj}}(t)g_{k}^{\text{Adj}}(t)\right)^{2}}{2}\right\}+o_{p}(1).\end{split} \tag{26}\]
Substituting the expansion of \(\lambda^{\text{Adj}}\), we get that
\[\begin{split}\ell^{\mathrm{Adj}}(\theta(t_{0}))&=n\hat{V}_{n}^{-1}\bar{g}_{n}(t_{0})^{2}+o_{p}(1)\\ &\xrightarrow{d}\chi_{1}^{2}.\end{split} \tag{27}\]
This completes the proof.
|
2306.07366 | Quantum geometry of singlet superconductors | We elaborate that $s$-wave and $d$-wave superconductors described by mean
field theories possess a nontrivial quantum geometry. From the overlap of two
quasihole states at slightly different momenta, one can define a quantum metric
that measures the distance in the curved momentum space. The
momentum-integration of the quantum metric represents an average distance that
we call the fidelity number, which may be further expressed as a fidelity
marker defined locally on every lattice site. For $s$-wave superconductors, we
unveil that the quantum metric generally influences the electromagnetic
responses at finite wave length, such as the infrared absorption and
paramagnetic current. In addition, the dielectric response is directly
proportional to the fidelity number, which is found to be determined by the
coherence length and suppressed by disorder. For $d$-wave superconductors, we
demonstrate the singular behavior of the quantum metric near the nodal points,
and a metric-curvature correspondence between the azimuthal quantum metric and
the non-Abelian Berry connection that integrates to a topological charge of the
nodal points. | David Porlles, Wei Chen | 2023-06-12T18:44:21Z | http://arxiv.org/abs/2306.07366v2 | # Quantum geometry of singlet superconductors
###### Abstract
We elaborate that \(s\)-wave and \(d\)-wave superconductors described by mean field theories possess a nontrivial quantum geometry. From the overlap of two quasihole states at slightly different momenta, one can define a quantum metric that measures the distance in the curved momentum space. The momentum-integration of the quantum metric represents an average distance that we call the fidelity number, which may be further expressed as a fidelity marker defined locally on every lattice site. For \(s\)-wave superconductors, we unveil that the quantum metric generally influences the electromagnetic responses at finite wave length, such as the infrared absorption and paramagnetic current. In addition, the dielectric response is directly proportional to the fidelity number, which is found to be determined by the coherence length and suppressed by disorder. For \(d\)-wave superconductors, we demonstrate the singular behavior of the quantum metric near the nodal points, and a metric-curvature correspondence between the azimuthal quantum metric and the non-Abelian Berry connection that integrates to a topological charge of the nodal points.
## I Introduction
The quantum geometry of the valence band Bloch state emerges recently as a key aspect related to various material properties of insulators and semiconductors, especially to their topological properties.[1; 2; 3; 4; 5; 6; 7; 8; 9; 10] Starting from the fully antisymmetric valence band Bloch state \(|\psi({\bf k})\rangle\) at momentum \({\bf k}\), the notion of quantum geometry arises from considering the overlap \(|\langle\psi({\bf k})|\psi({\bf k}+\delta{\bf k})\rangle|=1-g_{\mu\nu}\delta k ^{\mu}\delta k^{\nu}/2\) expanded in terms of the small displacement \(\delta{\bf k}\), yielding a prefactor \(g_{\mu\nu}\) that is referred to as the quantum metric.[11] The periodic Brillouin zone (BZ) is then considered as a compact Euclidean manifold equipped with this quantum metric, from which the usual quantities in differential geometry, such as Ricci scalar, Riemann tensor, geodesic, etc, can be introduced.
Besides these purely mathematical aspects, quantum metric has also been linked to various experimental measurables.[12; 13; 14; 15; 16] Particularly in semiconductors, the exciton absorption rate at momentum \({\bf k}\) as a function of the frequency of a polarized light, which can be measured by detecting the loss of valence band electron population in the pump-probe type of experiments,[17] is described by a quantum metric spectral function that frequency-integrates to the quantum metric.[10] In addition, the frequency-dependence of the optical absorption rate, which has been measured in semiconductor for decades,[18] as well as recently measured in 2D materials from their transmittance,[19; 20; 21; 22] actually corresponds to the momentum integration of the quantum metric spectral function that has been called the fidelity number spectral function.[23; 24] The significance of this spectral function is that it frequency-integrates to a fidelity number that represents the average distance between neighboring Bloch states in the momentum space, thereby serving as a characteristic quantum geometrical property of the BZ manifold. Moreover, the fidelity number can be converted into a fidelity marker defined locally on lattice sites, pointing to the possibility of investigating the influence of real space inhomogeneity on the quantum geometrical properties of solids.[23]
Besides these experimental measurable, another important feature of the quantum metric is its relation with the topological order. It has been pointed out that in systems where the topological order is given by the momentum-integration of Berry connection or Berry curvature, the module of these quantities is equal to the determinant of the filled band quantum metric.[1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30] Along this line of development, it is recognized recently that, in fact, the module of any function that momentum-integrates to the topological order of Dirac models in any dimension and symmetry class[31; 32; 33; 34] is equal to the determinant of the quantum metric, a ubiquitous relation that has been called the metric-curvature correspondence.[9] As a result, the aforementioned exciton absorption experiment that measures the quantum metric can help to reveal the topological order in these materials.
In addition to these remarkable features in insulating materials, quantum metric also manifests in yet another system that is currently under intensive investigation, namely the flat band superconductors (SCs). This subject rises to prominence owing to the flat band superconductivity recently discovered in twisted bilayer graphene.[35; 36] Although the microscopic mechanism for the superconductivity in this system is still under intensive debate, various theories have suggested that the superfluid density therein is directly related to the quantum metric of the flat band.[37; 38; 39; 40; 41]
Motivated by these intensive investigations of flat band SCs and the knowledge about optical absorption in semiconductors, in this paper we present a detailed survey on the quantum geometrical properties of the quasihole band of typical singlet SCs, including both the cases of \(s\)-wave and \(d\)-wave pairing. Our objective is to elaborate that typical singlet SCs described by Bardeen-Cooper-Schrieffer (BCS) mean field theories[42] also has nontrivial quantum geometrical properties. For \(s\)-wave SCs, we will elaborate the remarkably simple form of the quantum metric, and demonstrate that the metric generally appears in optical and dielectric responses. However,
unlike the optical absorption in semiconductors, the infrared absorption and the so-called paramagnetic current of clean SCs is not directly given by the quantum metric spectral function owing to the complication coming from the Bogoliubov transformation, commonly known as the coherence factor.[43] On the other hand, the zero-frequency dielectric function turns out to be directly proportional to the fidelity number, which is essentially given by the coherence length measured in units of lattice constant. For \(d\)-wave SCs, we will emphasize the very singular momentum profile of the quantum metric, as well as the metric-curvature correspondence between the non-Abelian Berry curvature that integrates to a topological charge and the azimuthal quantum metric.
## II Quantum Geometry and Electromagnetic Responses of Singlet Superconductors
### Quantum metric and fidelity marker in singlet superconductors
We start by considering mean-field spin-singlet SCs in any spatial dimension \(D\), whose single-particle Hamiltonian takes the form of a \(2\times 2\) Dirac Hamiltonian
\[H({\bf k})={\bf d}\cdot{\mathbf{\sigma}}=d_{1}\sigma_{1}+d_{3}\sigma_{3}, \tag{1}\]
where \(\sigma_{i}\) are the Pauli matrices, \(d_{1}=\Delta_{\bf k}\) is the momentum-dependent superconducting gap, and \(d_{3}=\varepsilon_{\bf k}\) is the normal state dispersion. The basis of the Hamiltonian is \(|\psi_{\bf k}\rangle=(c_{{\bf k}\uparrow},c_{-{\bf k}\downarrow}^{\dagger})^{T}\). The \({\bf d}\)-vector divided by its module defines a unit vector
\[{\bf n}\equiv{\bf d}/|{\bf d}|=(d_{1}/d,d_{3}/d)=(n_{1},n_{3}), \tag{2}\]
with \(d=\sqrt{d_{1}^{2}+d_{3}^{2}}=\sqrt{\varepsilon_{\bf k}^{2}+\Delta_{\bf k}^{2 }}=E_{\bf k}\) the dispersion of the two bands. We denote the filled quasihole eigenstate by \(|n({\bf k})\rangle\equiv|n\rangle\) (not to be confused with the \({\bf n}\)-vector in Eq. (2)) and the empty quasiparticle eigenstate by \(|m({\bf k})\rangle\equiv|m\rangle\), which take the form
\[|n\rangle=\frac{1}{\sqrt{2d(d-d_{3})}}\left(\begin{array}{c}d-d_ {3}\\ -d_{1}\end{array}\right)={\rm Sgn}(\Delta_{\bf k})\left(\begin{array}{c}v_{ \bf k}\\ -u_{\bf k}\end{array}\right),\] \[|m\rangle=\frac{1}{\sqrt{2d(d+d_{3})}}\left(\begin{array}{c}d+d _{3}\\ d_{1}\end{array}\right)=\left(\begin{array}{c}u_{\bf k}\\ v_{\bf k}\end{array}\right), \tag{3}\]
where \(u_{\bf k}\) and \(v_{\bf k}\) are the usual Bogoliubov coefficients
\[c_{{\bf k}\uparrow}=u_{\bf k}\gamma_{{\bf k}\uparrow}+v_{\bf k} \gamma_{-{\bf k}\downarrow}^{\dagger},\;\;\;c_{-{\bf k}\downarrow}=u_{\bf k} \gamma_{-{\bf k}\downarrow}-v_{\bf k}\gamma_{{\bf k}\uparrow}^{\dagger},\] \[u_{\bf k}=\sqrt{\frac{1}{2}\left(1+\frac{d_{3}}{d}\right)},\;\; \;v_{\bf k}={\rm Sgn}(\Delta_{\bf k})\sqrt{\frac{1}{2}\left(1-\frac{d_{3}}{d} \right)}, \tag{4}\]
that satisfy \(u_{\bf k}v_{\bf k}=\Delta_{\bf k}/2E_{\bf k}=d_{1}/2d\). The sign of the gap \({\rm Sgn}(\Delta_{\bf k})={\rm Sgn}(d_{1})\) is unimportant in practice for \(s\)-wave SCs, but will be important for \(d\)-wave SCs. Note that when taking the derivative of momentum on \(v_{\bf k}\), one only takes derivative on the square root but not on the sign
\[\partial_{\mu}v_{\bf k}={\rm Sgn}(\Delta_{\bf k})\partial_{\mu}\sqrt{\frac{1}{ 2}\left(1-\frac{d_{3}}{d}\right)}, \tag{5}\]
because \(v_{\bf k}\) and its value after an infinitesimally small shift along the \(\hat{\mathbf{\mu}}\) direction, \(v_{{\bf k}+\delta k\hat{\mathbf{\mu}}}\), have the same sign as \(\delta k\to 0\). The derivative \(\partial_{\mu}v_{\bf k}\) is ill-defined where the gap changes sign in a \(d\)-wave SC.
We are interested in the quantum metric[11]\(g_{\mu\nu}({\bf k})\) of the filled quasihole state \(|n({\bf k})\rangle\) defined from the inner product of this state at momentum \({\bf k}\) and at momentum \({\bf k}+\delta{\bf k}\)
\[|\langle n({\bf k})|n({\bf k}+\delta{\bf k})\rangle|=1-\frac{1}{2}g_{\mu\nu} \delta k^{\mu}\delta k^{\nu}, \tag{6}\]
which amounts to several equivalent expressions
\[g_{\mu\nu}=\frac{1}{2}\langle\partial_{\mu}n|m\rangle\langle m|\partial_{\nu}n\rangle+(\mu\leftrightarrow\nu)\] \[= \frac{1}{4}\partial_{\mu}{\bf n}\cdot\partial_{\nu}{\bf n}=\left(u_{\bf k}\partial_{\mu}v_{\bf k}-v_{\bf k}\partial_{\mu}u_{\bf k}\right)\left(u_{\bf k}\partial_{\nu}v_{\bf k}-v_{\bf k}\partial_{\nu}u_{\bf k}\right)\] \[= \frac{1}{4d^{4}}\left(d_{3}\partial_{\mu}d_{1}-d_{1}\partial_{\mu}d_{3}\right)\left(d_{3}\partial_{\nu}d_{1}-d_{1}\partial_{\nu}d_{3}\right), \tag{7}\]
where \(\partial_{\mu}\equiv\partial/\partial k^{\mu}\), and we have used Eqs. (4) and (5).
We see that either the derivative of the unit vector \(\partial_{\mu}{\bf n}/2\) or that of the Bogoliubov coefficients \(\pm\left(u\partial_{\mu}v-v\partial_{\mu}u\right)\) can play the role of the vielbein. Equation (7) also implies that the volume form of the curved momentum space vanishes, \(\sqrt{\det g}=0\), for \(D>1\), and consequently many geometrical quantities that involve integration over the curved momentum space vanish for \(D>1\), such as the Hilbert action \(\int d^{D}{\bf k}\sqrt{\det g}R=0\) given by the integration of the Ricci scalar \(R\).
There is a very intuitive way to visualize the quantum metric using the Bogoliubov coefficients. Starting from the formula for the quasihole state in Eq. (3), one writes the Bogoliubov coefficients as a two-component unit vector field \({\bf w}_{\bf k}=(v_{\bf k},-u_{\bf k})\) defined in the \(D\)-dimensional \({\bf k}\)-space, which can also be regarded as representing the quasihole state as a unit vector in the Hilbert space. Then Eq. (6) can be rewritten in terms of the dot product between neighboring vectors
\[\frac{1}{2}g_{\mu\nu}\delta k^{\mu}\delta k^{\nu}=1-|\langle n({\bf k})|n({\bf k}+\delta{\bf k})\rangle|\] \[=1-|{\bf w}_{\bf k}\cdot{\bf w}_{{\bf k}+\delta{\bf k}}|. \tag{8}\]
Physically, this means that \(g_{\mu\nu}\) can be simply understood as how much the product \(|{\bf w}_{\bf k}\cdot{\bf w}_{{\bf k}+\delta{\bf k}}|\) deviates from unity, or equivalently how much the unit vector in the Hilbert space \({\bf w}_{\bf k}\) "twists" as one goes from \({\bf k}\) to \({\bf k}+\delta{\bf k}\). If \({\bf w}_{\bf k}\) is very uniform around \({\bf k}\), then \(g_{\mu\nu}\) is small. In contrast, if \({\bf w}_{\bf k}\) changes its direction very dramatically around \({\bf k}\), meaning that \(u_{\bf k}\) and \(v_{\bf k}\) vary significantly near \({\bf k}\), then \(g_{\mu\nu}\) is large. We will demonstrate
this intuitive picture using concrete examples in the following sections.
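The twisting picture can also be checked numerically in a few lines. The following minimal sketch (an illustration added here, not part of the original text; the 1D toy dispersion and all parameter values are arbitrary choices) compares \(1-|{\bf w}_{\bf k}\cdot{\bf w}_{{\bf k}+\delta{\bf k}}|\) with \(\frac{1}{2}g_{kk}\,\delta k^{2}\), where \(g_{kk}\) is evaluated from Eq. (7):

```python
import numpy as np

# Toy 1D s-wave BdG model: d1 = Delta (constant), d3 = -2 t cos(k) - mu
t, mu, Delta = 1.0, -0.2, 0.5

def w(k):
    """Quasihole state (v_k, -u_k) as a unit vector in the Hilbert space."""
    d1, d3 = Delta, -2.0 * t * np.cos(k) - mu
    d = np.hypot(d1, d3)
    u = np.sqrt(0.5 * (1.0 + d3 / d))
    v = np.sign(d1) * np.sqrt(0.5 * (1.0 - d3 / d))
    return np.array([v, -u])

def g_exact(k):
    """Quantum metric from Eq. (7); here d(d1)/dk = 0 and d(d3)/dk = 2 t sin(k)."""
    d1, d3 = Delta, -2.0 * t * np.cos(k) - mu
    d = np.hypot(d1, d3)
    return (d1 * 2.0 * t * np.sin(k))**2 / (4.0 * d**4)

dk = 1e-4
for k in (0.3, 1.0, np.arccos(-mu / (2 * t))):      # the last point lies on the Fermi surface
    lhs = 1.0 - abs(np.dot(w(k), w(k + dk)))         # twisting of w_k, Eq. (8)
    rhs = 0.5 * g_exact(k) * dk**2
    print(f"k = {k:.3f}:  1-|w.w'| = {lhs:.3e},   g dk^2/2 = {rhs:.3e}")
```

As expected, the two columns agree, and both are largest at the Fermi momentum, where the vector field twists most rapidly.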
Another geometrical quantity that we are interested in is the momentum integration of the quantum metric
\[\mathcal{G}_{\mu\nu}=\int\frac{d^{D}\mathbf{k}}{(2\pi)^{D}}g_{\mu\nu}(\mathbf{k}), \tag{9}\]
which we call the fidelity number.[23] Physically, this quantity represents the _average_ distance between neighboring quasihole states \(|n(\mathbf{k})\rangle\) and \(|n(\mathbf{k}+\delta\mathbf{k})\rangle\), and hence serves as a characteristic quantum geometrical property of the BZ torus. Moreover, it is also shown to be equivalent to the gauge-invariant part of the spread of Wannier functions[44; 45; 46] (in our case the Wannier function of the quasihole state). This quantity can be mapped to lattice sites in real space as a fidelity marker by considering a lattice Bogoliubov-de Gennes (BdG) Hamiltonian that has been diagonalized \(H|E_{\ell}\rangle=E_{\ell}|E_{\ell}\rangle\). Introducing the projectors to the filled \(E_{n}<0\) and empty \(E_{m}>0\) lattice eigenstates from the projectors to the quasihole and quasiparticle states integrated over momentum
\[\hat{P} = \sum_{n}\int\frac{d^{D}\mathbf{k}}{(2\pi)^{D}}|\psi_{n\mathbf{k} }\rangle\langle\psi_{n\mathbf{k}}|\rightarrow\sum_{n}|E_{n}\rangle\langle E_ {n}|,\] \[\hat{Q} = \sum_{m}\int\frac{d^{D}\mathbf{k^{\prime}}}{(2\pi)^{D}}|\psi_{m \mathbf{k^{\prime}}}\rangle\langle\psi_{m\mathbf{k^{\prime}}}|\rightarrow \sum_{m}|E_{m}\rangle\langle E_{m}|, \tag{10}\]
where \(\langle\mathbf{r}|\psi_{n\mathbf{k}}\rangle=e^{i\mathbf{k}\cdot\mathbf{r}/\hbar}\langle\mathbf{r}|n(\mathbf{k})\rangle\) is the full quasihole state wave function (and likewise for \(|m(\mathbf{k})\rangle\)), it is found that the fidelity number can be written as
\[\mathcal{G}_{\mu\nu}=\frac{1}{2}\mathrm{Tr}\left[\hat{P}\hat{\mu}\hat{Q}\hat{ \nu}\hat{P}+\hat{P}\hat{\nu}\hat{Q}\hat{\mu}\hat{P}\right]=\sum_{\mathbf{r}} \mathcal{G}_{\mu\nu}(\mathbf{r}), \tag{11}\]
whose diagonal elements define a fidelity marker at \(\mathbf{r}\)
\[\mathcal{G}_{\mu\nu}(\mathbf{r})=\frac{1}{2}\sum_{\alpha}\langle \mathbf{r},\alpha|\left[\hat{P}\hat{\mu}\hat{Q}\hat{\nu}\hat{P}+\hat{P}\hat{ \nu}\hat{Q}\hat{\mu}\hat{P}\right]|\mathbf{r},\alpha\rangle\] \[\equiv\frac{1}{2}\langle\mathbf{r}|\left[\hat{P}\hat{\mu}\hat{Q} \hat{\nu}\hat{P}+\hat{P}\hat{\nu}\hat{Q}\hat{\mu}\hat{P}\right]|\mathbf{r}\rangle, \tag{12}\]
where the summation over \(\alpha\) stands for summing over the spin-up particle and spin-down hole at site \(\mathbf{r}\). The operator \(\hat{\mathcal{G}}_{\mu\nu}\equiv\left[\hat{P}\hat{\mu}\hat{Q}\hat{\nu}\hat{P}+\hat{P}\hat{\nu}\hat{Q}\hat{\mu}\hat{P}\right]/2\) has been called the fidelity operator. In the following sections, we shall see how the fidelity marker can be used to characterize the influence of real-space inhomogeneity on the quantum geometry.
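The recipe in Eqs. (10)-(12) is straightforward to implement numerically. The sketch below (added for illustration and not part of the original text; a short 1D \(s\)-wave BdG chain with arbitrary parameters is used only to keep the matrices small) diagonalizes a real-space BdG Hamiltonian, builds the projectors \(\hat{P}\) and \(\hat{Q}\) from the filled and empty eigenstates, and evaluates the diagonal marker \(\mathcal{G}_{xx}(\mathbf{r})\); setting the hypothetical impurity strength \(U>0\) adds an on-site nonmagnetic impurity of the kind discussed later.

```python
import numpy as np

N, t, mu, Delta, U = 40, 1.0, -0.2, 0.5, 0.0    # set U > 0 for an on-site nonmagnetic impurity

# Normal-state hopping with open boundaries, plus an impurity potential at the central site
h = -t * (np.eye(N, k=1) + np.eye(N, k=-1)) - mu * np.eye(N)
h[N // 2, N // 2] += U
pair = Delta * np.eye(N)                         # constant on-site singlet pairing

# BdG Hamiltonian in the basis (c_{r,up}, c_{r,down}^dagger)
H = np.block([[h, pair], [pair, -h]])
E, V = np.linalg.eigh(H)

P = V[:, E < 0] @ V[:, E < 0].conj().T           # projector onto filled (quasihole) states
Q = V[:, E > 0] @ V[:, E > 0].conj().T           # projector onto empty (quasiparticle) states

X = np.kron(np.eye(2), np.diag(np.arange(N, dtype=float)))   # position operator on both components

G = P @ X @ Q @ X @ P                            # Eq. (12) with mu = nu = x (symmetrization is trivial)
marker = G.diagonal().real.reshape(2, N).sum(axis=0)          # sum over the two components at each site

print("bulk-averaged marker  :", marker[5:-5].mean())
print("marker at central site:", marker[N // 2])
```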
### Electromagnetic responses of singlet SCs
For a number of responses against external perturbations, such as a modulating scalar potential or electromagnetic wave, one often encounters the calculation of polarization operator. For singlet SCs, the polarization operator takes the general form[47]
\[P(\mathbf{k},\mathbf{q},i\omega)\] \[=-\sum_{\sigma\sigma^{\prime}}\int_{0}^{\beta}d\tau\,e^{i\omega \tau}\langle T_{\tau}c^{\dagger}_{\mathbf{k}+\mathbf{q}\sigma}(\tau)c_{ \mathbf{k}\sigma}(\tau)c^{\dagger}_{\mathbf{k}^{\prime}-\mathbf{q}\sigma^{ \prime}}(0)c_{\mathbf{k}^{\prime}\sigma^{\prime}}(0)\rangle\] \[=\frac{2}{\beta}\sum_{ip}\left[G(\mathbf{k},ip)G(\mathbf{k}+ \mathbf{q},ip+i\omega)\right.\] \[\left.+F(\mathbf{k},ip)F^{\dagger}(\mathbf{k}+\mathbf{q},ip+i \omega)\right], \tag{13}\]
where \(\mathbf{q}\) and \(\mathbf{k}\) are external and internal momenta, respectively, and \(i\omega\) and \(ip\) are Matsubara frequencies. The Green's functions are given by
\[G(\mathbf{k},ip)=\frac{u_{\mathbf{k}}^{2}}{ip-E_{\mathbf{k}}}+ \frac{v_{\mathbf{k}}^{2}}{ip+E_{\mathbf{k}}},\] \[F(\mathbf{k},ip)=F^{\dagger}(\mathbf{k},ip)\] \[=-u_{\mathbf{k}}v_{\mathbf{k}}\left(\frac{1}{ip-E_{\mathbf{k}}}- \frac{1}{ip+E_{\mathbf{k}}}\right). \tag{14}\]
We are interested in the retarded response at zero temperature, which is given by
\[P_{0}(\mathbf{k},\mathbf{q},\omega) = 2\left[\frac{u_{\mathbf{k}+\mathbf{q}}^{2}v_{\mathbf{k}}^{2}-u_{ \mathbf{k}}v_{\mathbf{k}}u_{\mathbf{k}+\mathbf{q}}v_{\mathbf{k}+\mathbf{q}}}{ \hbar\omega-E_{\mathbf{k}}-E_{\mathbf{k}+\mathbf{q}}+i\eta}\right. \tag{15}\] \[\left.-\frac{v_{\mathbf{k}+\mathbf{q}}^{2}u_{\mathbf{k}}^{2}-u_{ \mathbf{k}}v_{\mathbf{k}}u_{\mathbf{k}+\mathbf{q}}v_{\mathbf{k}+\mathbf{q}}}{ \hbar\omega+E_{\mathbf{k}}+E_{\mathbf{k}+\mathbf{q}}+i\eta}\right].\]
Here we have performed the analytical continuation \(i\omega\rightarrow\hbar\omega+i\eta\), with \(\eta\) a small artificial broadening. Furthermore, there are two kinds of situations that one often encounters in practical applications. One is the optical absorption process describing the excitation of quasiparticles, which corresponds to taking the imaginary part of the first term in Eq. (15) at finite frequency, yielding
\[-\frac{1}{\pi}\mathrm{Im}P_{0}(\mathbf{k},\mathbf{q},\omega) = 2\left[u_{\mathbf{k}+\mathbf{q}}^{2}v_{\mathbf{k}}^{2}-u_{ \mathbf{k}}v_{\mathbf{k}}u_{\mathbf{k}+\mathbf{q}}v_{\mathbf{k}+\mathbf{q}}\right] \tag{16}\] \[\times\delta(\hbar\omega-E_{\mathbf{k}}-E_{\mathbf{k}+\mathbf{q}}),\]
where the \(\delta\)-function ensures the energy and momentum conservation. The other typical response is the static response that corresponds to taking the real part of both terms in Eq. (15) in the \(\omega=0\) limit, yielding
\[\mathrm{Re}P_{0}(\mathbf{k},\mathbf{q},0)=-\frac{2(u_{\mathbf{k}+\mathbf{q}}v_{ \mathbf{k}}-v_{\mathbf{k}+\mathbf{q}}u_{\mathbf{k}})^{2}}{E_{\mathbf{k}}+E_{ \mathbf{k}+\mathbf{q}}}. \tag{17}\]
In what follows, we shall see some practical applications of these two situations, both of which turn out to contain the integration of the quantum metric. In particular, we will focus on the dynamic current-current correlator relevant to the infrared absorption, the static current-current correlator relevant to the paramagnetic current, and the static density-density correlator related to the dielectric function, and elaborate on how the quantum metric manifests in these quantities.
#### ii.2.1 Dynamic current-current correlator: Infrared absorption
Consider a singlet SC subject to a transverse EM wave polarized in the \(\hat{\mathbf{\mu}}\) direction and propagating along the \(\hat{\mathbf{\nu}}\) direction with a small but finite wave vector \({\bf q}=q\hat{\mathbf{\nu}}\), so \(\hat{\mathbf{\mu}}\perp\hat{\mathbf{\nu}}\). In this situation, the current density operator in \(D\) dimensions flowing along the \(\hat{\mathbf{\mu}}\) direction, Fourier transformed along the propagation direction \(\hat{\mathbf{\nu}}\) of the EM wave, is
\[j_{\mu}({\bf q})=\frac{e}{a^{D}}\sum_{\bf k}v_{\mu}({\bf k})c_{{\bf k}+{\bf q} \sigma}^{\dagger}c_{{\bf k}\sigma}. \tag{18}\]
where \(v_{\mu}({\bf k})=\partial_{\mu}\varepsilon_{\bf k}\) is the normal state group velocity at \({\bf k}\) (not to be confused with the Bogoliubov coefficient \(v_{\bf k}\)). The perturbation is described by
\[H^{\prime}=-a^{D}j_{\mu}({\bf q})A_{\mu}({\bf q},t), \tag{19}\]
where \(A_{\mu}({\bf q},t)=\sum_{\bf r}A_{\mu}({\bf r},t)e^{i{\bf q}\cdot{\bf r}}\) is the Fourier component of the time-dependent vector field polarized along \(\hat{\mathbf{\mu}}\). Defining the Matsubara current-current correlator by
\[\pi({\bf q},i\omega)=-\frac{a^{D}}{\hbar}\int_{0}^{\beta}d\tau\, e^{i\omega\tau}\langle T_{\tau}j_{\mu}({\bf q},\tau)j_{\mu}(-{\bf q},0)\rangle\] \[=\frac{e^{2}}{a^{D}}\sum_{\bf k}v_{\mu}^{2}P({\bf k},{\bf q},i \omega), \tag{20}\]
the optical conductivity \(\sigma_{\mu\mu}({\bf q},\omega)\equiv\sigma({\bf q},\omega)\) along the direction of polarization \(\hat{\mathbf{\mu}}\) for a clean SC at zero temperature can be calculated. According to linear response theory, the conductivity is obtained by taking the imaginary part of the first term of Eq. (15), and correspondingly of Eq. (20), which represents the absorption process, and then integrating over momentum [47; 48]
\[\sigma({\bf q},\omega)=\frac{2\pi e^{2}}{\omega}\int\frac{d^{D}{ \bf k}}{(2\pi\hbar)^{D}}v_{\mu}^{2}\] \[\times\left[u_{{\bf k}+{\bf q}}^{2}v_{\bf k}^{2}-u_{\bf k}v_{\bf k }u_{{\bf k}+{\bf q}}v_{{\bf k}+{\bf q}}\right]\delta(\hbar\omega-E_{\bf k}-E _{{\bf k}+{\bf q}}). \tag{21}\]
We shall see below how the coherence factor in this expression should be treated.
#### ii.2.2 Static current-current correlator: Paramagnetic current
In the presence of a static vector field that modulates with a finite wave vector \({\bf q}=q\hat{\mathbf{\nu}}\), the zero temperature London equation (in SI unit) is modified by [43]
\[J_{\mu}({\bf q})=J_{\mu 1}({\bf q})+J_{\mu 2}({\bf q})=K_{1}({\bf q})A_{ \mu}({\bf q})+J_{\mu 2}({\bf q}), \tag{22}\]
where \(J_{\mu 2}({\bf q})\) is the usual diamagnetic current that gives the Meissner effect, and is determined by the penetration depth \(\lambda_{L}\) in 3D. The term \(J_{\mu 1}({\bf q})\) is a paramagnetic current that acts against \(J_{\mu 2}({\bf q})\) and only occurs at \({\bf q}\neq{\bf 0}\). The paramagnetic current may be regarded as a response to the static vector field, described by the perturbation in Eq. (19) but with a static \(A_{\mu}({\bf q},t)=A_{\mu}({\bf q})\), and can be calculated by linear response theory
\[J_{\mu 1}({\bf q})=\langle j_{\mu}({\bf q})\rangle=-{\rm Re}\,\pi({ \bf q},0)|_{T=0}A_{\mu}({\bf q}), \tag{23}\]
yielding the response coefficient
\[K_{1}({\bf q})={\rm Re}\,\pi({\bf q},0)|_{T=0}=\frac{e^{2}}{a^{D }}\sum_{\bf k}v_{\mu}^{2}{\rm Re}P_{0}({\bf k},{\bf q},0)\] \[=-2e^{2}\int\frac{d^{D}{\bf k}}{(2\pi\hbar)^{D}}\,v_{\mu}^{2}\, \frac{(u_{{\bf k}+{\bf q}}v_{{\bf k}}-v_{{\bf k}+{\bf q}}u_{{\bf k}})^{2}}{E_ {\bf k}+E_{{\bf k}+{\bf q}}}, \tag{24}\]
which agrees with the result directly calculated from applying first order perturbation theory to the BCS ground state. [43]
#### ii.2.3 Static density-density correlator: Linear screening
The static \(\omega=0\) dielectric function at zero temperature within random phase approximation (RPA) is given by [47]
\[\varepsilon({\bf q},\,0)=1-V_{\bf q}P_{0}({\bf q},\,0), \tag{25}\]
where \(V({\bf q})=\sum_{\bf r}e^{i{\bf q}\cdot{\bf r}}V({\bf r})\) is the Fourier transform of the Coulomb potential, and \(P_{0}({\bf q},\,0)\) is precisely that in Eq. (17) integrated over the internal momentum \({\bf k}\)
\[P_{0}({\bf q},\,0)=\int\frac{d^{D}{\bf k}}{(2\pi\hbar/a)^{D}}{ \rm Re}P_{0}({\bf k},{\bf q},\,0)\] \[=-2\int\frac{d^{D}{\bf k}}{(2\pi\hbar/a)^{D}}\frac{(u_{{\bf k}+{ \bf q}}v_{{\bf k}}-v_{{\bf k}+{\bf q}}u_{{\bf k}})^{2}}{E_{\bf k}+E_{{\bf k}+{ \bf q}}}. \tag{26}\]
The result is an expression very similar to that in Eq. (24).
## III S-wave superconductors
### Mean field theory for s-wave SCs
Our first concrete example concerns the clean, prototypical \(s\)-wave superconductor, for which analytical results for the quantum metric can be given. For simplicity, we consider the \(D\)-dimensional cubic lattice model with nearest-neighbor hopping \(t\) and chemical potential \(\mu\), in which the dispersion is given by
\[\varepsilon_{\bf k}=-2t\sum_{i=1}^{D}\cos k_{i}-\mu, \tag{27}\]
The gap is a constant \(d_{1}=\Delta\) and is treated as a parameter, hence \(\partial_{\mu}d_{1}=0\), and the derivative on the normal state dispersion \(\partial_{\mu}d_{3}=\partial_{\mu}\varepsilon_{\bf k}=v_{\mu}({\bf k})\equiv v _{\mu}\) just gives the normal state group velocity along \(\mu\)-direction at momentum \({\bf k}\). Using Eq. (7), the quantum metric is
\[g_{\mu\nu}=\frac{\Delta^{2}v_{\mu}v_{\nu}}{4E_{\bf k}^{4}}. \tag{28}\]
We see that \(\Delta v_{\mu}/2E_{\bf k}^{2}\) plays the role of the vielbein, consistent with Eq. (7). We are particularly interested in the region near the Fermi momentum \(k\approx k_{F}\), where the dispersion can be expanded as \(\varepsilon_{\bf k}\approx v_{F}(k-k_{F})\). In addition, the BCS coherence length in the mean field theory is given by
\[\xi=\frac{\hbar v_{F}}{\pi\Delta}, \tag{29}\]
Denoting the quantum metric exactly at the Fermi momentum as \(g_{\mu\nu}({\bf k}_{F})\), the quantum metric near the Fermi momentum takes the Lorentzian form
\[g_{\mu\nu}({\bf k}\approx{\bf k}_{F})\approx\frac{g_{\mu\nu}({ \bf k}_{F})}{1+2\pi^{2}(\xi/\hbar)^{2}(k-k_{F})^{2}},\] \[g_{\mu\nu}({\bf k}_{F})=\frac{v_{\mu}v_{\nu}}{4\Delta^{2}}. \tag{30}\]
This simple formula allows us to plot the profile of the quantum metric in momentum space.
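To make this profile concrete, the short sketch below (an added illustration, not from the original text; the lattice parameters are the arbitrary values quoted later for the 3D model, with \(\hbar=a=1\)) evaluates the lattice quantum metric of Eq. (28) along a cut through the Fermi surface and compares it with the Lorentzian approximation of Eq. (30) near \(k_{F}\):

```python
import numpy as np

t, mu, Delta = 1.0, -0.2, 0.5            # arbitrary lattice parameters (hbar = a = 1)
ky = kz = np.pi / 2                      # a cut along kx that crosses the Fermi surface

kx = np.linspace(0.0, np.pi, 2001)
eps = -2.0 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz)) - mu     # Eq. (27)
v = 2.0 * t * np.sin(kx)                                          # group velocity along the cut
E = np.sqrt(eps**2 + Delta**2)
g_exact = Delta**2 * v**2 / (4.0 * E**4)                          # Eq. (28)

iF = np.argmin(np.abs(eps))              # Fermi momentum on this cut
kF, vF = kx[iF], v[iF]
xi = vF / (np.pi * Delta)                # Eq. (29), with the velocity along the cut playing the role of v_F
g_lorentz = (vF**2 / (4.0 * Delta**2)) / (1.0 + 2.0 * np.pi**2 * xi**2 * (kx - kF)**2)   # Eq. (30)

for dk in (0.0, 0.5 / (np.pi * xi), 1.0 / (np.pi * xi)):
    j = np.argmin(np.abs(kx - (kF + dk)))
    print(f"k - kF = {dk:5.3f}:  exact = {g_exact[j]:.4f},  Lorentzian = {g_lorentz[j]:.4f}")
```

The peak height agrees with \(g_{\mu\nu}({\bf k}_{F})\), and the width is set by the inverse coherence length, as Eq. (30) states.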
To apply the optical conductivity formula in Eq. (21) to \(s\)-wave SCs, we expand the coherence factor to second order in \({\bf q}=q\hat{\mathbf{\nu}}\), yielding
\[u_{{\bf k}+{\bf q}}^{2}v_{{\bf k}}^{2}-u_{{\bf k}}v_{{\bf k}}u_{{ \bf k}+{\bf q}}v_{{\bf k}+{\bf q}}\] \[\approx(qu_{{\bf k}}v_{{\bf k}}+q^{2}v_{{\bf k}}\partial_{\nu}u_{ {\bf k}})(v_{{\bf k}}\partial_{\nu}u_{{\bf k}}-u_{{\bf k}}\partial_{\nu}v_{{ \bf k}})\] \[=\left(\frac{q\Delta}{2E_{\bf k}}+\frac{q^{2}\Delta^{3}v_{\nu}}{4 (E_{\bf k}+\varepsilon_{\bf k})E_{\bf k}^{3}}\right)\frac{\Delta v_{\nu}}{2E_{ \bf k}^{2}}. \tag{31}\]
Moreover, we will approximate the argument of the \(\delta\)-function by \(\hbar\omega-E_{\bf k}-E_{\bf k+q}\approx\hbar\omega-2E_{\bf k}\), which allows us to replace the inverse frequency in the expression by \(1/\omega\approx\hbar/2E_{\bf k}\). These approximations allow us to expand the optical conductivity to first and second order in \(q\). We remind the reader that this expansion in \(q\) is appropriate for clean SCs, where momentum conservation is satisfied, in contrast to the seminal work of Mattis and Bardeen, which discusses the limit of dirty SCs where \({\bf k}\) and \({\bf k}+{\bf q}\) are treated as two unrelated momenta.[47; 48]
To apply the paramagnetic current formula in Eq. (24) to \(s\)-wave SC, we observe that the expansion of the coherence factor to leading order in \({\bf q}=q\hat{\mathbf{\nu}}\) yields the diagonal element of the quantum metric along \({\bf q}\)
\[(u_{{\bf k}+{\bf q}}v_{{\bf k}}-v_{{\bf k}+{\bf q}}u_{{\bf k}})^{2}\] \[\approx q^{2}\left(v_{{\bf k}}\partial_{\nu}u_{{\bf k}}-u_{{\bf k }}\partial_{\nu}v_{{\bf k}}\right)^{2}=q^{2}g_{\nu\nu}, \tag{32}\]
according to Eq. (7). In addition, an analytical expression for these electromagnetic responses can be given for continuous models with a quadratic dispersion, for which the energy dispersion, quantum metric, and coherence length near the Fermi surface take the form
\[E_{\bf k}=\left[\left(\frac{k^{2}}{2m}-\frac{k_{F}^{2}}{2m}\right) ^{2}+\Delta^{2}\right]^{1/2}\] \[\approx\Delta\left[1+\frac{1}{2}\left(\frac{\pi\xi}{\hbar}\right) ^{2}(k-k_{F})^{2}\right],\] \[g_{\nu\nu}\approx\frac{(k\cos\theta/m)^{2}}{4\Delta^{2}\left[1+ 2\left(\frac{\pi\xi}{\hbar}\right)^{2}(k-k_{F})^{2}\right]},\] \[\xi=\hbar k_{F}/\pi m\Delta. \tag{33}\]
Moreover, because the width of Lorentzians is extremely small, one may approximate them as \(\delta\)-functions
\[\frac{1}{\left[1+\frac{1}{2}\left(\frac{\pi\xi}{\hbar}\right)^{2 }(k-k_{F})^{2}\right]^{n}}\] \[\approx\frac{1}{\left[1+2n\pi^{4}\left(\frac{\xi}{\hbar}\right)^ {2}(x-x_{F})^{2}\right]}\] \[=\frac{\eta^{2}}{\eta^{2}+(x-x_{F})^{2}}\approx\pi\eta\delta(x-x_ {F}) \tag{34}\]
after a change of variable \(x=k/(2\pi\hbar/a)\) and defining \(\eta=a/(\sqrt{2n}\,\pi^{2}\xi)\), where \(n\) is whatever power appears in the calculation. These approximations allow us to express the electromagnetic responses in terms of the fidelity number, as we shall see below for the 3D and 2D cases.
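As a quick sanity check of Eq. (34) (an added numerical aside with arbitrary illustrative numbers), the narrow Lorentzian indeed carries the weight \(\pi\eta\) of the \(\delta\)-function it replaces:

```python
import numpy as np

xi, a, n = 3000.0, 1.0, 2            # coherence length >> lattice constant; n is the power in Eq. (34)
eta = a / (np.sqrt(2 * n) * np.pi**2 * xi)
xF = 0.25                            # arbitrary location of the Fermi surface in the variable x

# Integral of eta^2 / (eta^2 + (x - xF)^2) over the Brillouin zone 0 < x < 1, done analytically
integral = eta * (np.arctan((1.0 - xF) / eta) - np.arctan((0.0 - xF) / eta))
print("integral of the narrow Lorentzian:", integral)
print("delta-function weight  pi * eta  :", np.pi * eta)
```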
### Quantum geometry and electromagnetic response of 3D \(s\)-wave SCs
#### iii.2.1 Profile of the quantum metric in momentum space
To simulate the \(s\)-wave SC on a 3D cubic lattice, we use the tight-binding model in Eq. (27) with \(t=1\) and \(\mu=-0.2\), and a rather large gap \(\Delta=0.5\) just to visualize the effect. The profile of the Bogoliubov coefficients \((v_{{\bf k}},-u_{{\bf k}})\) plotted as a two-component vector field in the 3D momentum space, which can also be considered as the quasihole state \(|n\rangle\) as a unit vector in the Hilbert space at momentum \({\bf k}\), is shown in Fig. 1 (a). One sees that both below and above the Fermi surface, the vector field is fairly uniform, indicating that the wave function is either hole-like or electron-like. Only near the Fermi surface does the vector field start to twist dramatically in order to go from hole-like to electron-like. As a result, the quantum metric \(g_{xx}({\bf k})\) peaks at the Fermi surface of the normal state, in accordance with Eq. (8).
For the continuous model, by using the approximations in Eqs. (33) and (34), one can obtain the analytical
expression for the fidelity number
\[\mathcal{G}^{3D}_{\mu\mu}=\int\frac{d^{3}\mathbf{k}}{(2\pi)^{3}}g_{ \mu\mu}\] \[=\frac{\pi^{2}}{6\sqrt{2}}\left(\frac{\xi}{a}\right)\left(\frac{k_ {F}}{2\pi\hbar/a}\right)^{2}\left(\frac{\hbar}{a}\right). \tag{35}\]
The factor \(k_{F}/(2\pi\hbar/a)\) in this expression is of the order of unity, so the fidelity number is essentially given by the coherence length divided by the lattice constant, \(\xi/a\), times the correct unit \(\hbar/a\). Remarkably, this result implies that the fidelity number, or equivalently the spread of the Wannier function,[44; 45; 46] is synonymous with the coherence length, and hence any property of SCs that is proportional to the coherence length can equally well be written in terms of the fidelity number.
For the typical coherence length \(\xi\sim\mu\)m and lattice constant \(a\sim\)nm of 3D \(s\)-wave SCs,[43] one obtains \(\mathcal{G}^{3D}_{\mu\mu}\sim 10^{3}(\hbar/a)\). This value can be compared with the fidelity number in 3D topological insulators (TIs), whose dimensionless part scales like \(|M|a/\hbar v+\text{const}\sim\mathcal{O}(1)\), where \(v\) is the Fermi velocity and \(M\) is the band gap, which yields a number that is of the order of unity.[23] Thus we see that the fidelity number of \(s\)-wave SCs is actually 2 to 3 orders of magnitude larger than that of a typical TI, indicating that the BZ manifold of an \(s\)-wave SC is much more distorted, or equivalently the Wannier functions are much more spread out,[44; 45; 46] in comparison with that of a 3D TI.
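Plugging representative numbers into Eq. (35) reproduces this estimate (a back-of-the-envelope snippet added here; the values of \(\xi\), \(a\), and \(k_{F}\) are only illustrative):

```python
import numpy as np

xi, a = 1.0e-6, 0.3e-9        # coherence length ~ 1 micrometer, lattice constant ~ 3 Angstrom
kF_ratio = 0.5                # k_F / (2*pi*hbar/a), of order unity

G3D = (np.pi**2 / (6.0 * np.sqrt(2.0))) * (xi / a) * kF_ratio**2   # Eq. (35), in units of hbar/a
print(f"G^3D ~ {G3D:.1e} (hbar/a)")   # of order 10^3, as quoted in the text
```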
#### iii.2.2 Infrared absorption
For 3D \(s\)-wave SC, the optical conductivity in Eq. (21) expanded to first order in \(\mathbf{q}=q\hat{\mathbf{\nu}}\) vanishes
\[\sigma^{1\text{st}}(\mathbf{q},\omega)\approx\pi e^{2}\hbar q\int \frac{d^{3}\mathbf{k}}{(2\pi\hbar)^{3}}g_{\mu\mu}v_{\nu}\delta(\hbar\omega-2E_ {\mathbf{k}})\] \[=0, \tag{36}\]
since the quantum metric is even but the velocity is odd in \(\mathbf{k}\). Thus the first nonvanishing contribution is second order in \(q\)
\[\sigma^{2\text{nd}}(\mathbf{q},\omega) \approx 2\pi e^{2}\frac{\hbar q^{2}}{2m}\int\frac{d^{3}\mathbf{k}}{(2 \pi\hbar)^{3}}\left[\frac{mv_{\nu}^{2}\Delta^{2}}{E_{\mathbf{k}}^{2}(E_{ \mathbf{k}}+\varepsilon_{\mathbf{k}})}\right] \tag{37}\] \[\times g_{\mu\mu}\delta(\hbar\omega-2E_{\mathbf{k}}),\]
given by the integration of quantum metric \(g_{\mu\mu}\) weighted by the dimensionless factor \(mv_{\nu}^{2}\Delta^{2}/E_{\mathbf{k}}^{2}(E_{\mathbf{k}}+\varepsilon_{ \mathbf{k}})\) and the energy conservation condition. This expression is conceptually different from the optical conductivity in semiconductors, where the quantum metric is exactly the matrix element for the excitation of electrons from the valence to the conduction band.[9; 10; 12] In contrast, the Bogoliubov transformation renders a more complicated form for the matrix element. Nevertheless, the matrix element in Eq. (37) still contains the contribution from the quantum metric.
#### iii.2.3 Paramagnetic current
For 3D \(s\)-wave SC, using Eqs. (24) and (32) yields the response coefficient for the paramagnetic current
\[K_{1}^{3D}(\mathbf{q})\approx-e^{2}q^{2}\int\frac{d^{3}\mathbf{k}}{(2\pi\hbar )^{3}}v_{\mu}^{2}\frac{g_{\nu\nu}}{E_{\mathbf{k}}}. \tag{38}\]
For the continuous model, we may define the \(\hat{\mathbf{\nu}}\) direction to be along the solid angles \((\theta,\phi)\) that are to be integrated out, and the velocity factor to be \(v_{\mu}=\cos(\theta-\alpha)k/m\), where \(\alpha\) is the angle between the polarization \(\hat{\mathbf{\mu}}\) and the spatial modulation \(\hat{\mathbf{\nu}}\parallel\hat{\mathbf{q}}\) directions of the vector field \(\mathbf{A}\). The integration in the spherical coordinate can then be carried out using Eqs. (33) and (34), yielding
\[K_{1}^{3D}(\mathbf{q})\approx-e^{2}\frac{4\pi^{6}f(\alpha)}{\sqrt {10}ma^{3}}\left(\frac{\xi}{a}\right)^{2}\left(\frac{q}{2\pi\hbar/a}\right)^{ 2}\left(\frac{k_{F}}{2\pi\hbar/a}\right)^{3},\] \[f(\alpha)\equiv\frac{4}{15}+\frac{2}{15}\cos^{2}\alpha. \tag{39}\]
The factor \(k_{F}/(2\pi\hbar/a)\) is again of the order of unity, \(q/(2\pi\hbar/a)\) is the spatial modulation of the vector field measured in units of the Fermi wavelength, and \(e^{2}/ma^{3}\) is the correct unit for \(K_{1}(\mathbf{q})\) in 3D. Note that the factor \(\left[k_{F}/(2\pi\hbar/a)\right]^{3}\) essentially represents the volume of the Fermi sea measured in units of the BZ, which also roughly represents the electron density. In most situations, the polarization and propagation of the vector field are perpendicular, \(\hat{\mathbf{\mu}}\perp\hat{\mathbf{\nu}}\), yielding \(\alpha=\pi/2\), and the angular factor is just \(f(\alpha)=4/15\). Equation (39) means that \(K_{1}(\mathbf{q})\) is essentially given by the square of the coherence length measured in units of the lattice constant, \((\xi/a)^{2}\sim 10^{6}\), which can reach a very large number. Moreover, it can be expressed in terms of the fidelity number \(\mathcal{G}_{\nu\nu}\) in Eq. (35)
Figure 1: (a) The Bogoliubov coefficients plotted as a two-component vector field \((v_{\mathbf{k}},-u_{\mathbf{k}})\) in the momentum space of a 3D \(s\)-wave SC. Without loss of generality the two-component vectors are chosen to be lying on the \(xy\)-plane. (b) The magnitude of quantum metric \(g_{xx}\) of the 3D \(s\)-wave SC in momentum space, which coincides with the twisting of the vector field \((v_{\mathbf{k}},-u_{\mathbf{k}})\) in (a).
as
\[K_{1}^{3D}(\mathbf{q}) = -\frac{288\pi^{2}}{\sqrt{10}}\frac{f(\alpha)e^{2}}{ma^{3}}\left( \frac{q}{2\pi\hbar/a}\right)^{2} \tag{40}\] \[\times\left(\frac{k_{F}}{2\pi\hbar/a}\right)^{-1}\left(\frac{ \mathcal{G}_{\nu\nu}^{3D}}{\hbar/a}\right)^{2},\]
manifesting a quadratic dependence on the fidelity number. Note that the quadratic dependence on \(q\) is well-known in the literature,[43] and our calculation gives the prefactor of this dependence a quantum geometrical interpretation.
#### iii.2.4 Linear screening
For the linear screening, putting the expansion in Eq. (32) into the dielectric response in Eq. (26) and approximating \(E_{\mathbf{k}}+E_{\mathbf{k}+\mathbf{q}}\approx 2E_{\mathbf{k}}\) yields
\[P_{0}^{3D}(\mathbf{q},0)\approx-q^{2}\int\frac{d^{3}\mathbf{k}}{(2\pi\hbar/a) ^{3}}\,\frac{g_{\nu\nu}}{E_{\mathbf{k}}}. \tag{41}\]
Using the approximations in Eqs. (33) and (34), and also using the fidelity number in Eq. (35), for the continuous model we obtain
\[P_{0}^{3D}(\mathbf{q},0)\approx-\frac{4\pi^{4}}{3\sqrt{10}\Delta }\left(\frac{\xi}{a}\right)\left(\frac{k_{F}}{2\pi\hbar/a}\right)^{2}\left( \frac{q}{2\pi\hbar/a}\right)^{2}\] \[=-\frac{8\pi^{2}}{\sqrt{5}\Delta}\left(\frac{q}{2\pi\hbar/a} \right)^{2}\frac{\mathcal{G}_{\nu\nu}^{3D}}{\hbar/a}, \tag{42}\]
which may be used to extract \(\mathcal{G}_{\nu\nu}\) provided that the lattice constant \(a\) and gap \(\Delta\) are known.
#### iii.2.5 Fidelity marker near an impurity
In Fig. 2 (a) and (b), we show the central region of a cubic lattice in which we perform a numerical calculation of the fidelity marker \(\mathcal{G}_{xx}(\mathbf{r})\) in the presence of a nonmagnetic impurity with local impurity potential \(U\). The marker is fairly constant for sites far away from the impurity, but it is locally suppressed on the impurity site. Increasing the impurity potential further suppresses the marker until it vanishes completely, as can be seen by comparing Fig. 2 (a) and (b). By calculating the spatial average of the marker, we find that the average marker is suppressed by the impurity, indicating that nonmagnetic impurities reduce the average distance between neighboring quasihole states in momentum space for a 3D \(s\)-wave SC.
### Quantum geometry and electromagnetic response of 2D \(s\)-wave SCs
#### iii.3.1 Profile of the quantum metric in momentum space
An SC may be considered 2D if its thickness is smaller than the in-plane coherence length.[49] We will consider strictly 2D systems with \(s\)-wave pairing, and assume that the Mermin-Wigner theorem[50] can be overcome by some other factors not included in the mean field theory, such as weak coupling between the planes. The Bogoliubov coefficients as a 2D vector field are shown in Fig. 3 (a). At a momentum \(\mathbf{k}\), the twisting of this vector field under a small displacement \(\delta k_{x}\) along the \(\hat{\mathbf{x}}\) direction gives the quantum metric \(g_{xx}(\mathbf{k})\) shown in Fig. 3 (b). We obtain a profile of \(g_{xx}(\mathbf{k})\) that is sharply peaked at the Fermi surface, in agreement with Eq. (8).
For the continuous model of 2D \(s\)-wave SCs, analytically carrying out the polar integration using the approximations in Eq. (33) and (34) yields the fidelity number
\[\mathcal{G}_{\mu\mu}^{2D}=\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}g_{\mu\mu} \approx\frac{\pi^{2}}{8\sqrt{2}}\left(\frac{\xi}{a}\right)\left(\frac{k_{F}}{ 2\pi\hbar/a}\right). \tag{43}\]
Once again the \(k_{F}/(2\pi\hbar/a)\) factor is of the order of unity, so we see that the fidelity number is a dimensionless
Figure 3: (a) The Bogoliubov coefficients as a 2D vector field for a 2D \(s\)-wave SC in the first quadrant of the BZ. The twisting of this vector field under a small displacement \(\delta k_{x}\) along the \(\hat{\mathbf{x}}\) direction is essentially the quantum metric \(g_{xx}(\mathbf{k})\) shown in (b), which peaks at the Fermi surface (dotted line).
Figure 2: The fidelity marker in a 3D \(s\)-wave SC around a nonmagnetic impurity with (a) a weak impurity potential \(U=2\), where the largest circle represents magnitude \(0.332\), and (b) a strong impurity potential \(U=1000\), where the largest circle corresponds to \(0.339\).
number determined by the coherence length measured in units of the lattice constant, \(\xi/a\), just like in the 3D case. As a result, any property that is proportional to the coherence length is a direct measurement of the fidelity number. Note that various 2D SCs with evidence for \(s\)-wave pairing have been discovered,[51; 52; 53; 54; 55; 56] although not much information about their coherence length has been extracted. Nevertheless, within the BCS framework and estimating from their low critical temperatures, the coherence length of these materials should also be of the order of \(\mu\)m, yielding a fidelity number \(\sim 10^{3}\). This number is much larger than that in 2D TIs, which is logarithmic in the band gap, \(\sim\ln|M|a/\hbar v\), and hence of the order of unity, indicating a much more distorted BZ manifold in 2D \(s\)-wave SCs.
#### iii.3.2 Infrared absorption
The infrared absorption in 2D is precisely that in 3D given by Eq. (37) with a reduction of the dimension of integration \(\int d^{3}{\bf k}/(2\pi\hbar)^{3}\rightarrow\int d^{2}{\bf k}/(2\pi\hbar)^{2}\). As a result, the quantum metric still enters the integrand of the momentum-integration.
#### iii.3.3 Paramagnetic current
The paramagnetic current in 2D is given by that in Eq. (38) with a reduction in the integration \(\int d^{3}{\bf k}/(2\pi\hbar)^{3}\rightarrow\int d^{2}{\bf k}/(2\pi\hbar)^{2}\). The analytical result for the continuous model is
\[K_{1}^{2D}({\bf q})=-\frac{2\pi^{5}e^{2}\overline{f}(\alpha)}{ \sqrt{10}ma^{2}}\left(\frac{q}{2\pi\hbar/a}\right)^{2}\left(\frac{\xi}{a} \right)^{2}\left(\frac{k_{F}}{2\pi\hbar/a}\right)^{2}\] \[=-\frac{256\pi e^{2}\overline{f}(\alpha)}{\sqrt{10}ma^{2}}\left( \frac{q}{2\pi\hbar/a}\right)^{2}\left(\mathcal{G}_{\nu\nu}^{2D}\right)^{2},\] \[\overline{f}(\alpha)\equiv\frac{\pi}{4}+\frac{\pi}{2}\cos^{2}\alpha, \tag{44}\]
which is quadratic in the fidelity number.
#### iii.3.4 Linear screening
The linear screening in 2D can be calculated from replacing \(\int d^{3}{\bf k}/(2\pi\hbar/a)^{3}\rightarrow\int d^{2}{\bf k}/(2\pi\hbar/a) ^{2}\) in Eq. (41). Applying the approximations in Eqs. (33) and (34) yields the result for the continuous model
\[P_{0}^{2D}({\bf q},0)\approx-\frac{\pi^{4}}{\sqrt{10}\Delta} \left(\frac{\xi}{a}\right)\left(\frac{k_{F}}{2\pi\hbar/a}\right)\left(\frac{q} {2\pi\hbar/a}\right)^{2}\] \[=-\frac{8\pi^{2}}{\sqrt{5}\Delta}\left(\frac{q}{2\pi\hbar/a} \right)^{2}\mathcal{G}_{\nu\nu}^{2D} \tag{45}\]
which again implies that \(\mathcal{G}_{\nu\nu}^{2D}\) can be measured by detecting \(P_{0}^{2D}({\bf q},0)\) in 2D in the \(q\to 0\) limit.
#### iii.3.5 Fidelity marker near an impurity
The fidelity marker around a nonmagnetic impurity in a square lattice of a 2D \(s\)-wave SC is shown in Fig. 4 (a) and (b). We find a behavior similar to that of the 3D \(s\)-wave case shown in Sec. III.2.5, namely the marker is locally suppressed on the impurity site by the impurity potential, causing the average marker to be reduced. This implies that nonmagnetic impurities also reduce the average distance between neighboring quasihole states in momentum space of a 2D \(s\)-wave SC.
## IV D-wave superconductors
### Mean field theory for \(d\)-wave SCs
Finally, we investigate the quantum geometrical properties of a \(d\)-wave SC within the context of mean field theory, which may be particularly relevant to the overdoped regime of the phase diagram.[57; 58] The energy dispersion
Figure 4: The fidelity marker in a 2D \(s\)-wave SC around a nonmagnetic impurity with (a) impurity potential \(U=2\), where the largest circle represents magnitude \(0.433\), and (b) \(U=1000\), where the largest circle corresponds to \(0.474\).
Figure 5: (a) The vector field of \((v_{\bf k},-u_{\bf k})\) in the momentum space of a \(d\)-wave SC, and (b) the quantum metric \(g_{xx}({\bf k})\) that corresponds to the twisting of this vector field.
and the gap are parametrized by
\[\varepsilon_{\mathbf{k}}=-2t(\cos k_{x}+\cos k_{y})+4t^{\prime}\cos k _{x}\cos k_{y}-\mu=d_{3},\] \[\Delta_{\mathbf{k}}=2\Delta_{0}(\cos k_{x}-\cos k_{y})=d_{1}, \tag{46}\]
and \(E_{\mathbf{k}}=\sqrt{\varepsilon_{\mathbf{k}}^{2}+\Delta_{\mathbf{k}}^{2}}\). For concreteness, we use (in units of eV) \(t=0.15\), \(t^{\prime}=0.04\), \(\mu=-0.13\), and \(\Delta_{0}=0.1\) that are appropriate for optimally doped to slightly overdoped [59] Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8+x}\). The quantum metric calculated from Eq. (7) takes the vielbein form
\[g_{\mu\nu}=e_{\mu}e_{\nu},\] \[e_{x}=\frac{\Delta_{0}}{E_{\mathbf{k}}^{2}}\sin k_{x}\left(4t \cos k_{y}-4t^{\prime}\cos^{2}k_{y}+\mu\right),\] \[e_{y}=\frac{\Delta_{0}}{E_{\mathbf{k}}^{2}}\sin k_{y}\left(-4t \cos k_{x}+4t^{\prime}\cos^{2}k_{x}-\mu\right). \tag{47}\]
In Fig. 5 (a), we present the unit vector field \(\mathbf{w}_{\mathbf{k}}=(v_{\mathbf{k}},-u_{\mathbf{k}})\) of the quasihole state, which exhibits a vortex-like feature near the nodal point \(\mathbf{k}_{0}\) where the quasiparticle energy vanishes, \(E_{\mathbf{k}_{0}}=0\), meaning that the quasihole state as a unit vector in the Hilbert space rotates very dramatically near the nodal point. As a result, the quantum metric shown in Fig. 5 (b) also displays a very singular behavior, with a pair of maxima around the nodal point.
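The singular profile is easy to reproduce from the vielbein of Eq. (47). The sketch below (an added illustration, not part of the original text) evaluates \(g_{xx}(\mathbf{k})=e_{x}^{2}\) on a grid using the parameters quoted above and locates its maximum, which indeed sits next to a nodal point where both \(\varepsilon_{\mathbf{k}}\) and \(\Delta_{\mathbf{k}}\) are small:

```python
import numpy as np

t, tp, mu, D0 = 0.15, 0.04, -0.13, 0.1     # parameters quoted in the text (in eV)

k = np.linspace(-np.pi, np.pi, 401)
kx, ky = np.meshgrid(k, k, indexing="ij")
eps = -2 * t * (np.cos(kx) + np.cos(ky)) + 4 * tp * np.cos(kx) * np.cos(ky) - mu   # Eq. (46)
gap = 2 * D0 * (np.cos(kx) - np.cos(ky))
E2 = eps**2 + gap**2

e_x = (D0 / E2) * np.sin(kx) * (4 * t * np.cos(ky) - 4 * tp * np.cos(ky)**2 + mu)  # Eq. (47)
g_xx = e_x**2

i, j = np.unravel_index(np.argmax(g_xx), g_xx.shape)
print("g_xx is largest near k =", (round(float(kx[i, j]), 3), round(float(ky[i, j]), 3)))
print("there: eps =", round(float(eps[i, j]), 4), ", gap =", round(float(gap[i, j]), 4))
```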
To get a clear physical picture of the peculiar momentum profile of the metric, an analytical expression can be given for the pedagogical case where we set \(t^{\prime}=\mu=0\), such that the Fermi surface has a diamond shape and the nodal point is located at \(\mathbf{k}_{0}=(\pi/2,\pi/2)\). In this case, the corresponding bare quantum metric \(\overline{g}_{\mu\nu}\) expanded around the nodal point \(\mathbf{k}=\mathbf{k}_{0}+\delta\mathbf{k}\) takes the form
\[\overline{g}_{xx} \approx\frac{\Delta_{0}^{2}\,t^{2}\delta k_{y}^{2}}{\left[(t^{2}+\Delta_{0}^{2})(\delta k_{x}^{2}+\delta k_{y}^{2})+2(t^{2}-\Delta_{0}^{2})\delta k_{x}\delta k_{y}\right]^{2}},\] \[\overline{g}_{yy} \approx\frac{\Delta_{0}^{2}\,t^{2}\delta k_{x}^{2}}{\left[(t^{2}+\Delta_{0}^{2})(\delta k_{x}^{2}+\delta k_{y}^{2})+2(t^{2}-\Delta_{0}^{2})\delta k_{x}\delta k_{y}\right]^{2}},\] \[\overline{g}_{xy} \approx\frac{-\Delta_{0}^{2}\,t^{2}\,\delta k_{x}\delta k_{y}}{\left[(t^{2}+\Delta_{0}^{2})(\delta k_{x}^{2}+\delta k_{y}^{2})+2(t^{2}-\Delta_{0}^{2})\delta k_{x}\delta k_{y}\right]^{2}}, \tag{48}\]
which matches fairly well with the numerical results. We see that approaching the nodal point \(\{\delta k_{x},\delta k_{y}\}\to 0\), the bare quantum metric diverges. In addition, by changing to polar coordinates \((\delta k_{x},\delta k_{y})=(k\cos\theta,k\sin\theta)\), the expansion in Eq. (48) becomes
\[\overline{g}_{xx} \approx\frac{1}{k^{2}}\times\frac{\Delta_{0}^{2}t^{2}\sin^{2} \theta}{\left[(t^{2}+\Delta_{0}^{2})+(t^{2}-\Delta_{0}^{2})\sin 2\theta \right]^{2}},\] \[\overline{g}_{yy} \approx\frac{1}{k^{2}}\times\frac{\Delta_{0}^{2}t^{2}\cos^{2} \theta}{\left[(t^{2}+\Delta_{0}^{2})+(t^{2}-\Delta_{0}^{2})\sin 2\theta \right]^{2}},\] \[\overline{g}_{xy} \approx\frac{1}{k^{2}}\times\frac{-\Delta_{0}^{2}t^{2}\sin\theta \cos\theta}{\left[(t^{2}+\Delta_{0}^{2})+(t^{2}-\Delta_{0}^{2})\sin 2\theta \right]^{2}}, \tag{49}\]
which after a polar integration \(\int k\,dk\,d\theta\,\overline{g}_{\mu\nu}\) diverges logarithmically, indicating that the fidelity number in Eq. (9) diverges for \(d\)-wave SCs, and therefore it may not be directly related to the electromagnetic responses we have discussed for \(s\)-wave SCs. This also implies that the average distance between Bloch states in the BZ of \(d\)-wave SCs diverges, owing to the very singular behavior near the nodal points.
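The logarithmic divergence can also be seen by brute force. In the sketch below (added for illustration; the grid sizes and cutoffs are arbitrary), the bare metric of Eq. (48) is integrated over an annulus \(\epsilon<|\delta\mathbf{k}|<\Lambda\) around the node, and the result grows by the same amount each time the infrared cutoff \(\epsilon\) is reduced by a decade:

```python
import numpy as np

t, D0 = 0.15, 0.1                    # simplified d-wave model with t' = mu = 0, Eq. (48)

def annulus_integral(eps, Lam=0.5, nu=600, nth=600):
    """Integrate g_xx of Eq. (48) over eps < |dk| < Lam around the nodal point."""
    u = np.linspace(np.log(eps), np.log(Lam), nu)        # logarithmic radial grid, k = exp(u)
    th = np.linspace(0.0, 2 * np.pi, nth, endpoint=False)
    U, TH = np.meshgrid(u, th, indexing="ij")
    K = np.exp(U)
    dkx, dky = K * np.cos(TH), K * np.sin(TH)
    denom = ((t**2 + D0**2) * (dkx**2 + dky**2) + 2 * (t**2 - D0**2) * dkx * dky)**2
    g = D0**2 * t**2 * dky**2 / denom
    du, dth = u[1] - u[0], th[1] - th[0]
    return np.sum(g * K**2) * du * dth                   # measure k dk dtheta = k^2 du dtheta

for eps in (1e-2, 1e-3, 1e-4):
    print(f"IR cutoff {eps:.0e}:  integral = {annulus_integral(eps):.3f}")
# successive lines differ by the same amount (the angular factor times ln 10): a log divergence
```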
### Topological charge and metric-curvature correspondence in \(d\)-wave SCs
The issue of topological charge in 2D \(d\)-wave SCs has been discussed within the context of nodal SCs.[60] In this section, we elaborate that the nodal points of a \(d\)-wave SC carry a nontrivial winding number of the \(\mathbf{n}\)-field defined in Eq. (2) that can be related to the quantum metric. Our observation is that the non-Abelian Berry connection between quasihole and quasiparticle states of a singlet SC can generally be written as
\[\langle m|\partial_{\mu}n\rangle=-\text{sgn}(\Delta_{\mathbf{k}}) \langle n|\hat{C}\partial_{\mu}|n\rangle\] \[=-\text{sgn}(d_{1})\frac{d_{1}\partial_{\mu}d_{3}-d_{3}\partial_{ \mu}d_{1}}{2d^{2}}\] \[=-\frac{\text{sgn}(\Delta_{\mathbf{k}})}{2}\left(n_{1}\partial_{ \mu}n_{3}-n_{3}\partial_{\mu}n_{1}\right). \tag{50}\]
In the first line of this equation we have used the operator \(\hat{C}=-i\sigma_{2}K\) that implements the particle-hole (PH) symmetry \(\hat{C}H(\mathbf{k})\hat{C}^{-1}=-H(-\mathbf{k})\), which indicates that the non-Abelian Berry connection can equally be viewed as a kind of charge-conjugated Berry connection dressed by the PH operator \(\hat{C}\). In a 2D \(d\)-wave SC, one can introduce a winding number that counts how many times the \(\mathbf{n}\)-vector winds along a circle enclosing a nodal point. The integration of the non-Abelian Berry connection weighted by the sign of the gap along such a circle is equivalent to this winding number \(\mathcal{C}\) (not to be confused with the PH operator \(\hat{C}\))
\[\mathcal{C}=\oint\frac{d\phi}{2\pi}(n_{1}\partial_{\phi}n_{3}-n_{ 3}\partial_{\phi}n_{1})\] \[=-2\oint\frac{d\phi}{2\pi}\text{sgn}(\Delta_{\mathbf{k}}) \langle m|\partial_{\phi}n\rangle=2\oint\frac{d\phi}{2\pi}\langle n|\hat{C} \partial_{\phi}|n\rangle\] \[\equiv\oint\frac{d\phi}{2\pi}J_{\mathbf{n}}, \tag{51}\]
where \(\phi\) is the polar angle along the circle. As shown in Fig. 6, by plotting the \(\mathbf{n}\)-vector in the momentum space, we see clearly that each nodal point corresponds to a nonzero winding number (or topological charge) \(\mathcal{C}=\pm 1\). Furthermore, the integrand of this topological charge may be written as a determinant
\[J_{\mathbf{n}}=\det\left(\mathbf{n},\partial_{\phi}\mathbf{n}\right)=\left| \begin{array}{cc}n_{1}&\partial_{\phi}n_{1}\\ n_{3}&\partial_{\phi}n_{3}\end{array}\right|\equiv\det E_{\mathbf{n}}. \tag{52}\]
As a result, the square of the integrand is equal to the azimuthal quantum metric
\[|J_{\mathbf{n}}|^{2}=\det E_{\mathbf{n}}^{T}E_{\mathbf{n}}=\det\left(\begin{array}{cc}\mathbf{n}\cdot\mathbf{n}&\mathbf{n}\cdot\partial_{\phi}\mathbf{n}\\ \partial_{\phi}\mathbf{n}\cdot\mathbf{n}&\partial_{\phi}\mathbf{n}\cdot\partial_{\phi}\mathbf{n}\end{array}\right)\] \[=\partial_{\phi}\mathbf{n}\cdot\partial_{\phi}\mathbf{n}=4\,g_{\phi\phi}, \tag{53}\]
after using \(\mathbf{n}\cdot\mathbf{n}=1\) (and hence \(\mathbf{n}\cdot\partial_{\phi}\mathbf{n}=0\)) and Eq. (7). This relation between the integrand of the topological charge and the quantum metric has been referred to as the metric-curvature correspondence, which is found to hold in any TI and topological superconductor described by a Dirac model, as well as in 2D Dirac semimetals like graphene. In this sense, graphene and \(d\)-wave SCs actually have very similar topological and quantum geometrical properties.
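Both the quantized winding and the metric-curvature correspondence can be verified numerically on a small circle around a node. In the sketch below (added for illustration; the circle radius and grid size are arbitrary), the nodal point on the Brillouin-zone diagonal is found analytically from Eq. (46), the winding number of Eq. (51) is evaluated by discretizing the loop integral, and \(|J_{\mathbf{n}}|^{2}\) is compared with \(\partial_{\phi}\mathbf{n}\cdot\partial_{\phi}\mathbf{n}=4g_{\phi\phi}\):

```python
import numpy as np

t, tp, mu, D0 = 0.15, 0.04, -0.13, 0.1      # d-wave parameters quoted in the text (in eV)

def n_vec(kx, ky):
    d1 = 2 * D0 * (np.cos(kx) - np.cos(ky))                                   # gap, Eq. (46)
    d3 = -2 * t * (np.cos(kx) + np.cos(ky)) + 4 * tp * np.cos(kx) * np.cos(ky) - mu
    d = np.sqrt(d1**2 + d3**2)
    return d1 / d, d3 / d

# Nodal point on the BZ diagonal: the gap vanishes there, and eps = 0 fixes cos(k0)
c = (4 * t - np.sqrt(16 * t**2 + 16 * tp * mu)) / (8 * tp)
k0 = np.arccos(c)

# Winding number, Eq. (51), along a small circle of radius r around the node
r, M = 0.05, 4001
phi = np.linspace(0.0, 2 * np.pi, M)
n1, n3 = n_vec(k0 + r * np.cos(phi), k0 + r * np.sin(phi))
J = n1 * np.gradient(n3, phi) - n3 * np.gradient(n1, phi)
C = np.sum(J[:-1]) * (phi[1] - phi[0]) / (2 * np.pi)
print("winding number C =", round(float(C), 3))          # expected +1 or -1

# Metric-curvature correspondence, Eq. (53): |J_n|^2 = 4 g_phiphi
diff = J**2 - (np.gradient(n1, phi)**2 + np.gradient(n3, phi)**2)
print("max |J^2 - 4 g_phiphi| on the circle:", float(np.max(np.abs(diff[2:-2]))))
```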
## V Conclusions
In summary, we elaborate that the filled quasihole state \(|n\rangle\) of singlet SCs possesses nontrivial quantum geometrical properties. The quantum metric defined from the overlap of quasihole states at momenta \(\mathbf{k}\) and \(\mathbf{k}+\delta\mathbf{k}\) is nonzero, and can be simply understood as the twisting of the quasihole state as a unit vector in the Hilbert space, which can be visualized from the Bogoliubov coefficients \(\mathbf{w}_{\mathbf{k}}=(v_{\mathbf{k}},-u_{\mathbf{k}})\). In addition, the momentum integration of the quantum metric yields a nonzero fidelity number, which is a measure of the average distance between neighboring quasihole states in the BZ, and equivalently the spread of the quasihole Wannier functions. For \(s\)-wave SCs, the fidelity number is essentially the coherence length measured in terms of the lattice constant and then multiplied by the correct unit. In other words, the coherence length is actually a measure of the quantum geometry in \(s\)-wave SCs. We further show that the quantum metric and fidelity number enter various electromagnetic responses such as the infrared absorption, paramagnetic current, and dielectric function, indicating that these responses are directly related to the quantum geometry. The fidelity number can be further defined on lattice sites as a fidelity marker, and we find that nonmagnetic impurities locally suppress the marker, signifying the influence of disorder on the quantum geometrical properties of the \(s\)-wave SC. In contrast, for \(d\)-wave SCs, we find that the quantum metric exhibits a very singular profile near the nodal points, rendering a divergent fidelity number. Moreover, the non-Abelian Berry connection that integrates to a topological charge of the nodal points is actually equivalent to the azimuthal quantum metric, satisfying a metric-curvature correspondence. Our theory thus clarifies the quantum geometrical properties of singlet SCs, the possibility of measuring them experimentally, as well as how disorder may influence these properties. Many related issues, such as whether the same aspects also apply to triplet SCs of various pairing symmetries, remain to be explored.
###### Acknowledgements.
W. C. acknowledges the financial support from the productivity in research fellowship from CNPq, and D. P. is supported by the Mestrado Nota 10 fellowship from FAPERJ.
|
2305.01247 | Characterising transformations between quantum objects, 'completeness'
of quantum properties, and transformations without a fixed causal order | Many fundamental and key objects in quantum mechanics are linear mappings
between particular affine/linear spaces. This structure includes basic quantum
elements such as states, measurements, channels, instruments, non-signalling
channels and channels with memory, and also higher-order operations such as
superchannels, quantum combs, n-time processes, testers, and process matrices
which may not respect a definite causal order. Deducing and characterising
their structural properties in terms of linear and semidefinite constraints is
not only of foundational relevance, but plays an important role in enabling the
numerical optimisation over sets of quantum objects and allowing simpler
connections between different concepts and objects. Here, we provide a general
framework to deduce these properties in a direct and easy to use way. While
primarily guided by practical quantum mechanical considerations, we also extend
our analysis to mappings between general linear/affine spaces and derive their
properties, opening the possibility for analysing sets which are not explicitly
forbidden by quantum theory, but are still not much explored. Together, these
results yield versatile and readily applicable tools for all tasks that require
the characterisation of linear transformations, in quantum mechanics and
beyond. As an application of our methods, we discuss how the existence of
indefinite causality naturally emerges in higher-order quantum transformations
and provide a simple strategy for the characterisation of mappings that have to
preserve properties in a 'complete' sense, i.e., when acting non-trivially only
on parts of an input space. | Simon Milz, Marco Túlio Quintino | 2023-05-02T08:21:38Z | http://arxiv.org/abs/2305.01247v2 | # Transformations between arbitrary (quantum) objects and the emergence of indefinite causality
###### Abstract
Many fundamental and key objects in quantum mechanics are linear mappings between particular affine/linear spaces. This structure includes basic quantum elements such as states, measurements, channels, instruments, non-signalling channels and channels with memory, and also higher-order operations such as superchannels, quantum combs, n-time processes, testers, and process matrices which may not respect a definite causal order. Deducing and characterising their structural properties in terms of linear and semidefinite constraints is not only of foundational relevance, but plays an important role in enabling the numerical optimization over sets of quantum objects and allowing simpler connections between different concepts and objects. Here, we provide a general framework to deduce these properties in a direct and easy to use way. Additionally, while primarily guided by practical quantum mechanical considerations, we extend our analysis to mappings between _general_ linear/affine spaces and derive their properties, opening the possibility for analysing sets which are not explicitly forbidden by quantum theory, but are still not much explored. Together, these results yield versatile and readily applicable tools for all tasks that require the characterization of linear transformations, in quantum mechanics and beyond. As an application of our methods, we discuss the emergence of indefinite causality in higher-order quantum transformation.
###### Contents
* 1 Introduction
* 2 Warming up: quantum states and quantum channels
* 3 Linear transformations between quantum objects
* 3.1 Sets of quantum objects
* 3.2 Transformations between quantum objects
* 3.3 Map characterisation of quantum transformations
* 3.4 Choi characterisation of particular quantum transformations
* 4 Applications to particular quantum transformations and the emergence of indefinite causal order
* 5 Probabilistic quantum transformations
Measuring quantum objects: dual affine sets, POVMs, and testers * 6.1 Quantum measurement and its relationship with probabilistic transformations * 6.2 Non-signalling channels and multipartite process matrices
* 7 Link product and key concepts
* 7.1 Link Product
* 7.2 Linear operators in the link product
* 7.3 Proving statements using the link product
* 8 General approach
* 9 Applications for numerical computation and code availability
* 10 Discussions
* A Thm. 1 for \(\gamma_{i}=0\)
* B Thms. 2 and 5 for \(\gamma_{i}=0\)
* C Properties of linear maps
## 1 Introduction
Many fundamental objects in quantum mechanics can, at their most basic level, be understood as (linear) transformations of other basic objects. For example, measurements are transformations of states to probabilities, while quantum channels are transformations of quantum states to quantum states. Unsurprisingly, this simple understanding of quantum objects as transformations can straightforwardly be extended, leading to a whole host of _higher order_ transformations. To name but a few, transformations of channels to channels (yielding so-called superchannels [1]), of sequences of quantum channels to quantum channels (yielding so-called quantum combs [2] and quantum strategies [3]), of sequences of channels to states (yielding so-called multi-time processes [4]), and of collections of channels to probabilities (yielding testers [5, 6, 7] and process matrices [8]) have been devised in recent years, each with its own respective physical motivation. On the other hand, such higher-order transformations can equivalently be motivated as the correct descriptor in many physical situations, where states, measurements and channels alone would prove to be insufficient practical tools. Consequently, they have, amongst others, found active use in the fields of open quantum system dynamics [4], quantum circuit architecture [2], the investigation of channels with memory [9], as well as the study of causal indefiniteness [8] and the dynamics of causal order [10].
Independent of the respective concrete motivation, in any of these investigations it is, as a first step, always necessary to deduce the properties of the considered transformations. For example, for the case of process matrices, one is interested in the structure of mappings that map pairs of independent quantum channels (or, equivalently, any two-party non-signalling channel) to unit probability, in order to analyse the set of processes that abide by causality locally, but not necessarily globally [8]. Having these properties at hand then not only allows one to deduce that this latter set fundamentally differs from the set of causally ordered processes, but also enables numerical optimization over causally indefinite processes. On the more axiomatic side, recent works have discussed the properties of the hierarchy of transformations that emerges from starting at quantum states and "going up the ladder" of transformations, i.e., transformations of states, transformations of transformations of states, etc. [11, 12, 13, 14].
Here, we stay agnostic with respect to the origin of the respective transformations and provide a general framework to answer the question: What are the properties of linear transformations between affine/linear spaces? For example, this question directly captures the case of quantum channels - completely positive mappings of the affine space of matrices with unit trace onto itself - but also _all_ conceivable transformations between quantum objects alluded to before. Concretely, we phrase the properties of the respective spaces as well as the transformations between them in terms of linear projectors and employ their representation as matrices via the Choi isomorphism
for their explicit characterization. The "projector approach" and the methods presented here may be viewed as deeper analysis and a generalisation of the ideas first presented in Ref. [15], which were developed to present a basis-independent characterisation for process matrices and to study quantum processes which do not respect a definite causal order. A similar approach to the characterization of higher order maps has, for example, been taken in [10, 13, 14, 16]. Moreover, in-depth investigations into the structure and type theory of the hierarchy of conceivable higher order quantum maps can be found in Refs. [11, 12, 13, 14, 17], where, in particular Ref. [14] not only employs a similar approach to ours, but also provides a detailed analysis of the logical and set-theoretic structure of the projectors that define the properties of the transformations we analyse.
This current work is more modest in scope; leveraging the link product [5] and the linearity of the involved transformations, we provide a straight forward, systematic way to derive the properties of arbitrary transformations between sets that naturally emerge in quantum mechanics. In turn, this allows us to re-derive the properties of many different relevant quantum transformations appearing in different parts of the literature in a unified and direct way. We further demonstrate the effectiveness and ease of this framework by explicitly discussing affine dual sets as well as probabilistic quantum operations and the signalling structures of quantum maps. As an additional application of our methods, we analyse the emergence of indefinite causal order in higher-order quantum operations.
Owing to the simplicity of our approach, we are also able to drop the assumptions generally fulfilled in quantum mechanics - like, for example, the self-adjointness of the involved projectors, or the membership of the identity matrix to the input and output space - and derive the properties of transformation between general spaces, a result that might be of interest in its own right. Together, our results provide a unified framework to discuss and characterize _all_ (quantum) objects and linear transformations thereof, thus offering a versatile tool for a wide array of problems that naturally occur in the field of quantum mechanics in beyond.
## 2 Warming up: quantum states and quantum channels
The fundamental set of objects that quantum mechanics is concerned with are quantum states \(\rho\in\mathcal{L}(\mathcal{H})\), unit trace (i.e., \(\mathrm{tr}[\rho]=1\)), positive semidefinite (i.e., \(\rho\geq 0\)) linear operators acting on a Hilbert space \(\mathcal{H}\). Here, and throughout, we consider \(\mathcal{H}\) to be finite dimensional, such that \(\mathcal{H}\cong\mathbb{C}^{d}\) for some \(d\in\mathbb{N}\).
Transformations of quantum states are then described by _linear maps_\(\widetilde{T}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}(\mathcal{H}_{ \mathsf{o}})\), where we adopt the convention of referring to the input space as \(\mathcal{H}_{\mathsf{i}}\), the output space as \(\mathcal{H}_{\mathsf{o}}\) and maps between operators with a tilde. For better bookkeeping, we always explicitly distinguish between input and output spaces, even if \(\mathcal{H}_{\mathsf{i}}\cong\mathcal{H}_{\mathsf{o}}\).1 A transformation \(\widetilde{T}\) is a valid _Quantum channel_, i.e., it represents a deterministic transformation between quantum states that can be physically implemented, if it is a completely positive (CP)2 and trace-preserving3 (TP) linear map.
Footnote 1: The only exception to this rule will be the projectors \(\widetilde{P}\) that we introduce below.
Footnote 2: A linear map \(\widetilde{T}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) is CP when \((\widetilde{T}\otimes\widetilde{\mathds{1}}_{\mathsf{aux}})[\rho]\geq 0\) for every positive semidefinite operator \(\rho\in\mathcal{L}(\mathcal{H}_{\mathsf{i}}\otimes\mathcal{H}_{\mathsf{aux}})\) and every auxiliary space \(\mathcal{H}_{\mathsf{aux}}\), where \(\widetilde{\mathds{1}}_{\mathsf{aux}}\) denotes the identity map on \(\mathcal{L}(\mathcal{H}_{\mathsf{aux}})\).
Footnote 3: A linear map \(\widetilde{T}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) is TP when \(\mathrm{tr}[\widetilde{T}[\rho]]=\mathrm{tr}[\rho]\) for every linear operator \(\rho\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\).
While the mathematical characterization of matrices that represent quantum states \(\rho\) is clear, it is, a priori, unclear what the corresponding properties of a representation of a quantum channel - a transformation between sets of quantum states - are. Here, we aim to provide a simple way of characterizing mappings between objects that routinely occur in quantum mechanics. To illustrate the general concept, we first provide an answer for the well-known case of CPTP maps. To this end, we exploit the fact that linear maps admit a convenient representation as linear operators via the _Choi-Jamiolkowski isomorphism_ (CJI) [18, 19, 20]: Let \(\{|j\rangle\}_{j}\) be the canonical computational basis for \(\mathcal{H}_{\mathsf{i}}\). The _Choi operator/matrix_ \(T\in\mathcal{L}(\mathcal{H}_{\mathsf{i}}\otimes\mathcal{H}_{\mathsf{o}})\) of a linear map \(\widetilde{T}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) is
then defined as
\[T:=\sum_{jk}|j\rangle\!\langle k|\otimes\widetilde{T}\big{[}|j\rangle\!\langle k| \big{]}. \tag{1}\]
Direct calculation shows that the action of \(\widetilde{T}\) can be written in terms of its Choi matrix \(T\) as
\[\widetilde{T}[\rho]=\operatorname{tr}_{\mathtt{i}}[(\rho^{\tau}\otimes \mathds{1}_{\circ})\;T] \tag{2}\]
where \(\rho^{\tau}\) is the transpose of \(\rho\) in the computational basis and \(\operatorname{tr}_{\mathtt{i}}\) is the partial trace over \(\mathcal{H}_{\mathtt{i}}\). To characterize the properties of \(T\), we note that a linear map \(\widetilde{T}:\mathcal{L}(\mathcal{H}_{\mathtt{i}})\to\mathcal{L}(\mathcal{H }_{\mathtt{o}})\) is TP if and only if, \(\operatorname{tr}_{\mathtt{o}}[T]=\mathds{1}_{\mathtt{i}}\) and CP if and only if \(T\geq 0\). Hence, CPTP maps (quantum channels) \(\widetilde{C}\) are described by a Choi matrix \(C\in\mathcal{L}(\mathcal{H}_{\mathtt{i}}\otimes\mathcal{H}_{\mathtt{o}})\) that satisfies
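For readers who prefer to see Eqs. (1) and (2) in action, the following minimal sketch (added here for illustration; the amplitude-damping channel and its parameter are arbitrary choices, not taken from the text) constructs a Choi matrix and checks that Eq. (2) reproduces the direct action of the map:

```python
import numpy as np

d_i = d_o = 2
gamma = 0.3                                    # amplitude-damping strength (arbitrary example)
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

def T_map(rho):
    """Direct action of the channel in Kraus form."""
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

# Choi matrix, Eq. (1): T = sum_{jk} |j><k| (x) T[|j><k|]
T = np.zeros((d_i * d_o, d_i * d_o), dtype=complex)
for j in range(d_i):
    for k in range(d_i):
        E_jk = np.zeros((d_i, d_i)); E_jk[j, k] = 1.0
        T += np.kron(E_jk, T_map(E_jk))

# Action recovered from the Choi matrix, Eq. (2): T[rho] = tr_i[(rho^T (x) 1_o) T]
rho = np.array([[0.6, 0.2 - 0.1j], [0.2 + 0.1j, 0.4]])
M = np.kron(rho.T, np.eye(d_o)) @ T
recovered = M.reshape(d_i, d_o, d_i, d_o).trace(axis1=0, axis2=2)   # partial trace over the input
print("Eq. (2) reproduces the direct action:", np.allclose(recovered, T_map(rho)))
```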
\[C\geq 0\quad\text{and}\quad\operatorname{tr}_{\mathtt{o}}[C]=\mathds{1}_{ \mathtt{i}}. \tag{3}\]
In anticipation of later considerations, we can phrase this equivalently as
\[C\geq 0 \tag{4}\] \[{}_{\mathtt{o}}C= {}_{\mathtt{i}\mathtt{o}}C\] (5) \[\operatorname{tr}[C]= d_{\mathtt{i}}, \tag{6}\]
where \({}_{\mathtt{x}}C:=\operatorname{tr}_{\mathtt{x}}[C]\otimes\frac{\mathds{1}_{\mathtt{x}}}{d_{\mathtt{x}}}\) is the trace-and-replace map and \(d_{\mathtt{x}}\) is the dimension of \(\mathcal{H}_{\mathtt{x}}\). Notice that, for consistency, one should keep track of the ordering of the operators; for instance, if \(C\in\mathcal{L}(\mathcal{H}_{\mathtt{i}}\otimes\mathcal{H}_{\mathtt{o}})\), then \({}_{\mathtt{i}}C=\frac{\mathds{1}_{\mathtt{i}}}{d_{\mathtt{i}}}\otimes\operatorname{tr}_{\mathtt{i}}[C]\) and \({}_{\mathtt{o}}C=\operatorname{tr}_{\mathtt{o}}[C]\otimes\frac{\mathds{1}_{\mathtt{o}}}{d_{\mathtt{o}}}\). Whenever there is a risk of ambiguity, or we desire to emphasise some property, we will use subscripts rather than explicit ordering to indicate what space an object is defined/acts on.
The characterisation of quantum channels given by Eqs. (4)-(6) has an interesting structure, which will be the starting point to analyse the structure of more general transformations throughout this work. Eq. (4) is a positivity constraint, Eq. (5) is a linear constraint, and Eq. (6) is an affine constraint. Consequently, the set of linear operators satisfying Eq. (5) forms a linear subspace of \(\mathcal{L}(\mathcal{H}_{\mathtt{i}}\otimes\mathcal{H}_{\mathtt{o}})\), and can thus be described by a projective map \(\widetilde{P}:\mathcal{L}(\mathcal{H}_{\mathtt{i}}\otimes\mathcal{H}_{\mathtt{o}})\to\mathcal{L}(\mathcal{H}_{\mathtt{i}}\otimes\mathcal{H}_{\mathtt{o}})\), where
\[{}_{\mathtt{o}}C={}_{\mathtt{i}\mathtt{o}}C\iff C=\widetilde{P}[C]\quad\text{ with}\quad\widetilde{P}[C]:=C-{}_{\mathtt{o}}C+{}_{\mathtt{i}\mathtt{o}}C\,. \tag{7}\]
We can easily verify that \(\widetilde{P}\) is indeed a projective map, that is, \(\widetilde{P}^{2}:=\widetilde{P}\circ\widetilde{P}=\widetilde{P}\). Putting everything together, an operator \(C\in\mathcal{L}(\mathcal{H}_{\mathtt{i}}\otimes\mathcal{H}_{\mathtt{o}})\) is the Choi operator of a quantum channel if and only if
\[C \geq 0, (\text{Positive semidefinite}) \tag{8}\] \[C =\widetilde{P}[C], (\text{Linear subspace})\] (9) \[\operatorname{tr}[C] =d_{\mathtt{i}}, (\text{Fixed trace}). \tag{10}\]
Put differently, besides the positivity and overall trace constraint, the set of quantum channels is fully defined by the projector \(\widetilde{P}\).
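The trace-and-replace maps and the projector of Eq. (7) are straightforward to implement numerically. The following short sketch (ours, with arbitrarily chosen dimensions) verifies idempotence of \(\widetilde{P}\) on a random Hermitian operator and checks that the Choi matrix of a simple "discard and reprepare" channel satisfies Eqs. (8)-(10).

```python
# Sketch of the trace-and-replace maps _o, _io and the projector of Eq. (7).
import numpy as np

d_i, d_o = 2, 3          # assumed input/output dimensions
D = d_i * d_o

def replace_o(C):
    """_o C = tr_o[C] (x) 1_o/d_o  (operators ordered as H_i (x) H_o)."""
    C4 = C.reshape(d_i, d_o, d_i, d_o)
    return np.kron(np.einsum('iaja->ij', C4), np.eye(d_o) / d_o)

def replace_io(C):
    """_io C = tr[C] 1/(d_i d_o)."""
    return np.trace(C) * np.eye(D) / D

def P(C):
    """Projector of Eq. (7): P[C] = C - _o C + _io C."""
    return C - replace_o(C) + replace_io(C)

# P is idempotent on arbitrary (here random Hermitian) operators.
X = np.random.randn(D, D) + 1j * np.random.randn(D, D)
X = X + X.conj().T
assert np.allclose(P(P(X)), P(X))

# The Choi matrix of the "discard and reprepare" channel rho -> tr[rho] sigma
# is 1_i (x) sigma; it is a fixed point of P with trace d_i, as required.
sigma = np.diag([0.2, 0.3, 0.5])
C_discard = np.kron(np.eye(d_i), sigma)
assert np.allclose(P(C_discard), C_discard)
assert np.isclose(np.trace(C_discard), d_i)
```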
While positivity of the Choi matrix simply follows from the requirement of complete positivity for the map (a property that we will assume throughout), for more general mappings, working out
Figure 1: **Quantum states and Quantum Channels. (a) Quantum state \(\rho\in\mathcal{L}(\mathcal{H}_{\mathtt{i}})\). (b) Quantum channel \(\widetilde{C}:\mathcal{L}(\mathcal{H}_{\mathtt{i}})\to\mathcal{L}(\mathcal{H }_{\mathtt{o}})\) that maps quantum states in \(\mathcal{L}(\mathcal{H}_{\mathtt{i}})\) onto quantum states in \(\mathcal{L}(\mathcal{H}_{\mathtt{o}})\). Throughout, lines are labelled by the Hilbert space/space of linear operators they correspond to.**
both the trace constraint and the correct projector can, a priori, be somewhat cumbersome. For example, it is a priori unclear what properties a mapping from quantum channels to quantum channels (a so-called supermap [1] or superchannel) would possess, and similarly for any mapping 'higher up in the hierarchy'. In this manuscript, we extend the above concepts and methods to provide a direct and systematic way to derive the linear and affine constraints for transformations between general quantum objects (see Fig. 2 for a graphical depiction).
## 3 Linear transformations between quantum objects
### Sets of quantum objects
The characterization (7) of the set of quantum channels via a projector provides a potent way to derive the structural properties of mappings that occur frequently in quantum mechanics. We can use this structure to represent a very general class of (deterministic) quantum objects, such as quantum states, quantum channels, quantum superchannels [1], quantum combs [2, 5], channels with memory [9], quantum strategies [3], non-Markovian processes [21], causal quantum operations [22], non-signalling channels [23], process matrices [8, 15], and more generally mappings between _any_ kinds of linear spaces [11, 12, 13, 14, 17]. Before discussing the fully general case, i.e., mappings between general linear spaces (see Sec. 8), we start with a discussion of scenarios that are commonly encountered in quantum mechanics.
**Definition 1** (Set of quantum objects).: _Let \(\widetilde{P}:\mathcal{L}(\mathcal{H})\to\mathcal{L}(\mathcal{H})\) be a linear projective map, that is \(\widetilde{P}^{2}:=\widetilde{P}\circ\widetilde{P}=\widetilde{P}\). A set of linear operators \(\mathcal{S}\subseteq\mathcal{L}(\mathcal{H})\) is a quantum object set if it can be described
Figure 2: **General Transformations between affine (quantum) sets.** All sets \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\) we consider are defined by a linear constraint (given by \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\), respectively) and, in most cases, an affine and a positivity constraint. Our aim is to characterize the set of linear transformations \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) between them. Predominantly, this characterization will be carried out for the Choi matrices \(T\) of said transformations, and as it turns out, the corresponding set of matrices is, again, defined by a projector \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\), as well as an affine and a positivity constraint. The concrete construction of \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\) depends on the respective properties of \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\). Thm. 2 provides this construction for the special case most often encountered in quantum mechanics, while the general case is discussed in Thm. 5. Likewise, the case where there are no affine constraints on \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\) is discussed in Thms. 4 and 6.
by:_
A linear operator \(W\in\mathcal{L}(\mathcal{H})\) belongs to \(\mathcal{S}\) if and only if:
\[W\geq 0\quad\text{(Positive semidefinite)}, \tag{11a}\]
\[\widetilde{P}[W]=W\quad\text{(Belongs to a particular linear subspace)}, \tag{11b}\]
\[\operatorname{tr}[W]=\gamma\quad\text{(Fixed trace)}. \tag{11c}\]
For example, both quantum states and the set of Choi matrices of quantum channels satisfy the above definition. For quantum states, we have \(\widetilde{P}=\widetilde{\mathbf{1}}\) (where \(\widetilde{\mathbf{1}}\) denotes the identity map4) and \(\gamma=1\), while for quantum channels, \(\widetilde{P}\) is given by Eq. (7) and \(\gamma=d_{\mathtt{i}}\).
Footnote 4: The identity map is defined by \(\widetilde{\mathbf{1}}[X]=X\) for all \(X\in\mathcal{L}(\mathcal{H})\).
### Transformations between quantum objects
The main goal of this work is to extend the concepts of state transformations presented in Sec. 2 to characterise the transformations between arbitrary quantum objects in a convenient and systematic manner. Let us consider two arbitrary sets of quantum objects \(\mathcal{S}_{\mathbf{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathbf{i}})\) and \(\mathcal{S}_{\mathbf{o}}\subseteq\mathcal{L}(\mathcal{H}_{\mathbf{o}})\), where we use \(\mathtt{i}\) and \(\mathtt{o}\) as general placeholders for "input" and "output". Our main question then is:
_How to characterise the set of quantum transformations \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) mapping elements from \(\mathcal{S}_{\mathtt{i}}\) to \(\mathcal{S}_{\mathtt{o}}\)?_
Since we desire \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) to map elements from \(\mathcal{S}_{\mathtt{i}}\) to \(\mathcal{S}_{\mathtt{o}}\), we require that for every \(W\in\mathcal{S}_{\mathtt{i}}\), we have that \(\widetilde{T}_{\mathtt{i}\mathtt{o}}[W]\in\mathcal{S}_{\mathtt{o}}\), where we use additional subscripts on \(\widetilde{T}\) to signify its input and output space. Also, in order to be consistent with the linearity of quantum theory, the transformation \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) is required to be a linear map from the linear space spanned by \(\mathcal{S}_{\mathtt{i}}\) to the linear space spanned by \(\mathcal{S}_{\mathtt{o}}\). Additionally, since all elements of \(\mathcal{S}_{\mathtt{i}}\) and \(\mathcal{S}_{\mathtt{o}}\) are positive, we would at least require that \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) is positive on all \(W\in\mathcal{S}_{\mathtt{i}}\). In line with standard considerations in quantum mechanics, throughout, we go beyond this minimal requirement5 and demand that \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) is a positive map on all of \(\mathcal{L}(\mathcal{H}_{\mathtt{i}})\). Finally, similarly to quantum channels acting on quantum states, we desire \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) to be a valid transformation even when it is applied to only a part of a larger quantum object, which requires that \(\widetilde{T}_{\mathtt{i}\mathtt{o}}:\mathcal{L}(\mathcal{H}_{\mathtt{i}}) \rightarrow\mathcal{L}(\mathcal{H}_{\mathtt{o}})\) is completely positive. In turn, this implies that all Choi matrices we encounter throughout are positive semidefinite (see Sec. 3.4).
Footnote 5: Positivity can also be argued for in order to ensure that all _probabilistic_ quantum object (see Sec. 5 for more details) are mapped to positive objects as well. However, this argument requires the probabilistic quantum objects, i.e., all \(W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathtt{i}})\) which satisfy \(W^{\prime}\leq W\) for some \(W\in\mathcal{S}_{\mathtt{i}}\), to span the full space \(\mathcal{L}(\mathcal{H}_{\mathtt{i}})\). This is the case if \(\mathcal{S}_{\mathtt{i}}\) contains at least one full rank state, which we generally assume (see below).
**Definition 2** (Quantum Transformations).: _Let \(\widetilde{P}_{\mathtt{i}}:\mathcal{L}(\mathcal{H}_{\mathtt{i}})\rightarrow \mathcal{L}(\mathcal{H}_{\mathtt{i}})\) and \(\widetilde{P}_{\mathtt{o}}:\mathcal{L}(\mathcal{H}_{\mathtt{o}})\rightarrow \mathcal{L}(\mathcal{H}_{\mathtt{o}})\) be linear projective maps and \(\mathcal{S}_{\mathtt{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathtt{i}})\) and \(\mathcal{S}_{\mathtt{o}}\subseteq\mathcal{L}(\mathcal{H}_{\mathtt{o}})\) be sets of quantum objects defined by_
\[\begin{array}{c}\framebox{$W\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})$ belongs to $\mathcal{S}_{\mathsf{i}}$ iff}\\ W\geq 0\\ \widetilde{P}_{\mathsf{i}}[W]=W\\ \operatorname{tr}[W]=\gamma_{\mathsf{i}}\end{array}\qquad\begin{array}{c}\framebox{$W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathsf{o}})$ belongs to $\mathcal{S}_{\mathsf{o}}$ iff}\\ W^{\prime}\geq 0\\ \widetilde{P}_{\mathsf{o}}[W^{\prime}]=W^{\prime}\\ \operatorname{tr}[W^{\prime}]=\gamma_{\mathsf{o}}\end{array} \tag{12}\]
_A linear map \(\widetilde{T}_{\mathtt{i}\mathtt{o}}:\mathcal{L}(\mathcal{H}_{\mathtt{i}}) \rightarrow\mathcal{L}(\mathcal{H}_{\mathtt{o}})\) is a quantum transformation from \(\mathcal{S}_{\mathtt{i}}\) to \(\mathcal{S}_{\mathtt{o}}\) when:_
\[i:\ \widetilde{T}_{\mathsf{i}\mathsf{o}}\ \text{is completely positive}, \tag{13a}\]
\[ii:\ \forall\, W\in\mathcal{S}_{\mathsf{i}},\ \text{we have that}\ \widetilde{T}_{\mathsf{i}\mathsf{o}}[W]\in\mathcal{S}_{\mathsf{o}}. \tag{13b}\]
General linear mappings of this type have previously been employed in the quantum information literature, for example for the analysis of the dynamics of quantum causal structures [10] as well as, under the guise of "admissible adapters", in the resource theory of causal connection [16], or as
"structure-preserving maps" [14], in the study of transformations between general quantum objects. More detailed structural investigations of the hierarchy of transformations such maps engender have been carried out in [11, 12, 13, 14].
Importantly, for the concrete characterization of \(\widetilde{T}_{\mathsf{i}\circ}\) (or, equivalently, its Choi matrix \(T_{\mathsf{i}\circ}\)) only the linear and affine constraints on \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\) will play a role. The positive semidefiniteness constraint on both sets, on the other hand, only enters through the requirement that \(\widetilde{T}_{\mathsf{i}\circ}\) be CP (or, equivalently, that its Choi matrix \(T_{\mathsf{i}\circ}\) be positive semidefinite). Concretely, this holds true since, in the cases we consider, the positivity restriction does not alter the span of the sets, i.e., both \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\) span the same spaces (\(\widetilde{P}_{\mathsf{i}}[\mathcal{L}(\mathcal{H}_{\mathsf{i}})]\) and \(\widetilde{P}_{\mathsf{o}}[\mathcal{L}(\mathcal{H}_{\mathsf{o}})]\), respectively) with or without the positivity constraints imposed on their elements. Consequently, positivity of the respective elements does not enter as an additional constraint on \(\widetilde{T}_{\mathsf{i}\circ}\). As a result, in what follows, we rarely explicitly assume positivity for the elements of \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\) and mostly consider transformations between affine sets. Positivity of the respective elements, as well as complete positivity of the map between \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\), can then always be imposed by hand without any added complications. On the other hand, while a similar argument could seemingly be made for the affine constraints - since they generally do not change the span of \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\), either - they fix a rescaling factor, in the sense that \(\operatorname{tr}[\widetilde{T}_{\mathsf{i}\circ}[W]]=\gamma_{\mathsf{o}}/\gamma_{\mathsf{i}}\operatorname{tr}[W]\) for all \(W\in\mathcal{S}_{\mathsf{i}}\), thus playing a crucial role for the specific properties of \(\widetilde{T}_{\mathsf{i}\circ}\).
### Map characterisation of quantum transformations
We now present our first theorem - which has, in slightly different form, already been derived in Refs. [10, 13, 14, 16] - to characterise quantum transformations. In this first characterisation, we aim to completely characterise the linear map \(\widetilde{T}_{\mathsf{i}\circ}\) without making reference to its Choi operator, but directly to its map properties.
**Theorem 1** (Transformation between affine sets: map version).: _Let \(\widetilde{P}_{\mathsf{i}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L} (\mathcal{H}_{\mathsf{i}})\) and \(\widetilde{P}_{\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{o}})\to\mathcal{L} (\mathcal{H}_{\mathsf{o}})\) be linear projective maps and \(\mathcal{S}_{\mathsf{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) and \(\mathcal{S}_{\mathsf{o}}\subseteq\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) be affine sets defined by_
\[\begin{array}{c}\framebox{$W\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})$ belongs to $\mathcal{S}_{\mathsf{i}}$ iff}\\ \widetilde{P}_{\mathsf{i}}[W]=W\\ \operatorname{tr}[W]=\gamma_{\mathsf{i}}\end{array}\qquad\begin{array}{c}\framebox{$W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathsf{o}})$ belongs to $\mathcal{S}_{\mathsf{o}}$ iff}\\ \widetilde{P}_{\mathsf{o}}[W^{\prime}]=W^{\prime}\\ \operatorname{tr}[W^{\prime}]=\gamma_{\mathsf{o}}\end{array} \tag{14}\]
_For \(\gamma_{\mathsf{i}}\neq 0\)6, a linear map \(\widetilde{T}_{\mathsf{i}\circ}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to \mathcal{L}(\mathcal{H}_{\mathsf{o}})\) satisfies \(\widetilde{T}_{\mathsf{i}\circ}[W]\in\mathcal{S}_{\mathsf{o}}\), for all \(W\in\mathcal{S}_{\mathsf{i}}\) if and only if_
Footnote 6: We emphasize that assuming that \(\operatorname{tr}[W]\neq 0\) is not a strong restriction, and is quite natural for practical applications. Since all quantum objects are positive semidefinite, the only traceless object is the zero operator. In Sec. 8 we discuss a more general version of this theorem.
\[\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\circ}\circ\widetilde {P}_{\mathsf{i}}=\widetilde{T}_{\mathsf{i}\circ}\circ\widetilde{P}_{\mathsf{ i}},\] (15a) _and_ \[\operatorname{tr}\circ\widetilde{T}_{\mathsf{i}\circ}\circ\widetilde {P}_{\mathsf{i}}=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}\operatorname {tr}\circ\widetilde{P}_{\mathsf{i}}\,. \tag{15b}\]
Proof.: We start by showing that if Eqs. (15a) and (15b) hold, then \(\widetilde{T}_{\mathsf{i}\circ}[W]\in\mathcal{S}_{\mathsf{o}}\), for all \(W\in\mathcal{S}_{\mathsf{i}}\). Let \(W\in\mathcal{S}_{\mathsf{i}}\). Then, by definition, we have \(\widetilde{P}_{\mathsf{i}}[W]=W\). Now, since Eq. (15a) holds, for all \(W\in\mathcal{S}_{\mathsf{i}}\) we have
\[\widetilde{P}_{\mathsf{o}}[\widetilde{T}_{\mathsf{i}\mathsf{o}}[W]]=\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}_{\mathsf{i}}[W]=\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}_{\mathsf{i}}[W]=\widetilde{T}_{\mathsf{i}\mathsf{o}}[W]. \tag{16}\]
Additionally, since \(\operatorname{tr}[W]=\gamma_{\mathsf{i}}\) for every \(W\in\mathcal{S}_{\mathsf{i}}\), from Eq. (15b) we obtain
\[\operatorname{tr}\circ\widetilde{T}_{\mathsf{i}\circ}[W]=\operatorname{tr} \circ\widetilde{T}_{\mathsf{i}\circ}\circ\widetilde{P}_{\mathsf{i}}[W]=\frac{ \gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}\operatorname{tr}[W]=\gamma_{\mathsf{o}}, \tag{17}\]
and hence, Eqs. (15a) and (15b) together imply that \(\widetilde{T}_{\mathsf{i}\mathsf{o}}[W]\in\mathcal{S}_{\mathsf{o}}\) for all \(W\in\mathcal{S}_{\mathsf{i}}\).
For the converse direction, we note that since \(\gamma_{\mathsf{i}}\neq 0\), the affine constraint has no influence on the span of \(\mathcal{S}_{\mathsf{i}}\), such that \(\operatorname{span}(\mathcal{S}_{\mathsf{i}})=\widetilde{P}_{\mathsf{i}}[ \mathcal{L}(\mathcal{H}_{\mathsf{i}})]\). Since, by assumption, \(\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\circ}[W]=\widetilde{T }_{\mathsf{i}\circ}[W]\) for
all \(W\in\mathcal{S}_{\mathsf{i}}\), by linearity, we have \(\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}[M]=\widetilde{T}_{\mathsf{i}\mathsf{o}}[M]\) for all \(M\in\operatorname{span}(\mathcal{S}_{\mathsf{i}})\). For any arbitrary \(X\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) we have \(M:=\widetilde{P}_{\mathsf{i}}[X]\in\operatorname{span}(\mathcal{S}_{\mathsf{i}})\), and thus
\[\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ \widetilde{P}_{\mathsf{i}}[X]=\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ \widetilde{P}_{\mathsf{i}}[X]. \tag{18}\]
Since this holds for arbitrary \(X\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\), we see that Eq. (15a) is satisfied. Similarly, if \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) is a map from \(\mathcal{S}_{\mathsf{i}}\) to \(\mathcal{S}_{\mathsf{o}}\), by linearity, we see that \(\operatorname{tr}[\widetilde{T}_{\mathsf{i}\mathsf{o}}[M]]=\gamma_{\mathsf{o} }/\gamma_{\mathsf{i}}\operatorname{tr}[M]\) for all \(M\in\operatorname{span}(\mathcal{S}_{\mathsf{i}})\). Thus, for arbitrary \(X\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) we have
\[\operatorname{tr}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}_ {\mathsf{i}}[X]=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}\operatorname{ tr}\circ\widetilde{P}_{\mathsf{i}}[X]\,, \tag{19}\]
where, again, we have used that \(\widetilde{P}_{\mathsf{i}}[X]\in\operatorname{span}(\mathcal{S}_{\mathsf{i}})\). Since the above equation holds for all \(X\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\), we thus recover Eq. (15b), concluding the proof.
We emphasize that the above Theorem covers the case where \(\gamma_{\mathsf{o}}=0\), for which we have \(\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}_{\mathsf{i}}=\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}_{\mathsf{i}}\) and \(\operatorname{tr}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}_{\mathsf{i}}=0\). However, in this case, imposing positivity on the elements of \(\mathcal{S}_{\mathsf{o}}\) would, unlike in all cases we consider, lead to explicit further simplifications (see App. A). On the other hand, the scenario \(\gamma_{\mathsf{i}}=0\) is not directly covered by the above theorem and in principle requires special consideration. For this scenario, it is easy to see that \(\gamma_{\mathsf{i}}=0\) implies \(\gamma_{\mathsf{o}}=0\). Then, one can readily define a new projector \(\widetilde{P}^{\prime}_{\mathsf{i}}\) that projects onto a vector space of traceless matrices (thus incorporating the requirement \(\gamma_{\mathsf{i}}=0\)), such that \(W\in\mathcal{S}_{\mathsf{i}}\) iff \(\widetilde{P}^{\prime}_{\mathsf{i}}[W]=W\). With this, a map \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) maps between \(\mathcal{S}_{\mathsf{i}}\) and \(\mathcal{S}_{\mathsf{o}}\) if and only if \(\widetilde{P}_{\mathsf{o}}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}^{\prime}_{\mathsf{i}}=\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}^{\prime}_{\mathsf{i}}\) and \(\operatorname{tr}\circ\widetilde{T}_{\mathsf{i}\mathsf{o}}\circ\widetilde{P}^{\prime}_{\mathsf{i}}=0\). The details can be found in App. A. From now on, whenever not explicitly mentioned, we will exclude both of these scenarios to avoid unnecessary technical complications and assume \(\gamma_{\mathsf{i}},\gamma_{\mathsf{o}}\neq 0\) throughout.
While providing necessary and sufficient conditions for quantum transformations between arbitrary quantum sets, Thm. 1 is not particularly insightful when it comes to the structural properties of \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) and does not easily allow for an incorporation of the properties that many projectors \(\widetilde{P}_{\mathsf{i}}\) encountered in quantum mechanics have. In the following subsection, we provide a specialised version of Thm. 1 in terms of the Choi state \(T_{\mathsf{i}\mathsf{o}}\) that takes commonly assumed properties of \(\widetilde{P}_{\mathsf{i}}\) into account and will be of more direct use.
### Choi characterisation of particular quantum transformations
For most practical cases, the projectors associated to the sets of quantum objects respect additional properties which allow us to present a more specialised and commonly used characterisation of quantum transformations. In particular, there are three properties which the projector \(\widetilde{P}\) associated to a quantum set \(\mathcal{S}\subseteq\mathcal{L}(\mathcal{H})\) often respects:
1. Unitality: \(\widetilde{P}[\mathds{1}]=\mathds{1}\)
2. Self-adjointness:7 \(\widetilde{P}=\widetilde{P}^{\dagger}\)
Footnote 7: Let \(\widetilde{P}:\mathcal{L}(\mathcal{H}_{\mathsf{x}})\to\mathcal{L}(\mathcal{H}_{\mathsf{y}})\) be a linear map. Its adjoint is the unique map \(\widetilde{P}^{\dagger}:\mathcal{L}(\mathcal{H}_{\mathsf{y}})\to\mathcal{L}(\mathcal{H}_{\mathsf{x}})\) respecting \(\operatorname{tr}[B^{\dagger}\widetilde{P}[A]]=\operatorname{tr}[\widetilde{P}^{\dagger}[B]^{\dagger}A]\) for all \(A\in\mathcal{L}(\mathcal{H}_{\mathsf{x}})\) and all \(B\in\mathcal{L}(\mathcal{H}_{\mathsf{y}})\).
3. Commutation with the transposition: \(\widetilde{P}[W^{\tau}]=\widetilde{P}[W]^{\tau}\), for every \(W\in\mathcal{L}(\mathcal{H})\)
We notice that these three properties are respected by the projectors onto the sets of quantum states, quantum channels, superchannels, quantum combs, process matrices, and non-signalling channels, to name but a few. We now present a characterisation theorem tailored for this particular case in terms of Choi matrices, while the more general case with no extra assumptions is discussed in Sec. 8.
As a first step for this characterization, we recall that the action of a map \(\widetilde{T}_{\mathsf{i}\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to \mathcal{L}(\mathcal{H}_{\mathsf{o}})\) in terms of its Choi matrix \(T_{\mathsf{i}\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}}\otimes\mathcal{H }_{\mathsf{o}})\) can be written as (see Eq. (2))
\[\widetilde{T}_{\mathsf{i}\mathsf{o}}[X_{\mathsf{i}}]=\operatorname{tr}_{\mathsf{i}}[(X_{\mathsf{i}}^{\tau}\otimes\mathds{1}_{\mathsf{o}})T_{\mathsf{i}\mathsf{o}}]=:T_{\mathsf{i}\mathsf{o}}\star X_{\mathsf{i}}, \tag{20}\]
where we have defined the _link product_\(\star\) (see Sec. 7.2 for more details) and added subscripts to emphasize what spaces the respective elements are defined on. As mentioned above, complete positivity of \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) is equivalent to \(T_{\mathsf{i}\mathsf{o}}\geq 0\)[5]. With this, the characterization of the map \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) amounts to a characterization of the matrix \(T_{\mathsf{i}\mathsf{o}}\), which can be obtained via a projector, denoted by \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\). This characterization has also been given in Refs. [10, 13, 14, 16] and is provided here in the notation we employ.8
Footnote 8: We remark that the characterisation presented in Ref. [10]differs from ours since it misses two terms that do not necessarily cancel out.
**Theorem 2** (Transformation between affine sets: specialised Choi version).: _Let \(\widetilde{P}_{\mathsf{i}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L} (\mathcal{H}_{\mathsf{i}})\) and \(\widetilde{P}_{\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{o}})\to\mathcal{L} (\mathcal{H}_{\mathsf{o}})\) be linear projective maps and \(\mathcal{S}_{\mathsf{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) and \(\mathcal{S}_{\mathsf{o}}\subseteq\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) be affine sets defined by_
\[\begin{array}{c}\framebox{$W\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})$ belongs to $\mathcal{S}_{\mathsf{i}}$ iff}\\ \widetilde{P}_{\mathsf{i}}[W]=W\\ \operatorname{tr}[W]=\gamma_{\mathsf{i}}\end{array}\qquad\begin{array}{c} \framebox{$W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathsf{o}})$ belongs to $\mathcal{S}_{\mathsf{o}}$ iff}\\ \widetilde{P}_{\mathsf{o}}[W^{\prime}]=W^{\prime}\\ \operatorname{tr}[W^{\prime}]=\gamma_{\mathsf{o}}\end{array} \tag{21}\]
_Additionally, we assume that the maps \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\) are self-adjoint and unital, and that \(\widetilde{P}_{\mathsf{i}}\) commutes with the transposition map, i.e.,_
\[\widetilde{P}_{\mathsf{i}}=\widetilde{P}_{\mathsf{i}}^{\dagger},\quad\widetilde{P}_{\mathsf{o}}=\widetilde{P}_{\mathsf{o}}^{\dagger}, \tag{22a}\]
\[\widetilde{P}_{\mathsf{i}}[\mathds{1}]=\mathds{1},\quad\widetilde{P}_{\mathsf{o}}[\mathds{1}]=\mathds{1}, \tag{22b}\]
\[\widetilde{P}_{\mathsf{i}}[W^{\tau}]=\widetilde{P}_{\mathsf{i}}[W]^{\tau},\quad\forall W\in\mathcal{L}(\mathcal{H}_{\mathsf{i}}). \tag{22c}\]
_For \(\gamma_{\mathsf{i}}\neq 0\), a linear map \(\widetilde{T}_{\mathsf{i}\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to \mathcal{L}(\mathcal{H}_{\mathsf{o}})\) satisfies \(\widetilde{T}_{\mathsf{i}\mathsf{o}}[W]\in\mathcal{S}_{\mathsf{o}},\) for all \(W\in\mathcal{S}_{\mathsf{i}}\) if and only if_
\[\framebox{$\widetilde{P}_{\mathsf{i}\mathsf{o}}[T_{\mathsf{i} \mathsf{o}}]:=\!T_{\mathsf{i}\mathsf{o}}-(\widetilde{P}_{\mathsf{i}}\otimes \widetilde{\mathsf{I}}_{\mathsf{o}})[T_{\mathsf{i}\mathsf{o}}]+(\widetilde{P} _{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{\mathsf{i}\mathsf{o}}]-( \widetilde{P}_{\mathsf{i}}\otimes\widetilde{\mathsf{I}}_{\mathsf{o}})[_{ \mathsf{o}}T_{\mathsf{i}\mathsf{o}}]+{}_{\mathsf{i}\mathsf{o}}T_{\mathsf{i} \mathsf{o}}=T_{\mathsf{i}\mathsf{o}}$} \tag{23a}\] \[\operatorname{tr}[T_{\mathsf{i}\mathsf{o}}]= \frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}d_{\mathsf{i}}\,, \tag{23b}\]
_holds for its Choi matrix \(T_{\mathsf{i}\mathsf{o}}\), where \(\widetilde{\mathsf{I}}_{\mathsf{o}}\) is the identity map, \(d_{\mathsf{i}}\) is the dimension of \(\mathcal{H}_{\mathsf{i}}\), and \(\widetilde{P}_{\mathsf{i}\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}} \otimes\mathcal{H}_{\mathsf{o}})\to\mathcal{L}(\mathcal{H}_{\mathsf{i}} \otimes\mathcal{H}_{\mathsf{o}})\) is a self-adjoint, unital projector that commutes with the transposition._
Proof.: The derivation of Eqs. (23a) and (23b) can be found in Sec. 7.3 where we discuss the link product and the relevant mathematical tools to easily and systematically deal with Choi matrices of general linear transformations. Here, we show the remaining properties of the projector \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\), i.e., that it is self-adjoint, unital, commutes with the transposition and is, indeed a projector. To see this latter property, first note that a self-adjoint, unital projector \(\widetilde{P}_{\mathsf{x}}\) is trace-preserving, since \(\operatorname{tr}[\widetilde{P}_{\mathsf{x}}[M]]=\operatorname{tr}[\widetilde{ P}_{\mathsf{x}}[1_{\mathsf{x}}]M]=\operatorname{tr}[M]\) for all \(M\in\mathcal{L}(\mathcal{H}_{\mathsf{x}})\) and \(\mathsf{x}\in\{\mathsf{i},\mathsf{o}\}\). Consequently, \({}_{\mathsf{x}}(\widetilde{P}_{\mathsf{x}}[M])={}_{\mathsf{x}}M\) for all \(M\in\mathcal{L}(\mathcal{H}_{\mathsf{x}})\), and thus \({}_{\mathsf{x}}\circ\widetilde{P}_{\mathsf{x}}[M]=\widetilde{P}_{\mathsf{x}} \circ{}_{\mathsf{x}}[M]={}_{\mathsf{x}}M\). Additionally, \({}_{\mathsf{x}\mathsf{x}}M={}_{\mathsf{x}}M\), and, by assumption \(\widetilde{P}_{\mathsf{x}}^{2}=\widetilde{P}_{\mathsf{x}}\) for \(\mathsf{x}\in\{\mathsf{i},\mathsf{o}\}\). With this using Eq. (23a), it is easy to see that
\[\widetilde{P}_{\mathsf{i}\mathsf{o}}^{2}=(\widetilde{\mathsf{I}}-\widetilde{P}_ {\mathsf{i}}\otimes\widetilde{\mathsf{I}}_{\mathsf{o}}+\widetilde{P}_{\mathsf{ i}}\otimes\widetilde{P}_{\mathsf{o}}-\widetilde{P}_{\mathsf{i}}\otimes{}_{ \mathsf{o}}+{}_{\mathsf{i}}\otimes{}_{\mathsf{o}})^{2}=\widetilde{P}_{\mathsf{ i}\mathsf{o}} \tag{24}\]
holds, i.e., \(\widetilde{P}_{\mathsf{i}\mathsf{o}}^{2}=\widetilde{P}_{\mathsf{i}\mathsf{o}}\). Finally, since both \({}_{\mathsf{i}}\bullet\) and \({}_{\mathsf{o}}\bullet\) are self-adjoint, unital and commute with the transposition, these properties also hold for \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\) whenever they hold for \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\).
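To illustrate Thm. 2 numerically, the sketch below (our own, with arbitrarily chosen dimensions) encodes \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\) as four-index superoperator tensors, assembles \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\) following Eq. (23a) (in the form of Eq. (24)), and checks that for \(\widetilde{P}_{\mathsf{i}}=\widetilde{\mathbf{1}}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}=\widetilde{\mathbf{1}}_{\mathsf{o}}\) (i.e., states to states) it reduces to the channel projector of Eq. (7) and is idempotent.

```python
# Sketch: building P_io of Eq. (23a) from superoperator tensors (assumed dims).
import numpy as np

d_i, d_o = 2, 3
D = d_i * d_o

def S_identity(d):
    """Superoperator tensor of the identity map: (M[X])[a,b] = X[a,b]."""
    return np.einsum('ac,bd->abcd', np.eye(d), np.eye(d))

def S_trace_replace(d):
    """Superoperator tensor of the trace-and-replace map X -> tr[X] 1/d."""
    return np.einsum('ab,cd->abcd', np.eye(d), np.eye(d)) / d

def apply_pair(S_A, S_B, T):
    """Apply the product map (A (x) B) to an operator T on H_i (x) H_o."""
    T4 = T.reshape(d_i, d_o, d_i, d_o)
    return np.einsum('abcd,wxyz,cydz->awbx', S_A, S_B, T4).reshape(D, D)

def P_io(T, S_Pi, S_Po):
    """Eq. (23a): T - (P_i x 1_o)[T] + (P_i x P_o)[T] - (P_i x _o)[T] + (_i x _o)[T]."""
    S_Io, S_Do, S_Di = S_identity(d_o), S_trace_replace(d_o), S_trace_replace(d_i)
    return (T - apply_pair(S_Pi, S_Io, T) + apply_pair(S_Pi, S_Po, T)
              - apply_pair(S_Pi, S_Do, T) + apply_pair(S_Di, S_Do, T))

def P_channel(T):
    """Channel projector of Eq. (7): T - _o T + _io T."""
    T4 = T.reshape(d_i, d_o, d_i, d_o)
    to_ = np.kron(np.einsum('iaja->ij', T4), np.eye(d_o) / d_o)
    return T - to_ + np.trace(T) * np.eye(D) / D

# For P_i = P_o = identity (states to states), Eq. (23a) reproduces Eq. (7).
X = np.random.randn(D, D); X = X + X.T
S_Pi_states, S_Po_states = S_identity(d_i), S_identity(d_o)
assert np.allclose(P_io(X, S_Pi_states, S_Po_states), P_channel(X))
Y = P_io(X, S_Pi_states, S_Po_states)
assert np.allclose(P_io(Y, S_Pi_states, S_Po_states), Y)     # idempotence
```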
Naturally, the above Theorem is not as general as the one for maps given in Thm. 1, since it requires additional properties of \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\). However, it allows for a direct characterization of the properties of a concrete representation of linear mappings, and applies to most scenarios that are relevant in quantum mechanics. Its generalization, which is equivalent to Thm. 1, can be found in Sec. 8. Also in Sec. 8, we provide a version of Thm. 2 for mappings that are _not_ trace-rescaling, that is, we discuss transformations between _linear_ subspaces instead of affine subspaces, which is
both of independent interest and highlights the role that the affine constraints on the sets \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) play for the properties of \(T_{\mathfrak{i}\mathfrak{o}}\). As was the case for Thm. 1, the case \(\gamma_{\mathfrak{i}}=0\) is explicitly excluded in the above Theorem. It is discussed in detail in App. B, as a special instance of the general case (i.e., where we impose no restrictions on \(\widetilde{P}_{\mathfrak{i}}\) and \(\widetilde{P}_{\mathfrak{o}}\)). Before discussing this general case in detail and providing the technical details for the derivation of the above Theorem, we now first show its concrete application for commonly encountered scenarios in quantum mechanics.
## 4 Applications to particular quantum transformations and the emergence of indefinite causal order
We now apply Thm. 2 to obtain a quantum set characterisation for several quantum transformations used in the literature. Later in this section we also discuss the simplest quantum transformation which may disrespect a standard notion of causal order.
**Example 1** (Quantum states to quantum states (Quantum Channels)).: In Sec. 2, we derived the properties of quantum channels \(\widetilde{C}\) that map quantum states \(\rho\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) onto quantum states \(\rho^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\). Since quantum states are unit trace, we have \(\gamma_{\mathfrak{i}}=\gamma_{\mathfrak{o}}=1\), and there are no linear constraints on quantum states, such that \(\widetilde{P}_{\mathfrak{i}}=\widetilde{\mathbf{I}}_{\mathfrak{i}}\) and \(\widetilde{P}_{\mathfrak{o}}=\widetilde{\mathbf{I}}_{\mathfrak{o}}\) (i.e., the identity map). Naturally, \(\widetilde{\mathbf{I}}_{\mathbf{x}}\) is unital, self-adjoint, and commutes with the transposition, such that Thm. 2 applies. Employing Eqs. (23a) and (23b), we directly obtain (for less cluttered notation, we omit the subscripts on \(C_{\mathfrak{i}\mathfrak{o}}\)):
\[C\geq 0 \tag{26a}\]
\[\widetilde{P}_{\mathsf{i}\mathsf{o}}[C]:=C-{}_{\mathsf{o}}C+{}_{\mathsf{i}\mathsf{o}}C=C \tag{26b}\]
\[\operatorname{tr}[C]=d_{\mathsf{i}} \tag{26c}\]
which coincides exactly with the properties (5) and (6) derived in Sec. 2. Additionally, demanding that \(\widetilde{C}\) is completely positive then imposes \(C_{\mathfrak{i}\mathfrak{o}}\geq 0\), i.e., Eq. (4). For ease of notation, in the subsequent Examples, we denote the projector of Eq. (26b) by \(\widetilde{P}^{(C)}\) and add subscripts whenever we want to clarify what spaces it acts on.
**Example 2** (Quantum channels to states).: The next simplest transformation one could consider is a mapping from quantum channels to quantum states, i.e., transformations of the form \(\widetilde{T}[C_{12}]=\rho_{3}\) (see Fig. 4).9 Such transformations are particular types of quantum combs [2, 5], and have been considered amongst others in the study of open quantum system dynamics with initial correlations under the name of \(\mathcal{M}\)-maps [24]. Keeping track of the involved spaces, for this case, we have to identify \(\mathcal{H}_{\mathfrak{i}}\cong\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) and \(\mathcal{H}_{\mathfrak{o}}\cong\mathcal{H}_{3}\). Since the resulting quantum states \(\rho_{3}\in\mathcal{L}(\mathcal{H}_{3})\) are unit
Figure 3: **Quantum Channels** A quantum channel \(\widetilde{C}\) which maps quantum states \(\rho\) onto quantum states \(\rho^{\prime}\).
trace, while \(\operatorname{tr}[C_{12}]=d_{1}\), we have \(\gamma_{\mathrm{i}}=d_{1}\) and \(\gamma_{\mathrm{o}}=1\). Additionally, the role of \(\widetilde{P}_{\mathrm{i}}\) is now played by the projector \(\widetilde{P}_{12}^{(C)}\) of Eq. (26b), while \(\widetilde{P}_{\mathrm{o}}\) is again given by \(\widetilde{\mathsf{I}}_{3}\) (since there are no linear restrictions on quantum states). Given that all involved projectors are self-adjoint, unital and commute with the transposition, Thm. 2 applies. With this, using Eqs. (23a) and (23b), we obtain for the Choi state \(T\in\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3})\) of the map \(\widetilde{T}\):
\[\widetilde{P}_{\mathsf{i}\mathsf{o}}[T]:=T-{}_{3}T+{}_{23}T=T \tag{27a}\]
\[\operatorname{tr}[T]=d_{2} \tag{27b}\]
The above coincides exactly with \({}_{3}T={}_{23}T\) and \(\operatorname{tr}[T]=d_{2}\) which, in turn, are exactly the causality/trace constraints of a one-slot comb with a final output line [5] (we discuss causality constraints in more detail below). Additionally, choosing \(\mathcal{H}_{1}\) to be trivial, _i.e._, \(\mathcal{H}_{1}\cong\mathbb{C}\), we recover the characterization of quantum channels from the above conditions. As before, demanding complete positivity from \(\widetilde{T}\) translates to the additional requirement \(T\geq 0\).
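As a small numerical illustration (our own sketch; all dimensions are taken to be 2 purely as an assumption), consider the concrete map that feeds a fixed state \(\sigma\) into the channel \(C_{12}\) and outputs the resulting state on \(\mathcal{H}_{3}\). Its Choi matrix is \(\sigma\otimes\Phi_{23}\) with \(\Phi_{23}=\sum_{km}|kk\rangle\!\langle mm|\), and it satisfies the comb constraints \({}_{3}T={}_{23}T\) and \(\operatorname{tr}[T]=d_{2}\) stated above.

```python
# Sketch (qubits assumed): Choi matrix of "apply the channel C_12 to a fixed
# state sigma" and a check of the one-slot-comb constraints of Example 2.
import numpy as np

d = {1: 2, 2: 2, 3: 2}                       # assumed dimensions of H_1, H_2, H_3
sigma = np.array([[0.7, 0.2], [0.2, 0.3]])   # fixed input state (an assumption)

# Phi_23 = sum_{km} |kk><mm| on H_2 (x) H_3.
Phi = np.zeros((d[2] * d[3], d[2] * d[3]))
for k in range(d[2]):
    for m in range(d[2]):
        ket = np.zeros(d[2] * d[3]); ket[k * d[3] + k] = 1
        bra = np.zeros(d[2] * d[3]); bra[m * d[3] + m] = 1
        Phi += np.outer(ket, bra)

T = np.kron(sigma, Phi)                      # Choi of the map C_12 -> C~[sigma]

def tr_replace(T, sites):
    """Trace out the listed sites (subset of {1,2,3}) and reinsert 1/d marginals."""
    dims = [d[1], d[2], d[3]]
    T6 = T.reshape(dims + dims)
    kets, bras = list('abc'), list('xyz')
    tr_bras = bras.copy()
    for s in sites:
        tr_bras[s - 1] = kets[s - 1]
    kept = [i for i in range(3) if (i + 1) not in sites]
    marg_out = ''.join([kets[i] for i in kept] + [bras[i] for i in kept])
    marg = np.einsum(''.join(kets + tr_bras) + '->' + marg_out, T6)
    ops, specs, norm = [marg], [marg_out], 1.0
    for s in sites:
        ops.append(np.eye(dims[s - 1])); specs.append(kets[s - 1] + bras[s - 1])
        norm /= dims[s - 1]
    full = np.einsum(','.join(specs) + '->' + ''.join(kets + bras), *ops) * norm
    return full.reshape(T.shape)

assert np.allclose(tr_replace(T, [3]), tr_replace(T, [2, 3]))   # _3 T = _23 T
assert np.isclose(np.trace(T), d[2])                            # tr[T] = d_2
```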
**Example 3** (Quantum channels to quantum channels (Quantum Superchannels)).: Let us now consider the question raised at the end of Sec. 2, namely the characterization of transformations \(\widetilde{T}[C_{23}]=C_{14}^{\prime}\) that map quantum channels \(C_{23}\in\mathcal{L}(\mathcal{H}_{2}\otimes\mathcal{H}_{3})\) onto quantum channels \(C_{14}^{\prime}\in\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{4})\) (see Fig. 5). In this case, we identify \(\mathcal{H}_{\mathsf{i}}\cong\mathcal{H}_{2}\otimes\mathcal{H}_{3}\) and \(\mathcal{H}_{\mathsf{o}}\cong\mathcal{H}_{1}\otimes\mathcal{H}_{4}\). The projectors on the input and output space of \(\widetilde{T}\) are, respectively, given by the projectors \(\widetilde{P}_{23}^{(C)}\) and \(\widetilde{P}_{14}^{(C)}\) of Eq. (26b), which are self-adjoint, unital, and commute with the transposition, such that Thm. 2 applies. In addition, for channels, we have \(\gamma_{\mathrm{i}}=\operatorname{tr}[C_{23}]=d_{2}\) and \(\gamma_{\mathrm{o}}=\operatorname{tr}[C_{14}^{\prime}]=d_{1}\). Thus, employing Eqs. (23a) and (23b), we obtain for the properties of \(T\in\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3}\otimes\mathcal{H}_{4})\):
Figure 4: A transformation \(\widetilde{T}\) mapping a quantum channel \(C_{12}\) onto a quantum state \(\rho_{3}\). Figure 5: A superchannel \(\widetilde{T}\) mapping a quantum channel \(C_{23}\) onto a quantum channel \(C^{\prime}_{14}\).
**(Quantum superchannel)**
\[T\geq 0 \tag{30a}\]
\[\widetilde{P}_{\mathsf{i}\mathsf{o}}[T]:=T-{}_{4}T+{}_{34}T-{}_{234}T+{}_{1234}T=T \tag{30b}\]
\[\operatorname{tr}[T]=d_{1}d_{3} \tag{30c}\]
It is easy to see that the above is, in addition to \(\operatorname{tr}[T]=d_{1}d_{3}\), equivalent to \({}_{4}T={}_{34}T\) and \({}_{234}T={}_{1234}T\), which, in the ordering of spaces we have chosen, coincides with the causality/trace constraints of a quantum comb with one slot (corresponding to the spaces labelled by 2 and 3), and an initial input (labelled by 1) and final output (labelled by 4) [25]. This, in turn, reflects the well-known fact that there are no causally disordered superchannels [1]. Additionally, choosing \(\mathcal{H}_{1}\) to be trivial, we recover the conditions on transformations from channels to states derived in Example 2.
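The constraints of Eq. (30) are also easy to probe numerically. The sketch below (ours; all four spaces are taken to be qubits as an assumption) implements the trace-and-replace maps \({}_{\mathsf{x}}\bullet\) for a four-partite operator, projects a random Hermitian operator with Eq. (30b), and confirms that the result automatically satisfies the causality constraints \({}_{4}T={}_{34}T\) and \({}_{234}T={}_{1234}T\).

```python
# Sketch (qubits assumed): superchannel projector of Eq. (30b) and its causality
# constraints, checked on a randomly generated operator.
import numpy as np

dims = [2, 2, 2, 2]                           # d_1, ..., d_4
D = int(np.prod(dims))

def tr_replace(T, sites):
    """_x T: trace out the listed sites (subset of {1,...,4}), reinsert 1/d."""
    if len(sites) == len(dims):
        return np.trace(T) * np.eye(D) / D
    T8 = T.reshape(dims + dims)
    kets, bras = list('abcd'), list('wxyz')
    tr_bras = bras.copy()
    for s in sites:
        tr_bras[s - 1] = kets[s - 1]
    kept = [i for i in range(4) if (i + 1) not in sites]
    marg_out = ''.join([kets[i] for i in kept] + [bras[i] for i in kept])
    marg = np.einsum(''.join(kets + tr_bras) + '->' + marg_out, T8)
    ops, specs, norm = [marg], [marg_out], 1.0
    for s in sites:
        ops.append(np.eye(dims[s - 1])); specs.append(kets[s - 1] + bras[s - 1])
        norm /= dims[s - 1]
    return (np.einsum(','.join(specs) + '->' + ''.join(kets + bras), *ops) * norm).reshape(D, D)

def P_super(T):
    """Projector of Eq. (30b)."""
    return (T - tr_replace(T, [4]) + tr_replace(T, [3, 4])
              - tr_replace(T, [2, 3, 4]) + tr_replace(T, [1, 2, 3, 4]))

X = np.random.randn(D, D); X = X + X.T        # arbitrary Hermitian operator
S = P_super(X)
assert np.allclose(P_super(S), S)                                          # idempotence
assert np.allclose(tr_replace(S, [4]), tr_replace(S, [3, 4]))              # _4 S = _34 S
assert np.allclose(tr_replace(S, [2, 3, 4]), tr_replace(S, [1, 2, 3, 4]))  # _234 S = _1234 S
```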
Finally, here, it is insightful to discuss in what way the properties of \(T\) would change if the trace conditions on the elements of \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) were dropped. Then, the transformation \(\widetilde{T}^{\prime}:\mathcal{L}(\mathcal{H}_{2}\otimes\mathcal{H}_{3})\rightarrow\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{4})\) would still have to satisfy \((\widetilde{P}_{14}^{(C)}\circ\widetilde{T}^{\prime})[C_{23}]=\widetilde{T}^{\prime}[C_{23}]\) for all \(C_{23}=\widetilde{P}_{23}^{(C)}[C_{23}]\), but it is not necessarily trace-rescaling. The corresponding characterization for this case will be given in Thm. 4. Using Eq. (91) from Thm. 4, one obtains
\[\begin{array}{l}T^{\prime}=T^{\prime}-\widetilde{P}_{23}^{(C)}[T^{\prime}]+ (\widetilde{P}_{23}^{(C)}\otimes\widetilde{P}_{14}^{(C)})[T^{\prime}]\\ =T^{\prime}-{}_{4}T^{\prime}+{}_{14}T^{\prime}+{}_{34}T^{\prime}-{}_{134}T^{ \prime}-{}_{234}T^{\prime}+{}_{1234}T^{\prime},\end{array} \tag{31}\]
with no additional restriction on the trace of \(T^{\prime}\). Even setting aside the absence of an additional trace constraint on \(T^{\prime}\), the above Equation is significantly different from Eq. (30b), underlining the importance of the affine constraints on \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) for the properties of the transformations between them. \(\blacksquare\)
**Example 4** (Non-signalling channels to unit probability (Process Matrix)).: As another pertinent example, let us consider the well-studied case of process matrices [8, 15], i.e., the set of transformations \(\widetilde{T}\) that map pairs of CPTP maps to unit probability (this, in turn, implies that they constitute the dual affine set of the set of non-signalling CPTP maps, see Sec. 6). Specifically, let \(R_{12}\) and \(N_{34}\) be the Choi states of CPTP maps \(\widetilde{R}:\mathcal{L}(\mathcal{H}_{1})\rightarrow\mathcal{L}(\mathcal{H}_{2})\) and \(\widetilde{N}:\mathcal{L}(\mathcal{H}_{3})\rightarrow\mathcal{L}(\mathcal{H}_{4})\), respectively. Then, the set of process matrices is given by all linear maps \(\widetilde{T}\) such that \(\widetilde{T}[R_{12}\otimes N_{34}]=1\) for all CPTP maps \(R_{12},N_{34}\). In this case, the input space is given by \(\mathcal{H}_{\mathfrak{i}}=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3}\otimes\mathcal{H}_{4}\), while \(\mathcal{H}_{\mathfrak{o}}=\mathbb{C}\). The corresponding projectors simply follow from the previous examples as \(\widetilde{P}_{\mathfrak{i}}=\widetilde{P}_{12}^{(C)}\otimes\widetilde{P}_{34}^{(C)}\) and \(\widetilde{P}_{\mathfrak{o}}=\widetilde{\mathds{1}}\) (these are, again, self-adjoint and unital projectors that commute with the transpose, such that Thm. 2 can be applied). More explicitly, we have
\[\widetilde{P}_{\mathfrak{i}}[M]= \widetilde{P}_{12}^{(C)}\otimes\widetilde{P}_{34}^{(C)}[M] \tag{32}\] \[= \widetilde{P}_{12}^{(C)}\otimes\widetilde{\mathds{1}}[M-{}_{4}M+ {}_{34}M]\] (33) \[= (M-{}_{4}M+{}_{34}M)-{}_{2}(M-{}_{4}M+{}_{34}M)+{}_{12}(M-{}_{4}M+ {}_{34}M)\] (34) \[= M-{}_{4}M+{}_{34}M-{}_{2}M+{}_{24}M-{}_{234}M+{}_{12}M-{}_{124}M+ {}_{1234}M. \tag{35}\]
With this, we can employ Eqs. (23a) and (23b) to obtain the properties of process matrices that send pairs of CPTP maps to unit probability.
**(Non-signalling channel)**
\[\begin{array}{c}M\in\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3}\otimes\mathcal{H}_{4})\text{ belongs to }\mathcal{S}_{\mathfrak{i}}\text{ iff}\\ M\geq 0\\ M-{}_{4}M+{}_{34}M-{}_{2}M+{}_{24}M-{}_{234}M+{}_{12}M-{}_{124}M+{}_{1234}M=M\\ \operatorname{tr}[M]=d_{1}d_{3}\end{array} \tag{36}\]
**(Process matrix)**
\[T\geq 0 \tag{37a}\]
\[\widetilde{P}_{\mathsf{i}\mathsf{o}}[T]:={}_{2}T+{}_{4}T-{}_{24}T-{}_{34}T+{}_{234}T-{}_{12}T+{}_{124}T=T \tag{37b}\]
\[\operatorname{tr}[T]=d_{2}d_{4} \tag{37c}\]
The above properties of \(T\) exactly coincide with the characterization of process matrices given in Ref. [15].
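The following sketch (our own; qubit dimensions and the particular "discard and reprepare" channels are assumptions made only for illustration) projects a random Hermitian operator with Eq. (37b), rescales its trace to \(d_{2}d_{4}\), and checks that the resulting operator indeed assigns unit probability to a product of CPTP Choi matrices, as required for a process matrix (positivity, Eq. (37a), is not needed for this particular check).

```python
# Sketch (qubits assumed): process-matrix constraints (37b)-(37c) imply unit
# probability on products of CPTP Choi matrices.
import numpy as np

dims = [2, 2, 2, 2]                           # d_1, ..., d_4
D = int(np.prod(dims))

def tr_replace(T, sites):
    """_x T for a four-partite operator (same construction as in the sketch above)."""
    if len(sites) == len(dims):
        return np.trace(T) * np.eye(D) / D
    T8 = T.reshape(dims + dims)
    kets, bras = list('abcd'), list('wxyz')
    tr_bras = bras.copy()
    for s in sites:
        tr_bras[s - 1] = kets[s - 1]
    kept = [i for i in range(4) if (i + 1) not in sites]
    marg_out = ''.join([kets[i] for i in kept] + [bras[i] for i in kept])
    marg = np.einsum(''.join(kets + tr_bras) + '->' + marg_out, T8)
    ops, specs, norm = [marg], [marg_out], 1.0
    for s in sites:
        ops.append(np.eye(dims[s - 1])); specs.append(kets[s - 1] + bras[s - 1])
        norm /= dims[s - 1]
    return (np.einsum(','.join(specs) + '->' + ''.join(kets + bras), *ops) * norm).reshape(D, D)

def P_proc(T):
    """Projector of Eq. (37b)."""
    return (tr_replace(T, [2]) + tr_replace(T, [4]) - tr_replace(T, [2, 4])
            - tr_replace(T, [3, 4]) + tr_replace(T, [2, 3, 4])
            - tr_replace(T, [1, 2]) + tr_replace(T, [1, 2, 4]))

X = np.random.randn(D, D); X = X + X.T
W = P_proc(X)
W = W + (dims[1] * dims[3] - np.trace(W)) * np.eye(D) / D   # fix tr[W] = d_2 d_4

# Two "discard and reprepare" CPTP maps; Choi(rho -> tr[rho] s) = 1 (x) s.
s_A = np.diag([0.6, 0.4])                     # Alice reprepares s_A on her output 2
s_B = np.diag([0.1, 0.9])                     # Bob reprepares s_B on his output 4
R = np.kron(np.eye(dims[0]), s_A)             # Choi on spaces 1,2
N = np.kron(np.eye(dims[2]), s_B)             # Choi on spaces 3,4
prob = np.trace(np.kron(R, N) @ W).real
assert np.isclose(prob, 1.0)
```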
In particular, this latter result is of interest, since the set of process matrices can be considered the dual affine set of the set of all tensor products of CPTP maps, where the dual affine set of a set consists of all operators that map the elements of the set to \(1\).10 Such dual affine sets play an important role in quantum mechanics (and more generally, linear algebra), and evidently, the projectors we introduced can be used to characterize them comprehensively. In the next section, we will analyse the characterization of dual sets (affine or not) in more detail. Before doing so, we provide an alternative characterization of process matrices.
Footnote 10: More generally, any process matrix \(T\) will map any matrix of the form \(\sum_{i}\lambda_{i}M^{(i)}\otimes N^{(i)}\), where \(\sum_{i}\lambda_{i}=1\), and \(M^{(i)}\), \(N^{(i)}\) CPTP to \(1\). Since the set of all valid CPTP maps that can be decomposed in this way is the set of non-signalling maps [26, 27], process matrices form exactly the dual set of non-signalling maps. We will further investigate this connection in Sec. 6.
**Example 5** (Quantum channels to superchannels without initial input and final output (Process matrices revisited)).: As a penultimate example, let us consider process matrices from a different perspective. Interestingly, they can be characterized in an alternative (yet equivalent) way, namely as mappings that map quantum channels (say, \(\widetilde{R}:\mathcal{L}(\mathcal{H}_{1})\to\mathcal{L}(\mathcal{H}_{2})\)) to one-slot combs with no initial input and final output, i.e., superchannels with trivial input and output space (see Fig. 6(b)). Concretely, this requirement reads \(\widetilde{T}[R]=\widetilde{Q}^{\prime}\), where \(\widetilde{Q}^{\prime}\) is a one-slot comb whenever \(\widetilde{R}\) is a CPTP map. Since such one-slot combs are special cases of superchannels, they are characterized by the projector of Eq. (30b) and they are causally ordered. This latter fact, in turn, chimes nicely with the intuitive definition of process matrices as mappings that obey local but not necessarily global causality [8]; considering the slot corresponding to \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) as Alice's laboratory, independent of what deterministic operation (i.e., CPTP map) she performs locally, Bob (i.e., the slot corresponding to \(\mathcal{H}_{3}\) and \(\mathcal{H}_{4}\)) will always encounter a causally ordered scenario (given by the one-slot comb \(\widetilde{Q}^{\prime}\)). Naturally, one would obtain the same definition of process matrices with the roles of Alice and Bob reversed.
Let us now show that this alternative definition of process matrices indeed leads to the same characterization as the one provided in the previous Example. First, since \(R_{12}\in\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{2})\) and \(Q^{\prime}_{34}\in\mathcal{L}(\mathcal{H}_{3}\otimes\mathcal{H}_{4})\), we identify \(\mathcal{H}_{\mathsf{i}}\cong\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) and \(\mathcal{H}_{\mathsf{o}}\cong\mathcal{H}_{3}\otimes\mathcal{H}_{4}\). The projector \(\widetilde{P}_{\mathsf{i}}\) is given by the projector \(\widetilde{P}_{12}^{(C)}\) on the space of channels, while \(\widetilde{P}_{\mathsf{o}}\) follows directly from the projector onto the set of superchannels provided in Eq. (30b) by setting the initial input and final output spaces to be trivial, such that, in the labels used here, \(\widetilde{P}_{\mathsf{o}}[Q^{\prime}]={}_{4}Q^{\prime}\). Additionally, we have that \(\gamma_{\mathsf{i}}=\mathrm{tr}[R]=d_{1}\), while \(\gamma_{\mathsf{o}}=\mathrm{tr}[Q^{\prime}]=d_{4}\) (see Eq. (30)). Since all involved
Figure 6: (a) **Process Matrix. As a mapping from two independent channels \(\widetilde{R}\) and \(\widetilde{N}\) to the number \(1\) (b) Process Matrix. As a mapping from a channel \(\widetilde{R}\) to a one-slot superchannel \(\widetilde{Q}^{\prime}\) without past and future.**
projectors are self-adjoint, unital, and commute with the transposition, Thm. 2 applies, and we obtain the characterization of T as
\[\widetilde{P}_{\mathsf{i}\mathsf{o}}[T]:={}_{2}T+{}_{4}T-{}_{24}T-{}_{34}T+{}_{234}T-{}_{12}T+{}_{124}T=T \tag{38}\]
\[\operatorname{tr}[T]=d_{2}d_{4} \tag{39}\]
which coincides exactly with the characterization of process matrices given in Eq. (37). Besides yielding an equivalent characterization of process matrices, the above derivation also sheds an interesting light on the emergence of causal indefiniteness; graphically, a mapping from CPTP maps to one-slot combs is very similar to a mapping from CPTP maps to CPTP maps (i.e., superchannels), with the only difference that the incoming and outgoing wires of the former case are inverted with respect to the latter (to see this, compare Figs. 5 and 6(b)). This graphical similarity notwithstanding, **all** superchannels are causally ordered, while process matrices can be causally non-separable and even violate causal inequalities [8, 28, 29].
Finally, let us remark that the equivalence between the two characterizations of process matrices ceases to hold if the trace-rescaling property is dropped. In this case, the requirement that process matrices map non-signalling maps to \(\mathbb{C}\)(i.e., the case considered in the previous Example) yields no restrictions on the corresponding map \(T^{\prime}\), i.e., \(\widetilde{P}_{\mathbf{i}\mathbf{o}}=\widetilde{\mathbf{I}}_{\mathbf{i}\mathbf{o}}\) (as can be seen by direct insertion into Eq. (91)). On the other hand, dropping the trace-rescaling conditions on maps that map CPTP maps to the space spanned by one-slot combs (i.e., the ones considered in this example), we obtain, using Thm. 4:
\[T^{\prime}={}_{2}T^{\prime}+{}_{4}T^{\prime}-{}_{24}T^{\prime}-{}_{12}T^{ \prime}+{}_{124}T^{\prime}, \tag{40}\]
which is a non-trivial constraint on the map \(T^{\prime}\).
**Example 6** (Superchannels to superchannels).: As a last example, let us discuss an a priori more involved case that features less prominently in the literature: mappings from superchannels to superchannels (see Fig. 7). Above, we already derived the projector onto the space of superchannels as well as their trace (see Eqs. (30)).
Here, proper bookkeeping of the involved spaces becomes slightly messy, but the respective properties of mappings from superchannels to superchannels can be readily deduced, using Thm. 2. Specifically, following the labelling convention of Fig. 7, we set
\(\mathcal{H}_{\mathsf{i}}:=\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3}\otimes\mathcal{H}_{4}\) and \(\mathcal{H}_{\mathsf{o}}:=\mathcal{H}_{0}\otimes\mathcal{H}_{5}\otimes\mathcal{H}_{6}\otimes\mathcal{H}_{7}\). Consequently, for the Choi matrices \(S\) (\(S^{\prime}\)) of the input (output) superchannels we have \(S\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) (\(S^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathsf{o}})\)), while the Choi matrix \(T\) of the transformation between them acts on \(\mathcal{H}_{\mathsf{i}}\otimes\mathcal{H}_{\mathsf{o}}\). Now, using Thm. 2, we obtain
\[\begin{array}{c}\framebox{\textbf{(Superchannel)}}\\ S\in\mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\otimes\mathcal{H}_{3}\otimes\mathcal{H}_{4})\text{ belongs to }\mathcal{S}_{\mathsf{i}}\text{ iff}\\ S\geq 0\\ \widetilde{P}_{\mathsf{i}}[S]:=S-{}_{4}S+{}_{34}S-{}_{234}S+{}_{1234}S=S\\ \operatorname{tr}[S]=d_{1}d_{3}\end{array}\qquad\begin{array}{c}\framebox{\textbf{(Superchannel)}}\\ S^{\prime}\in\mathcal{L}(\mathcal{H}_{0}\otimes\mathcal{H}_{5}\otimes\mathcal{H}_{6}\otimes\mathcal{H}_{7})\text{ belongs to }\mathcal{S}_{\mathsf{o}}\text{ iff}\\ S^{\prime}\geq 0\\ \widetilde{P}_{\mathsf{o}}[S^{\prime}]:=S^{\prime}-{}_{7}S^{\prime}+{}_{67}S^{\prime}-{}_{567}S^{\prime}+{}_{0567}S^{\prime}=S^{\prime}\\ \operatorname{tr}[S^{\prime}]=d_{0}d_{6}\end{array} \tag{41}\]
**(Mapping between superchannels)**
\[T\geq 0 \tag{42a}\]
\[\begin{split}\widetilde{P}_{\mathsf{i}\mathsf{o}}[T]:=&\,T-{}_{7}T+{}_{47}T+{}_{67}T-{}_{347}T-{}_{467}T-{}_{567}T+{}_{2347}T\\ &+{}_{3467}T+{}_{4567}T-{}_{12347}T-{}_{23467}T-{}_{34567}T+{}_{123467}T\\ &+{}_{234567}T-{}_{1234567}T+{}_{01234567}T=T\end{split} \tag{42b}\]
\[\operatorname{tr}[T]=d_{0}d_{6}d_{2}d_{4}\,. \tag{42c}\]
While not a priori particularly insightful (albeit indispensable when numerically optimizing over transformations of superchannels) in its own right, Eq. (42b) allows one to directly deduce that transformations from superchannels to superchannels are not necessarily causally ordered. For example, for the space \(\mathcal{H}_{7}\) to be the final output space, \(T\) would have to satisfy \({}_{7}T={}_{\mathfrak{x}7}T\), where \(\mathfrak{x}\in\{0,2,4,6\}\). From Eq. (42b), we directly see that this is not the case for any \(\mathfrak{x}\), and, for instance, we have \({}_{7}T-{}_{47}T={}_{67}T-{}_{467}T-{}_{567}T+{}_{4567}T\neq 0\) and analogously for \(\mathfrak{x}=0,2,6\). In a similar vein, this can be checked for the other potential final output spaces \(\{1,5,3\}\), with the same result. Consequently, there exist valid general maps from superchannels to superchannels which do not have to be causally ordered. \(\blacksquare\)
## 5 Probabilistic quantum transformations
In the previous sections, we have - except for short comments on the consequences of dropping trace rescaling conditions - only addressed _deterministic_ quantum transformations, i.e., transformations that occur with unit probability. Specifically, these are transformations that are 'built up' from quantum states (which can be prepared with unit probability); then, CPTP maps (transformations from states to states), superchannels (transformations from quantum channels to quantum channels), process matrices (transformations from channels to number 1) are all deterministic, since they have a deterministic element as their "base object". More abstractly, here, we consider all transformations deterministic that map between affine quantum sets \(\mathcal{S}_{1}\) and \(\mathcal{S}_{0}\) with \(\gamma_{1},\gamma_{0}\neq 0\).
However, quantum theory also admits probabilistic quantum transformations. For example, when considering quantum states, probabilistic transformations are described by quantum instruments [30, 31]. Concretely, let \(\rho\in\mathcal{L}(\mathcal{H}_{1})\) be a quantum state, then a quantum instrument is a set of CP maps \(\{\widetilde{C}^{(i)}\}_{i}\) - each of them corresponding to a possible measurement outcome - with \(\widetilde{C}^{(i)}:\mathcal{L}(\mathcal{H}_{1})\rightarrow\mathcal{L}( \mathcal{H}_{0})\) which add up to a quantum channel, that is, \(\widetilde{C}:=\sum_{i}\widetilde{C}^{(i)}\) is CPTP. When the quantum instrument \(\{\widetilde{C}^{(i)}\}_{i}\) is applied on the state \(\rho\), with probability \(\operatorname{tr}\left[\widetilde{C}^{(i)}[\rho]\right]\), the classical outcome \(i\) is obtained and the state \(\rho\) is transformed to
\[\rho^{\prime}:=\frac{\widetilde{C}^{(i)}[\rho]}{\operatorname{tr}\left[ \widetilde{C}^{(i)}[\rho]\right]}. \tag{43}\]
In a similar vein, _all_ deterministic quantum transformations (in particular, all the quantum transformations we discussed above) have their probabilistic counterpart, given by sets of CP maps that add up to a deterministic quantum transformation.
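As a minimal illustration of Eq. (43) (our own toy example, not taken from the manuscript), the sketch below implements the two-outcome instrument given by a projective measurement in the computational basis and prints the outcome probabilities and post-measurement states.

```python
# Sketch of a quantum instrument: CP elements of a projective measurement that
# sum to a CPTP map (complete dephasing).
import numpy as np

P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # computational-basis projectors
rho = np.array([[0.5, 0.5], [0.5, 0.5]])            # |+><+|

def element(P, r):
    """Instrument element C^(i)[rho] = P rho P (a CP, trace-non-increasing map)."""
    return P @ r @ P

# The elements sum to a CPTP map, so the outcome probabilities sum to 1.
assert np.isclose(np.trace(element(P0, rho) + element(P1, rho)), 1.0)

# Outcome probabilities and post-measurement states, cf. Eq. (43).
for i, P in enumerate((P0, P1)):
    p = np.trace(element(P, rho))
    post = element(P, rho) / p
    print(f"outcome {i}: p = {p:.2f}, post-measurement state =\n{post}")
```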
**Definition 3** (Probabilistic Quantum Transformations).: _Let \(\widetilde{P}_{1}:\mathcal{L}(\mathcal{H}_{i})\rightarrow\mathcal{L}(\mathcal{H}_{i})\) and \(\widetilde{P}_{\mathfrak{o}}:\mathcal{L}(\mathcal{H}_{\mathfrak{o}}) \rightarrow\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) be linear projective maps and \(\mathcal{S}_{\mathfrak{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) and \(\mathcal{S}_{\mathfrak{o}}\subseteq\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) be sets of quantum objects defined by_
\[\begin{array}{c}\framebox{$W\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})$ belongs to $\mathcal{S}_{\mathsf{i}}$ iff}\\ W\geq 0\\ \widetilde{P}_{\mathsf{i}}[W]=W\\ \operatorname{tr}[W]=\gamma_{\mathsf{i}}\end{array}\qquad\begin{array}{c}\framebox{$W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathsf{o}})$ belongs to $\mathcal{S}_{\mathsf{o}}$ iff}\\ W^{\prime}\geq 0\\ \widetilde{P}_{\mathsf{o}}[W^{\prime}]=W^{\prime}\\ \operatorname{tr}[W^{\prime}]=\gamma_{\mathsf{o}}\end{array}\]
_A set of linear maps \(\{\widetilde{T}^{(i)}_{\mathsf{i}\mathsf{o}}\}_{i}\), \(\widetilde{T}^{(i)}_{\mathsf{i}\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}(\mathcal{H}_{\mathsf{o}})\), is a probabilistic quantum transformation from \(\mathcal{S}_{\mathsf{i}}\) to \(\mathcal{S}_{\mathsf{o}}\) when each \(\widetilde{T}^{(i)}_{\mathsf{i}\mathsf{o}}\) is completely positive and \(\sum_{i}\widetilde{T}^{(i)}_{\mathsf{i}\mathsf{o}}\) is a quantum transformation from \(\mathcal{S}_{\mathsf{i}}\) to \(\mathcal{S}_{\mathsf{o}}\) in the sense of Def. 2._
In order to ensure that \(\operatorname{tr}[T^{(i)}C]\) is a non-negative number for every positive operator11 \(C\), we need \(T^{(i)}\geq 0\) for all \(i\), and in order to ensure normalisation for every quantum channel \(C\), we need that \(\sum_{i}\operatorname{tr}[T^{(i)}C]=1\), which is equivalent to imposing \(\operatorname{tr}[TC]=1\) for every channel \(C\), where \(T:=\sum_{i}T^{(i)}\). The set of all operators \(T\) respecting \(\operatorname{tr}[TC]=1\) for every channel \(C\) is the _dual affine set_ of the set of quantum channels, and a set of operators \(\{T^{(i)}\}_{i}\) respecting
Footnote 11: Here, the operator \(C\) is assumed to be an _arbitrary_ positive semidefinite operator, which may not satisfy the constraints of a quantum channel. This is because we require the quantity \(\operatorname{tr}[T^{(i)}C]\) to be non-negative not only on channels, but also on instrument elements or when acting non-trivially only on a part of a quantum channel (this is similar to a complete positivity argument for the case of quantum channels).
\[T^{(i)}\geq 0 \tag{47}\] \[\sum_{i}\operatorname{tr}[T^{(i)}C]= 1,\quad\text{ for every channel }C \tag{48}\]
is called a tester. Interestingly, all quantum testers may be realised within standard quantum circuits, that is, for any tester \(\{T^{(i)}\}_{i}\), \(T^{(i)}\in\mathcal{L}(\mathcal{H}_{i}\otimes\mathcal{H}_{\mathfrak{o}})\), there always exists a state \(\rho\in\mathcal{L}(\mathcal{H}_{i}\otimes\mathcal{H}_{\text{aux}})\) and a POVM \(\{M^{(i)}\}_{i}\), \(M^{(i)}\in\mathcal{L}(\mathcal{H}_{\text{aux}}\otimes\mathcal{H}_{\mathfrak{o}})\), such that \(\operatorname{tr}[T^{(i)}C]=\operatorname{tr}\left[M^{(i)}\,(\widetilde{C}\otimes\widetilde{\mathbf{1}}_{\text{aux}})[\rho]\right]\). Although we might not always have a quantum circuit realisation for other quantum objects (such as process matrices), the concept of dual affine imposes the minimal normalisation constraint required for measuring general quantum objects and plays a fundamental role in general quantum measurements [7, 14, 32] and general quantum assemblages [33].
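As a simple illustration of conditions (47) and (48) (our own sketch), consider the special case of a tester without auxiliary system, \(T^{(i)}=\rho^{\tau}\otimes M^{(i)}\), which corresponds to preparing \(\rho\), applying the channel, and measuring the POVM \(\{M^{(i)}\}_{i}\) on the output; the probe state, POVM, and example channel below are arbitrary assumptions.

```python
# Sketch: tester without auxiliary system, T^(i) = rho^tau (x) M^(i), checked on
# the Choi matrix of an example (unitary) channel.
import numpy as np

d_i = d_o = 2
rho = np.array([[0.5, 0.5], [0.5, 0.5]])                 # probe state |+><+|
M = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]           # computational-basis POVM
tester = [np.kron(rho.T, Mi) for Mi in M]                 # T^(i) = rho^tau (x) M^(i)

# Choi matrix of an example channel (a real rotation), cf. Eq. (1).
theta = 0.4
U = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
C = np.zeros((d_i * d_o, d_i * d_o))
for j in range(d_i):
    for k in range(d_i):
        E = np.zeros((d_i, d_i)); E[j, k] = 1
        C += np.kron(E, U @ E @ U.T)

probs = [np.trace(Ti @ C) for Ti in tester]
assert np.isclose(sum(probs), 1.0)                        # normalisation, Eq. (48)
# Each probability coincides with measuring the POVM on the channel output.
assert np.allclose(probs, [np.trace(Mi @ (U @ rho @ U.T)) for Mi in M])
```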
**Definition 4** (Dual Affine set).: _Let \(\mathcal{S}\subseteq\mathcal{L}(\mathcal{H})\). An operator \(\overline{W}\in\mathcal{L}(\mathcal{H})\) belongs to \(\overline{\mathcal{S}}\), the **dual affine** set of \(\mathcal{S}\) if12_
Footnote 12: In this work we are mostly interested in self-adjoint operators, hence, when \(W=W^{\dagger}\), we have \(\operatorname{tr}[\overline{W}^{\dagger}\,W]=\operatorname{tr}[\overline{W}\,W]\).
\[\operatorname{tr}[\overline{W}^{\dagger}\,W]=1,\quad\forall W\in\mathcal{S}. \tag{49}\]
Naturally, for _any_ set \(\mathcal{S}\), its dual affine set is indeed affine, since \(\sum_{i}\lambda_{i}\operatorname{tr}[\overline{W}^{(i)}W]=1\) for all \(\overline{W}^{(i)}\in\overline{\mathcal{S}}\) and \(W\in\mathcal{S}\) if \(\sum_{i}\lambda_{i}=1\). If the set \(\mathcal{S}\) itself is affine, then we can derive the properties of elements in \(\overline{\mathcal{S}}\) in a straightforward way.
We now present a Theorem - also obtained in an independent way in Ref. [14] - that allows us to obtain a simple characterisation for dual affine sets of quantum objects.
**Theorem 3**.: _Let \(\widetilde{P}:\mathcal{L}(\mathcal{H})\to\mathcal{L}(\mathcal{H})\) be a linear projective map and \(\mathcal{S}\subseteq\mathcal{L}(\mathcal{H})\) be an affine set defined by \(W\in\mathcal{S}\) via_
\[W=\widetilde{P}[W] \tag{50a}\] \[\operatorname{tr}[W]=\gamma. \tag{50b}\]
_where \(\widetilde{P}\) is self-adjoint, unital, and commutes with the transposition, i.e.,_
\[\widetilde{P}= \widetilde{P}^{\dagger} \tag{51a}\] \[\widetilde{P}[\mathds{1}]= \mathds{1}\] (51b) \[\widetilde{P}[W^{\tau}]= \widetilde{P}[W]^{\tau},\quad\forall W\in\mathcal{L}(\mathcal{H})\,, \tag{51c}\]
_and \(\gamma\neq 0\). An operator \(\overline{W}\in\mathcal{L}(\mathcal{H})\) belongs to the dual affine set \(\overline{\mathcal{S}}\) if and only if_
\[\overline{W}= \overline{W}-\widetilde{P}[\overline{W}]+\operatorname{tr}\left( \overline{W}\right)\frac{\mathds{1}}{d} \tag{52a}\] \[\operatorname{tr}[\overline{W}]= \frac{d}{\gamma}\,, \tag{52b}\]
_where \(d=\text{dim}(\mathcal{H})\)._
Proof.: This Theorem can be shown in two separate ways. On the one hand, since \(\widetilde{P}\) satisfies the requirements of Thm. 2, we can directly use it to prove the above Theorem. On the other hand, we can show it directly. Since this direct proof has merit in its own right, we start with the latter approach. To this end, we first note that, as we discuss in detail in Sec. 7.2, if a linear operator \(\widetilde{P}\) is self-adjoint and commutes with the transposition, then \(\operatorname{tr}[A\widetilde{P}[B]]=\operatorname{tr}[\widetilde{P}[A]B]\) for all \(A,B\). Thus, for any \(\overline{W}\) that satisfies Eqs. (52a) and (52b), we have
\[\operatorname{tr}[\overline{W}W]=\operatorname{tr}\left[\left(\overline{W}-\widetilde{P}[\overline{W}]+\frac{\mathds{1}}{\gamma}\right)W\right]=\operatorname{tr}[\overline{W}W]-\operatorname{tr}[\overline{W}\widetilde{P}[W]]+\frac{1}{\gamma}\operatorname{tr}[W]=1, \tag{53}\]
where we have used \(\widetilde{P}[W]=W\) and \(\operatorname{tr}W=\gamma\) for all \(W\in\mathcal{S}\).
To prove the converse, first note that, since \(\widetilde{P}\) is a self-adjoint and unital projector, it is also trace-preserving, and we have \(\widetilde{P}[M]\in\mathcal{S}\) for all \(M\in\mathcal{L}(\mathcal{H})\) which satisfy \(\operatorname{tr}[M]=\gamma\). The set of all such matrices \(M\) spans \(\mathcal{L}(\mathcal{H})\). Now, for any \(\overline{W}\) such that \(\operatorname{tr}[\overline{W}W]=1\) holds for all \(W\in\mathcal{S}\), we have
\[\operatorname{tr}[\overline{W}\widetilde{P}[M]]=\operatorname{tr }[\widetilde{P}[\overline{W}]M]=1=\frac{1}{\gamma}\operatorname{tr}[M]\,, \tag{54}\]
where we have used \(\operatorname{tr}[M]=\gamma\). Now, since the above equation holds for a full basis of \(\mathcal{L}(\mathcal{H})\), we have
\[\widetilde{P}[\overline{W}]=\frac{1}{\gamma}\mathbbm{1}\quad \Rightarrow\quad\operatorname{tr}[\overline{W}]=\frac{d}{\gamma}. \tag{55}\]
Together, the two statements in the above equation yield Eqs. (52a) and (52b), completing the proof.
As mentioned, and as already implicitly done in Ex. 4, we can also prove this statement by directly employing Thm. 2. To do so, we note that in the considered case, the output space \(\mathcal{H}_{\mathsf{o}}\) and output projector \(\widetilde{P}_{\mathsf{o}}\) are trivial, while we have the re-scaling factor \(\gamma_{\mathsf{o}}/\gamma_{\mathsf{i}}=1/\gamma\), such that Eqs. (23a) and (23b) of Thm. 2 are directly equivalent to Eqs. (52a) and (52b) of the above Theorem.
We emphasise that adding a positivity constraint to the objects in the set \(\mathcal{S}\), as is often naturally the case in quantum mechanics, would yield the same characterization of the dual set \(\overline{\mathcal{S}}\). As already outlined, the characterization of the dual affine set is simply a special case of the overall characterization of trace-rescaling linear maps between spaces that are defined by projectors \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\). In particular, denoting the corresponding map by \(\widetilde{T}\) (which satisfies \(\widetilde{T}[W]=1\) for all \(W\in\mathcal{S}\)), we have \(\overline{W}=T^{\tau}\), where \(T\) is the Choi matrix of \(\widetilde{T}\) and the additional transposition \(\cdot^{\tau}\) appears due to the convention we chose for the Choi formalism. Since dual sets play a prominent role in quantum mechanics, here we chose to discuss this important case explicitly.
While the above Theorem only applies for self-adjoint, unital projectors that commute with the transposition, it can easily be phrased for more general situations. The general case is discussed in Sec. 8.
### Quantum measurement and its relationship with probabilistic transformations
From the above discussion, we can now consider quantum measurements on general quantum objects in a more general way. We start by presenting their definition.
**Definition 5**.: _Let \(\mathcal{S}_{\mathsf{i}}\subseteq\mathcal{L}(\mathcal{H})\) be a set of quantum objects and \(\overline{\mathcal{S}}_{\mathsf{i}}\) its dual affine. A general quantum measurement on \(\mathcal{S}_{\mathsf{i}}\) is given by a set of operators \(\{M^{(i)}\}_{i}\), with \(M^{(i)}\in\mathcal{L}(\mathcal{H})\) respecting,_
\[M^{(i)}\geq 0 \tag{56}\] \[\sum_{i}M^{(i)}\in\overline{\mathcal{S}}_{\mathsf{i}}, \tag{57}\]
_and the probability of obtaining an outcome \(i\) when measuring the object \(W\in\mathcal{S}_{\mathsf{i}}\) is \(p(i)=\operatorname{tr}[M^{(i)}W]\)._
General measurements form the largest set of measurements that is in principle allowed by quantum theory, and they may be used to perform measurements on general quantum objects such as process matrices, as in Ref. [34], where the authors used general measurements to discriminate between process matrices with indefinite causal order. Similarly to other general transformations discussed in this manuscript, it may be the case that a general measurement cannot be realised by quantum circuits (due to indefinite causality), or even that we can never obtain a "fair" physical implementation for some general measurements (due to some other physical principle, e.g., a reversibility-preservation principle [35, 36] or logical consistency of processes [37, 38]). However, any larger set is certainly forbidden by quantum theory.
We remark that the set of general quantum measurements is closely related to the set of probabilistic transformations. Similarly to quantum instruments, a probabilistic transformation may be viewed as a description of a quantum measurement and a post-measurement state. Hence, every probabilistic transformation corresponds to a quantum measurement. More precisely, if \(\{C^{(i)}\}_{i}\) is a probabilistic quantum transformation from \(\mathcal{S}_{\mathfrak{i}}\) to \(\mathcal{S}_{\mathfrak{o}}\), its associated general measurement operators are given by \(M^{(i)}:=\widetilde{C}^{(i)\dagger}[\mathds{1}_{\mathfrak{o}}]\). Indeed,
\[\operatorname{tr}[M^{(i)}W] =\operatorname{tr}[\widetilde{C}^{(i)\dagger}[\mathds{1}_{\mathfrak{o}}]W] \tag{58}\] \[=\operatorname{tr}[\mathds{1}_{\mathfrak{o}}\,\widetilde{C}^{(i)}[W]]\] (59) \[=\operatorname{tr}[\widetilde{C}^{(i)}[W]]\] (60) \[=p(i), \tag{61}\]
which is precisely the probability of obtaining the outcome \(i\). Also, every quantum measurement may be viewed as a probabilistic transformation from a quantum object set \(\mathcal{S}_{\mathfrak{i}}\) to the trivial set \(\mathcal{S}_{\mathfrak{o}}=\{1\}\subseteq\mathcal{L}(\mathbb{C})\), which contains only the scalar number one. More precisely, if \(M^{(i)}\) is a general quantum measurement on \(\mathcal{S}_{\mathfrak{i}}\), one can always define the probabilistic transformation \(\widetilde{C}^{(i)}[W]:=\operatorname{tr}[M^{(i)}W]\), where \(\widetilde{C}^{(i)}:\mathcal{L}(\mathcal{H})\rightarrow\mathcal{L}(\mathbb{C})\). It is immediate to verify that the map \(\widetilde{C}^{(i)}[W]=\operatorname{tr}[M^{(i)}W]\) is completely positive and respects \(\sum_{i}\widetilde{C}^{(i)}\in\{1\}\).
### Non-signalling channels and multipartite process matrices
We now use the concept of dual affine sets to return a final time to the case of process matrices as the dual affine set of the set of non-signalling channels. Here, for generality, we consider the \(k\)-party case (also considered in Ref. [15]). To this end, let us first define non-signalling channels.
Informally, non-signalling channels are multipartite quantum channels which cannot be used for exchanging information. Let \(k\in\mathbb{N}\) be the number of parties; the Hilbert spaces corresponding to the total input and output, respectively, of such a non-signalling map are then given by
\[\mathcal{H}_{\mathfrak{i}}:= \mathcal{H}_{\mathfrak{i}_{1}}\otimes\mathcal{H}_{\mathfrak{i}_{2}}\otimes\ldots\otimes\mathcal{H}_{\mathfrak{i}_{k}} \tag{62}\] \[\mathcal{H}_{\mathfrak{o}}:= \mathcal{H}_{\mathfrak{o}_{1}}\otimes\mathcal{H}_{\mathfrak{o}_{2}}\otimes\ldots\otimes\mathcal{H}_{\mathfrak{o}_{k}}. \tag{63}\]
Figure 8: **Multipartite non-signalling channels.** Each input \(\mathfrak{i}_{\ell}\) can at most send signals to its corresponding output \(\mathfrak{o}_{\ell}\). Here, this is depicted for the party \(\mathfrak{i}_{1}\), where a blue arrow denotes the possibility to send a signal, while the red lines signify that no signal can be sent. Since party \(\mathfrak{i}_{1}\) can only send signals to \(\mathfrak{o}_{1}\), discarding said output then amounts to directly discarding the input \(\mathfrak{i}_{1}\) (depicted in the figure). In terms of the Choi state \(C\) of \(\widetilde{C}\), this corresponds to the requirement \({}_{\mathfrak{o}_{1}}C={}_{\mathfrak{i}_{1}\mathfrak{o}_{1}}C\) of Eq. (64) and analogously for all other pairs \(\{\mathfrak{i}_{\ell},\mathfrak{o}_{\ell}\}\).
Then, a multipartite quantum channel \(\widetilde{C}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\to\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) is non-signalling if its Choi state \(C\) respects
\[{}_{\mathfrak{o}_{\ell}}C={}_{\mathfrak{i}_{\ell}\mathfrak{o}_{\ell}}C,\quad\forall\ell\in\{1,2,\ldots,k\}. \tag{64}\]
Intuitively, the above property says that discarding the output of party \(\ell\) amounts to directly discarding its input, which implies that the only signalling of party \(\ell\) happens from \(\mathfrak{i}_{\ell}\) to \(\mathfrak{o}_{\ell}\), but not to any other output \(\mathfrak{o}_{\ell^{\prime}}\) (see Fig. 8 for a graphical depiction).
The requirements of Eq.(64) are equivalent to stating that the map \(\widetilde{C}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\to\mathcal{L}(\mathcal{ H}_{\mathfrak{o}})\) can be written as an affine combination of independent channels, that is \(\widetilde{C}=\sum_{\alpha}\gamma^{(\alpha)}\widetilde{C}_{1}^{(\alpha)} \otimes\widetilde{C}_{2}^{(\alpha)}\otimes\ldots\otimes\widetilde{C}_{k}^{( \alpha)}\), where \(\gamma^{(\alpha)}\in\mathbb{R}\), \(\sum_{\alpha}\gamma^{(\alpha)}=1\), and all maps \(\widetilde{C}_{\ell}^{(\alpha)}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}_{\ell} })\to\mathcal{L}(\mathcal{H}_{\mathfrak{o}_{\ell}})\) are quantum channels [26, 27].
It then follows that we have a simple characterisation of non-signalling quantum channels. For that, we define the projectors:
\[\widetilde{P}_{\ell} :\mathcal{L}(\mathcal{H}_{\mathfrak{i}_{\ell}}\otimes\mathcal{H}_{\mathfrak{o}_{\ell}})\to\mathcal{L}(\mathcal{H}_{\mathfrak{i}_{\ell}}\otimes\mathcal{H}_{\mathfrak{o}_{\ell}}),\qquad\ell\in\{1,2,\ldots,k\}, \tag{65}\] \[\widetilde{P}_{\ell}[C] :=C-{}_{\mathfrak{o}_{\ell}}C+{}_{\mathfrak{i}_{\ell}\mathfrak{o}_{\ell}}C,\qquad\ell\in\{1,2,\ldots,k\}, \tag{66}\] \[\widetilde{P}_{NS} :\mathcal{L}(\mathcal{H}_{\mathfrak{i}}\otimes\mathcal{H}_{\mathfrak{o}})\to\mathcal{L}(\mathcal{H}_{\mathfrak{i}}\otimes\mathcal{H}_{\mathfrak{o}}), \tag{67}\] \[\widetilde{P}_{NS} :=\widetilde{P}_{k}\circ\cdots\circ\widetilde{P}_{2}\circ\widetilde{P}_{1}. \tag{68}\]
We emphasize that, here, the order in which the projectors \(\widetilde{P}_{\ell}\) are applied in Eq. (68) does not matter, since they all commute (making a construction of \(\widetilde{P}_{NS}\) via concatenation possible in the first place). Hence, a linear operator \(C\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}}\otimes\mathcal{H}_{\mathfrak{o}})\) is a non-signalling quantum channel if and only if
\[C\geq 0 \tag{69}\] \[\widetilde{P}_{NS}[C]=C \tag{70}\] \[\operatorname{tr}[C]=d_{\mathfrak{i}}. \tag{71}\]
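As a numerical sanity check of this characterisation, the following Python/NumPy sketch (our own illustration, not the Matlab code of Ref. [46]) implements the trace-and-replace operation and the projector \(\widetilde{P}_{NS}\) of Eqs. (66)–(68) for two parties and verifies Eqs. (69)–(71) on the Choi matrix of a product of local unitary channels. The subsystem ordering \((\mathfrak{i}_{1},\mathfrak{o}_{1},\mathfrak{i}_{2},\mathfrak{o}_{2})\), the qubit dimensions, the chosen unitaries, and the Choi convention \(C=\sum_{jk}|j\rangle\!\langle k|\otimes\widetilde{C}[|j\rangle\!\langle k|]\) are all assumptions of the demo.

```python
# Illustrative sketch (assumptions: two qubit parties, subsystem order (i1,o1,i2,o2),
# unnormalised Choi convention).  Helper names are ours, not the paper's.
import numpy as np

def trace_and_replace(X, dims, subsys):
    """Prefix-subscript map  _S X := tr_S[X] ⊗ 1_S/d_S, identity reinserted in place."""
    n = len(dims)
    keep = [k for k in range(n) if k not in subsys]
    T = X.reshape(dims + dims)                                     # row axes, then column axes
    r, c = list('abcdefghijkl'[:n]), list('mnopqrstuvwx'[:n])
    for k in subsys:
        c[k] = r[k]                                                # repeated index = partial trace
    spec = ''.join(r + c) + '->' + ''.join([r[k] for k in keep] + [c[k] for k in keep])
    red = np.einsum(spec, T)                                       # tr_S[X]
    dS = int(np.prod([dims[k] for k in subsys]))
    eye = (np.eye(dS) / dS).reshape([dims[k] for k in subsys] * 2)
    out = np.tensordot(red, eye, axes=0)                           # keepR, keepC, subR, subC
    nk, ns = len(keep), len(subsys)
    out = out.transpose(list(range(nk)) + list(range(2 * nk, 2 * nk + ns)) +
                        list(range(nk, 2 * nk)) + list(range(2 * nk + ns, 2 * nk + 2 * ns)))
    order = keep + list(subsys)
    inv = [order.index(k) for k in range(n)]
    return out.transpose(inv + [n + j for j in inv]).reshape(X.shape)

def P_NS(C, dims, parties):
    """Non-signalling projector of Eqs. (66)-(68); parties = [(input_idx, output_idx), ...]."""
    for i_l, o_l in parties:
        C = C - trace_and_replace(C, dims, [o_l]) + trace_and_replace(C, dims, [i_l, o_l])
    return C

def choi_unitary(U):
    d = U.shape[0]
    phi = np.eye(d).reshape(d * d, 1)                              # unnormalised |Phi>
    V = np.kron(np.eye(d), U)
    return V @ (phi @ phi.conj().T) @ V.conj().T

dims = [2, 2, 2, 2]                                                # (i1, o1, i2, o2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
C = np.kron(choi_unitary(H), choi_unitary(np.eye(2)))              # product of local channels
print(np.allclose(P_NS(C, dims, [(0, 1), (2, 3)]), C))             # Eq. (70): fixed point
print(np.isclose(np.trace(C).real, 4.0))                           # Eq. (71): tr[C] = d_i (= d_o here)
```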
Since the multipartite process matrices lie in the dual affine set of non-signalling channels, Thm. 3 provides a simple characterisation of multipartite process matrices for an arbitrary number of parties.
**Example 7** (Multipartite process matrices).: Using the projectors for multipartite non-signalling channels defined in Eqs. (66) and (68) and Thm. 3, we obtain a simple characterisation for multipartite process matrices for any numbers of parties \(k\).
\[\text{(Multipartite process matrix)}\qquad W\geq 0 \tag{73a}\] \[W=W-\widetilde{P}_{NS}[W]+\operatorname{tr}[W]\frac{\mathds{1}_{\mathfrak{i}\mathfrak{o}}}{d_{\mathfrak{i}}d_{\mathfrak{o}}} \tag{73b}\] \[\operatorname{tr}[W]= d_{\mathfrak{o}}\,. \tag{73c}\]
We emphasize that this characterization of multi-partite process matrices has also been provided in equivalent form in App. B3 of Ref. [15]. Here, it follows straightforwardly from the (readily derived) properties of non-signalling channels, and the fact that process matrices form their dual affine set.
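A quick numerical illustration of the dual-affine pairing (rather than of Eq. (73b) itself) is given below. The sketch checks, for the standard example \(W=\rho_{\mathfrak{i}_{1}}\otimes\mathds{1}_{\mathfrak{o}_{1}}\otimes\rho_{\mathfrak{i}_{2}}\otimes\mathds{1}_{\mathfrak{o}_{2}}\) of a (causally separable) process matrix, that \(W\geq 0\), \(\operatorname{tr}[W]=d_{\mathfrak{o}}\), and \(\operatorname{tr}[WC]=1\) for Choi matrices \(C\) of products of local channels; the random-channel construction and the Choi convention are our own assumptions, not part of the paper.

```python
# Illustrative sketch (assumptions: qubit parties, subsystem order (i1,o1,i2,o2),
# unnormalised Choi convention, W = rho_i1 ⊗ 1_o1 ⊗ rho_i2 ⊗ 1_o2 as a demo process matrix).
import numpy as np

rng = np.random.default_rng(0)

def choi(kraus):
    """Unnormalised Choi matrix  C = sum_a (1 ⊗ K_a) |Phi><Phi| (1 ⊗ K_a)†."""
    d_in = kraus[0].shape[1]
    phi = np.eye(d_in).reshape(d_in * d_in, 1)
    return sum(np.kron(np.eye(d_in), K) @ (phi @ phi.conj().T) @ np.kron(np.eye(d_in), K).conj().T
               for K in kraus)

def random_channel(d, n_kraus=3):
    G = rng.normal(size=(n_kraus * d, d)) + 1j * rng.normal(size=(n_kraus * d, d))
    Q, _ = np.linalg.qr(G)                                  # isometry: Q† Q = 1_d
    return choi([Q[a * d:(a + 1) * d, :] for a in range(n_kraus)])

def random_state(d):
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho)

rho1, rho2 = random_state(2), random_state(2)
W = np.kron(np.kron(rho1, np.eye(2)), np.kron(rho2, np.eye(2)))

print(np.all(np.linalg.eigvalsh(W) > -1e-12))               # Eq. (73a): W >= 0
print(np.isclose(np.trace(W).real, 4.0))                    # Eq. (73c): tr[W] = d_o
for _ in range(5):                                          # dual pairing: tr[W (C_A ⊗ C_B)] = 1
    C = np.kron(random_channel(2), random_channel(2))
    print(np.isclose(np.trace(W @ C).real, 1.0))
```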
## 7 Link product and key concepts
The proofs of Thm. 2 as well as its generalizations rely on only a handful of simple mathematical concepts, which we now discuss. Predominantly, we will rely on three main ingredients: the CJI, which allows us to phrase all statements on maps in terms of matrices; the link product, which translates the concatenation of maps to the corresponding manipulation on the level of Choi matrices; and the fact that linear operators with particular properties can be moved around freely in the link product.
### Link Product
We start by discussing the link product \(\star\), already informally introduced in Eq. (20), which captures the action of maps in terms of the CJI [5]. Concretely, for any linear maps \(\widetilde{T}_{\mathtt{xy}}:\mathcal{L}(\mathcal{H}_{\mathtt{x}})\to \mathcal{L}(\mathcal{H}_{\mathtt{y}})\), \(\widetilde{V}_{\mathtt{yz}}:\mathcal{L}(\mathcal{H}_{\mathtt{y}})\to \mathcal{L}(\mathcal{H}_{\mathtt{z}})\) and arbitrary matrices \(M_{\mathtt{x}}\in\mathcal{L}(\mathcal{H}_{\mathtt{x}})\), we have
\[\mathrm{Choi}[\widetilde{V}_{\mathtt{yz}}\circ\widetilde{T}_{\mathtt{xy}}]=: T_{\mathtt{xy}}\star V_{\mathtt{yz}}\in\mathcal{L}(\mathcal{H}_{\mathtt{x}} \otimes\mathcal{H}_{\mathtt{z}})\quad\text{and}\quad\widetilde{T}_{\mathtt{ xy}}[M_{\mathtt{x}}]=T_{\mathtt{xy}}\star M_{\mathtt{x}}\in\mathcal{L}( \mathcal{H}_{\mathtt{y}})\,, \tag{74}\]
where \(\mathrm{Choi}[\cdot]\) transforms a map to its corresponding Choi matrix. In particular, the link product of two arbitrary matrices \(T_{\mathtt{xy}}\in\mathcal{L}(\mathcal{H}_{\mathtt{x}}\otimes\mathcal{H}_{\mathtt{y}})\) and \(V_{\mathtt{yz}}\in\mathcal{L}(\mathcal{H}_{\mathtt{y}}\otimes\mathcal{H}_{\mathtt{z}})\) is given by a trace over the space they are both defined on and a partial transpose over the same space 13, i.e.,
Footnote 13: The concrete form of the link product – in particular the presence of partial transposes – depends on the convention of the CJI one employs. The form of the link product we present here is in line with the convention chosen in Eq. (1).
\[T_{\mathtt{xy}}\star V_{\mathtt{yz}}:=\operatorname{tr}_{\mathtt{y}}[(T_{\mathtt{xy}}\otimes\mathbb{1}_{\mathtt{z}})(\mathbb{1}_{\mathtt{x}}\otimes V_{\mathtt{yz}}^{\tau_{\mathtt{y}}})]\,, \tag{75}\]
where \(\cdot^{\tau_{\mathtt{y}}}\) denotes the partial transpose with respect to the computational basis of \(\mathcal{H}_{\mathtt{y}}\). As has been shown in Ref. [5], the link product of positive semidefinite (Hermitian) matrices is again positive semidefinite (Hermitian), and it is both associative and - for all cases we consider - commutative (up to a re-ordering of tensor factors, which we always tacitly assume). Additionally, it is easy to see that the link product satisfies
\[A\star B=A^{\prime}\star B\quad\forall B\quad\Leftrightarrow\quad A=A^{\prime}, \tag{76}\]
since \(A\star B\) and \(A^{\prime}\star B\) are equal to \(\widetilde{A}[B]\) and \(\widetilde{A}^{\prime}[B]\), respectively, and if two linear maps agree on all elements they act on, they coincide, i.e., \(\widetilde{A}=\widetilde{A}^{\prime}\) and thus \(A=A^{\prime}\) (the converse direction in Eq. (76) holds trivially).
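For concreteness, the two instances of the link product in Eq. (74) can be spelled out numerically. The following NumPy sketch is our own illustration: it assumes the unnormalised Choi convention \(C=\sum_{jk}|j\rangle\!\langle k|\otimes\widetilde{C}[|j\rangle\!\langle k|]\) for Eq. (1), the channels are arbitrary unitary demo channels, and all helper names are ours.

```python
# Illustrative sketch of Eqs. (74)-(75) for the bipartite case (assumed Choi convention).
import numpy as np

def choi_unitary(U):
    d = U.shape[0]
    phi = np.eye(d).reshape(d * d, 1)
    V = np.kron(np.eye(d), U)
    return V @ (phi @ phi.conj().T) @ V.conj().T

def link(A, dims_A, B, dims_B):
    """A ⋆ B = tr_y[(A ⊗ 1_z)(1_x ⊗ B^{tau_y})] for A on x⊗y and B on y⊗z."""
    dx, dy = dims_A
    dy2, dz = dims_B
    assert dy == dy2
    B_ty = B.reshape(dy, dz, dy, dz).transpose(2, 1, 0, 3).reshape(dy * dz, dy * dz)
    big = np.kron(A, np.eye(dz)) @ np.kron(np.eye(dx), B_ty)        # operator on x⊗y⊗z
    t = big.reshape(dx, dy, dz, dx, dy, dz)
    return np.einsum('abcdbf->acdf', t).reshape(dx * dz, dx * dz)   # partial trace over y

d = 2
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
V = np.array([[1, 0], [0, 1j]])
T_xy, V_yz = choi_unitary(U), choi_unitary(V)

# Composition: Choi[V~ ∘ U~] = T_xy ⋆ V_yz
print(np.allclose(link(T_xy, (d, d), V_yz, (d, d)), choi_unitary(V @ U)))

# Action on a matrix: T_xy ⋆ rho = tr_x[T_xy (rho^tau ⊗ 1_y)] = U rho U†
rho = np.array([[0.6, 0.2j], [-0.2j, 0.4]])
act = np.einsum('abcd,ac->bd', T_xy.reshape(d, d, d, d), rho)
print(np.allclose(act, U @ rho @ U.conj().T))
```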
Importantly, the link product allows us to re-phrase the question of finding the properties of (trace-rescaling) mappings \(\widetilde{T}_{\mathtt{i}\mathtt{o}}\) between sets \(\mathcal{S}_{\mathtt{i}}\) and \(\mathcal{S}_{\mathtt{o}}\) defined by projectors \(\widetilde{P}_{\mathtt{i}}\) and \(\widetilde{P}_{\mathtt{o}}\), respectively, in terms of Choi matrices. The requirements that \(\widetilde{T}_{\mathtt{i}\mathtt{o}}[W_{\mathtt{i}}]\in\mathcal{S}_{\mathtt{o}}\) and \(\mathrm{tr}[\widetilde{T}_{\mathtt{i}\mathtt{o}}[W_{\mathtt{i}}]]=\gamma_{ \mathtt{o}}/\gamma_{\mathtt{i}}\,\mathrm{tr}[W_{\mathtt{i}}]\) for all \(W_{\mathtt{i}}\in\mathcal{S}_{\mathtt{i}}\) can now be phrased as
\[(\widetilde{\mathbb{1}}_{\mathtt{i}}\otimes\widetilde{P}_{\mathtt{o}})[T_{ \mathtt{i}\mathtt{o}}\star W_{\mathtt{i}}]=T_{\mathtt{i}\mathtt{o}}\star W_{ \mathtt{i}}\quad\text{and}\quad\mathrm{tr}[T_{\mathtt{i}\mathtt{o}}\star W_{ \mathtt{i}}]=\frac{\gamma_{\mathtt{o}}}{\gamma_{\mathtt{i}}}\,\mathrm{tr}[W_{ \mathtt{i}}] \tag{77}\]
for all \(W_{\mathtt{i}}=\widetilde{P}_{\mathtt{i}}[W_{\mathtt{i}}]\) and \(\operatorname{tr}[W_{\mathtt{i}}]=\gamma_{\mathtt{i}}\). In order to deduce the structural properties these two equations engender for \(T_{\mathtt{i}\mathtt{o}}\), all constraints need to be 'moved onto' \(T_{\mathtt{i}\mathtt{o}}\). Consequently, we now discuss how to 'move around' linear maps in the link product.
### Linear operators in the link product
The final property of the link product that we will make frequent use of is the fact that linear maps that act on one of the factors in the link product can be 'moved around' (this is akin to finding their adjoint action). In order to obtain simplifications for the special case of self-adjoint, unital maps that commute with the transposition - the case most frequently encountered in quantum mechanics - we first recall some (well-known) pertinent properties of such maps:
**Lemma 1** (Properties of linear maps).: _Let \(\widetilde{P}:\mathcal{L}(\mathcal{H})\to\mathcal{L}(\mathcal{H})\) be a linear map. The following statements hold:_
1. _If_ \(\widetilde{P}\) _is self-adjoint, then it is Hermiticity preserving._
2. _If_ \(\widetilde{P}\) _is self-adjoint and unital, then it is trace-preserving._
3. _If_ \(\widetilde{P}\) _is self-adjoint, then it commutes with the transposition iff it commutes with complex conjugation (with respect to the same basis)._
4. _If_ \(\widetilde{P}\) _is self-adjoint and commutes with the transposition (or complex conjugation), then_ \(\operatorname{tr}[M^{\prime}\widetilde{P}[M]]=\operatorname{tr}[\widetilde{P} [M^{\prime}]M]\) _for all_ \(M^{\prime},M\in\mathcal{L}(\mathcal{H})\)_._
5. _If_ \(\widetilde{P}\) _is Hermiticity preserving, then_ \(\operatorname{tr}[H^{\prime}\widetilde{P}[H]]=\operatorname{tr}[\widetilde{P }[H^{\prime}]H]\) _for all Hermitian_ \(H^{\prime},H\in\mathcal{L}(\mathcal{H})\)_._
All the proofs follow by direct insertion and are provided in App. C for completeness. With these properties of linear maps \(\widetilde{P}\) in hand, we can now investigate how linear maps can be 'moved around' in the link product.
**Lemma 2**.: _Let \(A_{\mathsf{i}\circ}\in\mathcal{L}(\mathcal{H}_{\mathsf{i}}\otimes\mathcal{H} _{\mathsf{o}})\) and \(B_{\mathsf{i}}\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) and let \(\widetilde{P}_{\mathsf{i}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L} (\mathcal{H}_{\mathsf{i}})\) be a linear operator. Then_
\[A_{\mathsf{i}\circ}\star\widetilde{P}_{\mathsf{i}}[B_{\mathsf{i}}]=\widetilde {P}_{\mathsf{i}}^{\dagger}[A_{\mathsf{i}\circ}^{*}]^{*}\star B_{\mathsf{i}}=: \widetilde{P}_{\mathsf{i}}^{\tau}[A_{\mathsf{i}\circ}]\star B_{\mathsf{i}}. \tag{78}\]
_If \(\widetilde{P}_{\mathsf{i}}\) is self-adjoint and commutes with the transposition (or complex conjugation), then_
\[A_{\mathsf{i}\circ}\star\widetilde{P}_{\mathsf{i}}[B_{\mathsf{i}}]=\widetilde {P}_{\mathsf{i}}[A_{\mathsf{i}\circ}]\star B_{\mathsf{i}} \tag{79}\]
_holds._
Proof.: First, we note that, if Eq. (78) holds, then, due to the properties provided in Lem. 1, Eq. (79) follows directly when \(\widetilde{P}\) is self-adjoint and commutes with the transposition (or complex conjugation). For the proof of Eq. (78), we first recall that the action of any linear operator \(\widetilde{P}_{\mathsf{i}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L} (\mathcal{H}_{\mathsf{i}})\) can be written as \(\widetilde{P}_{\mathsf{i}}[\cdot]=\sum_{\alpha}L_{\mathsf{i}}^{(\alpha)} \cdot R_{\mathsf{i}}^{(\alpha)\dagger}\) for some matrices \(L_{\mathsf{i}}^{(\alpha)},R_{\mathsf{i}}^{(\alpha)\dagger}\in\mathcal{H}_{ \mathsf{i}}\). With this, from the definition of the adjoint, it is easy to see that \(\widetilde{P}_{\mathsf{i}}^{\dagger}[\cdot]=\sum_{\alpha}L_{\mathsf{i}}^{( \alpha)\dagger}\cdot R_{\mathsf{i}}^{(\alpha)}\) holds. Now, using the definition of the link product, we obtain
\[A_{\mathsf{i}\circ}\star\widetilde{P}_{\mathsf{i}}[B_{\mathsf{ i}}] =\operatorname{tr}_{\mathsf{i}}[A_{\mathsf{i}\circ}(\widetilde{P}_{ \mathsf{i}}[B_{\mathsf{i}}]^{\tau}\otimes\mathbb{1}_{\mathsf{o}})]=\sum_{ \alpha}\operatorname{tr}_{\mathsf{i}}[A_{\mathsf{i}\circ}(R_{\mathsf{i}}^{( \alpha)*}B_{\mathsf{i}}^{\tau}L_{\mathsf{i}}^{(\alpha)\tau}\otimes\mathbb{1}_{ \mathsf{o}})] \tag{80}\] \[=\sum_{\alpha}\operatorname{tr}_{\mathsf{i}}[(L_{\mathsf{i}}^{( \alpha)\dagger}A_{\mathsf{i}\circ}^{*}R_{\mathsf{i}}^{(\alpha)})^{*}(B_{ \mathsf{i}}^{\tau}\otimes\mathbb{1}_{\mathsf{o}})]=\widetilde{P}_{\mathsf{i} }^{\dagger}[A_{\mathsf{i}\circ}^{*}]^{*}\star B_{\mathsf{i}}\,, \tag{81}\]
We note that it is easy to see that, if \(\widetilde{P}_{\mathsf{i}}[\cdot]=\sum_{\alpha}L_{\mathsf{i}}^{(\alpha)}\cdot R_{\mathsf{i}}^{(\alpha)\dagger}\), then \(\widetilde{P}_{\mathsf{i}}^{\tau}[\cdot]:=\widetilde{P}_{\mathsf{i}}^{\dagger}[\cdot^{*}]^{*}=\sum_{\alpha}L_{\mathsf{i}}^{(\alpha)\tau}\cdot R_{\mathsf{i}}^{(\alpha)*}\). Importantly for our purposes, Eqs. (78) and (79) allow us to move linear operators around freely in the link product, which we will now exploit to deduce the properties of \(T_{\mathsf{i}\circ}\).
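Eq. (79) can also be checked numerically for a concrete self-adjoint, unital projector that commutes with the transposition, for instance the pinching (dephasing) map \(\widetilde{P}_{\mathsf{i}}[B]=\sum_{k}|k\rangle\!\langle k|B|k\rangle\!\langle k|\). The sketch below is our own illustration with arbitrary dimensions and random matrices; the choice of projector is an assumption made for the demo.

```python
# Illustrative check of Eq. (79) for the pinching projector on H_i (our choice of example).
import numpy as np

rng = np.random.default_rng(0)
di, do = 3, 2
A = rng.normal(size=(di * do, di * do)) + 1j * rng.normal(size=(di * do, di * do))
B = rng.normal(size=(di, di)) + 1j * rng.normal(size=(di, di))

def link_i(A, B):
    """A ⋆ B = tr_i[A (B^tau ⊗ 1_o)] for A on i⊗o and B on i."""
    M = A @ np.kron(B.T, np.eye(do))
    return np.einsum('abad->bd', M.reshape(di, do, di, do))

def pinch(B):
    """P~_i[B] = sum_k |k><k| B |k><k|  (keep only the diagonal)."""
    return np.diag(np.diag(B))

def pinch_on_i(A):
    """(P~_i ⊗ id_o)[A]: keep only the blocks that are diagonal in the i index."""
    t = A.reshape(di, do, di, do)
    return (t * np.eye(di)[:, None, :, None]).reshape(di * do, di * do)

# A ⋆ P~_i[B] = P~_i[A] ⋆ B
print(np.allclose(link_i(A, pinch(B)), link_i(pinch_on_i(A), B)))   # True
```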
### Proving statements using the link product
Now, using the link product, we can easily provide the proofs for the statements made in Secs. 3 and 6. As mentioned, for any linear map \(\widetilde{T}_{\mathsf{i}\circ}\) that maps \(\mathcal{S}_{\mathsf{i}}\) onto \(\mathcal{S}_{\mathsf{o}}\), we have
\[(\widetilde{\mathbf{1}}_{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{ \mathsf{i}\circ}\star W_{\mathsf{i}}]=T_{\mathsf{i}\circ}\star W_{\mathsf{i}} \quad\text{and}\quad\operatorname{tr}[T_{\mathsf{i}\circ}\star W_{\mathsf{i}}]= \frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}\operatorname{tr}[W_{\mathsf{ i}}]\quad\forall W_{\mathsf{i}}\in\mathcal{S}_{\mathsf{i}} \tag{82}\]
Let us start by providing the structural properties of \(T_{\mathsf{i}\circ}\) for the case of self-adjoint, unital projectors \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\) that commute with the transposition. To do so, we first make use of \((\widetilde{\mathbf{1}}_{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{ \mathsf{i}\circ}\star W_{\mathsf{i}}]=(\widetilde{\mathbf{1}}_{\mathsf{i}} \otimes\widetilde{P}_{\mathsf{o}})[T_{\mathsf{i}\circ}]\star W_{\mathsf{i}}=T_{ \mathsf{i}\circ}\star W_{\mathsf{i}}\) for all \(W_{\mathsf{i}}\in\mathcal{S}_{\mathsf{i}}\). Importantly, since \(\operatorname{span}(\mathcal{S}_{\mathsf{i}})\) does generally _not_ coincide with the full space \(\mathcal{L}(\mathcal{H}_{\mathsf{i}})\), this equation does _not_ allow us to deduce that \((\widetilde{\mathbf{1}}_{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{ \mathsf{i}\circ}]=T_{\mathsf{i}\circ}\)
However, it is easy to see (since \(\widetilde{P}_{\mathsf{i}}\) is a projection and \(\gamma_{\mathsf{i}}\neq 0\); for the case \(\gamma_{\mathsf{i}}=0\) see App. B) that \(\widetilde{P}_{\mathsf{i}}[M]\in\operatorname{span}(\mathcal{S}_{\mathsf{i}})\) for every \(M\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\). Consequently, we obtain
\[(\widetilde{\mathsf{1}}_{\mathsf{i}}\otimes\widetilde{P}_{ \mathsf{o}})[T_{\mathsf{i}\mathsf{o}}]\star\widetilde{P}_{\mathsf{i}}[M]=T_{ \mathsf{i}\mathsf{o}}\star\widetilde{P}_{\mathsf{i}}[M]\quad\forall M\in \mathcal{L}(\mathcal{H}_{\mathsf{i}}) \tag{83}\]
Now, we can use the second part of Lem. 2 to move the projector \(\widetilde{P}_{\mathsf{i}}\) inside the link product, such that
\[(\widetilde{P}_{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{ \mathsf{i}\mathsf{o}}]\star M=\widetilde{P}_{\mathsf{i}}[T_{\mathsf{i}\mathsf{o }}]\star M\quad\forall M\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\,, \tag{84}\]
which, since it holds for all \(M\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) implies \((\widetilde{P}_{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{\mathsf{i} \mathsf{o}}]=\widetilde{P}_{\mathsf{i}}[T_{\mathsf{i}\mathsf{o}}]\). This, in turn, can be phrased in terms of a projector on \(T_{\mathsf{i}\mathsf{o}}\) as
\[T_{\mathsf{i}\mathsf{o}}=T_{\mathsf{i}\mathsf{o}}-\widetilde{P }_{\mathsf{i}}[T_{\mathsf{i}\mathsf{o}}]+(\widetilde{P}_{\mathsf{i}}\otimes \widetilde{P}_{\mathsf{o}})[T_{\mathsf{i}\mathsf{o}}]\,, \tag{85}\]
where the signs in the above definition are chosen such that \(\widetilde{P}_{\mathsf{i}}[T_{\mathsf{i}\mathsf{o}}]=(\widetilde{P}_{ \mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{\mathsf{i}\mathsf{o}}]\) still holds (which can be seen by direct insertion into (85) and using that \(\widetilde{P}_{\mathsf{i}}\) is a projector).
In a similar vein, we can analyse the trace-rescaling property \(\operatorname{tr}[T_{\mathsf{i}\mathsf{o}}\star W_{\mathsf{i}}]=\gamma_{ \mathsf{o}}/\gamma_{\mathsf{i}}\operatorname{tr}[W]\) for all \(W\in\mathcal{S}_{\mathsf{i}}\). Following the same argument (and using the fact that \(\mathbbm{1}_{\mathsf{i}}\) is the Choi state of \(\operatorname{tr}_{\mathsf{i}}\)), we obtain
\[\widetilde{P}_{\mathsf{i}}[\operatorname{tr}_{\mathsf{o}}T_{ \mathsf{i}\mathsf{o}}]\star M=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}} \mathbbm{1}_{\mathsf{i}}\star M\,. \tag{86}\]
Again, this equality holds for all \(M\in\mathcal{L}(\mathcal{H}_{\mathsf{i}})\), and thus implies
\[\widetilde{P}_{\mathsf{i}}[\operatorname{tr}_{\mathsf{o}}T_{ \mathsf{i}\mathsf{o}}]=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}} \mathbbm{1}_{\mathsf{i}}\,. \tag{87}\]
Since \(\widetilde{P}_{\mathsf{i}}\) is unital and self-adjoint, it is trace-preserving (see Lem. 1), and we see that \(\operatorname{tr}[T_{\mathsf{i}\mathsf{o}}]=\gamma_{\mathsf{o}}/\gamma_{\mathsf{i}}\cdot d_{\mathsf{i}}\). With this, by taking the tensor product of Eq. (87) with \(\mathbbm{1}_{\mathsf{o}}/d_{\mathsf{o}}\), we obtain \(\widetilde{P}_{\mathsf{i}}[{}_{\mathsf{o}}T_{\mathsf{i}\mathsf{o}}]={}_{\mathsf{i}\mathsf{o}}T_{\mathsf{i}\mathsf{o}}\), such that Eqs. (87) and (86) can equivalently be written as
\[T_{\mathsf{i}\mathsf{o}}=T_{\mathsf{i}\mathsf{o}}-\widetilde{P}_{\mathsf{i}}[{}_{\mathsf{o}}T_{\mathsf{i}\mathsf{o}}]+{}_{\mathsf{i}\mathsf{o}}T_{\mathsf{i}\mathsf{o}}\quad\text{and}\quad\operatorname{tr}[T_{\mathsf{i}\mathsf{o}}]=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}d_{\mathsf{i}}\,. \tag{88}\]
Now, inserting this into Eq. (85), we obtain
\[T_{\mathsf{i}\mathsf{o}}=T_{\mathsf{i}\mathsf{o}}-\widetilde{P}_{\mathsf{i}}[T_{\mathsf{i}\mathsf{o}}]+(\widetilde{P}_{\mathsf{i}}\otimes\widetilde{P}_{\mathsf{o}})[T_{\mathsf{i}\mathsf{o}}]-\widetilde{P}_{\mathsf{i}}[{}_{\mathsf{o}}T_{\mathsf{i}\mathsf{o}}]+{}_{\mathsf{i}\mathsf{o}}T_{\mathsf{i}\mathsf{o}}=:\widetilde{P}_{\mathsf{i}\mathsf{o}}[T_{\mathsf{i}\mathsf{o}}]\quad\text{and}\quad\operatorname{tr}[T_{\mathsf{i}\mathsf{o}}]=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}d_{\mathsf{i}}\,, \tag{89}\]
which coincides exactly with Eqs. (23a) and (23b) of Thm. 2. For the converse direction, we first note that a self-adjoint, unital projector \(\widetilde{P}_{\mathsf{o}}\) is trace-preserving, such that \(\widetilde{P}_{\mathsf{o}}[_{\mathsf{o}}M]=\mathsf{{}_{\mathsf{o}}\circ} \widetilde{P}_{\mathsf{o}}[M]=\mathsf{{}_{\mathsf{o}}M}\) holds. With this, by direct insertion, it is easy to see that Eq. (89) implies \(\widetilde{P}_{\mathsf{i}}^{\mathsf{i}}[_{\mathsf{o}}T_{\mathsf{i}\mathsf{o}}]= \mathsf{i}_{\mathsf{o}}T_{\mathsf{i}\mathsf{o}}\) and thus Eqs. (88) and (85); together with Eq. (89), these latter two equations directly lead to Eq. (82), thus proving Thm. 2. We emphasize, that this converse direction crucially requires the properties of \(\widetilde{P}_{\mathsf{o}}\) (i.e., self-adjointness and unitality), while the forward direction also holds without these assumptions on \(\widetilde{P}_{\mathsf{o}}\).
Finally, if we dropped the trace-rescaling property on \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) (and thus \(T_{\mathsf{i}\mathsf{o}}\)), such that we only demand \(\widetilde{P}_{\mathsf{o}}[T_{\mathsf{i}\mathsf{o}}\star W_{\mathsf{i}}]=T_{\mathsf{i}\mathsf{o}}\star W_{\mathsf{i}}\) for all \(W_{\mathsf{i}}=\widetilde{P}_{\mathsf{i}}[W_{\mathsf{i}}]\) (but _not_ \(\operatorname{tr}[T_{\mathsf{i}\mathsf{o}}\star W_{\mathsf{i}}]=\frac{\gamma_{\mathsf{o}}}{\gamma_{\mathsf{i}}}\operatorname{tr}[W_{\mathsf{i}}]\)), following the above derivation, we arrive at
**Theorem 4** (Transformation between linear spaces: specialised Choi version).: _Let \(\widetilde{P}_{\mathsf{i}}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}( \mathcal{H}_{\mathsf{i}})\) and \(\widetilde{P}_{\mathsf{o}}:\mathcal{L}(\mathcal{H}_{\mathsf{o}})\to\mathcal{L}( \mathcal{H}_{\mathsf{o}})\) be linear projective, self-adjoint and unital maps that commute with the transposition (or conjugation) and \(\mathcal{S}_{\mathsf{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathsf{i}})\) and \(\mathcal{S}_{\mathsf{o}}\subseteq\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) be linear spaces defined by_
\[\begin{array}{|c|c|}\hline W\in\mathcal{L}(\mathcal{H}_{ \mathsf{i}})\text{ belongs to }\mathcal{S}_{\mathsf{i}}\text{ iff}\\ \widetilde{P}_{\mathsf{i}}[W]=W\end{array}&\quad\begin{array}{|c|c|}\hline W^{ \prime}\in\mathcal{L}(\mathcal{H}_{\mathsf{o}})\text{ belongs to }\mathcal{S}_{\mathsf{o}}\text{ iff}\\ \widetilde{P}_{\mathsf{o}}[W^{\prime}]=W^{\prime}\end{array}&\end{array} \tag{90}\]
_A linear map \(\widetilde{T}_{\mathfrak{io}}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\to\mathcal{L} (\mathcal{H}_{\mathfrak{o}})\) satisfies \(\widetilde{T}_{\mathfrak{io}}[W]\in\mathcal{S}_{\mathfrak{o}}\), for all \(W\in\mathcal{S}_{\mathfrak{i}}\) if and only if_
\[\boxed{T_{\mathfrak{io}}=T_{\mathfrak{io}}-(\widetilde{P}_{\mathfrak{i}} \otimes\widetilde{\mathsf{I}}_{\mathfrak{o}})[T_{\mathfrak{io}}]+( \widetilde{P}_{\mathfrak{i}}\otimes\widetilde{P}_{\mathfrak{o}})[T_{ \mathfrak{io}}]=:\widetilde{P}_{\mathfrak{io}}^{(\mathrm{ntr})}[T_{\mathfrak{ i}\mathfrak{o}}]\,,} \tag{91}\]
_holds for its Choi matrix \(T_{\mathfrak{io}}\), and \(\widetilde{P}_{\mathfrak{io}}^{(\mathrm{ntr})}:\mathcal{L}(\mathcal{H}_{ \mathfrak{i}}\otimes\mathcal{H}_{\mathfrak{o}})\to\mathcal{L}(\mathcal{H}_{ \mathfrak{i}}\otimes\mathcal{H}_{\mathfrak{o}})\) is a self-adjoint, unital projector that commutes with the transposition._
Proof.: The proof proceeds along the same lines as the previous one, minus the additional requirement of a trace-rescaling property, i.e., it stops at Eq. (85), which coincides with Eq. (91) of the Theorem. Conversely, it is easy to see that the above equality implies \(\widetilde{P}_{\mathfrak{o}}[T_{\mathfrak{io}}\star W_{\mathfrak{i}}]=T_{\mathfrak{io}}\star W_{\mathfrak{i}}\) for all \(W_{\mathfrak{i}}=\widetilde{P}_{\mathfrak{i}}[W_{\mathfrak{i}}]\), independent of the properties of \(\widetilde{P}_{\mathfrak{o}}\) (besides being a linear projector), proving Thm. 4.
Importantly, the above Theorem is not simply a special case of Thm. 2; in particular, it does _not_ coincide with it up to the affine constraint, but the respective constraints on \(T_{\mathfrak{io}}\) are structurally different. This is a generalization of the structural differences of, e.g., CP and CPTP maps; the former are not just equal to the latter minus a trace condition, but CPTP maps have an additional _structural_ property that is absent in CP maps (namely, that the trace over the output degrees of freedom yields the identity matrix on the input space).
In addition, we note that Thm. 4 covers the case \(\gamma_{\mathfrak{i}}=0\) of Thm. 2. As detailed in App. B, in this case, _both_\(\gamma_{\mathfrak{i}}\) and \(\gamma_{\mathfrak{o}}\) are equal to zero, such that both spaces \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) are entirely defined by linear projectors onto a vector space of traceless operators, which is a special instance of the scenario discussed in the above theorem.
With this, we have considered all pertinent scenarios including projectors that are self-adjoint, unital, and commute with the transposition. We conclude this paper with the general case, where we impose _no_ constraints on \(\widetilde{P}_{\mathfrak{i}}\) and \(\widetilde{P}_{\mathfrak{o}}\), besides them being linear projectors.
## 8 General approach
While physically the most relevant case, it is not necessary that the projectors \(\widetilde{P}_{\mathfrak{i}}\) and \(\widetilde{P}_{\mathfrak{o}}\) defining the sets \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\), respectively, are self-adjoint, unital, and commute with the transposition. As a guiding example for a case where these properties fail to hold, consider the case where \(\widetilde{P}_{\mathfrak{i}}\) is given by a projector on the off-diagonal element \(|m\rangle\langle n|\) (with \(m\neq n\)), i.e., it acts as \(\widetilde{P}_{\mathfrak{i}}[B]=\langle m|B_{\mathfrak{i}}|n\rangle|m\rangle \langle n|\). Naturally, the thusly defined \(\widetilde{P}_{\mathfrak{i}}\) is a projector (since it satisfies \(\widetilde{P}_{\mathfrak{i}}^{2}=\widetilde{P}_{\mathfrak{i}}\)), but it is neither self-adjoint, unital, nor does it commute with the transposition.
To derive the properties of maps \(\widetilde{T}_{\mathfrak{io}}\) between sets defined by such general projectors \(\widetilde{P}_{\mathfrak{x}}\), we can directly employ Lem. 2, which informs us how to move general linear operators around in the link product. With this, we deduce the concrete form of transformations \(T_{\mathfrak{io}}\) in the same vein as the derivation for Thm. 2 provided in Sec. 7.3.
**Theorem 5**.: _Let \(\widetilde{P}_{\mathfrak{i}}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\to \mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) and \(\widetilde{P}_{\mathfrak{o}}:\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\to \mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) be linear projections and \(\mathcal{S}_{\mathfrak{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) and \(\mathcal{S}_{\mathfrak{o}}\) be affine spaces of matrices defined by_
\[\begin{array}{|c|}\hline W\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\text{ belongs to }\mathcal{S}_{\mathfrak{i}}\text{ iff}\\ \widetilde{P}_{\mathfrak{i}}[W]=W\text{ and }\operatorname{tr}[W]=\gamma_{\mathfrak{i}}\\\hline\end{array}\qquad\begin{array}{|c|}\hline W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\text{ belongs to }\mathcal{S}_{\mathfrak{o}}\text{ iff}\\ \widetilde{P}_{\mathfrak{o}}[W^{\prime}]=W^{\prime}\text{ and }\operatorname{tr}[W^{\prime}]=\gamma_{\mathfrak{o}}\\\hline\end{array} \tag{92}\]
_For \(\gamma_{\mathfrak{i}}\neq 0\), a linear map \(\widetilde{T}_{\mathfrak{io}}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\to \mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) satisfies \(\widetilde{T}_{\mathfrak{io}}[W]\in\mathcal{S}_{\mathfrak{o}}\) for all \(W\in\mathcal{S}_{\mathfrak{i}}\) if and only if_
\[T_{\mathfrak{io}}=T_{\mathfrak{io}}-\widetilde{P}_{\mathfrak{i}}^{\tau}[T_{ \mathfrak{io}}]+(\widetilde{P}_{\mathfrak{i}}^{\tau}\otimes\widetilde{P}_{ \mathfrak{o}})[T_{\mathfrak{io}}]=:\widetilde{P}_{\mathfrak{io}}[T_{\mathfrak{io }}] \tag{93a}\] \[\widetilde{P}_{\mathfrak{i}}^{\tau}[(\mathfrak{tr}_{\mathfrak{o}} \,T_{\mathfrak{io}})]=\frac{\gamma_{\mathfrak{o}}}{\gamma_{\mathfrak{i}}} \widetilde{P}_{\mathfrak{i}}^{\tau}[\mathbb{1}_{\mathfrak{i}}]\,. \tag{93b}\]
Before providing a proof, we emphasize that the fact that we allow for non-unital, non-self-adjoint projectors implies that the respective sets \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) do _not_ have to contain an element that is proportional to the identity matrix. The membership of the identity matrix facilitates many considerations in the literature when dealing with transformations between quantum objects (see, for example, Ref. [14]). The relative ease with which the link product can be manipulated allows us to go beyond this case without much added difficulty.
Furthermore, we stress that the above Theorem exactly coincides with Thm. 2 for the case of self-adjoint, unital projectors \(\widetilde{P}_{\mathfrak{i}}\) that commute with the transposition. In this case, it is easy to see that Eq. (93a) amounts to \(T_{\mathfrak{i}\mathfrak{o}}=T_{\mathfrak{i}\mathfrak{o}}-\widetilde{P}_{ \mathfrak{i}}[T_{\mathfrak{i}\mathfrak{o}}]+(\widetilde{P}_{\mathfrak{i}} \otimes\widetilde{P}_{\mathfrak{o}})[T_{\mathfrak{i}\mathfrak{o}}]\), while Eq. (93b) implies \(\widetilde{P}_{\mathfrak{i}}[\operatorname{tr}_{\mathfrak{o}}T_{\mathfrak{i} \mathfrak{o}}]=\gamma_{\mathfrak{o}}/\gamma_{\mathfrak{i}}\mathbbm{1}_{ \mathfrak{i}}\), which are exactly the properties we used in the proof of Thm. 2.
Proof.: The proof of Thm. 5 proceeds along the same lines as that of Thm. 2 with the difference that now the assumptions on the involved projectors are weaker. First, we note that, since \(\gamma_{\mathfrak{i}}\neq 0\), we have \(\operatorname{span}(\mathcal{S}_{\mathfrak{i}})=\widetilde{P}_{\mathfrak{i}} [\mathcal{L}(\mathcal{H}_{\mathfrak{i}})]\). Then, from \(\widetilde{P}_{\mathfrak{o}}[T_{\mathfrak{i}\mathfrak{o}}\star W_{\mathfrak{ i}}]=T_{\mathfrak{i}\mathfrak{o}}\star W_{\mathfrak{i}}\) for all \(W_{i}\in\mathcal{S}_{\mathfrak{i}}\) we obtain
\[\widetilde{P}_{\mathfrak{o}}[T_{\mathfrak{i}\mathfrak{o}}\star\widetilde{P}_ {\mathfrak{i}}[M]]=(\widetilde{P}_{\mathfrak{i}}^{\tau}\otimes\widetilde{P}_{ \mathfrak{o}})[T_{\mathfrak{i}\mathfrak{o}}]\star M=T_{\mathfrak{i}\mathfrak{o }}\star\widetilde{P}_{\mathfrak{i}}[M]=\widetilde{P}_{\mathfrak{i}}^{\tau}[T_{ \mathfrak{i}\mathfrak{o}}]\star M \tag{94}\]
for all \(M\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\), where \(\widetilde{P}_{\mathfrak{i}}^{\tau}\) has been defined in Eq. (78). From this, we directly obtain Eq. (93a). From the fact that \(\operatorname{tr}[T_{\mathfrak{i}\mathfrak{o}}\star W_{\mathfrak{i}}]=\gamma_ {\mathfrak{o}}\) for all \(W_{\mathfrak{i}}\in\mathcal{S}_{\mathfrak{i}}\), it then follows that \(\operatorname{tr}[T_{\mathfrak{i}\mathfrak{o}}\star\widetilde{P}_{\mathfrak{i }}[M]]=\gamma_{\mathfrak{o}}/\gamma_{\mathfrak{i}}\operatorname{tr}[ \widetilde{P}_{\mathfrak{i}}[M]]\) for all \(M\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\). Using the fact that \(\mathbbm{1}_{\mathfrak{z}}\) is the Choi matrix of \(\operatorname{tr}_{\mathfrak{z}}\), this can be written as \(\operatorname{tr}[T_{\mathfrak{i}\mathfrak{o}}\star\widetilde{P}_{\mathfrak{ i}}[M]]=\mathbbm{1}_{\mathfrak{i}}\star\widetilde{P}_{\mathfrak{i}}[M]\). Employing Lem. 2 and using the fact that this equality holds for all \(M\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) then directly yields Eq. (93b). The fact that the resulting linear operator \(\widetilde{P}_{\mathfrak{i}\mathfrak{o}}=\mathbbm{1}_{\mathfrak{i}\mathfrak{o} }-\widetilde{P}_{\mathfrak{i}}^{\tau}+\widetilde{P}_{\mathfrak{i}}^{\tau} \otimes\widetilde{P}_{\mathfrak{o}}\) is indeed a projector can be seen by direct insertion and using the fact that, \(\widetilde{P}_{\mathfrak{i}}=\widetilde{P}_{\mathfrak{i}}^{2}\) implies \(\widetilde{P}_{\mathfrak{i}}^{\tau}=(\widetilde{P}_{\mathfrak{i}}^{\tau})^{2}\).
In the converse direction, using \(\widetilde{P}_{\mathfrak{i}}^{\tau}=(\widetilde{P}_{\mathfrak{i}}^{\tau})^{2}\), by direct insertion, it is easy to see that Eq. (93a) implies \(\widetilde{P}_{\mathfrak{o}}[T_{\mathfrak{i}\mathfrak{o}}\star W_{\mathfrak{ i}}]=T_{\mathfrak{i}\mathfrak{o}}\star W_{\mathfrak{i}}\) for all \(W_{\mathfrak{i}}\in\mathcal{S}_{\mathfrak{i}}\).
As for the previous Theorems, the case \(\gamma_{\mathfrak{i}}=0\) needs to be discussed in slightly more detail and is provided in App. B.
Similarly to Thm. 4, one can also drop the trace-rescaling property (i.e., the trace constraints on the elements of \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\)) for general projectors. In this case, one would simply have to drop Eq. (93b) in the above theorem to obtain the properties of \(T_{\mathfrak{i}\mathfrak{o}}\):
**Theorem 6** (Transformation between linear spaces: Choi version).: _Let \(\widetilde{P}_{\mathfrak{i}}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\to \mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) and \(\widetilde{P}_{\mathfrak{o}}:\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\to \mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) be linear projections and \(\mathcal{S}_{\mathfrak{i}}\subseteq\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\) and \(\mathcal{S}_{\mathfrak{o}}\) be linear spaces of matrices defined by_
\[\begin{array}{|c|}\hline W\in\mathcal{L}(\mathcal{H}_{\mathfrak{i}})\text{ belongs to }\mathcal{S}_{\mathfrak{i}}\text{ iff}\\ \widetilde{P}_{\mathfrak{i}}[W]=W\\\hline\end{array}\qquad\begin{array}{|c|}\hline W^{\prime}\in\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\text{ belongs to }\mathcal{S}_{\mathfrak{o}}\text{ iff}\\ \widetilde{P}_{\mathfrak{o}}[W^{\prime}]=W^{\prime}\\\hline\end{array} \tag{95}\]
_A linear map \(\widetilde{T}_{\mathfrak{i}\mathfrak{o}}:\mathcal{L}(\mathcal{H}_{\mathfrak{i}}) \to\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\) satisfies \(\widetilde{T}_{\mathfrak{i}\mathfrak{o}}[W]\in\mathcal{S}_{\mathfrak{o}}\) for all \(W\in\mathcal{S}_{\mathfrak{i}}\) if and only if_
\[\boxed{T_{\mathfrak{i}\mathfrak{o}}=T_{\mathfrak{i}\mathfrak{o}}-(\widetilde{P}_ {\mathfrak{i}}^{\tau}\otimes\widetilde{\mathbbm{1}}_{\mathfrak{o}})[T_{ \mathfrak{i}\mathfrak{o}}]+(\widetilde{P}_{\mathfrak{i}}^{\tau}\otimes \widetilde{P}_{\mathfrak{o}})[T_{\mathfrak{i}\mathfrak{o}}]=:\widetilde{P}_{ \mathfrak{i}\mathfrak{o}}[T_{\mathfrak{i}\mathfrak{o}}].} \tag{96}\]
As was the case for Thm. 4, we note again that this theorem also covers the case \(\gamma_{\mathfrak{i}}=0\) when additional trace constraints are imposed on \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) (see App. B).
**Example 8** (Projectors on off-diagonal terms).: To provide a concrete example for the above Theorem, let us return to the simple case mentioned at the beginning of this Section, where \(\widetilde{P}_{\mathfrak{i}}\) is given by a projection on the off-diagonal term \(|m\rangle\!\langle n|\) (where \(m\neq n\)), i.e., it acts as \(\widetilde{P}_{\mathfrak{i}}[M]=\langle m|M|n\rangle|m\rangle\!\langle n|\), and let \(\widetilde{P}_{\mathfrak{o}}\) be a projector on the off-diagonal term \(|\alpha\rangle\!\langle\beta|\in\mathcal{L}(\mathcal{H}_{\mathfrak{o}})\). With this, the set \(\mathcal{S}_{\mathfrak{i}}\) consists of all matrices \(W_{\mathfrak{i}}\) that are proportional to \(|m\rangle\!\langle n|\) and the output space \(\mathcal{S}_{\mathfrak{o}}\) consists of all matrices \(W_{\mathfrak{o}}\) that are proportional to \(|\alpha\rangle\!\langle\beta|\) (by construction, all elements of \(\mathcal{S}_{\mathfrak{i}}\) and \(\mathcal{S}_{\mathfrak{o}}\) are traceless automatically due to the properties of \(\widetilde{P}_{\mathfrak{i}}\) and \(\widetilde{P}_{\mathfrak{o}}\)). It is easy to see(assuming that \(\{|m\rangle\}_{m}\) and \(\{|\alpha\rangle\}_{\alpha}\) constitute the canonical computational basis of \(\mathcal{H}_{\mathfrak{i}}\) and \(\mathcal{H}_{\mathfrak{o}}\), respectively) that
the action of \(\widetilde{P}_{\mathfrak{i}}^{\tau}\) is given by \(\widetilde{P}_{\mathfrak{i}}^{\tau}[M]=|m\rangle\langle m|M|n\rangle\langle n|\), while \(\widetilde{P}_{\mathfrak{o}}[M^{\prime}]=|\alpha\rangle\langle\alpha|M^{\prime}|\beta\rangle\langle\beta|\). Then, the properties of the Choi matrix \(T_{\mathsf{i}\mathsf{o}}\) of a transformation \(\widetilde{T}_{\mathsf{i}\mathsf{o}}:\mathcal{S}_{\mathsf{i}}\to\mathcal{S}_{\mathsf{o}}\) follow directly from Eq. (96) of Thm. 6 as
\[T_{\mathsf{i}\mathsf{o}}=T_{\mathsf{i}\mathsf{o}}-|m\rangle\langle m|T_{ \mathsf{i}\mathsf{o}}|n\rangle\langle n|+|m\alpha\rangle\langle m\alpha|T_{ \mathsf{i}\mathsf{o}}|n\beta\rangle\langle n\beta|\,. \tag{97}\]
With this, for any \(\lambda|m\rangle\langle n|\in\mathcal{S}_{\mathsf{i}}\) (where \(\lambda\in\mathbb{C}\)), we have
\[T_{\mathsf{i}\mathsf{o}}\star\lambda|m\rangle\langle n| =\lambda\,\mathrm{tr}_{\mathsf{i}}[T_{\mathsf{i}\mathsf{o}}|n \rangle\langle m|]\] \[=\lambda\,\mathrm{tr}_{\mathsf{i}}[(T_{\mathsf{i}\mathsf{o}}-|m \rangle\langle m|T_{\mathsf{i}\mathsf{o}}|n\rangle\langle n|+|m\alpha\rangle \langle m\alpha|T_{\mathsf{i}\mathsf{o}}|n\beta\rangle\langle n\beta|)|n \rangle\langle m|] \tag{98}\] \[=\lambda\,\langle m\alpha|T_{\mathsf{i}\mathsf{o}}|n\beta\rangle\,|\alpha \rangle\langle\beta|\in\mathcal{S}_{\mathsf{o}}\,,\]
i.e., \(\widetilde{T}_{\mathsf{i}\mathsf{o}}\) maps any element of \(\mathcal{S}_{\mathsf{i}}\) onto an element of \(\mathcal{S}_{\mathsf{o}}\).
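This example can be verified numerically. The sketch below (our own illustration, with hypothetical dimensions and basis labels \(m,n,\alpha,\beta\)) projects a random matrix with \(\widetilde{P}_{\mathsf{i}\mathsf{o}}\) of Eq. (96) and then checks both that the result is a fixed point of the projector and that Eq. (98) holds.

```python
# Illustrative check of Eqs. (96)-(98) for the off-diagonal projectors (demo dimensions).
import numpy as np

rng = np.random.default_rng(0)
di, do = 3, 2
m, n = 0, 2              # input projector onto |m><n|
alpha, beta = 1, 0       # output projector onto |alpha><beta|

def ket(j, d):
    v = np.zeros((d, 1))
    v[j] = 1.0
    return v

Pm, Pn = ket(m, di) @ ket(m, di).T, ket(n, di) @ ket(n, di).T
Pa, Pb = ket(alpha, do) @ ket(alpha, do).T, ket(beta, do) @ ket(beta, do).T

T0 = rng.normal(size=(di * do, di * do)) + 1j * rng.normal(size=(di * do, di * do))
proj = lambda X: X - np.kron(Pm, np.eye(do)) @ X @ np.kron(Pn, np.eye(do)) \
                   + np.kron(Pm, Pa) @ X @ np.kron(Pn, Pb)          # Eq. (96)
T = proj(T0)
print(np.allclose(proj(T), T))                                      # T is a fixed point

lam = 0.3 - 0.7j
W_in = lam * (ket(m, di) @ ket(n, di).T)                            # element of S_i
act = np.einsum('abad->bd', (T @ np.kron(W_in.T, np.eye(do))).reshape(di, do, di, do))
expected = lam * T[m * do + alpha, n * do + beta] * (ket(alpha, do) @ ket(beta, do).T)
print(np.allclose(act, expected))                                   # Eq. (98)
```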
To finish this section, we provide a characterization of the dual set \(\overline{\mathcal{S}}\) for the case of general projectors, i.e., the generalized version of Thm. 3:
**Theorem 7** (Dual affine for arbitrary projectors).: _Let \(\mathcal{S}\) be a set defined by \(W\in\mathcal{S}\) via_
\[W= \widetilde{P}[W] \tag{99}\] \[\mathrm{tr}[W]= \gamma\,, \tag{100}\]
_where \(\widetilde{P}\) is a projector and \(\gamma\neq 0\). An operator \(\overline{W}\in\mathcal{L}(\mathcal{H})\) belongs to the dual affine set \(\overline{\mathcal{S}}\) if and only if it satisfies_
\[\boxed{\widetilde{P}^{\tau}[\overline{W}^{\tau}]=\frac{1}{\gamma}\widetilde{P}^{\tau}[\mathds{1}]\,,} \tag{101}\]
_where \(\widetilde{P}^{\tau}[\overline{W}^{\tau}]:=\widetilde{P}^{\dagger}[\overline{W}^{\dagger}]^{*}\)._
Proof.: The proof follows directly from Thm. 5 in Sec. 8, where we derive the property of trace-rescaling mappings \(\widetilde{T}:\mathcal{L}(\mathcal{H}_{\mathsf{i}})\to\mathcal{L}(\mathcal{H}_{\mathsf{o}})\) (with re-scaling factor \(\gamma_{\mathsf{o}}/\gamma_{\mathsf{i}}\)) between spaces defined by general linear projectors \(\widetilde{P}_{\mathsf{i}}\) and \(\widetilde{P}_{\mathsf{o}}\). For this case, we have (see Eqs. (93a) and (93b))
\[T_{\mathsf{i}\mathsf{o}}=T_{\mathsf{i}\mathsf{o}}-\widetilde{P}_ {\mathsf{i}}^{\tau}[T_{\mathsf{i}\mathsf{o}}]+(\widetilde{P}_{\mathsf{i}}^{ \tau}\otimes\widetilde{P}_{\mathsf{o}})[T_{\mathsf{i}\mathsf{o}}] \tag{102}\] \[\text{and}\quad\widetilde{P}_{\mathsf{i}}^{\tau}[(\mathrm{tr}_{ \mathsf{o}}\,T_{\mathsf{i}\mathsf{o}})]=\frac{\gamma_{\mathsf{o}}}{\gamma_{ \mathsf{i}}}\widetilde{P}_{\mathsf{i}}^{\tau}[1_{\mathsf{i}}]\,. \tag{103}\]
For the case considered in the above Theorem, we have \(\gamma_{\mathsf{o}}/\gamma_{\mathsf{i}}=1/\gamma\), \(\widetilde{P}_{\mathsf{i}}=\widetilde{P}\), and \(\mathcal{H}_{\mathsf{o}}=\mathbb{C}\), such that Eq. (102) becomes the trivial statement \(T_{\mathsf{i}\mathsf{o}}=T_{\mathsf{i}\mathsf{o}}\). As mentioned above, for the convention of the CJI that we choose, we have \(\overline{W}=T^{\tau}\). With this, Eq. (103) coincides exactly with Eq. (101) of the Theorem.
Naturally, Thm. 7 contains Thm. 3, in which the properties of dual matrices for self-adjoint, unital projectors that commute with the transposition were presented, as a special case. To see this, recall that if \(\widetilde{P}\) is self-adjoint, unital, and commutes with the transposition, it is also trace-preserving, such that Eq. (101) of Thm. 7 implies \(\operatorname{tr}[\overline{W}]=\frac{d}{\gamma}\) [i.e., Eq. (52b)]. Together with Eq. (101), this yields Eq. (52a) and we thus recover Thm. 3.
## 9 Applications for numerical computation and code availability
As discussed previously, the projective characterisation of quantum objects analysed in this manuscript is also useful for tackling several problems by means of semidefinite programming. This approach was first presented at Ref. [15], where the authors derive an SDP for witnessing and quantifying indefinite causality in quantum theory. Since then, such methods have been employed in various other works and contexts, which range from detecting indefinite causality [29, 39], analysing quantum causal relations [40] and transforming quantum operations[41, 42, 43, 44], to the quantification of causal connection [16] and channel discrimination[7, 45].
We have implemented all projective maps discussed in this manuscript and various other useful functions in Matlab, and all our code is publicly available in an online repository [46]. Our code may be used directly for SDP problems involving higher-order quantum maps and other related SDP problems involving transformations between linear and affine sets.
## 10 Discussions
In this work, we have provided a systematic way to derive the properties of transformations between quantum sets. While a priori an abstract endeavour, such characterizations play an important role for many questions in quantum information theory - in particular the study of causal order - and our results offer a handy tool to deal with such problems in a simple and streamlined manner. We have demonstrated the versatility of our approach by explicitly showing its usefulness for a wide array of concrete examples of higher order quantum maps, as well as the derivations of the properties of affine sets and probabilistic quantum operations.
Importantly, our results solely rely on the properties of the link product, and do not require the respective sets we transform between (and, in particular, the projectors that define them) to have _any_ particular properties. Owing to this simplicity, we not only recovered structural properties of objects frequently encountered in quantum mechanics, but our results can readily be applied to any situation where the properties of a linear transformation are to be deduced from those of its input and output space.
Inferring such properties is a generic task when dealing with higher order maps and/or trying to optimize an objective function over them. As such, the Theorems we derived in this work are of direct use to a whole host of problems in this field and substantially simplify the associated considerations. Additionally, the manipulation of the link product we introduce in order to derive the dual action of a map is a fruitful technique in its own right and can readily be employed to obtain more intuitive insights into a problem via its dual version, whenever its primal is somewhat opaque. Together, our results thus provide a powerful toolbox that is of direct applicability in a wide array of fields.
## Acknowledgments
We would like to thank Esteban Castro-Ruiz for insightful discussions, and Timothee Hoffreumon and Ognyan Oreshkov for helpful clarifications on their work [14]. S.M. acknowledges funding from the Austrian Science Fund (FWF): ZK3 (Zukunftkolleg) and Y879-N27 (START project), and from the European Union's Horizon Europe research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 101068332. This project/research was supported by grant number FQXi-RFP-IPW-1910 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation.
|
2303.03906 | Compositional Confluence Criteria | We show how confluence criteria based on decreasing diagrams are generalized
to ones composable with other criteria. For demonstration of the method, the
confluence criteria of orthogonality, rule labeling, and critical pair systems
for term rewriting are recast into composable forms. We also show how such a
criterion can be used for a reduction method that removes rewrite rules
unnecessary for confluence analysis. In addition to them, we prove that
Toyama's parallel closedness result based on parallel critical pairs subsumes
his almost parallel closedness theorem. | Kiraku Shintani, Nao Hirokawa | 2023-03-07T14:05:40Z | http://arxiv.org/abs/2303.03906v5 | # Compositional confidence criteria
###### Abstract.
We show how confluence criteria based on decreasing diagrams are generalized to ones composable with other criteria. For demonstration of the method, the confluence criteria of orthogonality, rule labeling, and critical pair systems for term rewriting are recast into composable forms. We also show how such a criterion can be used for a reduction method that removes rewrite rules unnecessary for confluence analysis. In addition to them, we prove that Toyama's parallel closedness result based on parallel critical pairs subsumes his almost parallel closedness theorem.
Key words and phrases: term rewriting, confluence, decreasing diagrams. The research described in this paper is supported by JSPS KAKENHI Grant Number JP22K11900.
## 1. Introduction
## 2. Preliminaries
Positions \(p\) and \(q\) are _parallel_ if \(p\nleq q\) and \(q\nleq p\). A set of positions is called _parallel_ if all its elements are pairwise parallel.
Terms are built from a signature \(\mathcal{F}\) and a countable set \(\mathcal{V}\) of variables satisfying \(\mathcal{F}\cap\mathcal{V}=\varnothing\). The set of all terms (over \(\mathcal{F}\)) is denoted by \(\mathcal{T}(\mathcal{F},\mathcal{V})\). Let \(t\) be a term. The set of all variables in \(t\) is denoted by \(\mathcal{V}\mathsf{ar}(t)\), and the set of all function symbols in a term \(t\) by \(\mathcal{F}\mathsf{un}(t)\). The set of all function positions and the set of variable positions in \(t\) are denoted by \(\mathcal{P}\mathsf{os}_{\mathcal{F}}(t)\) and \(\mathcal{P}\mathsf{os}_{\mathcal{V}}(t)\), respectively. The _subterm_ of \(t\) at position \(p\) is denoted by \(t|_{p}\). It is a _proper_ subterm if \(p\neq\epsilon\). By \(t[u]_{p}\) we denote the term that results from replacing the subterm of \(t\) at \(p\) by a term \(u\). The size \(|t|\) of \(t\) is the number of occurrences of functions symbols and variables in \(t\). A term \(t\) is said to be _linear_ if every variable in \(t\) occurs exactly once.
A _substitution_ is a mapping \(\sigma:\mathcal{V}\to\mathcal{T}(\mathcal{F},\mathcal{V})\) whose _domain_\(\mathcal{D}\mathsf{om}(\sigma)\) is finite. Here \(\mathcal{D}\mathsf{om}(\sigma)\) stands for the set \(\{x\in\mathcal{V}\mid\sigma(x)\neq x\}\). The term \(t\sigma\) is defined as \(\sigma(t)\) for \(t\in\mathcal{V}\), and \(f(t_{1}\sigma,\ldots,t_{n}\sigma)\) for \(t=f(t_{1},\ldots,t_{n})\). A term \(u\) is called an _instance_ of \(t\) if \(u=t\sigma\) for some \(\sigma\). A substitution is called a _renaming_ if it is a bijection on variables.
A _term rewrite system_ (TRS) over \(\mathcal{F}\) is a set of rewrite rules. Here a pair \((\ell,r)\) of terms over \(\mathcal{F}\) is a _rewrite rule_ or simply a _rule_ if \(\ell\notin\mathcal{V}\) and \(\mathcal{V}\mathsf{ar}(r)\subseteq\mathcal{V}\mathsf{ar}(\ell)\). We denote it by \(\ell\to r\). The rewrite relation \(\to_{\mathcal{R}}\) of a TRS \(\mathcal{R}\) is defined on terms as follows: \(s\to_{\mathcal{R}}t\) if \(s|_{p}=\ell\sigma\) and \(t=s[r\sigma]_{p}\) for some rule \(\ell\to r\in\mathcal{R}\), position \(p\), and substitution \(\sigma\). We write \(s\stackrel{{ p}}{{\to}}_{\mathcal{R}}t\) if the rewrite position \(p\) is relevant. We call subsets of \(\mathcal{R}\)_subsystems_. We write \(\mathcal{F}\mathsf{un}(\ell\to r)\) for \(\mathcal{F}\mathsf{un}(\ell)\cup\mathcal{F}\mathsf{un}(r)\) and \(\mathcal{F}\mathsf{un}(\mathcal{C})\) for the union of \(\mathcal{F}\mathsf{un}(\ell\to r)\) for all rules \(\ell\to r\in\mathcal{C}\). The set \(\{f\mid f(\ell_{1},\ldots,\ell_{n})\to r\in\mathcal{R}\}\) is the set of _defined symbols_ and denoted by \(\mathcal{D}_{\mathcal{R}}\). A TRS \(\mathcal{R}\) is _left-linear_ if \(\ell\) is linear for all \(\ell\to r\in\mathcal{R}\). Since any TRS \(\mathcal{R}\) can be regarded as the ARS \((\mathcal{T}(\mathcal{F},\mathcal{V}),\{\to_{\mathcal{R}}\})\), we use notions and notations of ARSs for TRSs. For instance, a TRS \(\mathcal{R}\) is (locally) confluent if the ARS \((\mathcal{T}(\mathcal{F},\mathcal{V}),\{\to_{\mathcal{R}}\})\) is so. Similarly, two TRSs commute if their corresponding ARSs commute.
Local confluence of TRSs is characterized by the notion of critical pairs. We say that a rule \(\ell_{1}\to r_{1}\) is a _variant_ of a rule \(\ell_{2}\to r_{2}\) if \(\ell_{1}\rho=\ell_{2}\) and \(r_{1}\rho=r_{2}\) for some renaming \(\rho\).
**Definition 2.1**.: Let \(\mathcal{R}\) and \(\mathcal{S}\) be TRSs. Suppose that the following conditions hold:
* \(\ell_{1}\to r_{1}\) and \(\ell_{2}\to r_{2}\) are variants of rules in \(\mathcal{R}\) and in \(\mathcal{S}\), respectively,
* \(\ell_{1}\to r_{1}\) and \(\ell_{2}\to r_{2}\) have no common variables,
* \(p\in\mathcal{P}\mathsf{os}_{\mathcal{F}}(\ell_{2})\),
* \(\sigma\) is a most general unifier of \(\ell_{1}\) and \(\ell_{2}|_{p}\), and
* if \(p=\epsilon\) then \(\ell_{1}\to r_{1}\) is not a variant of \(\ell_{2}\to r_{2}\).
The local peak \((\ell_{2}\sigma)[r_{1}\sigma]_{p}\mathrel{{}_{\mathcal{R}}{\stackrel{{p}}{{\leftarrow}}}}\ell_{2}\sigma\stackrel{{\epsilon}}{{\to}}_{\mathcal{S}}r_{2}\sigma\) is called a _critical peak_ between \(\mathcal{R}\) and \(\mathcal{S}\). When \(t\mathrel{{}_{\mathcal{R}}{\stackrel{{p}}{{\leftarrow}}}}s\stackrel{{\epsilon}}{{\to}}_{\mathcal{S}}u\) is a critical peak, the pair \((t,u)\) is called a _critical pair_. To clarify the orientation of the pair, we view it as a binary relation relating \(t\) to \(u\), see [1].
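A small Python sketch of this definition may be helpful; it is our own illustration and is not connected to any existing confluence tool. Terms are encoded as nested tuples, variables as strings, and, as a simplification, root overlaps are discarded only when the two rules are renamed copies of the same rule, which coincides with the variant condition as long as the TRS contains no duplicated rules.

```python
# Illustrative sketch of Definition 2.1 (term encoding and helper names are ours).
# Variables are strings; non-variable terms are (symbol, (arg, ...)) tuples.
# Assumption: variable names in the input rules do not already end with a prime.

def is_var(t):
    return isinstance(t, str)

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    return t == v if is_var(t) else any(occurs(v, a, s) for a in t[1])

def unify(a, b, s=None):
    s, stack = dict(s or {}), [(a, b)]
    while stack:
        x, y = stack.pop()
        x, y = walk(x, s), walk(y, s)
        if x == y:
            continue
        if is_var(x):
            if occurs(x, y, s):
                return None
            s[x] = y
        elif is_var(y):
            if occurs(y, x, s):
                return None
            s[y] = x
        elif x[0] == y[0] and len(x[1]) == len(y[1]):
            stack.extend(zip(x[1], y[1]))
        else:
            return None
    return s

def subst(t, s):
    t = walk(t, s)
    return t if is_var(t) else (t[0], tuple(subst(a, s) for a in t[1]))

def rename(t, tag):
    return t + tag if is_var(t) else (t[0], tuple(rename(a, tag) for a in t[1]))

def fun_positions(t):
    if is_var(t):
        return []
    return [()] + [(i,) + p for i, a in enumerate(t[1]) for p in fun_positions(a)]

def subterm(t, p):
    for i in p:
        t = t[1][i]
    return t

def replace(t, p, u):
    if not p:
        return u
    return (t[0], tuple(replace(a, p[1:], u) if i == p[0] else a
                        for i, a in enumerate(t[1])))

def critical_pairs(R, S):
    cps = []
    for idx1, (l1, r1) in enumerate(R):
        l1, r1 = rename(l1, "'"), rename(r1, "'")
        for idx2, (l2, r2) in enumerate(S):
            for p in fun_positions(l2):
                if p == () and R is S and idx1 == idx2:
                    continue                 # overlap of a rule with its own renamed copy
                sigma = unify(l1, subterm(l2, p))
                if sigma is not None:
                    cps.append((subst(replace(l2, p, r1), sigma), subst(r2, sigma)))
    return cps

# The well-known self-overlap of the associativity rule f(f(x,y),z) -> f(x,f(y,z)).
f = lambda *a: ('f', a)
R = [(f(f('x', 'y'), 'z'), f('x', f('y', 'z')))]
print(critical_pairs(R, R))
```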
**Definition 2.4**.: Let \(\mathcal{R}\) be a TRS and let \(P\) be a set of parallel positions. The _parallel step_\(\stackrel{{ P}}{{\twoheadrightarrow}}_{\mathcal{R}}\) is inductively defined on terms as follows:
* \(x\stackrel{{ P}}{{\twoheadrightarrow}}_{\mathcal{R}}x\) if \(x\) is a variable and \(P=\varnothing\).
* \(\ell\sigma\stackrel{{ P}}{{\twoheadrightarrow}}_{\mathcal{R}}r\sigma\) if \(\ell\to r\) is an \(\mathcal{R}\)-rule, \(\sigma\) is a substitution, and \(P=\{\epsilon\}\).
* \(f(s_{1},\ldots,s_{n})\stackrel{{ P}}{{\twoheadrightarrow}}_{ \mathcal{R}}f(t_{1},\ldots,t_{n})\) if \(f\) is an \(n\)-ary function symbol in \(\mathcal{F}\), \(s_{i}\stackrel{{ P_{i}}}{{\twoheadrightarrow}}_{\mathcal{R}}t_{i}\) holds for all \(1\leqslant i\leqslant n\), and \(P=\{i\cdot p\mid 1\leqslant i\leqslant n\text{ and }p\in P_{i}\}\).
We write \(s\mathrel{\twoheadrightarrow}_{\mathcal{R}}t\) if \(s\stackrel{P}{\twoheadrightarrow}_{\mathcal{R}}t\) for some set \(P\) of positions.
Note that \(\mathrel{{\twoheadrightarrow}}_{\mathcal{R}}\) is reflexive and the inclusions \(\rightarrow_{\mathcal{R}}\subseteq\mathrel{{\twoheadrightarrow}}_{ \mathcal{R}}\subseteq\rightarrow_{\mathcal{R}}^{*}\) hold. As the latter entails \(\rightarrow_{\mathcal{R}}^{*}=\mathrel{{\twoheadrightarrow}}_{\mathcal{R}}^{*}\), we obtain the following useful characterizations.
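A parallel step can be illustrated with the same ad hoc encoding; in the sketch below (helper names again ours) the redexes to contract are supplied explicitly, as a map from pairwise parallel positions to the employed right-hand side together with its matching substitution.

```python
# A small sketch: a parallel step contracts redexes at pairwise parallel
# positions in one go.  Terms are variables (strings) or tuples, as before.

def is_var(t):
    return isinstance(t, str)

def apply_subst(t, s):
    return s.get(t, t) if is_var(t) else (t[0],) + tuple(apply_subst(a, s) for a in t[1:])

def replace(t, p, u):
    if not p:
        return u
    i = p[0]
    return t[:i] + (replace(t[i], p[1:], u),) + t[i + 1:]

def parallel(p, q):
    """Two positions are parallel if neither is a prefix of the other."""
    return p[:len(q)] != q and q[:len(p)] != p

def parallel_step(t, contractions):
    """contractions maps parallel positions to (rhs, matching substitution)."""
    ps = list(contractions)
    assert all(parallel(p, q) for p in ps for q in ps if p != q)
    for p, (rhs, sigma) in contractions.items():
        t = replace(t, p, apply_subst(rhs, sigma))
    return t

# s = (0 + x) + (0 + y)  rewrites in parallel to  x + y  using 0 + z -> z
# at the positions 1 and 2.
s = ("+", ("+", ("0",), "x"), ("+", ("0",), "y"))
print(parallel_step(s, {(1,): ("z", {"z": "x"}), (2,): ("z", {"z": "y"})}))
```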
**Lemma 2.5**.: _A TRS \(\mathcal{R}\) is confluent if and only if \(\mathrel{{\twoheadrightarrow}}_{\mathcal{R}}\) is confluent. Similarly, TRSs \(\mathcal{R}\) and \(\mathcal{S}\) commute if and only if \(\mathrel{{\twoheadrightarrow}}_{\mathcal{R}}\) and \(\mathrel{{\twoheadrightarrow}}_{\mathcal{S}}\) commute._
## 3. Parallel Closedness
Toyama made two variations of Huet's parallel closedness theorem [10], one in 1981 [11] and one in 1988 [11], but their relationship has not been known. In this section we recall these and related results, and then show that Toyama's earlier result subsumes the later one. For brevity we omit the subscript \(\mathcal{R}\) from \(\rightarrow_{\mathcal{R}}\), \(\mathrel{\twoheadrightarrow}_{\mathcal{R}}\), and \(\mathrel{{}_{\mathcal{R}}{\leftarrow}\rtimes\rightarrow_{\mathcal{R}}}\) when no confusion arises.
Thus, the TRS is almost parallel closed, and hence it is confluent.
Inspired by almost parallel closedness, Gramlich [1] developed a confluence criterion based on _parallel critical pairs_ in 1996. Let \(t\) be a term and let \(P\) be a set of parallel positions in \(t\). We write \(\mathcal{V}\mathsf{ar}(t,P)\) for the union of \(\mathcal{V}\mathsf{ar}(t|_{p})\) for all \(p\in P\). By \(t[u_{p}]_{p\in P}\) we denote the term that results from replacing in \(t\) the subterm at \(p\) by a term \(u_{p}\) for all \(p\in P\).
**Definition 3.6**.: Let \(\mathcal{R}\) and \(\mathcal{S}\) be TRSs, \(\ell\to r\) a variant of an \(\mathcal{S}\)-rule, and \(\{\ell_{p}\to r_{p}\}_{p\in P}\) a family of variants of \(\mathcal{R}\)-rules, where \(P\) is a set of positions. A local peak
\[(\ell\sigma)[r_{p}\sigma]_{p\in P}\stackrel{P}{\twoheadleftarrow}_{\mathcal{R}}\ell\sigma\xrightarrow{\epsilon}_{\mathcal{S}}r\sigma\]
is called a _parallel critical peak_ between \(\mathcal{R}\) and \(\mathcal{S}\) if \(P\) is a non-empty set of parallel positions in \(\mathcal{P}\mathsf{os}_{\mathcal{F}}(\ell)\), the involved rules have no common variables, \(\sigma\) is a most general unifier of \(\{\ell|_{p}\approx\ell_{p}\}_{p\in P}\), and \(\ell_{\epsilon}\to r_{\epsilon}\) is not a variant of \(\ell\to r\) whenever \(P=\{\epsilon\}\). The corresponding pair of terms is called a _parallel critical pair_.
Now we show that Theorem 3.9 even subsumes Theorem 3.4. The first part of the next lemma is a strengthened version of the Parallel Moves Lemma [1, Lemma 6.4.4]. Here a variable condition like that of Theorem 3.9 is attached. The second part of the lemma is irrelevant here but will be used in the subsequent sections. Note that the second part corresponds to [1, Lemma 55]. We write \(\sigma\mathrel{\twoheadrightarrow}_{\mathcal{R}}\tau\) for substitutions \(\sigma\) and \(\tau\) if \(x\sigma\mathrel{\twoheadrightarrow}_{\mathcal{R}}x\tau\) holds for all variables \(x\).
Clearly, \(t\xrightarrow{\epsilon}_{\{\ell\to r\}}v\) holds. So it remains to show \(u\xrightarrow{P^{\prime}}_{\mathcal{R}}v\) and \(\mathcal{V}\mathsf{ar}(v,P^{\prime})\subseteq\mathcal{V}\mathsf{ar}(s,P)\). Let \(p^{\prime}\) be an arbitrary position in \(P^{\prime}\). There exist positions \(p_{1}\in\mathcal{Pos}_{\mathcal{V}}(\ell)\), \(p_{1}^{\prime}\in\mathcal{Pos}_{\mathcal{V}}(r)\), and \(p_{2}\) such that \(p^{\prime}=p_{1}^{\prime}\cdot p_{2}\), \(p_{1}\cdot p_{2}\in P\), and \(\ell|_{p_{1}}=r|_{p_{1}^{\prime}}\). Denoting \(p_{1}\cdot p_{2}\) by \(p\), we have the identities: \[u|_{p^{\prime}} =(r\sigma)|_{p_{1}^{\prime}\cdot p_{2}}=(r|_{p_{1}^{\prime}}\sigma )|_{p_{2}}=(\ell|_{p_{1}}\sigma)|_{p_{2}}=(\ell\sigma)|_{p_{1}\cdot p_{2}}=s|_{p}\] \[v|_{p^{\prime}} =(r\tau)|_{p_{1}^{\prime}\cdot p_{2}}=(r|_{p_{1}^{\prime}}\tau)|_ {p_{2}}=(\ell|_{p_{1}}\tau)|_{p_{2}}=(\ell\tau)|_{p_{1}\cdot p_{2}}=t|_{p}\] From \(s\xrightarrow{P}_{\mathcal{R}}t\) we obtain \(s|_{p}\xrightarrow{\epsilon}_{\mathcal{R}}t|_{p}\) and thus \(u|_{p^{\prime}}\xrightarrow{\epsilon}_{\mathcal{R}}v|_{p^{\prime}}\). Therefore, \(u\xrightarrow{P^{\prime}}_{\mathcal{R}}v\) is obtained. Moreover, we have \(\mathcal{V}\mathsf{ar}(v|_{p^{\prime}})=\mathcal{V}\mathsf{ar}(t|_{p}) \subseteq\mathcal{V}\mathsf{ar}(s|_{p})\subseteq\mathcal{V}\mathsf{ar}(s,P)\). As \(\mathcal{V}\mathsf{ar}(v,P^{\prime})\) is the union of \(\mathcal{V}\mathsf{ar}(v|_{p^{\prime}})\) for all \(p^{\prime}\in P^{\prime}\), the desired inclusion \(\mathcal{V}\mathsf{ar}(v,P^{\prime})\subseteq\mathcal{V}\mathsf{ar}(s,P)\) follows.
2. Suppose that \(\Gamma\) is not orthogonal. By \(\ell_{p}\to r_{p}\) we denote the rule employed at the rewrite position \(p\in P\) in \(s\stackrel{P}{\twoheadrightarrow}_{\mathcal{R}}t\). Let \(P_{0}=P\cap\mathcal{P}\mathsf{os}_{\mathcal{F}}(\ell)\) and \(P_{1}=P\setminus P_{0}\). Since \(P\) is a set of parallel positions, \(s\stackrel{P}{\twoheadrightarrow}_{\mathcal{R}}t\) is split into the two steps \(s\stackrel{P_{0}}{\twoheadrightarrow}_{\mathcal{R}}v\stackrel{P_{1}}{\twoheadrightarrow}_{\mathcal{R}}t\), where \(v=s[t|_{p}]_{p\in P_{0}}\). First, we show that \(v\stackrel{P_{0}}{\twoheadleftarrow}s\xrightarrow{\epsilon}_{\{\ell\to r\}}u\) is an instance of a parallel critical peak. Let \(p\) be an arbitrary position in \(P_{0}\). Because of \(s\xrightarrow{\epsilon}_{\{\ell\to r\}}u\), we have \(s=\ell\mu\) and \(u=r\mu\) for some \(\mu\). Suppose that \(\ell_{p}^{\prime}\to r_{p}^{\prime}\) is a renamed variant of \(\ell_{p}\to r_{p}\) with fresh variables. There exists a substitution \(\mu_{p}\) such that \(s|_{p}=\ell_{p}^{\prime}\mu_{p}\) and \(t|_{p}=r_{p}^{\prime}\mu_{p}\). Note that \(\mathcal{D}\mathsf{om}(\mu)\cap\mathcal{D}\mathsf{om}(\mu_{p})=\varnothing\). We define the substitution \(\nu\) as follows: \[\nu(x)=\begin{cases}x\mu_{p}&\text{if $p\in P_{0}$ and $x\in\mathcal{V}\mathsf{ar}(\ell_{p}^{\prime})$}\\ x\mu&\text{otherwise}\end{cases}\] Because every \(\ell_{p}^{\prime}\) with \(p\in P_{0}\) is linear and the terms \(\ell_{p}^{\prime}\) do not share variables with each other, \(\nu\) is well-defined. Since \(\ell\) does not share variables with any \(\ell_{p}^{\prime}\) either, we obtain the identities: \[\ell_{p}^{\prime}\nu=\ell_{p}^{\prime}\mu_{p}=s|_{p}=\ell|_{p}\mu=\ell|_{p}\nu\] Thus, \(\nu\) is a unifier of \(E=\{\ell_{p}^{\prime}\approx\ell|_{p}\}_{p\in P_{0}}\). Let \(V\) denote the set of all variables occurring in \(E\). According to [11, Proposition 4.10], there exists a most general unifier \(\nu^{\prime}\) of \(E\) such that \(\mathcal{D}\mathsf{om}(\nu^{\prime})\subseteq V\). Thus, there is a substitution \(\sigma\) with \(\nu=\nu^{\prime}\sigma\). Let \(s_{0}=\ell\nu^{\prime}\), \(t_{0}=(\ell\nu^{\prime})[r_{p}^{\prime}\nu^{\prime}]_{p\in P_{0}}\), and \(u_{0}=r\nu^{\prime}\). The peak \(t_{0}\stackrel{P_{0}}{\twoheadleftarrow}s_{0}\xrightarrow{\epsilon}u_{0}\) is a parallel critical peak, and \(v\stackrel{P_{0}}{\twoheadleftarrow}s\xrightarrow{\epsilon}u\) is an instance of the peak by the substitution \(\sigma\): \[s_{0}\sigma=\ell\nu^{\prime}\sigma=\ell\nu=\ell\mu=s\] \[t_{0}\sigma=(\ell\nu^{\prime}\sigma)[r_{p}^{\prime}\nu^{\prime}\sigma]_{p\in P_{0}}=(\ell\nu)[r_{p}^{\prime}\nu]_{p\in P_{0}}=(\ell\mu)[r_{p}^{\prime}\mu_{p}]_{p\in P_{0}}=v\] \[u_{0}\sigma=r\nu^{\prime}\sigma=r\nu=r\mu=u\] Next, we construct a substitution \(\tau\) so that it satisfies \(\sigma\mathrel{\twoheadrightarrow}_{\mathcal{R}}\tau\) and \(t_{0}\sigma\mathrel{\twoheadrightarrow}_{\mathcal{R}}t_{0}\tau\). Given a variable \(x\in\mathcal{V}\mathsf{ar}(\ell)\), we write \(p_{x}\) for a variable occurrence of \(x\) in \(\ell\). Due to linearity of \(\ell\), the position \(p_{x}\) is uniquely determined. Let \(W=\mathcal{V}\mathsf{ar}(\ell)\setminus\mathcal{V}\mathsf{ar}(\ell,P_{0})\). Note that \(W\cap V=\varnothing\) holds. We define the substitution \(\tau\) as follows: \[\tau(x)=\begin{cases}t|_{p_{x}}&\text{if $x\in W$}\\ x\sigma&\text{otherwise}\end{cases}\]
To verify \(\sigma\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}\tau\), consider an arbitrary variable \(x\). We show \(x\sigma\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}x\tau\). If \(x\notin W\) then \(x\sigma=x\tau\), from which the claim follows. Otherwise, the definitions of \(V\) and \(\nu^{\prime}\) yield the implications: \[x\in W\implies x\notin V\implies x\notin\mathcal{D}\mathsf{om}(\nu^{\prime}) \implies x\nu^{\prime}=x\] So \(s_{0}|_{p_{x}}=x\) follows from the identities: \[s_{0}|_{p_{x}}=(\ell\nu)|_{p_{x}}=\ell|_{p_{x}}\nu=x\nu=x\] Let \(Q_{x}=\{q\mid p_{x}q\in P_{1}\}\). As \(s\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}v\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$ \leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t\) implies \(s|_{p_{x}}=v|_{p_{x}}\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$ \leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t|_{p_{x}}\), we obtain \(x\sigma=s_{0}|_{p_{x}}\sigma=(s_{0}\sigma)|_{p_{x}}=s|_{p_{x}}\mathrel{ \hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t|_{p_{x}}=x\tau\). Therefore, the claim is verified. The remaining task is to show \(t_{0}\sigma\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t_{0}\tau\). Let \(p\in P_{1}\). As \(s_{0}|_{p_{x}}=x\) and \(s_{0}\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t_{0}\) imply \(x=t_{0}|_{p_{x}}\), the equation \((s_{0}\sigma)|_{p}=(t_{0}\sigma)|_{p}\) follows. By the definition of \(\tau\) we have \((t_{0}\tau)|_{p_{x}}=t|_{p_{x}}\), which leads to \((t_{0}\tau)|_{p}=t|_{p}\). Hence, we obtain the relations \[(t_{0}\sigma)|_{p}=(s_{0}\sigma)|_{p}=s|_{p}\mathrel{\hbox to 0.0pt{\lower 3.0pt \hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t|_{p}=(t_{0}\tau)|_{p}\] which entails the desired parallel step \(t_{0}\sigma\mathrel{\hbox to 0.0pt{\lower 3.0pt\hbox{$\leftrightarrow$}}\hbox{$ \rightarrow$}}_{\mathcal{R}}t_{0}\tau\).
For almost parallel closed TRSs the above statement is extended to local peaks \(\mathrel{\twoheadleftarrow}\cdot\mathrel{\twoheadrightarrow}\) of parallel steps. In its proof we measure a parallel step \(s\stackrel{P}{\twoheadrightarrow}t\) in such a local peak by the _amount of contractums_ \(|t|_{P}\), namely the sum of \(|(t|_{p})|\) for all \(p\in P\). Note that this measure is attributed to [1, 1].
**Lemma 3.12**.: _Consider a left-linear almost parallel closed TRS \(\mathcal{R}\). If \(t\stackrel{P_{1}}{\twoheadleftarrow}_{\mathcal{R}}s\stackrel{P_{2}}{\twoheadrightarrow}_{\mathcal{R}}u\) then_
* \(t\to^{*}v_{1}\stackrel{P^{\prime}_{1}}{\twoheadleftarrow}_{\mathcal{R}}u\) _for some_ \(v_{1}\) _and_ \(P^{\prime}_{1}\) _with_ \(\mathcal{V}\mathsf{ar}(v_{1},P^{\prime}_{1})\subseteq\mathcal{V}\mathsf{ar}(s,P_{1})\)_, and_
* \(t\stackrel{P^{\prime}_{2}}{\twoheadrightarrow}_{\mathcal{R}}v_{2}\stackrel{*}{\twoheadleftarrow}u\) _for some_ \(v_{2}\) _and_ \(P^{\prime}_{2}\) _with_ \(\mathcal{V}\mathsf{ar}(v_{2},P^{\prime}_{2})\subseteq\mathcal{V}\mathsf{ar}(s,P_{2})\)_._
Proof.: Let \(\Gamma\colon t\stackrel{P_{1}}{\twoheadleftarrow}s\stackrel{P_{2}}{\twoheadrightarrow}_{\mathcal{R}}u\) be a local peak. We show the claim by well-founded induction on \((|t|_{P_{1}}+|u|_{P_{2}},s)\) with respect to \(\succ\). Here \((m,s)\succ(n,t)\) if either \(m>n\), or \(m=n\) and \(t\) is a proper subterm of \(s\). Depending on the shape of \(\Gamma\), we distinguish six cases.
1. If \(P_{1}\) or \(P_{2}\) is empty then the claim follows from the fact that \(\mathcal{V}\mathsf{ar}(v,P)\subseteq\mathcal{V}\mathsf{ar}(w,P)\) holds whenever \(w\stackrel{P}{\twoheadrightarrow}v\).
2. If \(P_{1}\) or \(P_{2}\) is \(\{\epsilon\}\) and \(\Gamma\) is orthogonal then Lemma 3.11(a) applies.
3. If \(P_{1}=P_{2}=\{\epsilon\}\) and \(\Gamma\) is not orthogonal then \(\Gamma\) is an instance of a critical peak. By almost parallel closedness \(t\to^{*}v_{1}\stackrel{Q_{1}}{\twoheadleftarrow}_{\mathcal{R}}u\) and \(t\stackrel{Q_{2}}{\twoheadrightarrow}_{\mathcal{R}}v_{2}\stackrel{*}{\leftarrow}u\) for some \(v_{1}\), \(v_{2}\), \(Q_{1}\), and \(Q_{2}\). For each \(k\in\{1,2\}\) we have \(s\to^{*}v_{k}\), so \(\mathcal{V}\mathsf{ar}(v_{k})\subseteq\mathcal{V}\mathsf{ar}(s)\) follows. Thus, \(\mathcal{V}\mathsf{ar}(v_{k},Q_{k})\subseteq\mathcal{V}\mathsf{ar}(v_{k})\subseteq\mathcal{V}\mathsf{ar}(s)=\mathcal{V}\mathsf{ar}(s,\{\epsilon\})\). The claim holds.
4. If \(P_{1}\nsubseteq\{\epsilon\}\), \(P_{2}=\{\epsilon\}\), and \(\Gamma\) is not orthogonal then there is \(p\in P_{1}\) such that \(s^{\prime}\stackrel{\{p\}}{\twoheadleftarrow}s\xrightarrow{\epsilon}u\) is an instance of a critical peak and \(s^{\prime}\stackrel{P_{1}\setminus\{p\}}{\twoheadrightarrow}_{\mathcal{R}}t\) follows by Lemma 3.11(b) where \(P=\{p\}\). By the almost parallel closedness \(s^{\prime}\stackrel{P^{\prime}_{2}}{\twoheadrightarrow}_{\mathcal{R}}u\) for some \(P^{\prime}_{2}\). Since \(P^{\prime}_{2}\) is a set of parallel positions in \(u\), we have \(|u|_{\{\epsilon\}}=|u|\geqslant|u|_{P^{\prime}_{2}}\). As \(|u|_{\{\epsilon\}}\geqslant|u|_{P^{\prime}_{2}}\) and \(|t|_{P_{1}}>|t|_{P_{1}\setminus\{p\}}\) yield \(|t|_{P_{1}}+|u|_{\{\epsilon\}}>|t|_{P_{1}\setminus\{p\}}+|u|_{P^{\prime}_{2}}\), we obtain the inequality: \[(|t|_{P_{1}}+|u|_{P_{2}},s)\succ(|t|_{P_{1}\setminus\{p\}}+|u|_{P^{\prime}_{2}},s^{\prime})\] Thus, the claim follows by the induction hypothesis for \(t\stackrel{P_{1}\setminus\{p\}}{\twoheadleftarrow}_{\mathcal{R}}s^{\prime}\stackrel{P^{\prime}_{2}}{\twoheadrightarrow}_{\mathcal{R}}u\) and the inclusions \(\mathcal{V}\mathsf{ar}(s^{\prime},P_{1}\setminus\{p\})\subseteq\mathcal{V}\mathsf{ar}(s,P_{1})\) and \(\mathcal{V}\mathsf{ar}(s^{\prime},P^{\prime}_{2})\subseteq\mathcal{V}\mathsf{ar}(s,P_{2})\).
5. If \(P_{1}=\{\epsilon\}\), \(P_{2}\nsubseteq\{\epsilon\}\), and \(\Gamma\) is not orthogonal then the proof is analogous to the last case.
6. If \(P_{1}\nsubseteq\{\epsilon\}\) and \(P_{2}\nsubseteq\{\epsilon\}\) then we may assume \(s=f(s_{1},\ldots,s_{n})\), \(t=f(t_{1},\ldots,t_{n})\), \(u=f(u_{1},\ldots,u_{n})\), and \(t_{i}\stackrel{{ P_{1}^{i}}}{{\leftarrowleftarrowleftarrow}}s_{i} \stackrel{{ P_{2}^{i}}}{{\rightarrowrightarrow}}u_{i}\) for all \(1\leqslant i\leqslant n\). Here \(P_{k}^{i}\) denotes the set \(\{p\ |\ i\cdot p\in P_{k}\}\). For each \(i\in\{1,\ldots,n\}\), we have \(|t|_{P_{1}}\geqslant|t_{i}|_{P_{1}^{i}}\) and \(|u|_{P_{2}}\geqslant|u_{i}|_{P_{2}^{i}}\), and therefore \(|t|_{P_{1}}+|u|_{P_{2}}\geqslant|t_{i}|_{P_{1}^{i}}+|u_{i}|_{P_{2}^{i}}\). So we deduce the following inequality: \[(|t|_{P_{1}}+|u|_{P_{2}},s)\succ(|t_{i}|_{P_{1}^{i}}+|u_{i}|_{P_{2}^{i}},s_{i})\] Consider an \(i\)-th peak \(t_{i}\stackrel{{ P_{1}^{i}}}{{\leftarrowleftarrow}}s_{i} \stackrel{{ P_{2}^{i}}}{{\rightarrowrightarrow}}u_{i}\). By the induction hypothesis it admits valleys of the forms \(t_{i}\rightarrow^{*}v_{1}^{i}\stackrel{{ Q_{1}^{i}}}{{ \leftarrowleftarrow}}u_{i}\) and \(t_{i}\stackrel{{ Q_{2}^{i}}}{{\rightarrowrightarrow}}v_{2}^{i} \stackrel{{*}}{{\leftarrowleftarrow}}u_{i}\) such that \(\mathcal{V}\mathsf{ar}(v_{k}^{i},Q_{k}^{i})\subseteq\mathcal{V}\mathsf{ar}(s_ {i},P_{k}^{i})\) for both \(k\in\{1,2\}\). For each \(k\), define \(Q_{k}=\{i\cdot q\ |\ 1\leqslant i\leqslant n\text{ and }q\in Q_{k}^{i}\}\) and \(v_{k}=f(v_{k}^{1},\ldots,v_{k}^{n})\). Then we have \(t\rightarrow^{*}v_{1}\stackrel{{ Q_{1}}}{{\leftarrowleftarrowleftarrow}}u\) and \(t\stackrel{{ Q_{2}}}{{\rightarrowrightarrow}}v_{2} \stackrel{{*}}{{\leftarrowleftarrow}}u\). Moreover, \[\mathcal{V}\mathsf{ar}(v_{k},Q_{k})=\bigcup_{i=1}^{n}\mathcal{V}\mathsf{ar}(v_ {k}^{i},Q_{k}^{i})\subseteq\bigcup_{i=1}^{n}\mathcal{V}\mathsf{ar}(s_{i},P_{ k}^{i})=\mathcal{V}\mathsf{ar}(s,P_{k})\] holds. Hence, the claim follows.
**Theorem 3.13**.: _Every left-linear and almost parallel closed TRS satisfies conditions (a) and (b) of Theorem 3.9. In other words, Theorem 3.9 subsumes Theorem 3.4._
Proof.: Since every parallel critical peak is a local peak of the form \(\mathrel{\twoheadleftarrow}\cdot\mathrel{\twoheadrightarrow}\), the claim follows from Lemma 3.12.
**Theorem 4.2**.: _Let \(\mathcal{A}=(A,\{\rightarrow_{1,\alpha}\}_{\alpha\in I})\) and \(\mathcal{B}=(A,\{\rightarrow_{2,\beta}\}_{\beta\in I})\) be \(I\)-indexed ARSs equipped with a well-founded order \(>\) on \(I\). Suppose that \(\bot\) is the least element in \(I\) and \(\rightarrow_{1,\bot}\) and \(\rightarrow_{2,\bot}\) commute. The ARSs \(\mathcal{A}\) and \(\mathcal{B}\) commute if every local peak \({}_{1,\alpha}\leftarrow\cdot\rightarrow_{2,\beta}\) with \((\alpha,\beta)\in I^{2}\setminus\{(\bot,\bot)\}\) is decreasing._
Proof.: We define the two ARSs \(\mathcal{A}^{\prime}=(A,\{\Rightarrow_{1,\alpha}\}_{\alpha\in I})\) and \(\mathcal{B}^{\prime}=(A,\{\Rightarrow_{2,\alpha}\}_{\alpha\in I})\) as follows:
\[\Rightarrow_{i,\alpha}=\begin{cases}\rightarrow_{i,\alpha}^{*}&\text{if } \alpha=\bot\\ \rightarrow_{i,\alpha}&\text{otherwise}\end{cases}\]
Since \(\rightarrow_{\mathcal{A}}^{*}=\Rightarrow_{\mathcal{A}^{\prime}}^{*}\) and \(\rightarrow_{\mathcal{B}}^{*}=\Rightarrow_{\mathcal{B}^{\prime}}^{*}\), the commutation of \(\mathcal{A}\) and \(\mathcal{B}\) follows from that of \(\mathcal{A}^{\prime}\) and \(\mathcal{B}^{\prime}\). We show the latter by proving decreasingness of \(\mathcal{A}^{\prime}\) and \(\mathcal{B}^{\prime}\) with respect to the given well-founded order \(>\). Let \(\Gamma\) be a local peak of form \({}_{1,\alpha}\Leftarrow\cdot\Rightarrow_{2,\beta}\). We distinguish four cases.
* If neither \(\alpha\) nor \(\beta\) is \(\bot\) then decreasingness of \(\Gamma\) follows from the assumption.
* If both \(\alpha\) and \(\beta\) are \(\bot\) then the commutation of \(\rightarrow_{1,\bot}\) and \(\rightarrow_{2,\bot}\) yields the inclusion \({}_{1,\bot}{\Leftarrow}\cdot\Rightarrow_{2,\bot}\;\subseteq\;\Rightarrow_{2,\bot}\cdot\;{}_{1,\bot}{\Leftarrow}\), and hence the local peak is decreasing.
**Theorem 5.3**.: _A left-linear TRS \(\mathcal{R}\) is confluent if \(\mathcal{R}\) and \(\mathcal{R}\setminus\mathcal{C}\) are mutually orthogonal for some confluent TRS \(\mathcal{C}\) with \(\mathcal{C}\subseteq\mathcal{R}\)._
Proof.: Let \(\mathcal{A}=(\mathcal{T}(\mathcal{F},\mathcal{V}),\{\twoheadrightarrow_{\alpha}\}_{\alpha\in I})\) be the indexed ARS used for the application of Theorem 4.2.
Let \(\mathcal{C}=\{5,7,8,10,11,13\}\). The six non-trivial parallel critical pairs of \(\mathcal{R}\) are
\[(x,\mathsf{gcd}(0,\mathsf{mod}(x,0)))\quad\ \ (y,\mathsf{gcd}(y,\mathsf{mod}(0,y))) \quad\ (0,\mathsf{if}(0<\mathsf{s}(y),0,\mathsf{mod}(0-\mathsf{s}(y),\mathsf{s}(y))))\]
and their symmetric versions. All of them are joinable by \(\mathcal{C}\). So it remains to show that \(\mathcal{C}\) is confluent. Because \(\mathcal{C}\) only admits trivial parallel critical pairs, \(\mathrel{{}_{\mathcal{C}}{\twoheadleftarrow}\rtimes\rightarrow_{\mathcal{C}}}\subseteq\leftrightarrow_{\varnothing}^{*}\) holds. Therefore, the confluence of \(\mathcal{C}\) is concluded if we show the confluence of the empty system. The latter claim is trivial. This completes the proof.
Theorem 5.4 is a generalization of yet another theorem by Toyama:
**Corollary 5.6** ([16]).: _A left-linear TRS \(\mathcal{R}\) is confluent if \(\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}\rtimes\rightarrow_{\mathcal{R}}}\subseteq\leftrightarrow_{\mathcal{C}}^{*}\) holds for some terminating and confluent TRS \(\mathcal{C}\) with \(\mathcal{C}\subseteq\mathcal{R}\)._
## 6. Rule Labeling
In this section we recast the _rule labeling_ criterion [11, 10] in a compositional form. Rule labeling is a direct application of decreasing diagrams to confluence proofs for TRSs. It labels rewrite steps by the rewrite rules they employ and compares their indices. Among others, we focus on the variant of rule labeling based on parallel critical pairs, introduced by Zankl et al. [1].
**Definition 6.1**.: Let \(\mathcal{R}\) be a TRS. A _labeling function_ for \(\mathcal{R}\) is a function from \(\mathcal{R}\) to \(\mathbb{N}\). Given a labeling function \(\phi\) and a number \(k\in\mathbb{N}\), we define the TRS \(\mathcal{R}_{\phi,k}\) as follows:
\[\mathcal{R}_{\phi,k}=\{\ell\to r\in\mathcal{R}\ |\ \phi(\ell\to r)\leqslant k\}\]
The relations \(\rightarrow_{\mathcal{R}_{\phi,k}}\) and \(\twoheadrightarrow_{\mathcal{R}_{\phi,k}}\) are abbreviated to \(\rightarrow_{\phi,k}\) and \(\twoheadrightarrow_{\phi,k}\). Let \(\phi\) and \(\psi\) be labeling functions for \(\mathcal{R}\). We say that a local peak \(t\stackrel{P}{\twoheadleftarrow}_{\phi,k}s\xrightarrow{\epsilon}_{\psi,m}u\) is \((\psi,\phi)\)_-decreasing_ if
\[t\to_{\curlyvee k}^{*}\cdot\to_{\psi,m}^{=}\cdot\to_{\curlyvee km}^{*}v\stackrel{P^{\prime}}{\twoheadleftarrow}_{\phi,k}\cdot\stackrel{*}{\leftarrow}_{\curlyvee m}u\]
and \(\mathcal{V}\mathsf{ar}(v,P^{\prime})\subseteq\mathcal{V}\mathsf{ar}(s,P)\) for some set \(P^{\prime}\) of parallel positions and term \(v\). Here \(\to_{K}\) stands for the union of \(\to_{\phi,k}\) and \(\to_{\psi,k}\) for all \(k\in K\).
Figure 2. Proof of Theorem 5.4 (3).
The following theorem is a commutation-based rule labeling method [14, Theorem 56].
**Theorem 6.2**: **.** _Let \(\mathcal{R}\) be a left-linear TRS, and \(\phi\) and \(\psi\) its labeling functions. The TRS \(\mathcal{R}\) is confluent if the following conditions hold for all \(k,m\in\mathbb{N}\)._
\(\bullet\) _Every parallel critical peak of the form \(t\mathrel{{}_{\phi,k}{\twoheadleftarrow}}s\xrightarrow{\epsilon}_{\psi,m}u\) is \((\psi,\phi)\)-decreasing._
\(\bullet\) _Every parallel critical peak of the form \(t\mathrel{{}_{\psi,m}{\twoheadleftarrow}}s\xrightarrow{\epsilon}_{\phi,k}u\) is \((\phi,\psi)\)-decreasing._
With a small example we illustrate the usage of rule labeling.
**Example 6.3**: _Consider the left-linear TRS \(\mathcal{R}\):_
\[(x+y)+z\to x+(y+z)\qquad\qquad\qquad x+(y+z)\rightarrow(x+y)+z\]
_We define the labeling functions \(\phi\) and \(\psi\) as follows: \(\phi(\ell\to r)=0\) and \(\psi(\ell\to r)=1\) for all \(\ell\to r\in\mathcal{R}\). Because \(\mathcal{R}\) is reversible, all parallel critical peaks can be closed by \(\rightarrow_{\phi,0}\)-steps, like the following diagram:_
\[s=((x+y)+z)+w\]
1. If \(P\) or \(Q\) is empty then the claim is trivial.
2. If \(P\) or \(Q\) is \(\{\epsilon\}\) and \(\Gamma\) is orthogonal then Lemma 3.11(a) yields \(t\twoheadrightarrow_{\psi,m}\cdot\mathrel{{}_{\phi,k}{\twoheadleftarrow}}u\).
3. If \(P\neq\varnothing\), \(Q=\{\epsilon\}\), and \(\Gamma\) is not orthogonal then by Lemma 3.11(b) there exist a parallel critical peak \(t_{0}\underset{\phi,k^{\prime}}{\dashrightarrow}s_{0}\xrightarrow{ \epsilon}u_{0}\) and substitutions \(\sigma\) and \(\tau\) such that \(k^{\prime}\leqslant k\), \(t=t_{0}\tau\), \(u=u_{0}\sigma\), \(\sigma\underset{\phi,k}{\dashrightarrow}\tau\), \(t_{0}\sigma\underset{\phi,k}{\dashrightarrow}t_{0}\tau\), and \(P_{1}\subseteq P\). We distinguish two subcases.2 If \(k^{\prime}=0\) and \(m=0\) then \(t_{0}\underset{0}{\overset{*}{\dashrightarrow}}u_{0}\). As \(\mathrel{\mathop{\hbox to 0.0pt{\rightarrowfill}}\limits}\) is closed under substitutions, \(t_{0}\tau\underset{0}{\overset{*}{\dashrightarrow}}u_{0}\tau\) follows. The step can be written as \(t_{0}\tau\underset{\gamma k}{\overset{*}{\dashrightarrow}}u_{0}\tau\) because \((k,m)\neq(0,0)\) and \(m=0\) imply \(k>0\). Summing them up, we obtain the sequence \[t=t_{0}\tau\underset{\gamma k}{\overset{*}{\dashrightarrow}}u_{0}\tau \underset{\phi,k}{\dashrightarrow}u_{0}\sigma=u\] from which we conclude decreasingness of \(\Gamma\). Otherwise, \(k^{\prime}>0\) or \(m>0\) holds. The assumption yields \[t_{0}\underset{\gamma k^{\prime}}{\dashrightarrow}\cdot\underset{\psi,m}{ \dashrightarrow}\cdot\underset{\psi,m}{\dashrightarrow}\cdot\underset{ \psi^{\prime}k^{\prime}m}{\dashrightarrow}v_{0}\underset{\phi,k^{\prime}}{ \dashrightarrow}w_{0}\underset{\gamma m}{\dashrightarrow}u_{0}\] and \(\mathcal{V}\mathsf{ar}(v_{0},P^{\prime}_{1})\subseteq\mathcal{V}\mathsf{ar}(s _{0},P_{1})\) for some \(v_{0}\), \(w_{0}\), and \(P^{\prime}_{1}\). Since \(k^{\prime}\leqslant k\) and the rewrite steps are closed under substitutions, the following relations are obtained: \[t_{0}\tau\underset{\gamma k}{\overset{*}{\dashrightarrow}}\cdot\underset{ \psi,m}{\dashrightarrow}\cdot\underset{\gamma km}{\dashrightarrow}v_{0}\tau \underset{\psi,m}{\dashrightarrow}v_{0}\sigma \underset{\gamma m}{\dashrightarrow}u_{0}\sigma\] Since \(t_{0}\sigma|_{p}=t_{0}\tau|_{p}\) holds for all \(p\in P_{1}\), the identity \(x\sigma=x\tau\) holds for all \(x\in\mathcal{V}\mathsf{ar}(s_{0},P_{1})\). Therefore, \(x\sigma=x\tau\) holds for all \(x\in\mathcal{V}\mathsf{ar}(v_{0},P^{\prime}_{1})\). Because \(v_{0}\underset{\phi,k}{\dashrightarrow}v_{0}\), \(\sigma\underset{\phi,k}{\dashrightarrow}\tau\), and \(x\sigma=x\tau\) for all \(x\in\mathcal{V}\mathsf{ar}(v_{0},P^{\prime}_{1})\) hold, Lemma 6.4 yields \(w_{0}\sigma\underset{\phi,k}{\dashrightarrow}v_{0}\tau\). Hence, the decreasingness of \(\Gamma\) is witnessed by the following sequence: \[t=t_{0}\tau\underset{\gamma k}{\overset{*}{\dashrightarrow}}\cdot \underset{\psi,m}{\dashrightarrow}\cdot\underset{\gamma km}{\dashrightarrow}v_{0} \tau\underset{\phi,k}{\dashrightarrow}w_{0}\sigma\underset{\gamma m}{ \dashrightarrow}u_{0}\sigma=u\] Note that the construction is depicted in Figure 3.
4. If \(P=\{\epsilon\}\), \(Q\neq\varnothing\), and \(\Gamma\) is not orthogonal then the proof is analogous to the last case.
5. If \(P\nsubseteq\{\epsilon\}\) and \(Q\nsubseteq\{\epsilon\}\) then \(s\), \(t\), and \(u\) can be written as \(f(s_{1},\dots,s_{n})\), \(f(t_{1},\dots,t_{n})\), and \(f(u_{1},\dots,u_{n})\) respectively, and moreover, \(t_{i}\underset{\phi,k}{\dashrightarrow}s_{i}\xrightarrow{\mu\to}u_{i}\) holds for all \(1\leqslant i\leqslant n\). By the induction hypotheses we have \(t_{i}\underset{\gamma k}{\overset{*}{\dashrightarrow}}\cdot\underset{ \psi,m}{\dashrightarrow}\cdot\underset{\gamma km}{\dashrightarrow}\cdot \underset{\phi,k}{\dashrightarrow}\cdot\underset{\phi,k}{\dashrightarrow} \cdot\underset{\gamma m}{\dashrightarrow}u_{i}\) for all \(1\leqslant i\leqslant n\). Therefore, we obtain the desired relations: \[t=f(t_{1},\dots,t_{n})\underset{\gamma k}{\overset{*}{\dashrightarrow}} \cdot\underset{\psi,m}{\dashrightarrow}\cdot\underset{\psi,m}{ \dashrightarrow}\cdot\underset{\gamma km}{\dashrightarrow}\cdot\underset{ \phi,k}{\dashrightarrow}\cdot\underset{\phi,k}{\dashrightarrow}\cdot \underset{\gamma m}{\dashrightarrow}f(u_{1},\dots,u_{n})=u\] Hence \(\Gamma\) is decreasing.
The original version of rule labeling (Theorem 6.2) is a special case of Theorem 6.5: Suppose that labeling functions \(\phi\) and \(\psi\) for a left-linear TRS \(\mathcal{R}\) satisfy the conditions of Theorem 6.2. By taking the labeling functions \(\phi^{\prime}\) and \(\psi^{\prime}\) with
\[\phi^{\prime}(\ell\to r)=\phi(\ell\to r)+1 \psi^{\prime}(\ell\to r)=\psi(\ell\to r)+1\]
Theorem 6.5 applies for \(\phi^{\prime}\), \(\psi^{\prime}\), and the empty TRS \(\mathcal{C}\).
The next example shows the combination of our rule labeling variant (Theorem 6.5) with Knuth-Bendix' criterion (Theorem 2.3).
**Example 6.6**.: Consider the left-linear TRS \(\mathcal{R}\):
\[1\colon\ 0+x\to x\qquad 2\colon\ (x+y)+z\to x+(y+z)\qquad 3\colon\ x+(y+z)\to(x+y)+z\]
Let \(\mathcal{C}=\{1,2\}\). We define the labeling functions \(\phi\) and \(\psi\) as follows:
\[\phi(\ell\to r)=\psi(\ell\to r)=\begin{cases}0&\text{if }\ell\to r\in \mathcal{C}\\ 1&\text{otherwise}\end{cases}\]
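A small sketch of how this labeling induces the subsystems of Definition 6.1 (the encoding of rules by their numbers and plain strings is hypothetical and only for illustration):

```python
# Rules are referred to by the numbers used in the example; phi assigns 0 to
# the rules of C and 1 to the remaining rule.

rules = {
    1: "0 + x -> x",
    2: "(x + y) + z -> x + (y + z)",
    3: "x + (y + z) -> (x + y) + z",
}
C = {1, 2}
phi = {n: (0 if n in C else 1) for n in rules}   # psi is defined identically

def subsystem(labeling, k):
    """R_{phi,k}: the rules whose label is at most k (Definition 6.1)."""
    return {n: rules[n] for n in rules if labeling[n] <= k}

print(subsystem(phi, 0))   # rules 1 and 2, i.e. the subsystem C
print(subsystem(phi, 1))   # all three rules, i.e. R
```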
For instance, the parallel critical pairs involving rule 3 admit the following diagrams:
They satisfy the conditions of Theorem 6.5. The other parallel critical pairs also admit suitable diagrams. Therefore, it remains to show that \(\mathcal{C}\) is confluent. Since \(\mathcal{C}\) is terminating and all its critical pairs are joinable, confluence of \(\mathcal{C}\) follows by Knuth and Bendix' criterion (Theorem 2.3). Thus, \(\mathcal{R}_{\phi,0}\) and \(\mathcal{R}_{\psi,0}\) commute because \(\mathcal{R}_{\phi,0}=\mathcal{R}_{\psi,0}=\mathcal{C}\). Hence, by Theorem 6.5 we conclude that \(\mathcal{R}\) is confluent.
While a proof for Theorem 5.4 is given in Section 3, here we present an alternative proof based on Theorem 6.5.
Proof of Theorem 5.4.: Define the labeling functions \(\phi\) and \(\psi\) as in Example 6.6. Then Theorem 6.5 applies.
We conclude the section by stating that rule labeling based on parallel critical pairs (Theorem 6.2) subsumes parallel closedness based on parallel critical pairs (Theorem 3.9):
Figure 3. Proof of Theorem 6.5(3).
Suppose that conditions (a,b) of Theorem 3.9 hold. We define \(\phi\) and \(\psi\) as the constant rule labeling functions \(\phi(\ell\to r)=1\) and \(\psi(\ell\to r)=0\). By using structural induction as well as Lemmata 3.11 and 6.4 we can prove the implication
\[t\xrightleftharpoons[\phi,1]{P_{1}}s\xrightleftharpoons[\psi,0]{\bullet}u \implies t\xrightleftharpoons[\psi,0]{\bullet}v\xrightleftharpoons[\phi,1]{P_{ 1}^{\prime}}u\text{ and }\mathcal{V}\mathsf{ar}(v,P_{1}^{\prime})\subseteq \mathcal{V}\mathsf{ar}(s,P_{1})\text{ for some }P_{1}^{\prime}\]
Thus, the conditions of Theorem 6.2 follow. As a consequence, our compositional version (Theorem 6.5) is also a generalization of parallel closedness.
## 7. Critical Pair Systems
The last example of compositional criteria in this paper is a variant of the confluence criterion by critical pair systems [11]. It is known that the original criterion is a generalization of the orthogonal criterion (Theorem 5.2) and Knuth and Bendix' criterion (Theorem 2.3) for left-linear TRSs.
**Definition 7.1**.: The _critical pair system_\(\mathsf{CPS}(\mathcal{R})\) of a TRS \(\mathcal{R}\) is defined as the TRS:
\[\{s\to t,\;s\to u\mid t\mathrel{{}_{\mathcal{R}}{\leftarrow}}s\xrightarrow{\epsilon}_{\mathcal{R}}u\text{ is a critical peak}\}\]
**Theorem 7.2**[11].: _A left-linear and locally confluent TRS \(\mathcal{R}\) is confluent if \(\mathsf{CPS}(\mathcal{R})/\mathcal{R}\) is terminating (i.e., \(\mathsf{CPS}(\mathcal{R})\) is relatively terminating with respect to \(\mathcal{R}\))._
The theorem is shown by using the decreasing diagram technique (Theorem 4.1), see [11].
**Example 7.3**.: Consider the left-linear and non-terminating TRS \(\mathcal{R}\):
\[\mathsf{s}(\mathsf{p}(x))\to\mathsf{p}(\mathsf{s}(x)) \mathsf{p}(\mathsf{s}(x))\to x \infty\to\mathsf{s}(\infty)\]
The TRS \(\mathcal{R}\) admits two critical pairs, and both of them are joinable.
The critical pair system \(\mathsf{CPS}(\mathcal{R})\) consists of the four rules:
\[\mathsf{s}(\mathsf{p}(\mathsf{s}(x)))\to\mathsf{s}(x) \mathsf{p}(\mathsf{s}(\mathsf{p}(x)))\to\mathsf{p}(\mathsf{p}( \mathsf{s}(x)))\] \[\mathsf{s}(\mathsf{p}(\mathsf{s}(x)))\to\mathsf{p}(\mathsf{s}( \mathsf{s}(x))) \mathsf{p}(\mathsf{s}(\mathsf{p}(x)))\to\mathsf{p}(x)\]
Termination of \(\mathsf{CPS}(\mathcal{R})/\mathcal{R}\) can be shown by, e.g., the termination tool NaTT[15]. Hence the confluence of \(\mathcal{R}\) follows by Theorem 7.2.
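Reading off \(\mathsf{CPS}(\mathcal{R})\) from critical peaks is purely mechanical; the toy sketch below (terms kept as plain strings, and the two critical peaks of the example supplied as data rather than computed) produces exactly the four rules listed above.

```python
# A toy sketch: given critical peaks as triples (s, t, u) with t <- s -> u,
# CPS(R) collects the rules s -> t and s -> u.

peaks = [
    ("s(p(s(x)))", "s(x)", "p(s(s(x)))"),
    ("p(s(p(x)))", "p(p(s(x)))", "p(x)"),
]

def cps(peaks):
    return [(s, w) for s, t, u in peaks for w in (t, u)]

for lhs, rhs in cps(peaks):
    print(lhs, "->", rhs)
```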
We argue about the parallel critical pair version of \(\mathsf{CPS}(\mathcal{R})\):
\[\mathsf{PCPS}(\mathcal{R})=\{s\to t,\;s\to u\mid t\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}}s\xrightarrow{\epsilon}_{\mathcal{R}}u\text{ is a parallel critical peak}\}\]
Interestingly, replacing \(\mathsf{CPS}(\mathcal{R})\) by \(\mathsf{PCPS}(\mathcal{R})\) in Theorem 7.2 results in the same criterion (see [14]). Since \(\to_{\mathsf{CPS}(\mathcal{R})}\subseteq\to_{\mathsf{PCPS}(\mathcal{R})}\subseteq\to_{\mathsf{CPS}(\mathcal{R})}\cdot\to_{\mathcal{R}}^{*}\), the identity \(\to_{\mathsf{CPS}(\mathcal{R})/\mathcal{R}}=\to_{\mathsf{PCPS}(\mathcal{R})/\mathcal{R}}\) follows. So the termination of \(\mathsf{PCPS}(\mathcal{R})/\mathcal{R}\) is equivalent to that of \(\mathsf{CPS}(\mathcal{R})/\mathcal{R}\). However, a compositional form of Theorem 7.2 may benefit from the use of parallel critical pairs, as seen in Section 5.
**Definition 7.4**.: Let \(\mathcal{R}\) and \(\mathcal{C}\) be TRSs. The _parallel critical pair system_\(\mathsf{PCPS}(\mathcal{R},\mathcal{C})\) of \(\mathcal{R}\) modulo \(\mathcal{C}\) is defined as the TRS:
\[\{s\to t,\;s\to u\mid t\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}}s\xrightarrow{\epsilon}_{\mathcal{R}}u\text{ is a parallel critical peak but not }t\leftrightarrow_{\mathcal{C}}^{*}u\}\]
Note that \(\mathsf{PCPS}(\mathcal{R},\varnothing)\subseteq\mathsf{PCPS}(\mathcal{R})\) holds in general, and \(\mathsf{PCPS}(\mathcal{R},\varnothing)\subsetneq\mathsf{PCPS}(\mathcal{R})\) when \(\mathcal{R}\) admits a trivial critical pair.
The next lemma relates \(\mathsf{PCPS}(\mathcal{R},\mathcal{C})\) to closing forms of parallel critical peaks.
**Lemma 7.5**.: _Let \(\mathcal{R}\) be a left-linear TRS and \(\mathcal{R}_{1}\), \(\mathcal{R}_{2}\), and \(\mathcal{C}\) subsets of \(\mathcal{R}\), and let \(\mathcal{P}=\mathsf{PCPS}(\mathcal{R},\mathcal{C})\). Suppose that \(\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}\rtimes\rightarrow_{\mathcal{R}}}\subseteq\to_{\mathcal{R}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{R}}{\leftarrow}}\) holds. If \(t\mathrel{{}_{\mathcal{R}_{1}}{\twoheadleftarrow}}s\twoheadrightarrow_{\mathcal{R}_{2}}u\) then_
1. \(t\twoheadrightarrow_{\mathcal{R}_{2}}\cdot\leftrightarrow_{\mathcal{C}}^{*}\cdot\mathrel{{}_{\mathcal{R}_{1}}{\twoheadleftarrow}}u\)_, or_
2. \(t\mathrel{{}_{\mathcal{R}_{1}}{\twoheadleftarrow}}t^{\prime}\mathrel{\twoheadleftarrow}s\to_{\mathcal{P}}u^{\prime}\twoheadrightarrow_{\mathcal{R}_{2}}u\) _and_ \(t^{\prime}\to_{\mathcal{R}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{R}}{\leftarrow}}u^{\prime}\) _for some_ \(t^{\prime}\) _and_ \(u^{\prime}\)_._
Proof.: Let \(\Gamma\colon t\stackrel{P}{\twoheadleftarrow}_{\mathcal{R}_{1}}s\stackrel{Q}{\twoheadrightarrow}_{\mathcal{R}_{2}}u\) be a local peak. We use structural induction on \(s\). Depending on the form of \(\Gamma\), we distinguish five cases.
1. If \(P\) or \(Q\) is empty then (i) holds trivially.
2. If \(P\) or \(Q\) is \(\{\epsilon\}\) and \(\Gamma\) is orthogonal then (i) follows by Lemma 3.11(a).
3. If \(P\neq\varnothing\), \(Q=\{\epsilon\}\), and \(\Gamma\) is not orthogonal then we distinguish two cases.
4. If there exist \(P_{0}\), \(t_{0}\), \(u_{0}\), and \(\sigma\) such that "\(P_{0}\subseteq P\), \(t\mathrel{{}_{\mathcal{R}_{1}}{\twoheadleftarrow}}t_{0}\sigma\mathrel{{}_{\mathcal{R}_{1}}{\twoheadleftarrow}}s\xrightarrow{\epsilon}_{\mathcal{R}_{2}}u_{0}\sigma=u\), and \(t_{0}\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}\rtimes\rightarrow_{\mathcal{R}}}u_{0}\)" but not \(t_{0}\leftrightarrow_{\mathcal{C}}^{*}u_{0}\). Take \(t^{\prime}=t_{0}\sigma\) and \(u^{\prime}=u_{0}\sigma\). Then \(t\mathrel{{}_{\mathcal{R}_{1}}{\twoheadleftarrow}}t^{\prime}\mathrel{\twoheadleftarrow}s\to_{\mathcal{P}}u^{\prime}\twoheadrightarrow_{\mathcal{R}_{2}}u\) holds and by the assumption \(t^{\prime}\to_{\mathcal{R}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{R}}{\leftarrow}}u^{\prime}\) also holds. Hence (ii) follows.
5. Otherwise, whenever \(P_{0}\), \(t_{0}\), \(u_{0}\), and \(\sigma\) satisfy the conditions quoted in the last item, \(t_{0}\leftrightarrow_{\mathcal{C}}^{*}u_{0}\) holds. Because \(\Gamma\) is not orthogonal, by Lemma 3.11(b) there exist \(P_{0}\), \(t_{0}\), \(u_{0}\), \(\sigma\), and \(\tau\) such that \(P_{0}\subseteq P\), \(t=t_{0}\tau\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{\mathcal{R}_{1}} \mbox{\raisebox{-1.29pt}{$\leftrightarrow$}}t_{0}\sigma\mathrel{\mathop{ \hbox to 0.0pt{$\leftrightarrow$}}}_{\mathcal{R}_{1}}\mbox{\raisebox{-1.29pt }{$\leftrightarrow$}}s\stackrel{{\epsilon}}{{\rightarrow}} \mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{\mathcal{R}_{2}}u_{0}\sigma=u\), and \(\sigma\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{\mathcal{R}_{1}}\tau\). Thus \(t_{0}\leftrightarrow_{\mathcal{C}}^{*}u_{0}\) follows. Therefore, \(t=t_{0}\tau\leftrightarrow_{\mathcal{C}}^{*}u_{0}\tau\mathrel{\mathop{\hbox to 0.0pt{$ \leftrightarrow$}}}_{\mathcal{R}_{1}}\mbox{\raisebox{-1.29pt}{$\leftrightarrow$ }}u_{0}\sigma=u\), and hence (i) holds.
6. If \(P=\{\epsilon\}\), \(Q\nsubseteq\{\epsilon\}\), and \(\Gamma\) is not orthogonal then the proof is analogous to the last case.
7. If \(P\nsubseteq\{\epsilon\}\) and \(Q\nsubseteq\{\epsilon\}\) then \(s\), \(t\), and \(u\) can be written as \(f(s_{1},\ldots,s_{n})\), \(f(t_{1},\ldots,t_{n})\), and \(f(u_{1},\ldots,u_{n})\) respectively, and \(\Gamma_{i}\colon t_{i}\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{ \mathcal{R}_{1}}\mbox{\raisebox{-1.29pt}{$\leftrightarrow$}}s_{i}\stackrel{{ \leftrightarrow}}{{\rightarrow}}\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{ \mathcal{R}_{2}}u_{i}\) holds for all \(1\leqslant i\leqslant n\). For every peak \(\Gamma_{i}\) the induction hypothesis yields (i) or (ii). If (i) holds for all \(\Gamma_{i}\) then (i) is concluded for \(\Gamma\). Otherwise, some \(\Gamma_{i}\) satisfies (ii). By taking \(t^{\prime}=f(s_{1},\ldots,t_{i},\ldots,s_{n})\) and \(u^{\prime}=f(s_{1},\ldots,u_{i},\ldots,s_{n})\) we have \(t\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{\mathcal{R}_{1}}\mbox{ \raisebox{-1.29pt}{$\leftrightarrow$}}t^{\prime}\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}s\to_{\mathcal{P}}u^{\prime} \mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{\mathcal{P}}u\). From \(t_{i}\to_{\mathcal{R}}^{*}\cdot\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{ \mathcal{R}}^{*}\!\leftarrow u_{i}\) we obtain \(t^{\prime}\to_{\mathcal{R}}^{*}\cdot\mathrel{\mathop{\hbox to 0.0pt{$\leftrightarrow$}}}_{ \mathcal{R}}^{*}\!\leftarrow u^{\prime}\). Hence \(\Gamma\) satisfies (ii).
The next theorem is a compositional confluence criterion based on parallel critical pair systems.
**Theorem 7.6**.: _Let \(\mathcal{R}\) be a left-linear TRS and \(\mathcal{C}\) a confluent TRS with \(\mathcal{C}\subseteq\mathcal{R}\). The TRS \(\mathcal{R}\) is confluent if \(\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}\rtimes\rightarrow_{\mathcal{R}}}\subseteq\to_{\mathcal{R}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{R}}{\leftarrow}}\) and \(\mathcal{P}/\mathcal{R}\) is terminating, where \(\mathcal{P}=\mathsf{PCPS}(\mathcal{R},\mathcal{C})\)._
Proof.: Let \(\bot\) be a fresh symbol and let \(I=\mathcal{T}(\mathcal{F},\mathcal{V})\cup\{\bot\}\). We define the relation \(>\) on \(I\) as follows: \(\alpha>\beta\) if \(\alpha\neq\bot=\beta\) or \(\alpha\to_{\mathcal{P}/\mathcal{R}}^{+}\beta\). Since \(\mathcal{P}/\mathcal{R}\) is terminating, \(>\) is a well-founded order. Let \(\mathcal{A}=(\mathcal{T}(\mathcal{F},\mathcal{V}),\{\twoheadrightarrow_{\alpha}\}_{\alpha\in I})\) be the ARS, where \(\twoheadrightarrow_{\alpha}\) is defined as follows: \(s\twoheadrightarrow_{\alpha}t\) if either \(\alpha=\bot\) and \(s\twoheadrightarrow_{\mathcal{C}}t\), or \(\alpha\neq\bot\), \(\alpha\to_{\mathcal{R}}^{*}s\), and \(s\twoheadrightarrow_{\mathcal{R}\setminus\mathcal{C}}t\). Since the commutation of \(\mathcal{C}\) and \(\mathcal{C}\) follows from confluence of \(\mathcal{C}\), Lemma 2.5 yields the commutation of \(\twoheadrightarrow_{\bot}\) and \(\twoheadrightarrow_{\bot}\). Therefore, by Theorem 4.2 it suffices to show that every local peak \(\Gamma\colon t\mathrel{{}_{\alpha}{\twoheadleftarrow}}s\twoheadrightarrow_{\beta}u\)
with \((\alpha,\beta)\in I^{2}\setminus\{(\bot,\bot)\}\) is decreasing. By the definition of \(\mathcal{A}\) we have \(s\twoheadrightarrow_{\mathcal{R}_{1}}t\) and \(s\twoheadrightarrow_{\mathcal{R}_{2}}u\) for some TRSs \(\mathcal{R}_{1},\mathcal{R}_{2}\in\{\mathcal{R}\setminus\mathcal{C},\mathcal{C}\}\). Using Lemma 7.5, we distinguish two cases.
1. Suppose that Lemma 7.5(i) holds for \(\Gamma\). Then \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}_{2}}t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}} \limits}_{\mathcal{R}_{1}}^{*}\mathrel{u^{\prime}}_{\mathcal{R}_{1}} \mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}u\) holds for some \(t^{\prime}\) and \(u^{\prime}\). If \(\mathcal{R}_{2}=\mathcal{R}\setminus\mathcal{C}\) then \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \beta}t^{\prime}\) follows from \(\beta\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}^{*}s\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \rightarrow$}}\limits}_{\mathcal{R}}^{*}t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \rightarrow$}}\limits}_{\mathcal{R}\setminus\mathcal{C}}t^{\prime}\). Otherwise, \(\mathcal{R}_{2}=\mathcal{C}\) yields \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \bot}t^{\prime}\). In either case \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \{\beta,\bot\}}t^{\prime}\) is obtained. Similarly, \(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \{\alpha,\bot\}}u^{\prime}\) is obtained. Moreover, \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftrightarrow$}} \limits}_{\bot}^{*}u^{\prime}\) follows from \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftrightarrow$}} \limits}_{\mathcal{C}}^{*}u^{\prime}\). Since \((\alpha,\beta)\neq(\bot,\bot)\) yields \(\bot\in\gamma\,\alpha\beta\) and the reflexivity of \(\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{\bot}\) yields \(\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \{\delta,\bot\}}\subseteq\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \rightarrow$}}\limits}_{\delta}^{*}\cdot\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \rightarrow$}}\limits}_{\bot}\) for any \(\delta\), we obtain the desirable conversion \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{\beta}t^{\prime} \mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}_{ \gamma\,\alpha\beta}^{*}u^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \leftarrow$}}\limits}_{\alpha}u\). Hence, \(\Gamma\) is decreasing.
2. Suppose that Lemma 7.5(ii) holds for \(\Gamma\). We have \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}_{ \Omega}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}t^{ \prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits} \mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits} \mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}s\mathrel{ \mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}p\mathrel{u^{\prime}} \mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}_{2}}u\) and \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}^{*}v\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \leftarrow$}}\limits}_{\mathcal{R}}^{*}\mathrel{u^{\prime}}\) for some \(t^{\prime}\), \(u^{\prime}\), and \(v\). As \((\alpha,\beta)\neq(\bot,\bot)\), we have \(\alpha\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}s\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}p\mathrel{t^{\prime}}\) or \(\beta\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}s\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}p\mathrel{t^{\prime}}\), from which \(\alpha>t^{\prime}\) or \(\beta>t^{\prime}\) follows. Thus, \(t^{\prime}\in\gamma\,\alpha\beta\). If \(\mathcal{R}_{2}=\mathcal{R}\setminus\mathcal{C}\) then \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftrightarrow$}} \limits}_{\vdash^{\prime}}t\). Otherwise, \(\mathcal{R}_{2}=\mathcal{C}\) yields \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \bot}t\). So in either case \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \gamma\,\alpha\beta}t\) holds. Next, we show \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftrightarrow$}} \limits}_{\gamma\,\alpha\beta}^{*}v\). Consider terms \(w\) and \(w^{\prime}\) with \(t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}^{*}w\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}w^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}^{*}v\). We have \(w\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \mathcal{R}}w^{\prime}\) or \(w\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \bot}w^{\prime}\). So \(w\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \gamma\,\alpha\beta}w^{\prime}\) follows by \(\{t^{\prime},\bot\}\subseteq\gamma\,\alpha\beta\). Summing up, we obtain \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}_{ \gamma\,\alpha\beta}t^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \leftrightarrow$}}\limits}_{\alpha\beta}^{*}v_{\alpha\beta}\). In a similar way \(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}_{ \gamma\,\alpha\beta}u^{\prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \leftrightarrow$}}\limits}_{\gamma\,\alpha\beta}^{*}v\) is obtained. 
Therefore \(t\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}_{\gamma\,\alpha\beta}t^{ \prime}\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\rightarrow$}}\limits}_{ \gamma\,\alpha\beta}^{*}v\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\hss$ \leftarrow$}}\limits}_{\gamma\,\alpha\beta}^{*}u^{\prime}\mathrel{ \mathop{\kern 0.0pt\hbox to 0.0pt{\hss$\leftarrow$}}\limits}_{\gamma\,\alpha\beta}^{*}u\), and hence \(\Gamma\) is decreasing.
We claim that Theorem 7.2 is subsumed by Theorem 7.6. Suppose that \(\mathcal{C}\) is the empty TRS. Trivially \(\mathcal{C}\) is confluent. Because \(\mathsf{PCPS}(\mathcal{R},\mathcal{C})\) is a subset of \(\mathsf{PCPS}(\mathcal{R})\), termination of \(\mathsf{PCPS}(\mathcal{R},\mathcal{C})/\mathcal{R}\) follows from that of \(\mathsf{PCPS}(\mathcal{R})/\mathcal{R}\), which is equivalent to termination of \(\mathsf{CPS}(\mathcal{R})/\mathcal{R}\). Finally, \(\mathrel{{}_{\mathcal{R}}{\twoheadleftarrow}\rtimes\rightarrow_{\mathcal{R}}}\subseteq\to_{\mathcal{R}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{R}}{\leftarrow}}\) holds because \(\mathcal{R}\) is confluent under the assumptions of Theorem 7.2. Hence all conditions of Theorem 7.6 are satisfied.
## 8. Reduction Method
We present a _reduction method_ for confluence analysis. The method shrinks a rewrite system \(\mathcal{R}\) to a subsystem \(\mathcal{C}\) such that \(\mathcal{R}\) is confluent iff \(\mathcal{C}\) is confluent. Because compositional confluence criteria address the 'if' direction, the question here is how to guarantee the reverse direction. In this section we develop a simple criterion, which exploits the fact that confluence is preserved under signature extensions. The resulting reduction method can easily be automated by using SAT solvers.
We will show that if TRSs \(\mathcal{R}\) and \(\mathcal{C}\) satisfy \(\mathcal{R}\!\restriction_{\mathcal{C}}\subseteq\to_{\mathcal{C}}^{*}\) then confluence of \(\mathcal{R}\) implies confluence of \(\mathcal{C}\). Here \(\mathcal{R}\!\restriction_{\mathcal{C}}\) stands for the following subsystem of \(\mathcal{R}\):
\[\mathcal{R}\!\restriction_{\mathcal{C}}=\{\ell\to r\in\mathcal{R}\mid\mathcal{Fun}(\ell)\subseteq\mathcal{Fun}(\mathcal{C})\}\]
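For a quick illustration of this restriction (a small ad-hoc example, not one of the numbered examples of this paper), take \(\mathcal{R}=\{\mathsf{f}(x)\to\mathsf{g}(x),\;\mathsf{g}(x)\to x,\;\mathsf{h}(x)\to\mathsf{f}(x)\}\) and \(\mathcal{C}=\{\mathsf{g}(x)\to x\}\). Here \(\mathcal{Fun}(\mathcal{C})=\{\mathsf{g}\}\), so \(\mathsf{g}(x)\to x\) is the only rule whose left-hand side uses symbols of \(\mathcal{C}\) alone, and therefore \(\mathcal{R}\!\restriction_{\mathcal{C}}=\{\mathsf{g}(x)\to x\}\); the inclusion \(\mathcal{R}\!\restriction_{\mathcal{C}}\subseteq\to_{\mathcal{C}}^{*}\) holds because \(\mathsf{g}(x)\to_{\mathcal{C}}x\).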
The following auxiliary lemma explains the role of the condition \(\mathcal{R}\!\restriction_{\mathcal{C}}\subseteq\to_{\mathcal{C}}^{*}\).
**Lemma 8.1**.: _Suppose \(\mathcal{R}\!\restriction_{\mathcal{C}}\subseteq\to_{\mathcal{C}}^{*}\)._
1. _If_ \(s\to_{\mathcal{R}}t\) _and_ \(\mathcal{Fun}(s)\subseteq\mathcal{Fun}(\mathcal{C})\) _then_ \(s\to_{\mathcal{C}}^{*}t\) _and_ \(\mathcal{Fun}(t)\subseteq\mathcal{Fun}(\mathcal{C})\)_._
2. _If_ \(s\to_{\mathcal{R}}^{*}t\) _and_ \(s\in\mathcal{T}(\mathcal{Fun}(\mathcal{C}),\mathcal{V})\) _then_ \(s\to_{\mathcal{C}}^{*}t\)_._
Proof.: We only show the first claim, because then the second claim is shown by straightforward induction. Suppose \(s\in\mathcal{T}(\mathcal{Fun}(\mathcal{C}),\mathcal{V})\) and \(s\to_{\mathcal{R}}t\). There exist a rule \(\ell\to r\in\mathcal{R}\), a position \(p\in\mathcal{Pos}_{\mathcal{F}}(s)\), and a substitution \(\sigma\) such that \(s|_{p}=\ell\sigma\) and \(t=s[r\sigma]_{p}\).
* If \(\ell\to r\in\mathcal{R}\!\restriction_{\mathcal{C}}\) then \(s\to_{\mathcal{C}}^{*}t\) and \(\mathcal{Fun}(r)\subseteq\mathcal{Fun}(\mathcal{C})\) by assumption. From the latter and \(\mathcal{Fun}(\ell\sigma)\subseteq\mathcal{Fun}(s)\subseteq\mathcal{Fun}( \mathcal{C})\) we obtain \(\mathcal{Fun}(t)=\mathcal{Fun}(s[r\sigma]_{p})\subseteq\mathcal{Fun}(\mathcal{ C})\).
* Otherwise, \(\mathcal{Fun}(\ell)\nsubseteq\mathcal{Fun}(\mathcal{C})\). However, we have \(\mathcal{Fun}(\ell)\subseteq\mathcal{Fun}(s)\subseteq\mathcal{Fun}(\mathcal{ C})\), so this case does not happen.
As a consequence of Lemma 8.1(2), confluence of \(\mathcal{R}\) carries over to confluence of \(\mathcal{C}\), when the inclusion \(\mathcal{R}\!\restriction_{\mathcal{C}}\subseteq\to_{\mathcal{C}}^{*}\) holds and the signature of \(\mathcal{C}\) is \(\mathcal{Fun}(\mathcal{C})\). The restriction against the signature of \(\mathcal{C}\) can be lifted by the fact that confluence is preserved under signature extensions:
**Proposition 8.2**.: _A TRS \(\mathcal{C}\) is confluent if and only if the implication_
\[t\mathrel{{}^{*}_{\mathcal{C}}\!\leftarrow}s\to_{\mathcal{C}}^{*}u\implies t\to_{\mathcal{C}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{C}}\!\leftarrow}u\]
_holds for all terms \(s,t,u\in\mathcal{T}(\mathcal{Fun}(\mathcal{C}),\mathcal{V})\)._
Proof.: Toyama [13] showed that the confluence property is _modular_, i.e., the union of two TRSs \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) over signatures \(\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) with \(\mathcal{F}_{1}\cap\mathcal{F}_{2}=\varnothing\) is confluent if and only if both \(\mathcal{R}_{1}\) and \(\mathcal{R}_{2}\) are confluent. Let \(\mathcal{C}\) be a TRS over a signature \(\mathcal{F}\). The claim follows by taking \(\mathcal{R}_{1}=\mathcal{C}\), \(\mathcal{R}_{2}=\varnothing\), \(\mathcal{F}_{1}=\mathcal{Fun}(\mathcal{C})\), and \(\mathcal{F}_{2}=\mathcal{F}\setminus\mathcal{F}_{1}\).
Now we are ready to show the main claim.
**Theorem 8.3**.: _Suppose \(\mathcal{R}\mathord{\restriction}_{\mathcal{C}}\subseteq\to^{*}_{\mathcal{C}}\). If \(\mathcal{R}\) is confluent then \(\mathcal{C}\) is confluent._
Proof.: Suppose that \(\mathcal{R}\) is confluent. It is enough to show the implication in Proposition 8.2 for all \(s,t,u\in\mathcal{T}(\mathcal{Fun}(\mathcal{C}),\mathcal{V})\). Suppose \(t\mathrel{{}^{*}_{\mathcal{C}}\!\leftarrow}s\to_{\mathcal{C}}^{*}u\). Since \(\mathcal{C}\) is a subsystem of \(\mathcal{R}\), we also have \(t\mathrel{{}^{*}_{\mathcal{R}}\!\leftarrow}s\to_{\mathcal{R}}^{*}u\), and confluence of \(\mathcal{R}\) yields a term \(v\) with \(t\to_{\mathcal{R}}^{*}v\mathrel{{}^{*}_{\mathcal{R}}\!\leftarrow}u\). As \(t,u\in\mathcal{T}(\mathcal{Fun}(\mathcal{C}),\mathcal{V})\), Lemma 8.1(2) gives \(t\to_{\mathcal{C}}^{*}v\) and \(u\to_{\mathcal{C}}^{*}v\). Hence \(\mathcal{C}\) is confluent.

In order to automate the resulting reduction method, we search for a subsystem \(\mathcal{C}\) of \(\mathcal{R}\) satisfying the two conditions
(i) \(\mathcal{C}_{0}\subseteq\mathcal{C}\subsetneq\mathcal{R}\) and (ii) \(\mathcal{R}\mathord{\restriction}_{\mathcal{C}}\subseteq\to_{\mathcal{C}}^{\leqslant k}\) for a designated number \(k\in\mathbb{N}\). This search problem can be reduced to a SAT problem. Let \(\mathsf{S}_{k}(\ell\to r)\) be the following set of subsystems:
\[\mathsf{S}_{k}(\ell\to r)=\{\{\beta_{1},\ldots,\beta_{n}\}\mid\ell\to_{\beta_{ 1}}\cdots\to_{\beta_{n}}r\text{ and }n\leqslant k\}\]
In our SAT encoding we use two kinds of propositional variables: \(x_{\ell\to r}\) and \(y_{f}\). The former represents \(\ell\to r\in\mathcal{C}\), and the latter represents \(f\in\mathcal{F}\mathsf{un}(\mathcal{C})\). With these variables the search problem for \(\mathcal{C}\) is encoded as follows:
\[\bigwedge_{\alpha\in\mathcal{C}_{0}}x_{\alpha}\ \wedge\ \bigvee_{\alpha\in \mathcal{R}}\neg x_{\alpha}\ \wedge\ \bigwedge_{\alpha\in\mathcal{R}}\biggl{(}\neg x_{\alpha}\vee\bigwedge_{f\in \mathcal{F}\mathsf{un}(\alpha)}y_{f}\biggr{)}\ \wedge\ \bigwedge_{\alpha\in\mathcal{R}\setminus \mathcal{C}_{0}}\biggl{(}\bigl{(}\bigvee_{\mathcal{S}\in\mathsf{S}_{k}(\alpha) }x_{\mathcal{S}}\bigr{)}\ \vee\ \bigl{(}\neg\bigwedge_{f\in\mathcal{F}\mathsf{un}(\ell)}y_{f}\bigr{)}\biggr{)}\]
Here \(x_{\mathcal{S}}=x_{\beta_{1}}\wedge\cdots\wedge x_{\beta_{n}}\) for \(\mathcal{S}=\{\beta_{1},\ldots,\beta_{n}\}\). It is easy to see that the first two clauses encode condition (i) and the third clause characterizes \(\mathcal{F}\mathsf{un}(\mathcal{C})\). The last clause encodes condition (ii).
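As a concrete illustration (a schematic sketch of ours, not the encoding implemented in the authors' tool), the constraints can be assembled with the Z3 Python API as follows; the dictionaries `funs_rule`, `funs_lhs`, and `s_k` are assumed to be precomputed, and the enumeration of rewrite sequences behind \(\mathsf{S}_{k}\) is omitted.

```python
from itertools import chain
from z3 import And, Bool, BoolVal, Not, Or, Solver, is_true, sat

def encode_subsystem_search(rules, c0, funs_rule, funs_lhs, s_k):
    """Propositional constraints for a subsystem C with C0 <= C < R and condition (ii).

    rules     : identifiers of the rules of R
    c0        : identifiers of the rules that must belong to C
    funs_rule : rule -> set of function symbols occurring in the rule
    funs_lhs  : rule -> set of function symbols occurring in its left-hand side
    s_k       : rule -> list of rule subsets S with lhs ->_S ... ->_S rhs in at most k steps
    """
    x = {a: Bool(f"x_{a}") for a in rules}
    y = {f: Bool(f"y_{f}") for f in set(chain.from_iterable(funs_rule.values()))}

    constraints = [x[a] for a in c0]                                   # C0 is contained in C
    constraints.append(Or([Not(x[a]) for a in rules]))                 # C is a proper subsystem of R
    for a in rules:                                                    # x_a implies Fun(a) <= Fun(C)
        constraints.append(Or(Not(x[a]), And([y[f] for f in funs_rule[a]])))
    for a in rules:                                                    # last clause: condition (ii)
        if a in c0:
            continue
        subsets = s_k.get(a, [])
        closable = Or([And([x[b] for b in S]) for S in subsets]) if subsets else BoolVal(False)
        constraints.append(Or(closable, Not(And([y[f] for f in funs_lhs[a]]))))
    return x, constraints

def find_subsystem(rules, c0, funs_rule, funs_lhs, s_k):
    """Return the rules of a candidate subsystem C, or None if no candidate exists."""
    x, constraints = encode_subsystem_search(rules, c0, funs_rule, funs_lhs, s_k)
    solver = Solver()
    solver.add(constraints)
    if solver.check() == sat:
        model = solver.model()
        return [a for a in rules if is_true(model.evaluate(x[a], model_completion=True))]
    return None
```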
**Example 8.6** (Continued from Example 8.5).

## 9. Experiments

We evaluated the presented (compositional) confluence criteria on the 462 left-linear TRSs in the confluence problems database (COPS). The following criteria and combinations are compared:
* **OO**: Successive application of Theorem 5.4, as illustrated in Example 5.5.
* **RC**: Theorem 6.5, where confluence of a subsystem \(\mathcal{C}\) is shown by Theorem 7.6 with the empty subsystem.
* **CR**: Theorem 7.6, where confluence of a subsystem \(\mathcal{C}\) is shown by Theorem 6.5 with the empty subsystem.
* **rOO**, **rRC**, and **rCR**: They are the same as **OO**, **RC**, and **CR**, except that Corollary 8.4 is applied repeatedly before a (compositional) confluence criterion is used.
For the sake of comparison the results of the confluence tools ACP version 0.62 [1], CoLL-Saigawa version 1.6 [15], and CSI version 1.25 [16] are also included in the table, where CoLL-Saigawa is abbreviated to CoLL.
We briefly explain how these criteria are automated in our tool. Suitable subsystems for the compositional criteria are searched by enumeration. Relative termination, required by Theorems 7.2 and 7.6, is checked by employing the termination tool NaTT version 1.9 [17]. Joinability of each (parallel) critical pair \((t,u)\) is tested by the relation:
\[t\xrightarrow{\leqslant 5}\cdot\xleftarrow{\leqslant 5}u\]
For rule labeling, the decreasingness of each parallel critical peak \(t\xleftarrow[\phi,k]{P}s\xrightarrow[\psi,m]{\epsilon}u\) is checked by the existence of a conversion of the form
\[t\;\xrightarrow[\curlyvee k]{i_{1}}\cdot\xrightarrow[\psi,m]{i_{2}}\cdot\xrightarrow[\curlyvee km]{i_{3}}v\;\xleftarrow[\curlyvee km]{j_{3}}\cdot\xleftarrow[\phi,k]{P',\,j_{2}}\cdot\xleftarrow[\curlyvee m]{j_{1}}u\]
such that \(i_{1},i_{3},j_{1},j_{3}\in\mathbb{N}\), \(i_{2},j_{2}\in\{0,1\}\), \(i_{1}+i_{2}+i_{3}\leqslant 5\), \(j_{1}+j_{2}+j_{3}\leqslant 5\), and the inclusion \(\mathcal{V}\mathsf{ar}(v,P^{\prime})\subseteq\mathcal{V}\mathsf{ar}(s,P)\) holds. This is encoded into linear arithmetic constraints [12], and they are solved by the SMT solver Z3 version 4.8.11 [12]. Finally, automation of the reduction method (Corollary 8.4) is done by SAT solving as presented in Section 8. The SMT solver Z3 is used for solving SAT problems for the method.
As theoretically expected, in the experiments **O**, **R**, and **C** are subsumed by their compositional versions **OO**, **RC**, and **CR**, respectively. Moreover, **OO** is subsumed by **R**, **RC**, and **CR**. Due to timeouts, **CR** misses three systems of which **R** can prove confluence. While the union of **R** and **C** amounts to 145, the union of **RC** and **CR** amounts to 153. Differences between **RC** and **CR** are summarized as follows:
* Three systems are proved by **RC** but neither by **CR** nor by **R**.4 One of them is COPS number 994, for which **RC** uses the subsystem \(\{2,4,6\}\) whose confluence is shown by **C**.
Footnote 4: The three systems are COPS numbers 994, 1001, and 1029. The aforementioned confluence tools also fail to prove confluence of these systems.
| | **O** | **R** | **C** | **OO** | **RC** | **CR** | **rOO** | **rRC** | **rCR** | **ACP** | **CoLL** | **CSI** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| # of proved TRSs | 20 | 135 | 59 | 88 | 152 | 143 | 91 | 153 | 146 | 198 | 177 | 214 |
| timeouts | 0 | 20 | 10 | 13 | 86 | 32 | 10 | 80 | 36 | 48 | 169 | 3 |

Table 1. Experimental results on 462 left-linear TRSs.
* The only TRS where \(\mathsf{CR}\) is advantageous to \(\mathsf{RC}\) is COPS number 132: \[1\colon\ -(x+y)\to(-x)+(-y)\qquad\qquad 3\colon\ -(-x)\to x\] \[2\colon\ (x+y)+z\to x+(y+z)\qquad\qquad 4\colon\ x+y\to y+x\] Its confluence is shown by the composition of Theorem 7.6 and Theorem 6.2, the latter of which proves the subsystem \(\{1,2,4\}\) confluent.
The columns \(\mathsf{rOO}\), \(\mathsf{rRC}\), and \(\mathsf{rCR}\) in Table 1 show that the use of the reduction method (Corollary 8.4) basically improves the power and efficiency of the underlying compositional confluence criteria. We note a few observations:
* The confluence proving powers of \(\mathsf{rOO}\) and \(\mathsf{OO}\) are theoretically equivalent, because the reduction method as a compositional confluence criterion is an instance of \(\mathsf{OO}\). In the experiments \(\mathsf{rOO}\) handled three more systems. This is due to the improvement of efficiency. The same argument holds for the relation between \(\mathsf{rRC}\) and \(\mathsf{RC}\).
* While the use of the reduction method improves the efficiency in most cases, there are a few exceptions (e.g. COPS number 689). The bottleneck is the reachability test by \(\to_{\mathcal{C}}^{\leqslant k}\).
* The reduction method and \(\mathsf{C}\) are incomparable with each other. Hence \(\mathsf{rCR}\) is more powerful than \(\mathsf{CR}\). In the experiments, \(\mathsf{rCR}\) subsumes \(\mathsf{CR}\) and it includes three more systems. As a drawback, \(\mathsf{rCR}\) has four more timeouts.
* Among \(\mathsf{rOO}\), \(\mathsf{rRC}\), and \(\mathsf{rCR}\), the second criterion is the most powerful. As in the cases of their underlying criteria, the results of \(\mathsf{rOO}\) are subsumed by both \(\mathsf{rRC}\) and \(\mathsf{rCR}\), and COPS number 132 is the only problem where \(\mathsf{rCR}\) outperforms \(\mathsf{rRC}\).
## 10. Conclusion
We studied how compositional confluence criteria can be derived from confluence criteria based on the decreasing diagrams technique, and showed that Toyama's almost parallel closedness theorem is subsumed by his earlier theorem based on parallel critical pairs. We conclude the paper by mentioning related work and future work.
Simultaneous critical pairs. van Oostrom [11] showed the almost development closedness theorem: A left-linear TRS is confluent if the inclusions
\[\leftarrow\rtimes\rightarrow^{\epsilon}\;\subseteq\;{\circ\!\!\to}\cdot\stackrel{*}{\leftarrow}\qquad\text{and}\qquad\leftarrow\rtimes\rightarrow^{>\epsilon}\;\subseteq\;{\circ\!\!\to}\]
hold, where \({\circ\!\!\to}\) stands for the multi-step [13, Section 4.7.2]. Okui [15] showed the simultaneous closedness theorem: A left-linear TRS is confluent if the inclusion
\[\leftarrow\!\ltimes\!\rtimes\!\rightarrow\;\subseteq\;\stackrel{*}{\rightarrow}\cdot\,{\leftarrow\!\!\circ}\]
holds, where \(\leftarrow\!\ltimes\!\rtimes\!\rightarrow\) stands for the set of simultaneous critical pairs [15]. As this inclusion characterizes the inclusion \(\leftarrow\cdot\rightarrow\;\subseteq\;\stackrel{*}{\rightarrow}\cdot\,{\leftarrow\!\!\circ}\), simultaneous closedness subsumes almost development closedness. The main result in Section 3 is considered as a counterpart of this relationship in the setting of parallel critical pairs.
Critical-pair-closing systems. A TRS \(\mathcal{C}\) is called _critical-pair-closing_ for a TRS \(\mathcal{R}\) if
\[{}_{\mathcal{R}}\!\leftarrow\rtimes\xrightarrow{\epsilon}_{\mathcal{R}}\;\subseteq\;\to_{\mathcal{C}}^{*}\cdot\mathrel{{}^{*}_{\mathcal{C}}\!\leftarrow}\]
## Acknowledgment
We are grateful to Jean-Pierre Jouannaud, Vincent van Oostrom, and Yoshihito Toyama for their valuable comments on preliminary results of this work. We are also grateful to Rene Thiemann for spotting and correcting a mistake in the proof of Theorem 6.5 in the preliminary version of this paper [1]. We thank the reviewers of our FSCD submission [1] for their thorough reading and suggestions, which helped us to improve the presentation.
|
2307.16142 | Implicit Neural Representation in Medical Imaging: A Comparative Survey | Implicit neural representations (INRs) have gained prominence as a powerful
paradigm in scene reconstruction and computer graphics, demonstrating
remarkable results. By utilizing neural networks to parameterize data through
implicit continuous functions, INRs offer several benefits. Recognizing the
potential of INRs beyond these domains, this survey aims to provide a
comprehensive overview of INR models in the field of medical imaging. In
medical settings, numerous challenging and ill-posed problems exist, making
INRs an attractive solution. The survey explores the application of INRs in
various medical imaging tasks, such as image reconstruction, segmentation,
registration, novel view synthesis, and compression. It discusses the
advantages and limitations of INRs, highlighting their resolution-agnostic
nature, memory efficiency, ability to avoid locality biases, and
differentiability, enabling adaptation to different tasks. Furthermore, the
survey addresses the challenges and considerations specific to medical imaging
data, such as data availability, computational complexity, and dynamic clinical
scene analysis. It also identifies future research directions and
opportunities, including integration with multi-modal imaging, real-time and
interactive systems, and domain adaptation for clinical decision support. To
facilitate further exploration and implementation of INRs in medical image
analysis, we have provided a compilation of cited studies along with their
available open-source implementations on
https://github.com/mindflow-institue/Awesome-Implicit-Neural-Representations-in-Medical-imaging.
Finally, we aim to consistently incorporate the most recent and relevant papers
regularly. | Amirali Molaei, Amirhossein Aminimehr, Armin Tavakoli, Amirhossein Kazerouni, Bobby Azad, Reza Azad, Dorit Merhof | 2023-07-30T06:39:25Z | http://arxiv.org/abs/2307.16142v1 | # Implicit Neural Representation in Medical Imaging: A Comparative Survey
###### Abstract
Implicit neural representations (INRs) have gained prominence as a powerful paradigm in scene reconstruction and computer graphics, demonstrating remarkable results. By utilizing neural networks to parameterize data through implicit continuous functions, INRs offer several benefits. Recognizing the potential of INRs beyond these domains, this survey aims to provide a comprehensive overview of INR models in the field of medical imaging. In medical settings, numerous challenging and ill-posed problems exist, making INRs an attractive solution. The survey explores the application of INRs in various medical imaging tasks, such as image reconstruction, segmentation, registration, novel view synthesis, and compression. It discusses the advantages and limitations of INRs, highlighting their resolution-agnostic nature, memory efficiency, ability to avoid locality biases, and differentiability, enabling adaptation to different tasks. Furthermore, the survey addresses the challenges and considerations specific to medical imaging data, such as data availability, computational complexity, and dynamic clinical scene analysis. It also identifies future research directions and opportunities, including integration with multimodal imaging, real-time and interactive systems, and domain adaptation for clinical decision support. To facilitate further exploration and implementation of INRs in medical image analysis, we have provided a compilation of cited studies along with their available open-source implementations on GitHub. Finally, we aim to consistently incorporate the most recent and relevant papers regularly.
## 1 Introduction
Knowledge representation is one of the fundamental pillars of artificial intelligence (AI). Its importance stems from its significant impact on the success of machines in learning many AI-related tasks, such as classification and decision-making. Humans construct task-specific representations to facilitate their interaction with the world around them [36]. Consequently, a substantial amount of AI research involves proposing algorithms and models that mimic this human cognition process to solve machine learning tasks with a degree of performance that equals, or even surpasses, human capabilities [49, 7, 8, 4].
Our visual world can be represented continuously, which is a fundamental principle in fields such as computer vision. Data obtained through observation and sensing manifests in various forms, including images and audio. Conventional approaches to encoding input signals as representations typically follow an explicit paradigm, where the input space is discretized or partitioned into separate elements (_e.g_., point clouds, voxel grids, and meshes). However, in recent years, an alternative approach to representation, known as implicit representations, has gained popularity due to its efficient memory usage [23]. Unlike explicit (or discrete) representations that directly encode the features or signal values, implicit representations are defined as a generator function that maps input coordinates to their corresponding value within the input space.
In computer vision, the quality of representing an image signal holds central importance. Deep neural networks have emerged as the de facto tools for complex tasks across various AI domains, particularly in computer vision, owing to their remarkable representation learning ability [27, 5]. As a result, there has been an exploration of leveraging their capacity to function as implicit functions, yielding promising results [50, 63]. Within this context, a Multi-Layer Perceptron (MLP) is trained to parameterize the signal of interest, such as an image or shape, utilizing coordinates as input. The objective is to predict the corresponding data values at those coordinates. Thus, the MLP serves as an **Implicit Neural Representation** function that encodes the signal's representation within its weights. For instance, in the case of image signals, feeding the MLP with pixel coordinates leads to the generation of its RGB value as the output.
These implicit neural functions find extensive application in tasks such as image generation, super-resolution, 3D object shape reconstruction, and modeling complex signals [33, 12, 34, 62]. The utilization of MLPs for image and
shape parameterization offers several advantages. Firstly, they are resolution-agnostic as they operate in the continuous domain. This characteristic enables them to generate values for coordinates in-between pixel or voxel-wise grids, thereby facilitating vision tasks. Secondly, the memory requirements for representing the signal are not limited by its resolution. Consequently, MLP models exhibit enhanced memory efficiency while remaining effective, especially when compared to the grid or voxel representations. The memory demand scales according to the complexity of the signal itself. Thirdly, MLPs assist INRs in mitigating potential locality biases that could hinder performance, unlike convolutional neural networks (CNNs). Finally, MLP models are differentiable, endowing them with significant adaptability across a wide range of applications. Their weights can be adjusted using gradient-based techniques, allowing for versatility in handling different tasks [15, 37, 50, 30, 63, 55, 42, 72].
To address the challenges posed by the high cost of labeled data and limited memory resources, INRs have garnered increasing attention from the medical community, as evident from the exponential growth in research papers dedicated to this domain (Figure 1). This surge of interest in INR within the medical imaging community has resulted in a multitude of applications across diverse medical imaging scenarios. In recent years, their predominant usage has been in enhancing resolution and synthesizing missing information. They offer a solution to alleviate the burden of collecting labeled data in medical domains by eliminating the need for training data and explicit labels. Instead, they leverage available measurements or signals without the requirement of labeled data for each instance, enabling the reconstruction of 3D anatomical structures or generation of 2D scans [54, 40, 48]. For instance, in MRI, super-resolution techniques can enhance the spatial resolution of images, providing clearer visualization of structural features and diagnostic information [31, 61, 62]. Moreover, they can be employed in synthesis and inverse problems, such as reconstructing CT and MRI data from projection and frequency domains while reducing radiation exposure [54, 69, 40, 48, 64]. Additionally, INRs find utility in neural rendering to model complex relationships in scenes, enabling detailed visualizations of anatomical structures or aiding in robotic surgery by reconstructing deformable surgical scenes [16, 58].
To present a comprehensive review of these emerging architectures, this paper provides an overview of their core principles and diverse applications, as well as their advantages and limitations. To the best of our knowledge, this is the first survey paper that covers the application of INR in medical imaging and sheds light on new directions and research opportunities, serving as a roadmap and systematic guide for researchers. We also aspire to generate increased interest within the vision community to delve into the exploration of implicit neural representations in the medical domain. Our key contributions are as follows:
\(\bullet\) We conduct a systematic and comprehensive review of the applications of INR in the field of medical imaging. We analyze and compare state-of-the-art (SOTA) approaches for each specific task.
\(\bullet\) We provide a detailed and organized taxonomy, as illustrated in Figure 4, which allows for a structured understanding of the research progress and limitations in different areas.
\(\bullet\) Additionally, we discuss the challenges and open issues associated with INR in medical imaging. We identify new trends, raise important questions, and propose future directions for further exploration.
_Search Strategy_. To conduct a comprehensive search, we utilized DBLP, Google Scholar, and Arxiv Sanity Preserver, employing customized search queries tailored to retrieve relevant scholarly publications. Our search queries consisted of keywords such as (INR | implicit neural representation | medical | Task), (INR | medical | Neural rendering), and
Figure 1: Chart (a) visually displays the relative proportions of published papers according to their application, and chart (b) illustrates the count of published papers based on INR design in medical tasks during different time periods (with ”H” indicating the first or second half of the year). The assessment of the statistics is based on a sample of 64 research papers published during the years 2021 to 2023.
(INR | medical | NeRF). Here, **Task** refers to one of the applications covered (Figure 4). To ensure the selection of relevant papers, we conducted a meticulous evaluation based on factors such as novelty, contribution, and significance. Priority was given to papers that were pioneering in the field of medical imaging. Subsequently, we selected papers with the highest rankings for further examination.
## 2 Background
Implicitly representing signals with neural networks has gathered pace in recent years. Instead of parametrizing signals with discrete representations such as grids, voxels, point clouds, and meshes, a simple MLP can be learned to continuously represent the signal of interest as an implicit function \(\Psi:\mathrm{x}\rightarrow\Psi(\mathrm{x})\), mapping their spatial coordinates \(\mathrm{x}\in\mathbb{R}^{M}\) from \(M\)-dimensional space to their corresponding \(N\)-dimensional value \(\Psi(\mathrm{x})\in\mathbb{R}^{N}\) (_e.g_., occupancy, color, etc.). While INRs have shown promise, they can fail to encode high-frequency details compared to discrete representations, leading to a suppressed representation quality. Rahaman _et al_. [39] have made significant strides in uncovering limitations within conventional ReLU-based MLPs and their ability to represent fine details in underlying signals accurately. These MLPs have shown a propensity to learn low-frequency details, leading to a phenomenon known as spectral bias in piece-wise linear networks. In order to address this issue, several approaches have been explored to redirect the network's focus toward capturing high-frequency details and effectively representing the signal with finer-grained details. To enhance the representation of the input signal, three avenues can be pursued within an MLP framework based on its structure. Firstly, one can consider changing the _input_ type by mapping it to a higher-dimensional space to enable the network to capture more intricate details within the signal. Secondly, another approach involves replacing the ReLU activation function with a new _activation function_ that better facilitates the learning of high-frequency components. Lastly, one can explore altering the _output_ of the MLP to a higher-dimensional space, where each node is responsible for reconstructing a specific part of the signal. In this section, we will provide a background based on the modifications that can be made to mitigate the spectral bias issue. Additionally, we will cover a neural volume rendering model called NeRF [34] as a pioneering approach to bridge implicit representations and novel view synthesis. Figure 2 illustrates the overview of our proposed background.
### Input
The conventional approach in INR treats the spatial coordinate of each element in the signal, such as pixels in an image, as the input to an MLP. However, this approach tends to learn low-frequency functions, limiting its ability to effectively represent complex signals. To address this limitation, recent progress suggests using a sinusoidal mapping of the Cartesian coordinates to a higher dimensional space, which enables the learning of high-frequency details more effectively [55]:
1. **Basic:**\(\gamma(\mathbf{v})=[\cos(2\pi\mathbf{v}),\sin(2\pi\mathbf{v})]^{T}\).
2. **PE:**\(\gamma(\mathbf{v})=[\ldots,\cos(2\pi\sigma^{j/m}\mathbf{v}),\sin(2\pi\sigma^{j/m}\mathbf{v}),\ldots]^{T}\) for \(j=0,\ldots,m-1\). **PE** denotes Positional Encoding, and the scale \(\sigma\) is determined for individual tasks and datasets through a process of hyperparameter sweep.
3. **Gaussian:**\(\gamma(\mathbf{v})=[\cos(2\pi\mathbf{B}\mathbf{v}),\sin(2\pi\mathbf{B}\mathbf{v})]^{T}\), where the variable \(\mathbf{v}\) represents the signal coordinates, while \(\mathbf{B}\) is a random Gaussian matrix, where each entry is independently sampled from a normal distribution \(\mathcal{N}(0,\sigma^{2})\). Similarly, the scale \(\sigma\) is selected through a hyperparameter sweep for each task and dataset.
These encoding processes are known as Fourier feature mapping.
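To make the Gaussian variant concrete, the following NumPy sketch (our own minimal illustration; the function name and the choices \(\sigma=10\) and 256 features are placeholders for the task-specific hyperparameter sweep mentioned above) maps a grid of 2D pixel coordinates to Fourier features that can then be fed to a ReLU MLP.

```python
import numpy as np

def gaussian_fourier_features(coords, num_features=256, sigma=10.0, seed=0):
    """Map (N, d) coordinates to (N, 2 * num_features) features [cos(2*pi*Bv), sin(2*pi*Bv)]."""
    rng = np.random.default_rng(seed)
    # Each entry of B is sampled i.i.d. from N(0, sigma^2); B is fixed once and reused.
    B = rng.normal(0.0, sigma, size=(num_features, coords.shape[-1]))
    proj = 2.0 * np.pi * coords @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Example: encode the normalized pixel grid of a 64x64 image.
ys, xs = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
pixel_coords = np.stack([ys, xs], axis=-1).reshape(-1, 2)   # (4096, 2)
features = gaussian_fourier_features(pixel_coords)          # (4096, 512)
```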
### Activation Function
In general, the intuition behind activation functions is to apply non-linearity to the neural network. As for implicit
Figure 2: The figure illustrates various modifications to alleviate the spectral bias problem in INRs, provides an overview of their underlying principles, and introduces NeRF as an additional background method, as discussed in section 2.
representations, nonlinearities can be either periodic or non-periodic. However, non-periodic functions, such as \(ReLU\) or \(tanh\), are not conducive to the effective learning of high-frequency signals. To handle this issue, Sinusoidal Representation Networks (SIRENs) [50] utilize \(sine\) as the activation function of the MLP to parametrize complex data:
\[\begin{split}\mathbf{\Psi}(\mathbf{x})=\mathbf{W}_{n}(\psi_{n-1} \circ\psi_{n-2}\circ\ldots\psi_{0})(\mathbf{x})+\mathbf{b_{n}},\\ \mathbf{x_{i}}\mapsto\psi_{i}(\mathbf{x_{i}})=\sin(\mathbf{W_{i }x_{i}}+\mathbf{b_{i}}),\end{split} \tag{1}\]
where \(\psi_{i}\) indicates the \(i^{th}\) layer of the neural network, \(\mathbf{x}\) is the signal of interest, and \(\mathbf{W_{i}}\) and \(\mathbf{b_{i}}\) represent the weight matrix and bias, respectively. The use of sine as the activation boils down to its derivative being a shifted sine (cosine), which enables the network to efficiently parametrize higher-order derivatives, such as the image Laplacian or Helmholtz equation. Furthermore, the sine function helps to effectively represent signals containing high-frequency details. SIREN authors suggest a unique initialization technique to prevent vanishing gradients in traditional activation functions. They initialize weights of each layer as \(W\sim U(-\frac{c}{\sqrt{n}},\frac{c}{\sqrt{n}})\), where \(W\) is the weight of the layers, \(U(.)\) is a uniform distribution, \(c\) denotes a constant that controls the range of the weight values and \(n\) is the number of input neurons.
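The sine layer and its initialization can be written compactly in PyTorch; the sketch below is a minimal illustration (assuming the commonly used frequency factor \(\omega_{0}=30\) and the bound \(c=\sqrt{6}\) for hidden layers), not the reference implementation of [50].

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: x -> sin(omega_0 * (W x + b))."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features                      # first layer: U(-1/n, 1/n)
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0   # hidden layers: U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0)
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A small SIREN mapping 2D pixel coordinates to grayscale intensities.
siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    SineLayer(256, 256),
    nn.Linear(256, 1),
)
```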
### Output
Target signals such as images and audio typically exhibit local structure and dependencies among neighboring elements, which can be effectively utilized to enhance ReLU networks during training. Aftab _et al_. [1] introduce a multi-head network architecture where the main body learns global features of the signal, while the output layer consists of multiple heads. These heads reconstruct separate parts of the signal and learn its local features. For instance, in the case of images, they divide the image into equal grid cells. Each cell is then processed by an MLP in the main body to capture global features, and the output sparse nodes reconstruct the details of each cell individually. This approach aims to reduce the network's bias towards low-frequency components by exploiting the inherent properties of the target signal. Therefore, changing the output to a higher dimensional space can effectively alleviate the issue of spectral bias.
### NeRF
Neural Radiance Fields (NeRFs) [34] unites INRs with volume rendering by using fully connected MLPs to implicitly represent scenes and objects with the goal of novel view synthesis. The objective of novel view synthesis is to develop a system that can generate novel viewpoints of an object from any direction by observing a few images of that particular object. The process is defined as:
\[F_{\theta}(\mathrm{x},\mathrm{d})\longrightarrow(\mathrm{c},\sigma), \tag{2}\]
where \(\mathrm{x}\) indicates the 3D location \((x,y,z)\), \(\mathrm{d}\) represents the 2D vector of viewing direction \((\theta,\phi)\), \(\mathrm{c}\) denotes the color values \((r,g,b)\), and \(\sigma\) is the volume intensity. The primary idea is to overfit an implicit function on the training data such that, given \((x,y,z)\) as the spatial coordinates and \((\theta,\phi)\) as the viewing direction, the network outputs the color and volume density of the particular location. Unlike SIREN [50], the NeRF architecture is equipped with ReLU as its activation function, but similar to Fourier features [55], adopts a positional encoding approach to map coordinates to higher dimensions as follows:
\[\begin{split}\gamma(p)=(sin(2^{0}\pi p),cos(2^{0}\pi p),\ldots,\\ sin(2^{L-1}\pi p),cos(2^{L-1}\pi p)),\end{split} \tag{3}\]
where \(p\) can be each of the coordinate or viewing direction components. NeRF's architecture is designed in a two-stage matter to obtain density and color values as follows:
\[\mathrm{First\;Stage:}\sigma,\mathrm{h}=MLP(\mathrm{x}) \tag{4}\] \[\mathrm{Second\;Stage:}\mathrm{c}=MLP(concat[\mathrm{h},\mathrm{d }]) \tag{5}\]
where in the first stage the 3D coordinate \(\mathrm{x}\) is passed through the MLP to obtain the density \(\sigma\) and a feature representation \(\mathrm{h}\in\mathbb{R}^{N}\) (\(N\) equals 256 in original implementation), and in the second stage, \(\mathrm{h}\) is input to the MLP to output the color values \(\mathrm{c}\) (the original implementation [34] adopts identical MLPs to accomplish this). Finally, volume rendering [26] is utilized to generate novel views by tracing camera rays through every pixel of the target synthesized image.
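The following PyTorch sketch ties Eqs. (3)-(5) together in a deliberately simplified form (our illustration: the layer counts and hidden sizes are reduced, the viewing direction is assumed to be a 3D unit vector, and the skip connection of the original architecture is omitted).

```python
import math
import torch
import torch.nn as nn

def positional_encoding(p, L):
    """Eq. (3): concatenate sin(2^l * pi * p) and cos(2^l * pi * p) for l = 0, ..., L-1."""
    feats = []
    for l in range(L):
        feats.append(torch.sin((2.0 ** l) * math.pi * p))
        feats.append(torch.cos((2.0 ** l) * math.pi * p))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Two-stage field (x, d) -> (sigma, rgb), cf. Eqs. (4) and (5)."""

    def __init__(self, L_x=10, L_d=4, hidden=256):
        super().__init__()
        self.L_x, self.L_d = L_x, L_d
        self.stage1 = nn.Sequential(
            nn.Linear(3 * 2 * L_x, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)
        self.stage2 = nn.Sequential(
            nn.Linear(hidden + 3 * 2 * L_d, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3),
        )

    def forward(self, x, d):
        h = self.stage1(positional_encoding(x, self.L_x))   # first stage: density and features
        sigma = torch.relu(self.sigma_head(h))
        rgb = torch.sigmoid(self.stage2(torch.cat([h, positional_encoding(d, self.L_d)], dim=-1)))
        return sigma, rgb
```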
## 3 Clinical Importance
Thanks to the memory and data efficiency of INRs, they are widely utilized in numerous medical imaging tasks. One of the most significant challenges in automated medical imaging is the collection of ground truth annotated data from reliable sources, such as clinicians and medical professionals [24, 3, 27]. This process is painstaking, expensive, time-consuming, and requires significant effort.
Figure 3: Comparison of the aligned Pedunculopontine Nucleus (PPN) region (blue) and manually segmented PPN region (lavender) by radiologists in a Parkinson’s disease patient [29]. The brain planes (axial, sagittal, and coronal) show the results of affine and Diffeomorphic (DARTEL) registration, aligning the myelin staining histological atlas and the enhanced atlas generated through INR-based super-resolution. Expert radiologists manually segmented the last two columns to validate the INR-based method.
Unlike simple scenes that can be easily recognized and labeled (_e.g_., categorizing an indoor scene), annotating medical images should be performed by medical professionals and clinicians. This reliance on experts, coupled with privacy concerns and the need for patient authorization, creates a major bottleneck in the annotation process for medical imaging. INRs offer significant advantages in various applications without the need for external training annotations [54, 38, 40, 48, 60]. For example, in the field of medical imaging, INRs are particularly beneficial for tasks like super-resolution. During medical imaging procedures such as CT scans, PET scans, MRI, and ultrasound, patient movement can cause motion artifacts and result in blurred images or poorly defined structures, especially in upper abdominal regions such as the chest, which are negatively affected by patient motion. Additionally, in Cone Beam Computed Tomography (CBCT), which is commonly used in dental and maxillofacial imaging, slow imaging speed combined with patient movement can result in motion artifacts and lead to poorly defined structure boundaries [51, 18, 72]. Furthermore, obtaining high-quality MRI scans poses a challenge due to longer scan times [61, 62]. Conventional approaches are not suitable for effectively handling these issues through super-resolution or image reconstruction as they are not resolution-agnostic and require notable amounts of data. However, INRs can address super-resolution more effectively by considering inputs from the continuous coordinate domain and being resolution-agnostic.
Implicit neural models are also widely used in biomedical applications, particularly in solving inverse imaging problems [69, 40, 48, 48]. These problems involve learning the structure of an object (organ of interest in medical cases) from observations or measurements. Using INRs, it becomes possible to reconstruct CT or MRI scans directly from the sensor domain. Moreover, they can even facilitate the tracking of tissue progression by incorporating prior scans from earlier time steps, subsequently reconstructing the updated scan for the current time.
In practical applications, the reconstruction of images from sparsely sampled data plays a crucial role. This need arises in various domains, including medical imaging, where it has proven particularly valuable in specific applications such as reducing radiation dose in CT imaging and accelerating MRI scans [54, 40, 48]. Notably, Single Image Super-Resolution (SISR) techniques have attracted considerable attention due to their potential to restore a high-resolution (HR) image solely based on a low-resolution (LR) input [62, 31]. The ability of SISR methods to enhance image details and fidelity has significant implications for improving diagnostic accuracy and aiding medical professionals in their decision-making processes. This cannot be accomplished using convolution-based techniques as they are trained exclusively for particular up-scaling tasks. Additionally, the time-consuming process of retraining them for a new up-scaling task hinders their usefulness in clinical applications [61].
INRs have also found application in assisting robotic surgery [46, 45, 58, 71, 2]. The integration of INRs within robotic surgical systems allows for enhanced perception and understanding of the surgical environment. By leveraging INRs, robotic surgical systems can better interpret intraoperative images, providing real-time feedback and guidance to the surgeon. This can assist in accurate tissue segmentation, localization of critical anatomical structures, and precise surgical tool manipulation.
To underscore the practical applicability of INRs in addressing real-world challenges, it is crucial to highlight the validation of these models through human expertise. By corroborating the findings of INRs with the insights of medical professionals, we can establish the reliability and effectiveness of these novel diagnostic tools. This notion is exemplified by a compelling study that utilized the expertise of radiologists to validate the outcomes. In the depicted case study (Figure 3), the aligned Pedunculopontine Nucleus (PPN) and the manually segmented PPN region by two radiologists are compared for a patient with Parkinson's disease [29]. The study focuses on enhancing the visibility and localization of the PPN, a deep brain structure crucial for Parkinson's disease treatment, through a combination of a Quantitative Susceptibility Mapping Atlas (QSM) and an INR network. The INR-based process significantly improves the spatial resolution of the atlas, effectively overcoming limitations and minimizing artifacts and blurring effects. As a result, it allows for better delineation of the specific region (PPN) on the atlas, indicating the usability and effectiveness of this approach in clinical settings.
In conclusion, INR models offer significant advantages in medical imaging tasks, addressing challenges such as the lack of annotated data and artifacts in scans. We believe that researchers can involve clinical experts to validate their methods' practicality, thereby better demonstrating the usefulness of INRs. Overall, INRs have emerged as a valuable and adaptable tool in clinical settings, successfully tackling a diverse range of imaging challenges. Their widespread use is anticipated to continue growing in the future, offering new possibilities for medical imaging research.
## 4 Taxonomy
In this section, we provide a taxonomy with a focus on the application of INRs in several medical imaging tasks to acquaint researchers with the notable operation and functionality of these models.
### Reconstruction
Image reconstruction is a critical task in medical analysis, enabling professionals to obtain high-quality images for clinical applications. Many studies have explored the use of convolutional neural networks (CNNs) to learn a mapping function that transforms raw data into reconstructed images. However, this approach faces challenges, including the need for large-scale training datasets, instability in the presence
of structural modifications, and difficulties in generalizing to diverse image modalities or anatomical locations [6]. Overcoming these obstacles is crucial to improve the reliability and applicability of image reconstruction in medical settings. To use INRs here, the task is conventionally defined as an inverse problem: medical image reconstruction takes noisy or undersampled measurements of a medical image as input and aims to generate a reconstructed, complete image as output. The input can come from various imaging modalities such as CT, MRI, or ultrasound, and the incompleteness may be due to time constraints, reduced radiation exposure, or patient movement. The INR model learns to map the input measurements to the corresponding complete images, recovering the missing information and producing high-quality images that resemble the ground truth obtained from fully sampled acquisitions.
To address the aforementioned challenges, numerous INR-based reconstruction methods have been developed in recent years. For instance, **NeRP**[48] framework proposes to integrate implicit neural networks for reconstructing sparsely sampled medical images through three stages without demanding any training data. As illustrated in Figure 5, in the first stage, a neural network's weights are encoded with a CT image as prior knowledge. Next, the implicit network is optimized on sparsely sampled sinogram measurements to learn reconstruction. Finally, the network is applied to all associated spatial coordinates to generate the final reconstructed CT image. The effectiveness of NeRP in reconstructing tumor structural progression and its versatile applicability across various imaging modalities have been demonstrated through experiments conducted on 2D and 3D data, including CT clinical scans and brain tumor progression MRI data images.
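The three stages can be summarized in a short PyTorch-style sketch; this is our schematic reading of the pipeline, in which the coordinate MLP, the differentiable forward projector (e.g., a Radon transform), and all hyperparameters are placeholders rather than the settings used in [48].

```python
import torch

def fit_prior(mlp, coords, prior_intensities, steps=2000, lr=1e-4):
    """Stage (a): embed the prior scan in the network weights by coordinate regression."""
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((mlp(coords) - prior_intensities) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

def adapt_to_measurements(mlp, coords, forward_projector, sparse_sinogram, steps=2000, lr=1e-4):
    """Stage (b): optimize the prior-embedded network against sparsely sampled measurements.

    forward_projector must be differentiable and map the image predicted on `coords`
    into the measurement (sinogram or k-space) domain.
    """
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(steps):
        predicted_image = mlp(coords)
        loss = ((forward_projector(predicted_image) - sparse_sinogram) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

# Stage (c): the reconstruction is the network evaluated on all coordinates of the target grid,
# e.g. reconstruction = mlp(all_coords).reshape(depth, height, width).
```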
Additionally, Reed _et al_. [40] proposed a method (**DCTR**) for reconstructing dynamic, time-varying scenes using computed tomography (4D-CT). The INR is utilized to estimate a template reconstruction of the 3D volume's linear attenuation coefficients (LACs) in the scene, acting as a prior model that captures the spatial distribution of LACs. Here, the template refers to a representation or approximation of the scene's properties, specifically the LACs in the 3D volume. By using the INR, DCTR generates a template reconstruction of the LACs based on available CT measurements or sinograms through learning a mapping between coordinates \((x,y,z)\) and the template reconstruction of the LACs, which serves as a starting point for the overall reconstruction process. DCTR then employs a parametric motion field, a set of parameters describing how the template should be warped over time to account for scene motion. Finally, the warped template reconstruction is used to synthesize sinograms through a differentiable Radon transform, which is then compared to the actual sinogram to evaluate the accuracy of the reconstruction. The proposed method demonstrates robust reconstruction of images with deformable and periodic motion and is validated on their synthetic D4DCT [40] dataset and the thoracic CT data [11].
Figure 4: The taxonomy subsections delineate six distinct sub-fields in medical imaging: (1) Reconstruction, (2) Segmentation, (3) Registration, (4) Compression, and (5) Neural Rendering. We use the numbering of the methods in ascending order and provide the reference for their paper as follows: 1. [61], 2. [48], 3. [40], 4. [62], 5. [54], 6. [59], 7. [60], 8. [53], 9. [9], 10. [70], 11. [22], 12. [29], 13. [67], 14. [66], 15. [16], 16. [58], 17. [18].
### Segmentation
Medical image segmentation is a critical task in healthcare systems, aiding in disease diagnosis and treatment planning. Deep learning methods have shown promising results in achieving accurate segmentation results. However, these methods often suffer from computational inefficiency and difficulty in handling complex topologies [56]. Complex topologies refer to intricate structural relationships and variations within medical images, such as lesions, tumors, and intricate vessel structures.
To address these limitations, Barrowclough [9] introduced a novel approach called **BS-ISR** that combines convolutional neural networks (CNNs) with INRs. INRs were specifically selected for their ability to handle complex, high-dimensional medical imaging data and capture intricate topologies. Instead of directly generating images, the model utilizes spline representations to capture geometric boundaries and structures. The authors also introduced new loss functions tailored for modeling implicit splines, utilizing binary inside-outside masks. Evaluation on the Congenital Heart Disease dataset [65] demonstrated the superior performance of the model compared to other SOTA methods, as measured by the average volumetric test Dice score metric. In another research, Gu [22] proposed a self-distillation-based INR method for segmentation of retinal vessels for ocular disease diagnosis (**Retinal INR**). They utilized the Vision Transformer (ViT) [17] to capture global dependencies in retinal images by treating the image as a sequence of patches rather than focusing solely on local features. The self-distillation method extracted key features for blood vessel segmentation. The primary benefit of the suggested approach lies in its ability to enhance the resolution of retinal images and magnify the finer details of capillaries through the use of INR. To ensure accurate results, they utilized an improved centerline dice (clDice) loss function to constrain blood vessel topology. The proposed model was evaluated on the Drive [52] and Chase [19] datasets, showcasing its superiority over non-INR methods in terms of segmentation accuracy, detection of detailed structures, and robustness to variations in image quality and content.
### Registration
Medical image registration is a process that aligns multiple images, volumes, or surfaces within a common coordinate system to identify common areas. It requires learning a transformation function that geometrically aligns coordinates between a source and target image. Traditional methods often require complex, multi-step processes and assumptions about the nature of the transformations. However, INRs are capable of modeling the transformation function without external assumptions. This allows them to handle the variations and warping in the image in a smooth, coherent manner and at any image resolution, making them ideally suited for tasks such as image registration.
In this regard, Wolterink [60] proposed **IDIR**, which employs an INR to model the transformation function with a SIREN-based design for deformable registration, which attempts spatial alignment of images to account for changes in the shape, position, or size of anatomical structures, such as organs, tumors, or other features of interest. As depicted in Figure 6, the transformation function \(\phi(x)=u(x)+x\), which maps each coordinate \(x\) in a fixed image to a coordinate in a moving image, is represented using the MLP. The MLP takes a continuous coordinate \(x\) from the image domain as input and predicts a deformation vector \(u(x)\). The addition of \(u(x)\) and \(x\) gives the output \(\phi(x)=u(x)+x\).
Figure 5: Overview of the NeRP framework (taken from [48]). NeRP solves the reconstruction problem in three stages: (a) Embedding a prior medical image in an MLP by implicitly encoding it in its weights. (b) Adapting the MLP on the target image by minimizing the loss between the measurements of the prior and the target image. (c) Inferencing the actual target CT or MRI image from each of the coordinates.
Moreover, the periodic activation function in the MLP allows for higher-order derivatives, enabling advanced regularization techniques for accurate and flexible image registration without relying on CNNs. This model was tested on 4D chest CT registration using the DIR-LAB dataset [11] and surpassed all deep learning-based methods while requiring no training data and introducing no folding.
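A minimal sketch of this kind of deformable-registration INR is given below, assuming a PyTorch setting. The SIREN-style layers, the layer sizes, the \(\omega_{0}\) value, and the simple Jacobian penalty are illustrative stand-ins for the regularizers mentioned above, not the exact IDIR configuration.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    """Periodic activation sin(w0 * x), as used in SIREN-style networks."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

class DeformationINR(nn.Module):
    """MLP mapping a fixed-image coordinate x in R^3 to a deformation vector u(x)."""
    def __init__(self, hidden=256, layers=3):
        super().__init__()
        dims = [3] + [hidden] * layers + [3]
        mods = []
        for i in range(len(dims) - 1):
            mods.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                mods.append(Sine())
        self.net = nn.Sequential(*mods)
    def forward(self, x):
        u = self.net(x)
        return x + u, u          # phi(x) = x + u(x), and u(x) itself

def jacobian_penalty(u, x):
    """Smoothness regularizer: squared norm of the Jacobian of u w.r.t. x (a simple stand-in
    for the hyperelastic / bending-energy terms that higher-order differentiability enables)."""
    grads = [torch.autograd.grad(u[:, i].sum(), x, create_graph=True)[0] for i in range(3)]
    return torch.stack(grads, dim=1).pow(2).sum(dim=(1, 2)).mean()

model = DeformationINR()
x = torch.rand(1024, 3, requires_grad=True)      # sampled fixed-image coordinates in [0, 1]^3
phi, u = model(x)
# In practice the data term compares fixed-image intensities at x with moving-image
# intensities sampled at phi(x); here a random target stands in for that comparison.
data_term = (phi - torch.rand_like(phi)).pow(2).mean()
loss = data_term + 0.1 * jacobian_penalty(u, x)
loss.backward()
```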
In another study, Sun _et al_. [53] developed **mirnf**, which can model both displacement vector fields and velocity vector fields, providing two different approaches for performing image registration. While displacement vector fields are used for deformable registration, velocity vector fields are employed for diffeomorphic registration, both utilizing INRs to model the transformations between the target and moving images. The network in registration based on velocity predicts velocity vectors \([v_{p_{x}},v_{p_{y}},v_{p_{z}}]\) from 3D coordinates in the target image. These vectors are integrated over time using a Neural ODE Solver to generate a deformation field by mapping each point in the target image to its corresponding deformed position in the moving image. By applying the deformation field to the target image, it can be aligned with the moving image. Alternatively, another approach trains an MLP to directly predict displacement vectors \([\phi_{p_{x}},\phi_{p_{y}},\phi_{p_{z}}]\) for each coordinate in the source image. These vectors describe how each point in the source image should be shifted or deformed to align with the target image. The target volume is deformed to match the source volume by applying these displacement vectors, which involves adding the displacement vector to each point's position in the target volume. The authors conducted experiments on two 3D MR brain scan datasets, Mindboggle101 [28] and OASIS [32], and found that INR achieves SOTA performance in terms of registration accuracy, optimization speed, and regularity compared to traditional methods.
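For the velocity-field variant, the deformation can be obtained by integrating the predicted velocities over pseudo-time. The hedged sketch below uses a few explicit Euler steps in place of the Neural ODE solver used by the method; `VelocityINR` is a hypothetical stand-in module, and the regularization terms are omitted.

```python
import torch
import torch.nn as nn

class VelocityINR(nn.Module):
    """MLP mapping a 3D coordinate to a stationary velocity vector [v_x, v_y, v_z]."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 3),
        )
    def forward(self, x):
        return self.net(x)

def integrate(velocity, x0, n_steps=8, t1=1.0):
    """Explicit Euler integration of dx/dt = v(x), a simple substitute for a Neural ODE solver.
    Composing many small steps keeps the overall map close to a diffeomorphism."""
    x, dt = x0, t1 / n_steps
    for _ in range(n_steps):
        x = x + dt * velocity(x)
    return x                       # deformed positions phi(x0)

v = VelocityINR()
pts = torch.rand(2048, 3)          # coordinates sampled in the target image
warped = integrate(v, pts)         # where each target point lands in the moving image
```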
### Compression
With increasing volumes of biomedical data, efficient compression methods are needed for storage, transmission, and secure sharing. While compression techniques for natural image/video data exist, they are not effective for biomedical data due to their unique characteristics. Biomedical data contains diverse tissue types, complex structures, and high-resolution details, which pose challenges for conventional compression techniques. In recent years, target-data-specific approaches like INR have shown promise in effectively compressing diverse visual data.
For instance, Yang _et al_. [67] presented a mathematical interpretation and adaptive partitioning for the design of an INR-based compressor called **SCI**. SCI partitions the data into blocks, and each block is compressed separately using an MLP network. The first layer is allocated a wide set of neurons to capture a broader range of frequencies, with a proportional reduction of layer size with increasing depth. This choice is based on the observation that increasing the depth of the network rather than the width (number of neurons) is more efficient to represent a larger range of frequencies or higher-order harmonics. To maintain high reconstruction fidelity, the allocation of parameters to blocks is done in accordance with the range of frequencies they cover. After the compression is completed for each block, the network parameters, including the weights and biases of the neural network that contains the learned representations and encoding information for that specific block, are serialized. Using the HiP-CT dataset [57] as a testbed, Yang _et al_. found that their method outperformed conventional techniques (JPEG, H.264, HEVC), data-driven techniques (DVC, SGA+BB, SSF), and existing INR-based techniques (SIREN [50], NeRF [34], and NeRV [44]) on a wide variety of biological and medical data.
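A stripped-down version of such block-wise neural compression is sketched below: the volume is partitioned into blocks, a small MLP is overfitted to each block, and only its serialized parameters are stored. Block size, layer widths, and training length are illustrative assumptions; SCI's adaptive, frequency-aware parameter allocation is omitted.

```python
import io
import torch
import torch.nn as nn

def make_block_mlp(widths=(64, 48, 32)):
    """Small MLP for one block: a wide first layer, narrower deeper layers."""
    dims = [3, *widths, 1]
    layers = []
    for i in range(len(dims) - 1):
        layers.append(nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

def block_coords(shape):
    """Normalized (z, y, x) coordinates in [-1, 1]^3 for a block of the given shape."""
    axes = [torch.linspace(-1, 1, s) for s in shape]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"), dim=-1)
    return grid.reshape(-1, 3)

def compress_block(block, steps=500, lr=1e-3):
    """Overfit one MLP to one block and return its serialized parameters (the 'compressed' form)."""
    net, coords = make_block_mlp(), block_coords(block.shape)
    target = block.reshape(-1, 1).float()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    buf = io.BytesIO()
    torch.save(net.state_dict(), buf)
    return buf.getvalue()

def decompress_block(blob, shape):
    """Rebuild the MLP from serialized parameters and evaluate it on the block grid."""
    net = make_block_mlp()
    net.load_state_dict(torch.load(io.BytesIO(blob)))
    with torch.no_grad():
        return net(block_coords(shape)).reshape(shape)

volume_block = torch.rand(16, 16, 16)          # stand-in for one partitioned CT block
payload = compress_block(volume_block, steps=50)
recon = decompress_block(payload, volume_block.shape)
```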
In another attempt to improve the compression fidelity of INR, a Tree-structured Implicit Neural Compression (**TINC**) was proposed [66]. TINC uses MLPs to fit segmented local regions, and these MLPs are arranged in a tree structure to enable parameter sharing based on spatial distance. The parameter-sharing mechanism ensures a smooth transition between neighboring regions and eliminates redundancy, whether it exists locally or non-locally.
Figure 6: An illustration of the IDIR registration framework (taken from [60]), which uses Implicit Neural Representations (INRs) in a multi-layer perceptron (MLP) to directly optimize the deformation function \(\Phi(x)=u(x)+x\), which maps image coordinates to the deformation vector \(u(x)\).
The experiments on the HiP-CT dataset [57] demonstrate the superiority of TINC over conventional techniques. However, its limitation, shared with other INR-based methods, is its slower compression speed, although its decompression speed is high.
### Neural Rendering
Neural rendering refers to a class of approaches that involve training a neural network to model the complex relationships between scene geometry, lighting, and details, which allows for the generation of novel views based on existing scenes. Implicit representations can be applied in the context of neural rendering of medical images, allowing for the creation of more detailed and accurate visualizations of complex anatomical structures and other medical data.
In 3D CT imaging, the long exposure time of patients to harmful ionizing radiation imposes a noticeable challenge. Consequently, to alleviate this problem, **MedNeRF** [16] proposes to incorporate GRAF [47] (which integrates NeRF [34] with a CNN) to render CT projections from single- or multi-view X-rays. GRAF is adopted because NeRF alone struggles to handle scenes with large amounts of geometric complexity. To handle this limitation, the NeRF is trained to minimize the difference between the rendered and ground-truth images, while the GAN [21] is trained to distinguish between the generated image and a ground-truth image and is utilized to refine the NeRF outputs and improve image quality. The evaluations of MedNeRF on X-ray chest and knee datasets demonstrate reconstruction improvements in terms of volumetric depth estimation compared to neural radiance field methods.
The application of neural rendering for the purpose of reconstructing surgical scenes in 3D was first introduced by Wang _et al_. [58]. As shown in Figure 7, the proposed method (**Surgical Neural Rendering**) employs Implicit Neural Representations (INRs) to capture the dynamic and deformable nature of surgical scenes through a canonical radiance field and a time-dependent displacement field, represented using an MLP that maps coordinates and view-in directions to RGB colors and space occupancy. By making the volume rendering process differentiable, it becomes possible to backpropagate gradients through the rendering operations, allowing for end-to-end learning of the implicit neural fields, and enabling the optimization of these parameters to reconstruct the surgical scenes. To generate renderings for supervision, the approach utilizes differentiable volume rendering, where camera rays are shot into the scene, and the color and optical depth of each ray are evaluated using the volume rendering integral. Sampled points along the rays provide the necessary inputs to obtain color and space occupancy from the neural fields. The network parameters of the implicit neural fields are optimized to reconstruct the shapes, colors, and deformations of the surgical scene. This optimization is achieved by jointly supervising the rendered color and optical depth with ground-truth data.
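The rendering step follows the standard emission-absorption quadrature of neural radiance fields. The snippet below is a generic sketch of that integral (alpha compositing of color and expected optical depth along sampled rays) with a placeholder radiance-field MLP; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    """Placeholder MLP: (x, d) -> (RGB color, occupancy sigma). Stands in for the canonical field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )
    def forward(self, x, d):
        out = self.net(torch.cat([x, d], dim=-1))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

def render_rays(field, origins, dirs, near=0.0, far=1.0, n_samples=64):
    """Differentiable volume rendering of color and optical depth along each ray."""
    t = torch.linspace(near, far, n_samples)                           # sample depths
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]    # (rays, samples, 3)
    d = dirs[:, None, :].expand_as(pts)
    color, sigma = field(pts, d)                                       # (R, S, 3), (R, S)
    delta = torch.full_like(sigma, (far - near) / n_samples)           # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)                            # opacity per sample
    trans = torch.cumprod(torch.cat(
        [torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans                                            # contribution of each sample
    rgb = (weights[..., None] * color).sum(dim=1)                      # composited color per ray
    depth = (weights * t[None, :]).sum(dim=1)                          # expected optical depth per ray
    return rgb, depth

field = RadianceField()
origins = torch.zeros(8, 3)
dirs = torch.nn.functional.normalize(torch.rand(8, 3), dim=-1)
rgb, depth = render_rays(field, origins, dirs)
# rgb and depth can now be supervised with ground-truth pixel colors and depths.
```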
## 5 Comparative Overview
To provide a comparative overview, we have organized comparative information and findings in Table 1. According to the table, it is evident that image reconstruction has attracted more interest than tasks like segmentation, compression, registration, and others. This preference is mainly driven by the ability of INRs to enhance resolution and reduce noise, especially in medical scenarios where the imaging device is prone to uncertainty. We discuss and compare noteworthy elements in the following:
**Defining Parameters:** The parameters used as the input to INR are not always Cartesian coordinates and depend on the
Figure 7: The Surgical Neural Rendering framework proposed by Wang _et al_. (figure taken from [58]). The surgical scenes are represented using a canonical radiance field \(F_{\theta}(x,d)\) and a time-dependent displacement field \(G_{\Phi}(x,t)\). Both models are designed using MLPs, where the canonical radiance field takes as input the spatial coordinates \(\mathrm{x}\in\mathbb{R}^{3}\) and the unit view-in directions \(d\in\mathbb{R}^{3}\), and the displacement field takes as input the space-time coordinates \((x,t)\). The output of these MLPs is the RGB colors \(c(x,d)\in\mathbb{R}^{3}\) and space occupancy \(\sigma(x)\in\mathbb{R}\) for the canonical radiance field and the displacement vector at point \(x\) and time \(t\) for the displacement field.
task and the signal distribution that the neural network is defining. For instance, CoiL [54] tried to define the measurement field by using the parameters that characterize a sensor response including the viewing angle and spatial location of the detector. Likewise, NeRD [70] used the positional distance in cardinal directions to define the pixel-wise distribution function.
**Local Information:** It is worth noting that the methods employing CNNs, such as ArSSR [61], BS-ISR [9], and MedNeRF [16], specifically leverage the power of CNNs to incorporate local semantic information during the representation process. By utilizing the convolutional layers, these methods can capture and encode local features and spatial relationships, enabling more accurate and context-aware representations for tasks such as noise removal, boundary modeling, and super-resolution.
**Sparse View CT Reconstruction:** As mentioned in Section 3, reducing patients' exposure to radiation plays a significant role in improving health care systems. As a result, a notable number of works have developed various strategies to reconstruct CT images with sparse and limited measurements and projection data. Both NeRP [48] and CoiL [54] address the challenge of sparse CT reconstruction by leveraging prior information or geometric relationships. DCTR [40] addresses this challenge in the context of dynamic 4D-CT reconstruction, which is suited for moving structures, such as organs affected by respiration or cardiac motion. In cone-beam computed tomography (CBCT), only the area of interest is exposed to radiation, which reduces radiation exposure to surrounding tissues and organs. SNAF [18] studied reconstructing scans for this special medical imaging technique by utilizing a neural rendering method to implicitly learn an attenuation field. However, due to the use of limited input projections, the resulting outputs are blurry and require additional effort.
It is important to note that sparse-view reconstruction trades off radiation dose reduction against a potential loss of image quality and accuracy, which is why INRs have gathered a lot of research attention in this field.
**Network type: SIREN-based vs NeRF-based:** Most of the works reviewed use ReLU MLPs with Fourier mapping applied to their input to mitigate spectral bias. Since neural volume rendering is based on the NeRF [34] design for view synthesis and continuous representation, the activation function is ReLU with Fourier features as input to model the 3D structure of the scene accurately [16, 58, 18]. Nonetheless, volume rendering in medical scenarios differs in terms of the surface boundary, as the entire organ holds valuable diagnostic information compared to other domains using NeRF. The type of network is influenced by the objective of the task it aims to solve. Higher-order differentiability of periodic activations enables incorporating more advanced regularization terms into the optimization process of the registration, such as the Jacobian regularizer, hyperelastic regularizer [10], and bending energy penalty [41], which are used in the IDIR [60] method.
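As a concrete illustration of the ReLU-plus-Fourier-features recipe mentioned above, the short sketch below maps input coordinates through random sinusoidal features before a ReLU MLP; the scale of the random frequency matrix controls how high-frequency the representable signal can be. The sketch is generic and not tied to any particular method in Table 1.

```python
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """gamma(x) = [sin(2*pi*B x), cos(2*pi*B x)] with a fixed random Gaussian matrix B."""
    def __init__(self, in_dim=3, n_freqs=128, scale=10.0):
        super().__init__()
        self.register_buffer("B", torch.randn(in_dim, n_freqs) * scale)
    def forward(self, x):
        proj = 2.0 * math.pi * x @ self.B
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class FourierReLUMLP(nn.Module):
    """ReLU MLP on Fourier features, the common recipe for mitigating spectral bias."""
    def __init__(self, in_dim=3, out_dim=1, n_freqs=128, hidden=256):
        super().__init__()
        self.enc = FourierFeatures(in_dim, n_freqs)
        self.net = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )
    def forward(self, x):
        return self.net(self.enc(x))

model = FourierReLUMLP()
pred = model(torch.rand(4096, 3))   # e.g. intensities at 4096 query coordinates
```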
## 6 Future Work and Open Challenges
Despite the benefits of INRs, particularly in the field of medicine, they are still limited in various aspects and require further research efforts to become viable for practical applications, given the high-stakes nature of the medical domain. We discuss these limitations briefly in the following.
**Computational complexity and training time:** Learning a neural representation for each signal separately involves a considerable amount of memory and computational resources. Furthermore, fitting an INR for applications involving high-dimensional data like 3D volumes can be time-consuming [42]. This can pose challenges for real-time applications that require immediate responses. The complexity arises from factors such as the size of the input data and the model architecture. Meta-learning and multi-scale representations help accelerate training time and optimize memory utilization in several domains [20, 43], which provide pathways for representing anatomical and biological structures with reduced training time and greater practicality.
**Scaling to more complex signals:** Representing higher-resolution signals or complex 3D shapes with fine detail can be challenging. The mapping involved in such representations is often highly nonlinear, making it difficult to scale up without incurring significant computational costs. Both widening and deepening the MLP can enhance its representation capability, but the backpropagation algorithm used for training deep neural networks becomes more computationally intensive as the depth increases, and the vanishing/exploding gradient problem may arise. Researchers often need to strike a balance between model complexity and available computational resources. Various techniques have been developed to mitigate this [14, 35, 25].
**Video-based INR:** When it comes to decoding time, video compression methods employing INRs outperform other models [13]. Their feed-forward design allows parallel processing, enabling the independent computation of each frame during decoding. As a result, they have received the most attention in robotic-assisted surgery, where both speed and accuracy are critical [46, 45, 58]. However, modeling semantic relationships between frames in high-frequency videos (i.e., high frame rate) presents considerable challenges [73], and ongoing research and development are crucial to fully utilize the potential of INRs in this field.
## 7 Conclusion
In conclusion, this survey has offered a comprehensive overview of INRs within the realm of medical imaging. Through the utilization of neural networks and implicit continuous functions, INRs have demonstrated substantial potential in tackling complex issues within medical settings. The survey has emphasized the benefits of employing INRs and has delved into their application across various medical imaging tasks. Additionally, it has identified open challenges and areas of future research, providing valuable insights for researchers in the field.
|
2310.04189 | Bridging the Gap between Human Motion and Action Semantics via Kinematic
Phrases | Motion understanding aims to establish a reliable mapping between motion and
action semantics, while it is a challenging many-to-many problem. An abstract
action semantic (i.e., walk forwards) could be conveyed by perceptually diverse
motions (walking with arms up or swinging). In contrast, a motion could carry
different semantics w.r.t. its context and intention. This makes an elegant
mapping between them difficult. Previous attempts adopted direct-mapping
paradigms with limited reliability. Also, current automatic metrics fail to
provide reliable assessments of the consistency between motions and action
semantics. We identify the source of these problems as the significant gap
between the two modalities. To alleviate this gap, we propose Kinematic Phrases
(KP) that take the objective kinematic facts of human motion with proper
abstraction, interpretability, and generality. Based on KP, we can unify a
motion knowledge base and build a motion understanding system. Meanwhile, KP
can be automatically converted from motions to text descriptions with no
subjective bias, inspiring Kinematic Prompt Generation (KPG) as a novel
white-box motion generation benchmark. In extensive experiments, our approach
shows superiority over other methods. Our project is available at
https://foruck.github.io/KP/. | Xinpeng Liu, Yong-Lu Li, Ailing Zeng, Zizheng Zhou, Yang You, Cewu Lu | 2023-10-06T12:08:15Z | http://arxiv.org/abs/2310.04189v3 | # Bridging the Gap between Human Motion and Action Semantics via Kinematic Phrases
###### Abstract
The goal of motion understanding is to establish a reliable mapping between motion and action semantics, while it is a challenging many-to-many problem. An abstract action semantic (i.e., _walk forwards_) could be conveyed by perceptually diverse motions (walk with arms up or swinging), while a motion could carry different semantics w.r.t. its context and intention. This makes an elegant mapping between them difficult. Previous attempts adopted direct-mapping paradigms with limited reliability. Also, current automatic metrics fail to provide reliable assessments of the consistency between motions and action semantics. We identify the source of these problems as the **significant gap** between the two modalities. To alleviate this gap, we propose Kinematic Phrases (KP) that take the objective kinematic facts of human motion with **proper abstraction, interpretability**, and **generality** characteristics. Based on KP as a mediator, we can unify a motion knowledge base and build a motion understanding system. Meanwhile, KP can be **automatically** converted from motions and to text descriptions with no subjective bias, inspiring Kinematic Prompt Generation (KPG) as a novel automatic motion generation benchmark. In extensive experiments, our approach shows superiority over other methods. Our code and data would be made publicly available here.
## 1 Introduction
Human motion understanding has a wide range of applications, including autonomous driving (Paden et al., 2016), robotics (Koppula and Saxena, 2013), and automatic animation (Van Welbergen et al., 2010), making it increasingly attractive. The core of human motion understanding is to establish a mapping between the motion space and the action semantics space. The motion space indicates a space of sequential 3D human representations, e.g., 3D pose or SMPL (Loper et al., 2015)/SMPL-X (Pavlakos et al., 2019) parameter sequence, while the action semantic space can be represented as action categories or sentences described by natural language.
Recently, a growing focus has been on generative mapping from semantics to motion, including action category-based generation (Petrovich et al., 2021) and text-based generation (Petrovich et al., 2022; Guo et al., 2022; Lucas et al., 2022; Zhang et al., 2022; Tevet et al., 2022; Chen et al., 2023; Zhang et al., 2023a). Most of them typically build a mapping that links motion and semantics either directly or via motion latents, with understated concerns for intermediate motion-semantic structures. However, these models suffer from inferior reliability. They cannot guarantee correct samples without human filtering. Additionally, the existing evaluation of motion generation is problematic. Widely adopted FID and R-Precision rely on the latent space from a black-box pre-trained model, which might fail on out-of-distribution (OOD) and over-fitting cases. There is a long-standing need for an evaluation method that can cheaply and reliably assess whether a generated motion is consistent with particular action semantics. We identify the essence of these problems as the significant gap between raw human motion and action semantics, which makes direct mapping hard to learn.
As in Fig. 1, an action semantics can correspond to diverse motions. For instance, a person could _walk_ in countless ways with diverse motions, either with arms up or swinging, while action semantics tend to abstract these away from a walking motion. Additionally, they are robust against small
perturbations, while motion is more specific and complex, with representations changing vastly when perturbed or mis-captured. Moreover, a motion sequence could have diverse semantics w.r.t. contexts. Modeling this many-to-many mapping between motion and semantics is challenging.
To bridge this gap between motion and action semantics, we propose Kinematic Phrases (KP), an interpretable intermediate representation. KP focuses on the objective kinematic facts, which are usually omitted by general action semantics, like left-hand moving forwards then backward. KP is designed as qualitative categorical representations of these facts. For objectivity and actuality, KP captures **sign changes** with minimal pre-defined standards. Inspired by previous studies on kinematic human motion representation (von Laban & Lange, 1975; Bartlett, 1997), KP is proposed as six types shown in Fig. 1, covering **joint positions**, **joint pair positions** and **distances**, **limb angles** and **directions**, and **global velocity**. Note that, although KP can be described by natural language, a major difference is that KP is strictly dedicated to objective kinematic facts instead of coarse actions such as _surrender_ or fine-grained actions like _raise both hands_.
We highlight three advantages of KP. First, KP offers **proper abstraction**, which disentangles motion perturbations and semantics changes, easing the learning process. Even though the motion differs significantly, KP manages to capture _walk_ patterns easily. Second, KP is **interpretable**, as it can be viewed as instructions on executing the action, making it easily understandable to humans. Finally, KP is **general**, as it can be automatically extracted from different modalities of human motion, including skeleton and SMPL parameters. The conversion from KP to text is also effortless.
With KP as an intermediate representation, we first construct a unified large-scale motion knowledge base. Then, to fully exploit KP and the knowledge base, we build a motion understanding system with KP mediation. In detail, we learn a motion-KP joint latent space in a self-supervised manner and then adopt it for multiple motion understanding applications, including motion interpolation, modification, and generation. Moreover, leveraging the interpretability of KP, we propose a benchmark called Kinematic Prompts Generation (KPG), which generates motion from text prompts converted from KPs. Thanks to the consistency and convenience of the KP-to-text conversion, KPG enables reliable and efficient motion generation evaluation.
Our contributions are: (1) We propose KP as an intermediate representation to bridge the gap between motion and action semantics. (2) We build a novel motion understanding system using KP and the aggregated large-scale knowledge base. (3) We propose KPG as a benchmark for reliable and efficient motion generation evaluation. Promising results are achieved on motion interpolation and generation tasks. Moreover, extensive user studies are conducted, verifying the efficacy of our methods, also the consistency between KPG evaluation and human perception.
Figure 1: The huge gap between motion and action semantics results in the _many-to-many_ problem. We propose Kinematic Phrases (KP) as an intermediate to bridge the gap. KPs objectively capture human kinematic cues. It properly abstracts diverse motions with interpretability. As shown, the Phrases in the yellow box could capture key patterns of _walk_ for diverse motions.
## 2 Related Works
**Motion Representation**. An intuitive motion representation is a sequence of static pose representations, like joint locations and limb rotations. Efforts have been made to address the discontinuity of rotation for deep-learning methods (Zhou et al., 2019; Bregier, 2021). Recent works on parametric body models (Loper et al., 2015; Pavlakos et al., 2019) enable a more realistic body representation. Meanwhile, Pons-Moll et al. (2014) proposed Posebits, representing pose with boolean geometric part relationships. Delmas et al. (2022; 2023) translate Posebits into text descriptions. These abstract representations are flexible and insensitive to small perturbations, but their static nature ignores motion dynamics. Tang et al. (2022) acquire similar fine-grained descriptions from human annotation, while Xiang et al. (2022); Athanasiou et al. (2023) adopted large-scale language models. However, few recognize their potential in bridging the low-level motion and the high-level action semantics. Phase functions (Holden et al., 2020), Labanotations (von Laban & Lange, 1975), and learned Motion Words (Aristidou et al., 2018) were also explored, though limited to specific actions like locomotion and dancing.
**Motion Generation** can be conditioned by its prefix/suffix (Hernandez et al., 2019; Athanasiou et al., 2022; Guo et al., 2023), action categories (Petrovich et al., 2021; Guo et al., 2020; Xu et al., 2023), or audio (Li et al., 2021; 2021). Text-based motion generation has developed rapidly with the proposal of text-motion datasets Punnakkal et al. (2021); Guo et al. (2022). Petrovich et al. (2022); Guo et al. (2022); Qian et al. (2023) used VAEs, while Tevet et al. (2022); Hong et al. (2022); Lin et al. (2023) extended the CLIP (Radford et al., 2021) space to motion. Recently, attention has been paid to diffusion models (Zhang et al., 2022; Tevet et al., 2022; Dabral et al., 2023; Wang et al., 2023). Azadi et al. (2023) adopted a U-Net structure. Zhang et al. (2023); Petrovich et al. (2023) explored retrieval-based methods. Karunratanakul et al. (2023) aimed at controllable generation, while Yuan et al. (2023) introduced physical constraints. However, most approaches still suffer from the gap between motion and action semantics. Lucas et al. (2022); Guo et al. (2022); Zhang et al. (2023); Chen et al. (2023); Zhou & Wang (2023); Zhong et al. (2023); Kong et al. (2023) adopted (VQ-)VAE-compressed motion representation as mediation, while in the current data-limited situation, we identify that this single-modality compression might be sub-optimal. Instead, KP could alleviate this by introducing explicit semantic-geometric correlation.
## 3 Kinematic Phrase Base
### Kinematic Phrases
Kinematic Phrases abstract motion into objective kinematic facts like left-hand moves up qualitatively. We take inspiration from previous kinematic motion representations (von Laban & Lange, 1975) and qualitative static pose representations (Delmas et al., 2022; Pons-Moll et al., 2014), proposing six types of KP to comprehensively represent motion from different kinematic hierarchies: For **joint movements**, there are 36 Position Phrases (PPs). For **joint pair movements**, there are 242 Pairwise Relative Position Phrases (PRPPs) and 81 Pairwise Distance Phrases (PDPs). For **limb movements**, there are 8 Limb Angle Phrases (LAPs) and 33 Limb Orientation Phrases (LOPs). For **whole-body movements**, there are 3 Global Velocity Phrases (GVPs). KP extraction is based on a skeleton sequence \(X=\{x_{i}\mid x_{i}\in\mathbb{R}^{n_{k}\times 3}\}_{i=1}^{t}\), where \(n_{k}\) is the number of joints (\(n_{k}=17\) here), \(x_{i}\) is the joint coordinates at the \(i\)-th frame, and \(t\) is the sequence length. Note that \(x_{i}^{0}\) indicates the pelvis/root joint. For each Phrase, a scalar indicator sequence is calculated from the skeleton sequence. Phrases are extracted as per-frame categorical representations w.r.t. indicator signs. Unlike previous efforts (Pons-Moll et al., 2014; Delmas et al., 2022), we limit the criteria of KP to the indicator signs to minimize the need for human-defined standards (e.g., numerical criteria on the closeness of two joints) for objectivity and actuality. Fig. 2 illustrates the extraction procedure.
**Reference Vectors** are first constructed, indicating right, upward, and forward directions from a human _cognitive view_. We aim at the _egocentric_ reference frames that humans tend to use when performing actions. The negative direction of gravity is adopted as upward vector \(r^{u}\), the vector from left hip to right hip is adopted as right vector \(r^{r}\), and the forward vector is calculated as \(r^{f}=r^{u}\times r^{r}\). These vectors of each frame are denoted as \(R=\{r_{i}\}_{i=1}^{t}\).
**Position Phrase (PP)** focuses on the movement direction of joint \(x^{j}\) w.r.t. reference vector \(R\). The indicator for PP at \(i\)-th frame is calculated as
\[s_{i}^{(j,\cdot)}=\langle(x_{i}^{j}-x_{i}^{0}),r_{i}^{\cdot}\rangle-\langle(x_{i-1}^{j}-x_{i-1}^{0}),r_{i-1}^{\cdot}\rangle. \tag{1}\]
The sign of \(s_{i}^{(j,\cdot)}\) categorizes PP into moving along/against \(R\), or relatively static along \(R\) for indicators with small amplitudes. After filtering, 36 different PPs are extracted.
**Pairwise Relative Position Phrase (PRPP)** describes the relative position between a pair of joints \((x^{j},x^{k})\) w.r.t. reference vector \(R\). The PRPP indicator at the \(i\)-th frame is \(s_{i}^{(j,k,\cdot)}=\langle(x_{i}^{j}-x_{i}^{k}),r_{i}^{\cdot}\rangle\). For (L-Hand, R-Hand) and forward vector \(r^{f}\), PRPP could be L-Hand behind/in front of R-Hand according to the sign of \(s_{i}^{(j,k,\cdot)}\). After filtering, 242 PRPPs are extracted.
**Pairwise Distance Phrase (PDP)** describes how the L2 distance between a pair of joints \((x^{j},x^{k})\) changes. The indicator for PDP is calculated as
\[s_{i}^{(j,k)}=\|x_{i}^{j}-x_{i}^{k}\|_{2}-\|x_{i-1}^{j}-x_{i-1}^{k}\|_{2}. \tag{2}\]
The sign of \(s_{i}^{(j,k)}\) categorizes PDP into moving closer/away, or relatively static. After dropping joint pairs in the skeleton topology, such as the hand and elbow, 81 PDPs are extracted.
**Limb Angle Phrase (LAP)** targets at the change of bend angle between two connected limbs \((x^{j},x^{k})\) and \((x^{j},x^{l})\). The indicator for LAP is calculated as
\[s_{i}^{(j,k,l)}=arccos(\langle x_{i}^{k}-x_{i}^{j},x_{i}^{l}-x_{i}^{j}\rangle) -arccos(\langle x_{i-1}^{k}-x_{i-1}^{j},x_{i-1}^{l}-x_{i-1}^{j}\rangle). \tag{3}\]
LAP describes the limb chain \((x^{j},x^{k})\)-\((x^{j},x^{l})\) as bending or unbending. 8 LAPs are extracted.
**Limb Orientation Phrase (LOP)** describes the orientation of the limb \((x^{j},x^{k})\) w.r.t. \(R\), note that \(x^{k}\) is the distal limb. The scalar indicator for LOP is calculated as \(s_{i}^{(j,k,\cdot)}=\langle x_{i}^{k}-x_{i}^{j},r_{i}^{\cdot}\rangle\). The sign of \(s_{i}^{(j,k,\cdot)}\) categorizes the LOP into limb \((x^{j},x^{k})\) pointing along/against \(R\), or a placeholder category for those with little magnitude. 33 LOPs are extracted.
**Global Velocity Phrase (GVP)** describes the direction of global velocity with respect to \(R\). The indicator is calculated as \(s_{i}^{\cdot}=\langle x_{i+1}^{0}-x_{i}^{0},r_{i}^{\cdot}\rangle\). The three categories are moving along/against \(R\), or static along \(R\) according to the sign of \(s_{i}\).
These result in 403 Phrases in total, covering motion diversity and distribution at various levels. We note that these Phrases do not rule out the possibility of other useful representations.
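For illustration, the sketch below computes a few of these indicators from a raw skeleton sequence with NumPy. The joint indices, the z-up convention, and the small threshold used for the "relatively static" category are illustrative assumptions of this sketch rather than the exact settings used for KP extraction.

```python
import numpy as np

def reference_vectors(x, l_hip=11, r_hip=12):
    """Per-frame egocentric up/right/forward vectors from a (T, J, 3) skeleton sequence.
    Joint indices and the z-up convention are assumptions of this sketch."""
    up = np.tile(np.array([0.0, 0.0, 1.0]), (len(x), 1))          # negative gravity direction
    right = x[:, r_hip] - x[:, l_hip]
    right /= np.linalg.norm(right, axis=-1, keepdims=True)
    forward = np.cross(up, right)
    forward /= np.linalg.norm(forward, axis=-1, keepdims=True)
    return up, right, forward

def position_phrase(x, joint, ref, eps=1e-3, root=0):
    """PP indicator (Eq. 1): change of the root-relative projection onto a reference vector."""
    proj = np.einsum("td,td->t", x[:, joint] - x[:, root], ref)
    s = np.diff(proj)
    return np.where(np.abs(s) < eps, 0, np.sign(s))   # +1 along, -1 against, 0 relatively static

def pairwise_distance_phrase(x, j, k, eps=1e-3):
    """PDP indicator (Eq. 2): change of the L2 distance between two joints."""
    d = np.linalg.norm(x[:, j] - x[:, k], axis=-1)
    s = np.diff(d)
    return np.where(np.abs(s) < eps, 0, np.sign(s))   # +1 moving away, -1 moving closer

def global_velocity_phrase(x, ref, eps=1e-3, root=0):
    """GVP indicator: sign of the root displacement projected onto a reference vector."""
    s = np.einsum("td,td->t", np.diff(x[:, root], axis=0), ref[:-1])
    return np.where(np.abs(s) < eps, 0, np.sign(s))

skeleton = np.random.rand(150, 17, 3)                 # stand-in for a 17-joint sequence
up, right, forward = reference_vectors(skeleton)
pp = position_phrase(skeleton, joint=9, ref=forward)  # e.g. a hand joint moving forwards/backwards
```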
Figure 2: Six types of KP from four kinematic hierarchies are extracted from a motion sequence. A scalar indicator \(s_{i}\) is calculated per Phrase _per frame_. Its sign categorizes the corresponding Phrase.
### Constructing Kinematic Phrase Base
KP enables us to unify motion data with different formats to construct a large-scale knowledge base containing motion, text, and KP. Motion sequences of different representations are collected, including 3D skeleton sequences and SMPL (Loper et al., 2015)/SMPL-X (Pavlakos et al., 2019) parameter sequences. The sequences are first re-sampled to 30Hz and rotated so that the negative direction of the z-axis is the gravity direction. Then, the sequences are converted into 3D skeleton sequences for KP extraction as in Sec. 3.1. Text annotations attached to the sequences are directly saved. For sequences with action category annotation, the category name is saved. For those with neither text nor action category, the text information is set from its attached additional information, like objects for SAMP (Hassan et al., 2021). Finally, we collect 87k motion sequences from 11 datasets. Detailed statistics are shown in Tab. 1. More details are included in the appendix.
## 4 Motion Understanding via KP
By motion understanding, we mean both low-level understanding like interpolation and modification, and high-level understanding like generative mapping from text to motion. To achieve this, we first learn a motion-KP joint space with less ambiguity and more interpretability. Then, with this space, we introduce its application to both low-level and high-level motion-semantics understanding.
### Preliminaries
We first introduce the representation for motion and KP. **Motion** is represented as a human pose sequence with \(n\) frames as \(M=\{m_{i}\}_{i=1}^{n}\). In detail, SMPL (Loper et al., 2015) pose parameters are transformed from axis-angle format to the 6D continuous representation (Zhou et al., 2019), then concatenated with the velocity of the root joint, resulting in a 147-dimensional representation per frame. **KP** is represented by signs of the indicators.
### Joint Space Learning
**Model Structure.** An overview of our model is illustrated in Fig. 3. **Motion VAE** is a transformer-based VAE adapted from Petrovich et al. (2021). The encoder \(\mathcal{E}_{m}\) takes motion \(M\) and two distribution tokens \(m_{\mu},m_{\sigma}\) as input, and the outputs corresponding to the distribution tokens are taken as the \(\mu_{m}\) and \(\sigma_{m}\) of the Gaussian distribution. Then, the transformer decoder \(\mathcal{D}_{m}\) takes \(z_{m}\sim\mathcal{G}(\mu_{m},\sigma_{m})\) as \(K,V\), and a sinusoidal positional encoding of the expected duration as \(Q\). The output is fed into a linear layer to obtain the reconstructed motion sequence \(\hat{M}\). **KP VAE** with encoder \(\mathcal{E}_{p}\) and decoder \(\mathcal{D}_{p}\) resembles Motion VAE. The sign of \(\mathcal{D}_{p}\) output is adopted as the predicted KP \(\hat{C}\). Notice that the decoders \(\mathcal{D}_{m},\mathcal{D}_{p}\) could take arbitrary combinations of \(z_{m},z_{p}\) as input, outputting \(\hat{M},\hat{C}\).
**Self-supervised Training.** With the VAEs, we propose a self-supervised training strategy to learn motion-KP joint space. As a coherent representation, the overall representation should not change drastically with a small portion of KP unknown. Even more, the missing Phrases should be recovered from existing Phrases. In this view, we randomly corrupt samples during training by setting a small portion of KP as 0. The training is thus executed in a self-supervised manner. This helps mine the correlation among different Phrases while also effectively increasing the robustness of the joint
\begin{table}
\begin{tabular}{l c c c c} \hline Dataset & Mot. Rep. & \#Seqs & \#Actions & Text \\ \hline
AMASS (Mahmood et al., 2019) & SMPL-X & 26k & 260 & ✓ \\
GRAB (Taheri et al., 2020) & SMPL-X & 1k & 4 & ✓ \\
SAMP (Hassan et al., 2021) & SMPL-X & 0.2k & N/A & ✓* \\
Fit3D (Fieraru et al., 2021) & SMPL-X & 0.4k & 29 & ✓ \\
CHI3D (Fieraru et al., 2020) & SMPL-X & 0.4k & 8 & ✓ \\
UESTC (Ji et al., 2018) & SMPL & 26k & 40 & ✓ \\
AIST++ (Li et al., 2021) & SMPL & 1k & N/A & ✓* \\
BEHAVE (Bhatnagar et al., 2022) & SMPL & 0.3k & N/A & ✓* \\
HuMMan (Cai et al., 2022) & SMPL & 0.3k & 339 & ✓ \\
GTA-Human (Cai et al., 2021) & SMPL & 20k & N/A & ✗ \\
Motion-X (Lin et al., 2023a) & SMPL-X & 65k & N/A & ✓ \\ \hline
**Sum** & & **140k** & **680+** & - \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of Kinematic Phrase Base. _Mot. Rep._ indicates motion representation. “✓*” means texts are generated from the attached additional information instead of human annotation.
space. Similar to TEMOS (Petrovich et al., 2022), four losses are adopted: reconstruction loss, KL divergence loss, distribution alignment loss, and embedding alignment loss.
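The corruption step itself is simple; one possible masking scheme is sketched below. Whether Phrases are masked per channel or per frame is an implementation choice of this sketch, and the 20% upper bound follows the implementation details in Sec. 6.

```python
import torch

def corrupt_kp(kp, max_ratio=0.2):
    """Randomly zero out a small fraction of Phrase channels per sample.
    kp: (batch, frames, n_phrases) tensor with values in {-1, 0, +1}."""
    b, t, p = kp.shape
    ratio = torch.rand(b, 1, 1) * max_ratio              # per-sample corruption ratio in [0, max_ratio]
    mask = torch.rand(b, 1, p) < ratio                    # broadcast over frames
    return kp.masked_fill(mask, 0)

kp_batch = torch.randint(-1, 2, (8, 120, 403)).float()
kp_corrupted = corrupt_kp(kp_batch)   # fed to the KP encoder; the decoder must recover the full KP
```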
### KP-mediated Motion Understanding
With the joint space, we can perform both low-level and high-level motion understanding with KP mediation. We introduce three applications to show the capability of KP, as shown in Fig. 3.
**KP-mediated Motion Interpolation** Given a corrupted motion sequence \(\tilde{M}\), we extract its corresponding KP sequence \(\tilde{C}\), then feed them to encoders \(\mathcal{E}_{m},\mathcal{E}_{p}\) and decoder \(\mathcal{D}_{p}\), resulting in the estimated KP sequence \(\hat{C}\). \(\hat{C}\) and \(\tilde{M}\) are fed into \(\mathcal{E}_{m},\mathcal{E}_{p}\) and \(\mathcal{D}_{m}\), resulting in the interpolated \(\hat{M}\).
**Motion Modification** Motion modification functions similarly. Motion \(M\) is first extracted into KP sequence \(C\). Modifications could be made on \(C\) resulting in \(\tilde{C}\). Modified motion frames are then masked, getting \(\tilde{M}\). \(\tilde{M},\tilde{C}\) are fed into \(\mathcal{E}_{m},\mathcal{E}_{p}\) and \(\mathcal{D}_{m}\), getting the interpolated \(\hat{M}\).
**KP-mediated Motion Generation.** Given text \(t\), to generate a motion sequence from it, we first encode it into latent \(z_{t}\) with the CLIP text encoder \(\mathcal{E}_{t}\). Direct mapping could be achieved by training the motion decoder \(\mathcal{D}_{m}\) for \(\hat{M}=\mathcal{D}_{m}(z_{t})\). We show that the direct mapping could be impressively improved with our joint space in Sec. 6.4. With KP, we could perform a novel KP-mediated motion generation. We adopt a vanilla latent diffusion paradigm for KP-mediated text-to-motion tasks. An extra denoiser is trained to denoise a random noise \(z_{p}^{T}\) to KP latent \(z_{p}=z_{p}^{0}\) with \(T\) diffusion steps. We then decode KP sequence \(\hat{C}\) from \(z_{p}\) with \(\mathcal{D}_{p}\). Then, \(\hat{C}\) is encoded by \(\mathcal{E}_{p}\), getting distribution \(\mathcal{G}(\mu_{p},\sigma_{p})\). \(z_{p}\) is sampled and sent to \(\mathcal{D}_{m}\) to generate a motion sequence. Experiments show that KP could be a promising stepping stone to mitigate the huge gap from action semantics to motion.
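The data flow of this KP-mediated generation can be traced with the minimal sketch below. The modules are small stand-ins operating on single vectors rather than the trained sequence-level transformer VAEs, CLIP encoder, and denoiser described above; only the ordering of the stages (denoise \(z_{p}\), decode \(\hat{C}\), re-encode, decode motion) mirrors the pipeline.

```python
import torch
import torch.nn as nn

latent_dim, T = 256, 50
# Stand-in modules so the data flow can be traced end to end (not the actual architecture).
text_encoder   = nn.Linear(512, latent_dim)                  # text feature -> z_t
denoiser       = nn.Sequential(nn.Linear(latent_dim * 2 + 1, 512), nn.SiLU(), nn.Linear(512, latent_dim))
kp_decoder     = nn.Linear(latent_dim, 403)                  # D_p: latent -> KP logits (one frame here)
kp_encoder_mu  = nn.Linear(403, latent_dim)                  # E_p, returning the mean of G(mu_p, sigma_p)
motion_decoder = nn.Linear(latent_dim, 147)                  # D_m: latent -> per-frame motion features

betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def generate(text_feat):
    z_text = text_encoder(text_feat)
    z = torch.randn(1, latent_dim)                           # z_p^T, pure noise
    for t in reversed(range(T)):                             # DDPM ancestral sampling
        t_emb = torch.full((1, 1), t / T)
        eps = denoiser(torch.cat([z, z_text, t_emb], dim=-1))  # predicted noise, conditioned on text
        z = (z - (1 - alphas[t]) / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    kp_hat = torch.sign(kp_decoder(z))                       # decode the KP sequence C_hat
    z_p = kp_encoder_mu(kp_hat)                              # re-encode KP (mean of G(mu_p, sigma_p))
    return motion_decoder(z_p)                               # decode the motion sequence

motion = generate(torch.randn(1, 512))
```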
## 5 Kinematic Prompt Generation
With the interpretability and objectivity of KP, we propose a new motion generation benchmark.
Before that, we first analyze current benchmarks. A crucial aspect of motion generation evaluation is motion-semantic consistency. The gold standard is a user study. However, it is expensive and inefficient to scale. Early metrics like MPJPE (Mean Per Joint Position Error) and MAE (Mean Angle Error) mechanically calculate the error between the generated and GT samples. These metrics fail to reveal the real ability of generative models: What if the models memorize GT samples? Or what if the samples differ from GT but are still valid? FID (Fréchet Inception Distance) is adopted to mitigate this issue. However, it provides a macro view of the quality of all generated samples without guarantees for individual samples. Guo et al. (2022) proposed R-Precision, using a pre-trained
Figure 3: We train motion-KP joint latent space in a self-supervised training manner. KP is randomly masked during training. Reconstruction and alignment losses are adopted. The joint space could be applied for multiple tasks, including motion interpolation, modification, and generation.
text-motion matching model to examine whether the generated samples carry true semantics. They both rely on the latent space from a black-box pre-trained model, which is not credible. Besides, models might learn shortcuts to over-fit the pre-trained model. Moreover, since automatic mapping from motion to semantics across their huge gap is still an unsettled problem, adopting it to evaluate motion generation is not a decent choice. In addition, most current motion generation evaluations are performed on datasets (Guo et al., 2022; Plappert et al., 2016; Ji et al., 2018) with considerably complex everyday actions, further increasing the difficulty.
To this end, we propose a novel benchmark: Kinematic Prompts Generation (KPG). Instead of previous benchmarks focusing on everyday activities or sports, we take a step _back_ in the complexity of the target action semantics. Based on KP, KPG focuses on evaluating _whether the models could generate motion sequences consistent with specific kinematic facts given text prompts_.
In detail, we convert KP into text prompts with templates as in Tab. 2, resulting in 840 text prompts. Given prompt \(T_{i}\in T\) from Phrase \(c_{i}\), the model generates motion \(\hat{M}_{i}\), along with extracted KP \(\hat{C}_{i}\). We calculate Accuracy as \(Acc=\frac{1}{|T|}\sum_{T_{i}\in T}1[c_{i}\in\hat{C}_{i}]\), where \(1[\cdot]=1\) if the expression in \([\cdot]\) is True, otherwise 0. Note that, for \(c_{i}\in\hat{C}_{i}\), \(c_{i}\) should hold for more than 5 consecutive frames to avoid trivial perturbations. Accuracy examines whether the Phrase corresponding to the given prompt appears in the KP sequence converted from the generated motion. The calculation involves no black-box model thanks to KP, presenting a fully reliable evaluation pipeline. Also, with the effortless motion-to-KP conversion, the computation could be conducted automatically. More details are in the appendix.
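Because the metric only requires checking categorical KP sequences, it can be computed with a few lines of NumPy; the sketch below (not the official evaluation script) includes the 5-consecutive-frame requirement.

```python
import numpy as np

def phrase_present(kp_seq, phrase_idx, category, min_len=5):
    """True if Phrase `phrase_idx` takes value `category` for at least `min_len` consecutive frames.
    kp_seq: (frames, n_phrases) integer array of per-frame KP categories."""
    hits = (kp_seq[:, phrase_idx] == category).astype(int)
    run = 0
    for h in hits:
        run = run + 1 if h else 0
        if run >= min_len:
            return True
    return False

def kpg_accuracy(generated_kp, prompts, min_len=5):
    """Fraction of prompts whose target Phrase/category appears in the generated motion's KP.
    prompts: list of (phrase_idx, category) pairs, one per text prompt."""
    hits = [phrase_present(kp, idx, cat, min_len) for kp, (idx, cat) in zip(generated_kp, prompts)]
    return float(np.mean(hits))

# Toy check: 10 generated sequences of 120 frames over 403 Phrases with categories {-1, 0, +1}.
gen = [np.random.randint(-1, 2, size=(120, 403)) for _ in range(10)]
prompts = [(np.random.randint(403), 1) for _ in range(10)]
print(kpg_accuracy(gen, prompts))
```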
## 6 Experiment
**Implementation Details.** The HumanML3D (Guo et al., 2022) test split is held out for evaluation, with the rest of KPB for training. During training, the motion sequences are canonicalized by eliminating the rotation along the z-axis in the first frame, and the same counter-rotation is applied to the following frames. Sequences are sampled to 15 FPS and randomly clipped into short clips with lengths between 30 frames and 150 frames. The batch size is set to 288, and an AdamW optimizer with a learning rate of 1e-4 is adopted. We randomly corrupt less than 20% of the Phrases for a sample. The Motion-KP joint space is trained for 6,000 epochs, while the text-to-motion latent diffusion model is trained for 3,000 epochs with the joint space frozen. All experiments are conducted on 4 NVIDIA RTX 3090 GPUs. More details are provided in the appendix.
### Motion Interpolation
Following Jiang et al. (2023), 50% of frames are randomly masked for interpolation evaluation. FID and Diversity are also evaluated. We adopt MDM (Tevet et al., 2022) as the baseline. In Tab. 3, our method provides a better FID, while the additional KPB increases Diversity.
### Motion Generation
**Settings**. We adopt the HumanML3D test set (Guo et al., 2022) for conventional text-to-motion evaluation. The evaluation model from Guo et al. (2022) is adopted to calculate R-Precision, FID, Diversity, and Multimodality. KPG is also adopted, with the proposed Accuracy. Also, Diversity is computed as a reference. We run the evaluation 20 times and report the average metric value. Details are given in the appendix.
**Results on conventional text to motion** are shown in Tab. 3. Our method is competitive without KPB. However, KPB brings a counter-intuitive performance drop. To investigate this, we further conduct a user study in which human volunteers judge the motions instead of a proxy neural network.
Our user study is different from previous efforts in two aspects. First, instead of testing a small set of text prompts (less than 50 in previous works (Tevet et al., 2022; Chen et al., 2023)), we randomly
\begin{table}
\begin{tabular}{l c} \hline \hline KP & Text prompt samples \\ \hline
PP & **Left hand** moves forwards. \\
PRPP & **Left hand** is below then above **head**. \\
PDP & **Left hand** moves away from **head**. \\
LAP & **Left arm** bends. \\
LOP & **Left forearm** points forwards then backwards. \\
GVP & The person moves forwards. \\ \hline \hline \end{tabular}
\end{table}
Table 2: Text prompts converted from KP. **Joint/limb names**, _prepositions, verbs, and adverbials_ could be replaced w.r.t. specific Phrases.
select 600 sentences from the HumanML3D test set. By scaling up, the result is convincing in reflecting the ability to generate motion for diverse text inputs. Second, rather than asking the volunteers to give a general rating for each sample or to choose between different samples, we ask them two questions: 1) Do the motion and the text match? and 2) Is the motion natural? For Q1, three choices are given as "No, Partially, Yes". For Q2, two choices are given as "Yes, No". In this way, we explicitly decouple the evaluation of text-to-motion into semantic consistency and naturalness, corresponding to R-Precision and FID. For each prompt, we generate one sample considering the annotation cost. We claim that the models should generate natural text-matching motion most of the time so that the one-sample setting would not hurt the fidelity of our user study. 36 volunteers are invited, each reviewing 200 sequences. Thus each sequence receives 3 user reviews. Also, we compute R-precision@1 of the generated sequences for reference. MDM (Tevet et al., 2022b), T2M-GPT (Zhang et al., 2023a), MLD (Chen et al., 2023), and our method are evaluated.
User study results are shown in Fig. 4. Though our method is not superior in R-Precision, we receive better user reviews, showcasing the efficacy of our KP-mediated generation strategy. Recent T2M-GPT and MLD present similar R-Precision, but only T2M-GPT manages to keep a good performance with user reviews. Moreover, the discrepancy between R-Precision and user reviews is revealed in both absolute value and trends. More results and analysis are given in the appendix.
**Results on KPG** are shown in Tab. 4. KPG is considered an easier task than conventional text-based motion generation since it is targeted at action semantics with much less complexity. However, previous methods are not performing as well as expected. Though we managed to deliver substantial improvements, the accuracy remains below 60%, which is far from satisfying. There is a considerable gap between existing methods and ideal motion generation models.
Furthermore, given the discrepancy between automatic metrics and user study as shown in Fig. 4, we conducted a similar user study with 100 randomly selected prompts from KPG involving T2M-GPT and our model. Fig. 5 demonstrates that KP-inferred Accuracy and user reviews share similar trends. We also calculate their consistency, showing KP and user study give the same reviews for **84%** of the samples. We believe KPG could thus be a first step towards reliable automatic motion generation evaluation. More analyses are given in the appendix.
### Visualization
We first present a modification sample in Fig. 6. By modifying KP, we can edit arbitrary motion at a fine-grained level. Also, we compare generated samples of T2M-GPT and our method in Fig. 7.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
 & \multicolumn{2}{c}{Motion Interpolation} & \multicolumn{4}{c}{Motion Generation} \\
Methods & FID\(\downarrow\) & Diversity\(\rightarrow\) & R-P@1\(\uparrow\) & FID\(\downarrow\) & Diversity\(\rightarrow\) & Multimodality \\ \hline
GT & 0.002 & 9.503 & 0.511 & 0.002 & 9.503 & - \\ \hline
TEMOS (Petrovich et al., 2022) & - & - & 0.424 & 3.734 & 8.973 & 0.568 \\
T2M (Guo et al., 2022a)* & - & - & 0.455 & 1.067 & 9.188 & 2.090 \\
MDM (Tevet et al., 2022b)* & 2.698 & 8.42 & 0.320 & 0.544 & 9.559 & 2.799 \\
TM2T (Guo et al., 2022b)* & - & - & 0.424 & 1.501 & 8.589 & 2.424 \\
MLD (Chen et al., 2023a)* & - & - & 0.481 & 0.473 & 9.724 & 2.413 \\
T2M-GPT (Zhang et al., 2023a)* & - & - & **0.492** & **0.141** & 9.722 & 1.831 \\
MotionGPT (Jiang et al., 2023)* & 0.214 & **9.560** & **0.492** & 0.232 & **9.528** & 2.008 \\ \hline
Ours* & **0.197** & 9.772 & 0.475 & 0.412 & 10.161 & 2.065 \\
Ours & 0.226 & 10.022 & 0.434 & 0.631 & 10.372 & 2.584 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Result Comparison of motion interpolation and generation on HumanML3D. R-P@1 is short for R-Precision@1. * indicates the model is trained on the HumanML3D train set only.
Our method properly responds to text prompts with constraints on specific body parts. This could be attributed to KP mediation, which explicitly decomposes the action semantics into kinematics cues of body parts. Note that T2M-GPT might generate redundant motion for simple prompts, while our method provides more concise and precise results. More visualizations are in the appendix.
### Ablation Studies
Ablation study results on KPG are shown in Tab. 5.
**KP mediation.** By using our joint space without KP mediation, we still present a competitive result, showing the efficacy of motion-KP joint space.
**Direct mapping.** By directly mapping with no KP involved, we present a similar performance compared to previous methods. It demonstrates the significance of KP in conveying action semantics.
**Different KP sets.** We examine the contribution of different KP sets: joint KP (PP), joint pair KP (PRPP, PDP), limb KP (LAP, LOP), and body KP (GVP). A leave-one-out style evaluation shows the elimination of joint KP and joint pair KP results in notable performance degradation, while the influence of the rest is relatively subtle.
## 7 Discussion
Here, we discuss the limitations and prospects of KP and KP-based applications. **First**, KP could be extended beyond its current criteria of sign. These criteria guarantee objectivity but overlook important kinematic information like movement amplitude and speed. Also, due to the granularity of the adopted skeleton, fine-grained kinematic information on fingers is not well-preserved. The exploration of amplitude/speed/finger-based KP would be a promising goal to pursue. **Second**, KPB could be extended to datasets with other modalities, like 2D pose and egocentric action datasets. Though these modalities provide incomplete 3D information, we could extract KP that is credibly accessible across modalities. **Third**, with the convenient conversion from KP to text, auxiliary text descriptions could be automatically generated for motions via KP. **Fourth**, KPG could be extended by paraphrasing existing prompts and combining different Phrases.
## 8 Conclusion
In this paper, we proposed an intermediate representation to bridge human motion and action semantics as the Kinematic Phrase. By focusing on objective kinematic facts of human motion, KP
achieved proper abstraction, interpretability, and generality. A motion understanding system based on KP was proposed and proven effective in motion interpolation, modification, and generation. Moreover, a novel motion generation benchmark Kinematic Prompt Generation is proposed. We believe that KP has great potential for advancing motion understanding.
|
2305.00159 | The planar Schrodinger--Poisson system with exponential critical growth:
The local well-posedness and standing waves with prescribed mass | In this paper, we investigate a class of planar Schr\"{o}dinger-Poisson
systems with critical exponential growth. We establish conditions for the local
well-posedness of the Cauchy problem in the energy space, which seems
innovative as it was not discussed at all in any previous results. By
introducing some new ideas and relaxing some of the classical growth
assumptions on the nonlinearity, we show that such system has at least two
standing waves with prescribed mass, where one is a ground state standing waves
with positive energy, and the other one is a high-energy standing waves with
positive energy. In addition, with the help of the local well-posedness, we
show that the set of ground state standing waves is orbitally stable. | Juntao Sun, Shuai Yao, Jian Zhang | 2023-04-29T03:35:40Z | http://arxiv.org/abs/2305.00159v1 | The planar Schrodinger-Poisson system with exponential critical growth: The local well-posedness and standing waves with prescribed mass
###### Abstract
In this paper, we investigate a class of planar Schrodinger-Poisson systems with critical exponential growth. We establish conditions for the local well-posedness of the Cauchy problem in the energy space, which seems innovative as it was not discussed at all in any previous results. By introducing some new ideas and relaxing some of the classical growth assumptions on the nonlinearity, we show that such system has at least two standing waves with prescribed mass, where one is a ground state standing waves with positive energy, and the other one is a high-energy standing waves with positive energy. In addition, with the help of the local well-posedness, we show that the set of ground state standing waves is orbitally stable.
**Keywords:** The planar Schrodinger-Poisson system; Critical exponential growth; Standing waves; Local well-posedness; Variational methods.
**2010 Mathematics Subject Classification:** 35B35, 35B38, 35J20, 35J61, 35Q40.
## 1 Introduction
Consider the planar Schrodinger-Poisson system of the type
\[\left\{\begin{array}{l}i\partial_{t}\psi+\Delta\psi+\gamma w\psi+f(\psi)=0, \ \ \ \ \ \forall(t,x)\in\mathbb{R}^{1+2},\\ -\Delta w=|\psi|^{2},\\ \psi(0,x)=\psi_{0}(x),\end{array}\right. \tag{1}\]
where \(\psi:\mathbb{R}^{2}\times\mathbb{R}\to\mathbb{C}\) is the (time-dependent) wave function, the function \(w\) represents a Newtonian potential for a nonlocal self-interaction of the wave function \(\psi\), the coupling constant \(\gamma\in\mathbb{R}\) describes the relative strength of the potential, and the sign of \(\gamma\) determines whether the interactions of the potential are repulsive or attractive, i.e. the interaction is attractive when \(\gamma>0\), and it is repulsive when \(\gamma<0\). The function \(f\) is supposed to satisfy that \(f(e^{i\theta}z)=e^{i\theta}f(z)\) for \(\theta\in\mathbb{R}\) and \(z\in\mathbb{C}\). Such a system arises from quantum mechanics [7, 10, 27] and in semiconductor theory [29, 30].
An important topic is to establish conditions for the well-posedness of the Cauchy problem (1). From a mathematical point of view, the second equation in the system determines \(w:\mathbb{R}^{2}\to\mathbb{R}\) up to harmonic functions, so it is natural to choose \(w\) as the Newtonian potential of \(\psi^{2}\), i.e. the convolution of \(\psi^{2}\) with the Green function \(\Phi(x)=-\frac{1}{2\pi}\ln|x|\) of the Laplace operator. Thus the Newtonian potential \(w\) is given by
\[w=-\frac{1}{2\pi}(\ln|x|*\psi^{2}).\]
For higher dimensional cases (\(N\geq 3\)), the Green function of the Laplace operator takes a different form \(\Phi(x)=\frac{1}{N(N-2)\omega_{N}}|x|^{2-N}\), where \(\omega_{N}\) denotes the volume of the unit ball in \(\mathbb{R}^{N}\). As a consequence, the Schrodinger-Poisson system in higher dimensions can be viewed as a special case of the Hartree equation, and there have been a number of works on local existence, global existence, blow up in finite time and scattering theory, see [5, 11, 18, 19, 20, 21, 25, 36] and references therein. However, for the two-dimensional case, there seem to be very few results on the well-posedness of the Cauchy problem (1), since the Newtonian potential \(w\) diverges at spatial infinity no matter how fast \(\psi\) decays. So far, we are only aware of two papers [31, 32]. More precisely, Masaki [31] proposed a new approach to deal with such a nonlocal term, which can be decomposed into a sum of the linear logarithmic potential and a good remainder. By using the perturbation method, the global well-posedness for the Cauchy problem (1) with \(f(\psi)=|\psi|^{p-2}\psi\) (\(p>2\)) is established in the smaller Sobolev space \(\mathcal{H}\) given by
\[\mathcal{H}:=\left\{\psi\in H^{1}(\mathbb{R}^{2})\ |\ \int_{\mathbb{R}^{2}}\ln( \sqrt{1+|x|^{2}})\psi^{2}dx<\infty\right\}. \tag{2}\]
For the two dimensional case, we note that the Sobolev embedding guarantees that every power type nonlinearity is energy subcritical. Hence, if we are to identify an energy critical nonlinearity, then it is natural to consider an exponential type one. As far as we know, the well-posedness of the Cauchy problem (1) with critical exponential growth has not been addressed in the existing literature, and this is the first aim of this paper.
Another interesting topic on system (1) is to study the standing waves of the form \(\psi(x,t)=e^{i\lambda t}u(x)\), where \(\lambda\in\mathbb{R}\) and \(u:\mathbb{R}^{2}\to\mathbb{R}.\) Then system (1) is reduced to the system
\[\left\{\begin{array}{ll}-\Delta u+\lambda u-\gamma wu=f(u)&\mbox{ in }\mathbb{R}^{2},\\ -\Delta w=u^{2}&\mbox{ in }\mathbb{R}^{2}.\end{array}\right. \tag{3}\]
With this formal inversion of the second equation in system (3), we obtain the following integro-differential equation:
\[-\Delta u+\lambda u-\gamma(\Phi*|u|^{2})u=f(u),\ \forall x\in\mathbb{R}^{2}. \tag{4}\]
Then at least formally, the energy functional associated with equation (4) is
\[I(u)=\frac{1}{2}\int_{\mathbb{R}^{2}}(|\nabla u|^{2}+\lambda u^{2})dx+\frac{ \gamma}{8\pi}\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln(|x-y|^{2})|u(x)|^{2} |u(y)|^{2}dxdy-\int_{\mathbb{R}^{2}}F(u)dx,\]
where \(F(t)=\int_{0}^{t}f(s)ds\). Obviously, if \(u\) is a critical point of \(I,\) then the pair \((u,\Phi\ast|u|^{2})\) is a weak solution of system (3). However, the energy functional \(I\) is not well-defined on the natural Sobolev space \(H^{1}(\mathbb{R}^{2}),\) since the logarithm term changes sign and is neither bounded from above nor from below. Inspired by [34], Cingolani and Weth [16] developed a variational framework for equation (4) in the smaller Hilbert space \(X\), where
\[X:=\left\{u\in H^{1}(\mathbb{R}^{2})\ |\ \int_{\mathbb{R}^{2}}\ln(1+|x|)u^{2}dx< \infty\right\},\]
endowed with the norm
\[\|u\|_{X}^{2}:=\int_{\mathbb{R}^{2}}(|\nabla u|^{2}+u^{2}(1+\ln(1+|x|)))dx.\]
We note that there are two different ways to deal with equation (4) according to the role of \(\lambda\): \((i)\) the frequency \(\lambda\) is a fixed and assigned parameter;
\((ii)\) the frequency \(\lambda\) is an unknown of the problem.
For case \((i),\) one can see that solutions of equation (4) can be obtained as critical points of the functional \(I\) in \(X.\) This case has attracted much attention in recent years, under various types of nonlinearities \(f\); see, for example, [1, 3, 13, 14, 16, 17] and the references therein.
Alternatively, one can look for solutions to equation (4) with the frequency \(\lambda\) unknown. In this case, the real parameter \(\lambda\) appears as a Lagrange multiplier, and the \(L^{2}\)-norms of solutions are prescribed, i.e. \(\int_{\mathbb{R}^{2}}|u|^{2}dx=c\) for given \(c>0\); such solutions are usually called normalized solutions. This study seems particularly meaningful from the physical point of view, since solutions of system (1) conserve their mass along the time evolution, and physicists are particularly interested in their stability.
Regarding the study of normalized solutions to equation (4), the first contribution was made by Cingolani and Jeanjean [15]. By introducing some new ideas, they obtained several results on nonexistence, existence and multiplicity of normalized solutions for equation (4) with the power nonlinearity \(f(u)=a|u|^{p-2}u\), depending on the assumptions on \(\gamma,a,p\) and \(c.\) Very recently, Alves et al. [2] investigated the case of exponential critical growth on equation (4). We recall that in \(\mathbb{R}^{2},\) the natural growth restriction on the function \(f\) is given by the Trudinger-Moser inequality [33, 35], and we say that a function \(f\) has \(\alpha_{0}\)-critical exponential growth at \(+\infty\) if
\[\lim_{t\rightarrow+\infty}\frac{f(t)}{e^{\alpha t^{2}}-1}=\left\{\begin{array} []{ll}0&\mbox{ for }\alpha>\alpha_{0},\\ +\infty&\mbox{ for }0<\alpha<\alpha_{0}.\end{array}\right.\]
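To illustrate the definition with a simple example (ours, not taken from [2]), the function \(f(t)=t(e^{4\pi t^{2}}-1)\) has \(4\pi\)-critical exponential growth, since, as \(t\to+\infty\),

\[\frac{t(e^{4\pi t^{2}}-1)}{e^{\alpha t^{2}}-1}\sim t\,e^{(4\pi-\alpha)t^{2}}\to\left\{\begin{array}{ll}0&\mbox{ for }\alpha>4\pi,\\ +\infty&\mbox{ for }0<\alpha<4\pi.\end{array}\right.\]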
To make it more precise, we recall below the conditions introduced in [2].
* \((f_{1})\) \(f\in C(\mathbb{R},\mathbb{R}),\)\(f(0)=0\) and has a critical exponential growth with \(\alpha_{0}=4\pi;\)
* \((f_{2})\) \(\lim_{|t|\to 0}\frac{|f(t)|}{|t|^{\tau}}=0\) for some \(\tau>3;\)
* \((f_{3})\) there exists a constant \(\mu>6\) such that \[0<\mu F(t)\leq tf(t)\text{ for all }t\in\mathbb{R}\backslash\{0\};\]
* \((f_{4})\) there exist constants \(p>4\) and \(\theta>0\) such that \[F(t)\geq\theta|t|^{p}\text{ for all }t\in\mathbb{R}.\]
In [2], under conditions \(\left(f_{1}\right)-\left(f_{4}\right)\), they found a normalized solution of mountain-pass type when either \(0<\gamma<\gamma_{0}\) and \(0<c<1\), or \(\gamma>0\) and \(0<c<c_{0}<<1\). Moreover, when \(f\) is odd, i.e. \(f(-t)=-f(t)\), multiple normalized solutions with negative energy levels were obtained by using a genus approach.
In the present paper we are likewise interested in looking for normalized solutions to equation (4) with exponential critical growth. However, in contrast to the study in [2], we mainly focus on the existence of ground state and high-energy normalized solutions, relaxing some of the classical growth assumptions on \(f\). As a result, the corresponding standing waves with prescribed mass of system (1) are obtained. In addition, the orbital stability of the set of ground state standing waves is studied as well. Specifically, for any \(c>0\) given, the problem we consider is the following:
\[\left\{\begin{array}{l}-\Delta u+\lambda u-(\Phi\ast|u|^{2})u=f(u)\quad \text{in }\mathbb{R}^{2},\\ \int_{\mathbb{R}^{2}}|u|^{2}dx=c>0,\end{array}\right.\] ( \[SP_{c}\] )
where \(f\) satisfies conditions \(\left(f_{1}\right),\left(f_{4}\right)\) and
* \((f_{5})\) \(\lim_{t\to 0}\frac{|f(t)|}{|t|}=0\);
* \((f_{6})\) there exists a constant \(\beta>4\) such that \(\frac{tf(t)-2F(t)}{|t|^{\beta}}\) is decreasing on \((-\infty,0)\) and is increasing on \((0,+\infty)\).
It is easily seen that solutions of problem \((SP_{c})\) correspond to critical points of the energy functional \(J:X\rightarrow\mathbb{R}\) given by
\[J(u)=\frac{1}{2}\int_{\mathbb{R}^{2}}|\nabla u|^{2}dx+\frac{1}{4}\int_{ \mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln|x-y|u^{2}(x)u^{2}(y)dxdy-\int_{ \mathbb{R}^{2}}F(u)dx \tag{5}\]
on the constraint
\[S(c):=\left\{u\in X\ |\ \int_{\mathbb{R}^{2}}|u|^{2}dx=c\right\}.\]
It is straightforward that \(J\) is a well-defined and \(C^{1}\) functional on \(S(c)\).
In this work we shall make a more in-depth study of the planar Schrodinger-Poisson system in the exponential critical case. First of all, for the local well-posedness of the Cauchy problem (1), due to the special feature of the nonlocal term, the usual integral equation
\[\psi(t)=e^{it\Delta}\psi_{0}+i\int_{0}^{t}e^{i(t-s)\Delta}\left(f(\psi)-\frac {\gamma}{2\pi}\psi\int_{\mathbb{R}^{2}}\ln|x-y||\psi(y)|^{2}dy\right)ds\]
is not a good choice. By adopting the ideas in [31], we can decompose the nonlocal term into a sum of the linear logarithmic potential and a good remainder. Thus we try to consider the following integral equation
\[\psi(t)=e^{it\mathcal{L}}\psi_{0}+i\int_{0}^{t}e^{i(t-s)\mathcal{L}}\left[f(\psi) -\frac{\gamma}{2\pi}\psi\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y|}{1+|x|} \right)|\psi(y)|^{2}dy\right]ds,\]
where \(\mathcal{L}:=\Delta-m\ln(1+|x|)\) is the new self-adjoint operator. For the purpose of further study, we work in the space \(X,\) not in \(\mathcal{H}\) as in (2), which leads to different definitions of norms and different estimates from those in [31]. In addition, compared with the power nonlinearity case [31], the estimates for the exponentially critical nonlinearity are more delicate. Secondly, it seems that the geometric properties of the energy functional \(J\) have not been described in [2]. One objective of this study is to shed some light on the behavior of \(J.\) As a consequence, we shall study the existence of ground state and high-energy solutions for problem \((SP_{c}).\) Thirdly, we relax some of the classical growth assumptions on \(f\). For example, we introduce condition \((f_{5})\) instead of condition \((f_{2})\) to find the first solution of problem \((SP_{c}),\) although it may not be a ground state. However, if the monotonicity condition \((f_{6})\) is further assumed, then such a solution is a ground state with positive energy. Moreover, as one may observe, condition \((f_{3}),\) which is usually called the Ambrosetti-Rabinowitz condition, was required in [2]. It was used in a technical but essential way in obtaining bounded constrained Palais-Smale sequences. We shall show that, under condition \((f_{6}),\) which is weaker than condition \((f_{3}),\) we manage to extend the previous results on the existence of a mountain-pass solution for problem \((SP_{c}),\) which is the second solution of problem \((SP_{c})\) and is a high-energy solution with positive energy.
### Main results
First of all, we establish conditions for the local well-posedness of the Cauchy problem (1) in the energy space \(X.\)
**Theorem 1.1**: _Assume that \(\|\nabla\psi_{0}\|_{L^{2}}^{2}<1\) and that \(f\) satisfies \((f_{1})\) and the following condition \((f_{7})\): for any \(z_{1},z_{2}\in\mathbb{C}\) and \(\varepsilon>0\), there exists \(C_{\varepsilon}>0\) such that_
\[|f(z_{1})-f(z_{2})|\leq C_{\varepsilon}|z_{1}-z_{2}|\sum_{j=1}^{2}\left(e^{4 \pi(1+\varepsilon)|z_{j}|^{2}}-1\right),\]
_and_
\[|f^{\prime}(z_{1})-f^{\prime}(z_{2})|\leq C_{\varepsilon}|z_{1}-z_{2}|\sum_{j =1}^{2}\left(|z_{j}|+e^{4\pi(1+\varepsilon)|z_{j}|^{2}}-1\right).\]
_Then there exist \(T_{\max}>0\) and a unique solution \(\psi\in C([0,T_{\max}],X)\) for the Cauchy problem (1)._
Next, we consider the following local minimization problem:
\[\gamma_{c}^{\rho}:=\inf_{u\in S(c)\cap\mathcal{B}_{\rho}}J(u), \tag{6}\]
where
\[{\cal B}_{\rho}:=\left\{u\in X\ |\ \int_{\mathbb{R}^{2}}|\nabla u|^{2}dx\leq\rho \right\}\ \mbox{for}\ \rho>0\ \mbox{given}. \tag{7}\]
We are now in a position to state the following result.
**Theorem 1.2**: _Assume that conditions \((f_{1})\) and \((f_{5})\) hold. In addition, we assume that \(F(t)\geq 0\) for \(t>0.\) Then for any \(0<\rho<1,\) there exists \(0<c_{*}=c_{*}(\rho)<1\) such that for \(0<c<c_{*},\) the infimum \(\gamma_{c}^{\rho}\) defined in (6) is achieved by some \(u_{c}\in X,\) which is a weak solution of problem \((SP_{c})\) with some \(\lambda=\lambda_{c}\in\mathbb{R}.\)_
**Definition 1.3**: _We say that \(u_{0}\) is a ground state of problem \((SP_{c})\) on \(S(c)\) if it is a solution to problem \((SP_{c})\) having minimal energy among all the solutions which belong to \(S(c).\) Namely,_
\[(J|_{S(c)})^{\prime}(u_{0})=0\ \mbox{and}\ J(u_{0})=\inf\{J(u)\ |\ (J|_{S(c)})^{ \prime}(u)=0\ \mbox{and}\ u\in S(c)\}.\]
In Theorem 1.2 we are not sure whether the solution \(u_{c}\) is a ground state. However, if we further assume that condition \((f_{6})\) holds, then such a solution is actually a ground state of problem \((SP_{c}).\) We have the following result.
**Theorem 1.4**: _Assume that conditions \((f_{1})\) and \((f_{5})-(f_{6})\) hold. In addition, we assume that \(F(t)\geq 0\) for \(t>0.\) Let \(u_{c}\) be given in Theorem 1.2. Then there exists a constant \(0<\tilde{c}_{*}<c_{*}\) such that for \(0<c<\tilde{c}_{*},\)\(u_{c}\) is a ground state to problem \((SP_{c})\) with some \(\lambda=\lambda_{c}\in\mathbb{R},\) which satisfies \(J(u_{c})=\gamma_{c}^{\rho}>0.\) Furthermore, there holds_
\[\gamma_{c}^{\rho}\to 0\ \mbox{and}\ \int_{\mathbb{R}^{2}}|\nabla u_{c}|^{2}dx \to 0\ \mbox{as}\ c\to 0.\]
By Theorem 1.4, we know that the set of ground states
\[{\cal M}_{c}^{\rho}:=\{u\in S(c)\cap{\cal B}_{\rho}\ |\ J(u)=\gamma_{c}^{\rho}\}\]
is not empty. Then we can consider the stability of the set of ground states.
**Theorem 1.5**: _Under the assumptions of Theorems 1.1 and 1.4, the set of ground states_
\[{\cal M}_{c}^{\rho}:=\{u\in S(c)\cap{\cal B}_{\rho}\ |\ J(u)=\gamma_{c}^{ \rho}\}\neq\emptyset\]
_is stable under the flow corresponding to (1). That is, for any \(\varepsilon>0\), there exists \(\delta>0\) such that for any \(\psi_{0}\in X\) satisfying \(dist_{X}(\psi_{0},{\cal M}_{c}^{\rho})<\delta,\) the solution \(\psi(t,\cdot)\) of (1) with \(\psi(0,\cdot)=\psi_{0}\) satisfies_
\[\sup_{t\in[0,T)}dist_{X}(\psi(t,\cdot),{\cal M}_{c}^{\rho})<\varepsilon,\]
_where \(T\) is the maximal existence time for \(\psi(t,\cdot).\)_
Now we turn to finding the second solution of problem \((SP_{c}).\) In addition to conditions \((f_{1}),(f_{4})\) and \((f_{6}),\) we also need condition \((f_{2}),\) which is obviously stronger than condition \((f_{5}).\) Then we have the following result.
**Theorem 1.6**: _Assume that conditions \((f_{1})-(f_{2}),(f_{4})\) and \((f_{6})\) hold. Then there exists \(0<c^{*}<1\) such that for \(0<c<c^{*},\) there exists \(\theta^{*}=\theta^{*}(c)>0\) such that problem \((SP_{c})\) has a second pair of solutions \((\hat{u}_{c},\hat{\lambda}_{c})\in H^{1}(\mathbb{R}^{2})\times\mathbb{R}\) for any \(\theta>\theta^{*},\) which satisfies_
\[J(\hat{u}_{c})>J(u_{c})=\gamma_{c}^{\rho}>0.\]
_In particular, \(\hat{u}_{c}\) is a high-energy solution to problem \((SP_{c})\) with \(\lambda=\hat{\lambda}_{c}.\)_
**Remark 1.1**: \((i)\) _We easily find some examples of exponential critical nonlinearities satisfying conditions \((f_{1})\), \((f_{5})\) and \((f_{7}),\) such as_
\[f(t)=|t|^{p-2}te^{4\pi|t|^{2}}\text{ for all }t\in\mathbb{R},\]
_where \(p>2.\) In particular, if \(2<p\leq 4,\) such functions do not satisfy condition \((f_{2});\)\((ii)\) We choose a primitive function of \(f\) like \(F(t)=|t|^{p}e^{4\pi|t|^{2}}\) for \(p>4.\) Then we have_
\[f(t)=p|t|^{p-2}te^{4\pi|t|^{2}}+8\pi|t|^{p}te^{4\pi|t|^{2}}.\]
_A direct calculation shows that_
\[\frac{tf(t)-2F(t)}{|t|^{\beta}}=(p-2+8\pi|t|^{2})|t|^{p-\beta}e^{4\pi|t|^{2}},\]
_which implies that condition \((f_{6})\) holds if we take \(\beta=p.\) However, we note that we cannot find some \(\mu>6\) such that_
\[tf(t)-\mu F(t)\geq 0\text{ for all }t\in\mathbb{R}\backslash\{0\},\]
_which indicates that condition \((f_{3})\) is not satisfied._
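For the reader's convenience, here is a short verification (our computation) of part of the claims in \((i)\) for \(f(t)=|t|^{p-2}te^{4\pi|t|^{2}}\): as \(t\to 0\),

\[\frac{|f(t)|}{|t|}=|t|^{p-2}e^{4\pi|t|^{2}}\to 0\ \mbox{ since }p>2,\qquad\frac{|f(t)|}{|t|^{\tau}}=|t|^{p-1-\tau}e^{4\pi|t|^{2}}\not\to 0\ \mbox{ for }\tau\geq p-1,\]

so condition \((f_{5})\) holds, whereas for \(2<p\leq 4\) we have \(p-1\leq 3\), and hence there is no \(\tau>3\) for which condition \((f_{2})\) can hold.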
**Remark 1.2**: _It should be mentioned that Alves et al. [4] studied the existence of a mountain-pass type of normalized solution for a class of Schrodinger equations with exponential critical growth in \(\mathbb{R}^{2}\). They used the standard Ambrosetti-Rabinowitz condition on the nonlinearity \(f,\) i.e. \((f_{3})^{\prime}\) there exists a constant \(\mu>4\) such that_
\[0<\mu F(t)\leq tf(t)\text{ for all }t\in\mathbb{R}\backslash\{0\}.\]
_However, we observe that in the study of planar Schrodinger-Poisson systems, we cannot make the constant \(\mu>4\) if the Ambrosetti-Rabinowitz condition is used. The main reason is that the logarithm term changes sign. Indeed, if we keep using the Ambrosetti-Rabinowitz condition in this direction, then we can assume that \((f_{3})^{\prime\prime}\) for any \(\kappa>1,\) there exists \(\mu>4+\frac{2}{\kappa-1}\) such that_
\[0<\mu F(t)\leq tf(t)\text{ for all }t\in\mathbb{R}\backslash\{0\}.\]
_Obviously it is weaker than condition \((f_{3}).\)_
The paper is organized as follows. After giving some preliminary results in Section 2, we prove Theorem 1.1 in Section 3 and Theorems 1.2, 1.4 and 1.5 in Section 4. Finally, we give the proof of Theorem 1.6 in Section 5.
## 2 Preliminary results
For sake of convenience, we set
\[A(u):=\int_{\mathbb{R}^{2}}|\nabla u|^{2}dx\text{ and }V(u):=\int_{\mathbb{R}^{2}} \int_{\mathbb{R}^{2}}\ln|x-y|u^{2}(x)u^{2}(y)dxdy.\]
Then the functional \(J\) defined in (5) can be reformulated as:
\[J(u)=\frac{1}{2}A(u)+\frac{1}{4}V(u)-\int_{\mathbb{R}^{2}}F(u)dx.\]
In what follows, we recall several important inequalities which will be often used in the paper.
**(1) Hardy-Littlewood-Sobolev inequality ([28]):** Let \(t,r>1\) and \(0<\alpha<N\) with \(1/t+(N-\alpha)/N+1/r=2\). For \(\bar{f}\in L^{t}(\mathbb{R}^{N})\) and \(\bar{h}\in L^{r}(\mathbb{R}^{N})\), there exists a sharp constant \(C(t,N,\alpha,r)\), independent of \(\bar{f}\) and \(\bar{h}\), such that
\[\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\frac{\bar{f}(x)\bar{h}(y)}{|x-y|^{N -\alpha}}dxdy\leq C(t,N,\alpha,r)\|\bar{f}\|_{L^{t}}\|\bar{h}\|_{L^{r}}. \tag{8}\]
**(2) Gagliardo-Nirenberg inequality ([37]):** For every \(N\geq 1\) and \(r\in(2,2^{*})\), here \(2^{*}:=\infty\) for \(N=1,2\) and \(2^{*}:=2N/(N-2)\) for \(N\geq 3\), there exists a sharp constant \(\mathcal{S}_{r}>0\) depending on \(r\) such that
\[\|u\|_{r}\leq\mathcal{S}_{r}^{1/r}\|\nabla u\|_{L^{2}}^{\frac{r-2}{r}}\|u\|_{ L^{2}}^{\frac{2}{r}}, \tag{9}\]
where \(\mathcal{S}_{r}=\frac{r}{2\|U\|_{2}^{r-2}}\) and \(U\) is the ground state solution of the following equation
\[-\Delta u+\frac{2}{r-2}u=\frac{2}{r-2}|u|^{r-2}u.\]
As in the introduction, following [16, 34], we shall work in the Hilbert space
\[X=\left\{u\in H^{1}(\mathbb{R}^{2})\ |\ \|u\|_{*}<\infty\right\},\]
where
\[\|u\|_{*}^{2}:=\int_{\mathbb{R}^{2}}\ln(1+|x|)u^{2}(x)dx,\]
with \(X\) endowed with the norm given by
\[\|u\|_{X}^{2}:=\|u\|_{H^{1}}^{2}+\|u\|_{*}^{2}.\]
Define the symmetric bilinear forms
\[(u,v) \mapsto B_{1}(u,v)=\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln(1+|x-y |)u(x)v(y)dxdy,\] \[(u,v) \mapsto B_{2}(u,v)=\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln\left( 1+\frac{1}{|x-y|}\right)u(x)v(y)dxdy,\] \[(u,v) \mapsto B_{0}(u,v)=B_{1}(u,v)-B_{2}(u,v)=\int_{\mathbb{R}^{2}}\int_{ \mathbb{R}^{2}}\ln|x-y|u(x)v(y)dxdy,\]
and define the associated functional on \(X\)
\[V_{1}(u) = B_{1}(u^{2},u^{2})=\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln(1+ |x-y|)u^{2}(x)u^{2}(y)dxdy,\] \[V_{2}(u) = B_{2}(u^{2},u^{2})=\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln \left(1+\frac{1}{|x-y|}\right)u^{2}(x)u^{2}(y)dxdy.\]
Obviously, one can see that
\[V(u)=V_{1}(u)-V_{2}(u).\]
Note that
\[\ln(1+|x-y|)\leq\ln(1+|x|+|y|)\leq\ln(1+|x|)+\ln(1+|y|)\mbox{ for all }x,y\in \mathbb{R}^{2}.\]
Then we have
\[B_{1}(uv,wz) \leq \int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}(\ln(1+|x|)+\ln(1+|y|)) |u(x)v(x)||w(y)z(y)|dxdy \tag{10}\] \[\leq \|u\|_{*}\|v\|_{*}\|w\|_{L^{2}}\|z\|_{L^{2}}+\|u\|_{L^{2}}\|v\|_{ L^{2}}\|w\|_{*}\|z\|_{*}\]
for all \(u,v,w,z\in L^{2}(\mathbb{R}^{2}).\) Using the fact that \(0\leq\ln(1+t)\leq t\) for \(t\geq 0,\) it follows from the Hardy-Littlewood-Sobolev inequality (8) that for some \(\bar{K}>0,\)
\[|B_{2}(u,v)|\leq\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\frac{1}{|x-y|}u(x) v(y)dxdy\leq\bar{K}\|u\|_{L^{\frac{4}{3}}}\|v\|_{L^{\frac{4}{3}}}. \tag{11}\]
Hence, according to (9) and (11), one can see that for some \(K>0,\)
\[|V_{2}(u)|\leq\bar{K}\|u\|_{L^{\frac{8}{3}}}^{4}\leq KA(u)^{1/2}\|u\|_{L^{2}}^ {3}\mbox{ for all }u\in H^{1}(\mathbb{R}^{2}), \tag{12}\]
and \(V_{2}\) only takes finite values on \(L^{8/3}(\mathbb{R}^{2}).\)
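For the reader's convenience, the constant \(K\) in (12) can be traced as follows (our computation): applying (11) with \(u,v\) replaced by \(u^{2}\) and then the Gagliardo-Nirenberg inequality (9) with \(r=8/3\),

\[|V_{2}(u)|=|B_{2}(u^{2},u^{2})|\leq\bar{K}\|u^{2}\|_{L^{4/3}}^{2}=\bar{K}\|u\|_{L^{8/3}}^{4}\leq\bar{K}\mathcal{S}_{8/3}^{3/2}\|\nabla u\|_{L^{2}}\|u\|_{L^{2}}^{3},\]

so that one may take \(K=\bar{K}\mathcal{S}_{8/3}^{3/2}\).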
Now we introduce some known results from [15, 16], which are important in our work.
**Lemma 2.1**: _([16, Lemma 2.2]) The following statements are true._
_\((i)\) The space \(X\) is compactly embedded in \(L^{r}(\mathbb{R}^{2})\) for all \(2\leq r<\infty;\)_
_\((ii)\) The functionals \(V,V_{1},V_{2}\) and \(J\) are of class \(C^{1}\) on \(X\). Moreover, \(V_{i}^{\prime}(u)[v]=4B_{i}(u^{2},uv)\) for all \(u,v\in X\) and \(i=1,2;\)_
_\((iii)\) \(V_{1}\) is weakly lower semicontinuous on \(H^{1}(\mathbb{R}^{2});\)_
_\((iv)\) \(V_{2}\) is continuous (in fact, continuously differentiable) on \(L^{8/3}(\mathbb{R}^{2}).\)_
**Lemma 2.2**: _([16, Lemma 2.1]) Let \(\{u_{n}\}\) be a sequence in \(L^{2}(\mathbb{R}^{2})\) such that \(u_{n}\to u\in L^{2}(\mathbb{R}^{2})\backslash\{0\}\) pointwise a.e. on \(\mathbb{R}^{2}\). Moreover, let \(\{v_{n}\}\) be a bounded sequence in \(L^{2}(\mathbb{R}^{2})\) such that_
\[\sup_{n\in\mathbb{N}}B_{1}(u_{n}^{2},v_{n}^{2})<\infty.\]
_Then there exists \(n_{0}\in\mathbb{N}\) and \(C>0\) such that \(\|v_{n}\|_{*}<C\) for \(n\geq n_{0}.\) Moreover, if \(B_{1}(u_{n}^{2},v_{n}^{2})\to 0\) and \(\|v_{n}\|_{2}\to 0\) as \(n\to\infty,\) then_
\[\|v_{n}\|_{*}\to 0\mbox{ as }n\to\infty.\]
**Lemma 2.3**: _([16, Lemma 2.6]) Let \(\{u_{n}\},\{v_{n}\}\) and \(\{w_{n}\}\) be bounded sequence in \(X\) such that \(u_{n}\rightharpoonup u\) weakly in \(X\). Then for every \(z\in X\), we have_
\[B_{1}(v_{n}w_{n},z(u_{n}-u))\to 0\mbox{ as }n\to\infty.\]
**Lemma 2.4**: _([15, Lemma 2.6]) Let \(\{u_{n}\}\subset S(c)\) be a sequence such that \(V_{1}(u_{n})\) is bounded. Then there exists a subsequence of \(\{u_{n}\}\), up to translation, converging to \(u\) in \(L^{2}(\mathbb{R}^{2})\). More precisely, for all \(k\geq 1\), there exists \(n_{k}\to\infty\) and \(x_{k}\in\mathbb{R}^{2}\) such that \(u_{n_{k}}(\cdot-x_{k})\to u\) strongly in \(L^{2}(\mathbb{R}^{2})\). In addition, if the sequence \(\{u_{n}\}\) consists of radial functions, then necessarily the sequence \(x_{k}\in\mathbb{R}^{2}\) is bounded._
We recall the well-known Moser-Trudinger inequality as follows.
**Lemma 2.5**: _([8]) If \(\alpha>0\) and \(u\in H^{1}(\mathbb{R}^{2})\), then we have_
\[\int_{\mathbb{R}^{2}}(e^{\alpha u^{2}}-1)dx<+\infty.\]
_Moreover, if \(\|\nabla u\|_{2}^{2}\leq 1\), \(\|u\|_{2}\leq M<+\infty\) and \(0<\alpha<4\pi\), then there exists a constant \(L(M,\alpha)>0,\) depending only on \(M\) and \(\alpha\), such that_
\[\int_{\mathbb{R}^{2}}(e^{\alpha u^{2}}-1)dx\leq L(M,\alpha).\]
**Lemma 2.6**: _Assume that conditions \((f_{1})-(f_{2})\) hold. Let \(\{u_{n}\}\) be a sequence in \(S(c)\) satisfying \(\limsup_{n\to\infty}\|\nabla u_{n}\|_{2}^{2}<1-c.\) If \(u_{n}\rightharpoonup u\) in \(X\) and \(u_{n}(x)\to u(x)\) a.e. in \(\mathbb{R}^{2}\), then there hold_
\[F(u_{n})\to F(u)\quad\mbox{and}\quad f(u_{n})u_{n}\to f(u)u\quad\mbox{in }L^{1}(\mathbb{R}^{2}).\]
**Proof.** The proof is similar to that of [4, Corollary 3.2], so we omit it here.
**Lemma 2.7**: _([2, Lemma 3.4]) Let \(\{u_{n}\}\) be a sequence in \(S(c)\) satisfying \(\limsup_{n\to\infty}\|\nabla u_{n}\|_{2}^{2}<1-c\) and \(J(u_{n})\leq d\) for some \(d\in\mathbb{R}\) and for all \(n\in\mathbb{N}.\) Then, up to a subsequence, \(\{u_{n}\}\) is bounded in \(X.\)_
**Lemma 2.8**: _(The Pohozaev identity) Any weak solution \(u\in X\) to the equation_
\[-\Delta u+\lambda u+(\ln|\cdot|*u^{2})u=f(u) \tag{13}\]
_satisfies the Pohozaev identity_
\[\lambda\|u\|_{2}^{2}+\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln(|x-y|)|u(x )|^{2}|u(y)|^{2}dxdy+\frac{1}{4}\|u\|_{2}^{4}-\int_{\mathbb{R}^{2}}2F(u)dx=0.\]
_In particular, it satisfies_
\[Q(u):=A(u)-\frac{1}{4}\|u\|_{2}^{4}+\int_{\mathbb{R}^{2}}(2F(u)-f(u)u)dx=0.\]
**Proof.** As in the arguments of [15, Lemma 2.7] and [26, Lemma 2.1], the proof can be done by multiplying Eq. (13) by \(x\cdot\nabla u\) and integrating by parts, so we omit it here.
**Lemma 2.9**: _Assume that conditions \((f_{1})\) and \((f_{5})-(f_{6})\) hold. Then we have_
\[g(t,v):=t^{-2}F(tv)-F(v)+\frac{1-t^{p-2}}{p-2}[f(v)v-2F(v)]\geq 0,\mbox{ for }t>0 \mbox{ and }v\in\mathbb{R}. \tag{14}\]
_Moreover, there holds_
\[\frac{F(t)}{|t|^{p-1}t}\mbox{ is nondecreasing on }(-\infty,0)\cup(0,+\infty). \tag{15}\]
**Proof.** For any \(t>0\) and \(v\in\mathbb{R}\), by condition \((f_{1})\), a direct calculation shows that
\[g^{\prime}(t,v)|_{t} = -2t^{-3}F(tv)+t^{-2}f(tv)v-t^{p-3}(f(v)v-2F(v))\] \[= t^{p-3}|v|^{p}\left(\frac{f(tv)tv-2F(tv)}{|tv|^{p}}-\frac{f(v)v -2F(v)}{|v|^{p}}\right).\]
Using this, together with condition \((f_{6})\), yields that \(g^{\prime}(t,v)|_{t}\geq 0\) if \(t\geq 1\) and \(g^{\prime}(t,v)|_{t}<0\) if \(0<t<1\). This implies that \(g(t,v)\geq g(1,v)=0\) for all \(t>0\) and \(v\in\mathbb{R}\). So (14) holds. Furthermore, by (14) and condition \((f_{5})\), one has
\[\lim_{t\to 0}g(t,v)=\frac{1}{p-2}[f(v)v-pF(v)]\geq 0,\mbox{ }\forall v\in \mathbb{R},\]
which leads to
\[\left(\frac{F(t)}{|t|^{p-1}t}\right)^{\prime}=\frac{1}{|t|^{p+1}}[f(t)t-pF(t)] \geq 0,\mbox{ }\forall t\in\mathbb{R}.\]
This shows that (15) holds. We complete the proof.
Define a function \(\Phi:\mathbb{R}\rightarrow\mathbb{R}\) given by
\[\Phi(t)=\frac{t^{2}}{2}A(u)-t^{-2}\int_{\mathbb{R}^{2}}F(tu)dx. \tag{16}\]
Then we have the following lemma.
**Lemma 2.10**: _Assume that conditions \((f_{1})\) and \((f_{5})-(f_{6})\) hold. Then for any \(u\in H^{1}(\mathbb{R}^{2}),\) we have_
\[J(u)-\Phi(t)\geq\frac{1-t^{p-2}}{p-2}Q(u)+\frac{h(t)}{2(p-2)}A(u)-\frac{K}{4} \|u\|_{L^{2}}^{3}A(u)^{1/2}+\frac{1-t^{p-2}}{4(p-2)}\|u\|_{L^{2}}^{4},\]
_for \(t\geq 0\), where_
\[h(t):=2t^{p-2}-(p-2)t^{2}+p-4\geq 0\mbox{ for }t\geq 0.\]
_In particular, there holds_
\[J(u)\geq\frac{1}{p-2}Q(u)+\frac{p-4}{2(p-2)}A(u)-\frac{K}{4}\|u\|_{L^{2}}^{3} A(u)^{1/2}+\frac{1}{4(p-2)}\|u\|_{L^{2}}^{4}.\]
**Proof.** By (12), (16) and Lemma 2.9, for all \(u\in H^{1}(\mathbb{R}^{2}),\) we have
\[J(u)-\Phi(t) = \frac{1-t^{2}}{2}A(u)+\frac{1}{4}V(u)+\int_{\mathbb{R}^{2}}[t^{-2} F(tu)-F(u)]dx \tag{17}\] \[= \frac{1-t^{p-2}}{p-2}Q(u)+\left[\frac{1-t^{2}}{2}-\frac{1-t^{p-2} }{p-2}\right]A(u)\] \[+\frac{1}{4}V(u)+\frac{1-t^{p-2}}{4(p-2)}\|u\|_{L^{2}}^{4}\] \[+\int_{\mathbb{R}^{2}}\left[t^{-2}F(tu)-F(u)+\frac{1-t^{p-2}}{p-2 }\left(f(u)u-2F(u)\right)\right]dx\] \[\geq \frac{1-t^{p-2}}{p-2}Q(u)+\frac{2t^{p-2}-(p-2)t^{2}+p-4}{2(p-2)}A(u)\] \[-\frac{K}{4}\|u\|_{L^{2}}^{3}A(u)^{1/2}+\frac{1-t^{p-2}}{4(p-2)} \|u\|_{L^{2}}^{4}\text{ for }t\geq 0.\]
Set
\[h(t):=2t^{p-2}-(p-2)t^{2}+p-4\text{ for }t\geq 0.\]
A direct calculation shows that \(h^{\prime}(t)=2(p-2)t(t^{p-4}-1).\) Since \(p>4,\) we have \(h^{\prime}(t)\geq 0\) if \(t\geq 1\) and \(h^{\prime}(t)\leq 0\) if \(0\leq t<1,\) which implies that \(h(t)\geq h(1)=0\) for \(t\geq 0.\)
Letting \(t\to 0\) in (17), by condition \((f_{5}),\) we deduce that
\[J(u)\geq\frac{1}{p-2}Q(u)+\frac{p-4}{2(p-2)}A(u)-\frac{K}{4}\|u\|_{L^{2}}^{3}A (u)^{1/2}+\frac{1}{4(p-2)}\|u\|_{L^{2}}^{4}.\]
We complete the proof.
**Remark 2.1**: _Clearly, if condition \((f_{5})\) is replaced by condition \((f_{2}),\) then the above lemma still holds._
## 3 The local well-posedness for the Cauchy problem
We consider the local well-posedness for the Cauchy problem (1). Following the ideas in [31], we can decompose the nonlinearity as
\[\gamma\omega\psi=-\frac{\gamma}{2\pi}\|\psi\|_{2}^{2}(\ln(1+|x|))\psi-\frac{ \gamma}{2\pi}\psi\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y|}{1+|x|}\right)| \psi(y)|^{2}dy.\]
Since the mass is conserved, \(\|\psi(t)\|_{2}=\|\psi_{0}\|_{2}\), we set \(m:=\frac{\gamma}{2\pi}\|\psi_{0}\|_{2}^{2}>0,\) and we obtain the following equivalent equation
\[\left\{\begin{array}{l}i\partial_{t}\psi+(\Delta-m\ln(1+|x|))\psi-\frac{ \gamma}{2\pi}\psi\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y|}{1+|x|}\right)|\psi (y)|^{2}dy+f(\psi)=0,\\ \psi(0,x)=\psi_{0}(x),\end{array}\right. \tag{18}\]
where the operator \(\mathcal{L}:=\Delta-m\ln(1+|x|)\) is a self-adjoint operator on \(C_{0}^{\infty}(\mathbb{R}^{2})\). Since the potential \(\ln(1+|x|)\) is subquadratic, for \(t\in[-T,T],\) we have
\[\left\|e^{it\mathcal{L}}\varphi\right\|_{L^{\infty}}\lesssim|t|^{-1}\left\| \varphi\right\|_{L^{1}}.\]
**Definition 3.1**: _The pair \((q,r)\) is referred to as Strichartz admissible if_
\[\frac{2}{q}+\frac{2}{r}=1,\quad\text{for }q,r\in[2,\infty],\text{ and }(q,r,2)\neq(2,\infty,2).\]
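We note in passing (for later use) that the pairs \((q,r)=(\infty,2)\) and \((q,r)=(4,4)\), which are the ones appearing in the estimates below, are admissible, since \(\frac{2}{\infty}+\frac{2}{2}=1\) and \(\frac{2}{4}+\frac{2}{4}=1\); the dual exponents of the pair \((4,4)\) are \((4/3,4/3)\).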
**Lemma 3.2** (Strichartz estimates): _For any \(T>0\), the following properties hold: \((i)\) Let \(\varphi\in L^{2}(\mathbb{R}^{2})\). For any admissible pair \((q,r)\), we have_
\[\left\|e^{it\mathcal{L}}\varphi\right\|_{L^{q}((-T,T);L^{r})}\lesssim\left\| \varphi\right\|_{L^{2}}.\]
\((ii)\) _Let \(I\subset(-T,T)\) be an interval and \(t_{0}\in\overline{I}\). If \(F\in L^{\tilde{q}^{\prime}}(I;L^{\tilde{r}^{\prime}})\), then for any admissible pairs \((q,r)\) and \((\tilde{q},\tilde{r})\), we have_
\[\left\|\int_{t_{0}}^{t}e^{i(t-s)\mathcal{L}}F(s)ds\right\|_{L^{q}(I;L^{r})} \lesssim\left\|F\right\|_{L^{\tilde{q}^{\prime}}(I;L^{\tilde{r}^{\prime}})}.\]
**Lemma 3.3**: _([31]) Let \(W\) be an arbitrary weight function such that \(\nabla W,\Delta W\in L^{\infty}(\mathbb{R}^{2})\). Then for all \(T>0\) and admissible pair \((q,r)\), we have the following estimates: \((i)\)\(\left\|[\nabla,e^{it\mathcal{L}}]\varphi\right\|_{L^{q}((-T,T);L^{r})}\lesssim \left|T\right|\left\|\varphi\right\|_{L^{2}}\); \((ii)\)\(\left\|[W,e^{it\mathcal{L}}]\varphi\right\|_{L^{q}((-T,T);L^{r})}\lesssim \left|T\right|\left\|(1+\nabla)\varphi\right\|_{L^{2}}\)._
**Lemma 3.4**: _([31]) Let_
\[K(x,y)=\frac{\ln(|x-y|)-\ln(1+|x|)}{1+\ln(1+|y|)}\]
_for \(x,y\in\mathbb{R}^{2}\). For any \(p\in[1,\infty)\) and \(\varepsilon>0\) there exist a function \(W(x,y)\geq 0\) with \(\|W\|_{L^{\infty}_{y}L^{p}_{x}}\leq\varepsilon\) and a constant \(C\) such that_
\[|K(x,y)|\leq C+W(x,y)\]
_for all \((x,y)\in\mathbb{R}^{2+2}\)._
We recall the following logarithmic inequality; for more details of its proof, we refer to [9, 22, 23].
**Lemma 3.5**: _Let \(\beta\in(0,1)\). For any \(\delta>\frac{1}{2\pi\beta}\) and any \(0<\zeta\leq 1\), there exists a constant \(C_{\delta}>0\) such that, for any function \(u\in H^{1}(\mathbb{R}^{2})\cap C^{\beta}(\mathbb{R}^{2})\), we have_
\[\|u\|_{L^{\infty}}^{2}\leq\delta\|u\|_{\zeta}^{2}\log\left(C_{\delta}+\frac{8^{\beta}\|u\|_{C^{\beta}}}{\zeta^{\beta}\|u\|_{\zeta}}\right),\]
_where_
\[\|u\|_{\zeta}^{2}=\|\nabla u\|_{L^{2}}^{2}+\zeta^{2}\|u\|_{L^{2}}^{2},\]
_and \(C^{\beta}\) denotes the space of \(\beta\)-Holder continuous functions endowed with the norm_
\[\|u\|_{C^{\beta}}=\|u\|_{L^{\infty}}+\sup_{x\neq y}\frac{|u(x)-u(y)|}{|x-y|^{ \beta}}.\]
**We are ready to prove Theorem 1.1:** Define a Banach space
\[\mathcal{H}_{R}:=\{\psi\in L^{\infty}_{t}([0,T);X)\ |\ \|\psi\|_{\mathcal{H}}\leq R\}\]
with norm
\[\|\psi\|_{\mathcal{H}}:=\|\psi\|_{L^{\infty}_{t}X}+\|\psi\|_{L^{4}_{t}W^{1,4}}+\left\|\sqrt{\ln(1+|x|)}\psi\right\|_{L^{4}_{t}L^{4}}.\]
We try to solve the following integral equation
\[\psi(t)=e^{it\mathcal{L}}\psi_{0}+i\int_{0}^{t}e^{i(t-s)\mathcal{L}}\left[f( \psi)-\frac{\gamma}{2\pi}\psi\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y|}{1+|x|} \right)|\psi(y)|^{2}dy\right]ds.\]
Let
\[\Psi(\psi(t))=e^{it\mathcal{L}}\psi_{0}+i\int_{0}^{t}e^{i(t-s)\mathcal{L}} \left[f(\psi)-\frac{\gamma\psi}{2\pi}\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y| }{1+|x|}\right)|\psi(y)|^{2}dy\right]ds=\Psi_{1}(\psi(t))+\Psi_{2}(\psi(t)),\]
where
\[\Psi_{1}(\psi(t)):=\frac{1}{2}e^{it\mathcal{L}}\psi_{0}-\frac{i\gamma}{2\pi} \int_{0}^{t}e^{i(t-s)\mathcal{L}}\psi\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y| }{1+|x|}\right)|\psi(y)|^{2}dyds,\]
and
\[\Psi_{2}(\psi(t)):=\frac{1}{2}e^{it\mathcal{L}}\psi_{0}+i\int_{0}^{t}e^{i(t-s )\mathcal{L}}f(\psi)ds.\]
By Lemma 3.4, there exist a nonnegative function \(W\in L^{\infty}_{y}L^{\frac{4}{3}}_{x}\) and a positive constant \(C\) such that
\[|K(x,y)|\leq C+W(x,y).\]
Then, writing
\[P\psi:=\int K(x,y)(1+\ln(1+|y|))|\psi(y)|^{2}\psi(x)dy,\]
we have
\[\|P\psi\|_{L^{2}}\lesssim(\|\psi\|_{L^{2}}+\|\psi\|_{L^{4}})\|\sqrt{1+\ln(1+| x|)}\psi\|_{L^{2}}^{2}.\]
It follows that
\[\|P\psi\|_{L^{1}_{t}L^{2}}\lesssim(T\|\psi\|_{L^{\infty}_{t}L^{2}}+T^{\frac{3 }{4}}\|\psi\|_{L^{4}_{t}L^{4}})\|\sqrt{1+\ln(1+|x|)}\psi\|_{L^{\infty}_{t}L^{2 }}^{2}.\]
By Lemma 3.2, we obtain
\[\|\Psi_{1}(\psi)\|_{L^{\infty}_{t}L^{2}}+\|\Psi_{1}(\psi)\|_{L^{4}_{t}L^{4}} \lesssim\|\psi_{0}\|_{L^{2}}+(T+T^{\frac{3}{4}})\|\psi\|_{\mathcal{H}}^{3}.\]
Next, we estimate \(\nabla\Psi_{1}(\psi)\). Note that
\[\nabla\Psi_{1}(\psi) = e^{it\mathcal{L}}\nabla\psi_{0}-\frac{i\gamma}{2\pi}\int_{0}^{t}e ^{i(t-s)\mathcal{L}}\nabla\left(\psi\int_{\mathbb{R}^{2}}\ln\left(\frac{|x-y| }{1+|x|}\right)|\psi(y)|^{2}dy\right)ds\] \[+[\nabla,e^{it\mathcal{L}}]\psi_{0}-\frac{i\gamma}{2\pi}\int_{0}^ {t}[\nabla,e^{i(t-s)\mathcal{L}}]\left(\psi\int_{\mathbb{R}^{2}}\ln\left( \frac{|x-y|}{1+|x|}\right)|\psi(y)|^{2}dy\right)ds.\]
We obtain
\[\int_{0}^{t}\left\|[\nabla,e^{i(t-s)\mathcal{L}}]P(\psi)\right\|_{L^{2}}ds\leq \int_{0}^{t}(t-s)\left\|P(\psi)\right\|_{L^{2}}ds\leq|t|\left\|P(\psi)\right\|_{ L^{1}_{t}L^{2}}\]
and
\[\|P\nabla\psi\|_{L^{1}_{t}L^{2}}\lesssim(T\|\nabla\psi\|_{L^{\infty}_{t}L^{2}}+ T^{\frac{3}{4}}\|\nabla\psi\|_{L^{4}_{t}L^{4}})\|\sqrt{1+\ln(1+|x|)}\psi\|_{L^{ \infty}_{t}L^{2}}^{2}.\]
Since
\[(\nabla P)\psi=\left(\int_{\mathbb{R}^{2}}\left(\frac{1}{|x-y|}-\frac{1}{1+|x| }\right)|\psi(y)|^{2}dy\right)\psi(x),\]
it follows from Hardy-Littlewood-Sobolev and Sobolev inequalities that
\[\|(\nabla P)\psi\|_{L^{2}} \lesssim \left\|(|x|^{-1}*|\psi|^{2})+(1+|x|)^{-1}\|\psi\|_{L^{2}}^{2} \right\|_{L^{4}}\|\psi\|_{L^{4}}\] \[\lesssim \left(\|\psi\|_{L^{\frac{8}{3}}}^{2}+\|\psi\|_{L^{2}}^{2}\right) \|\psi\|_{L^{4}}\] \[\lesssim \left(\|\nabla\psi\|_{L^{2}}^{2}+\|\psi\|_{L^{2}}^{2}\right)\| \psi\|_{L^{4}}.\]
Thus we have
\[\|(\nabla P)\psi\|_{L^{1}_{t}L^{2}}\lesssim T^{\frac{3}{4}}\left(\|\nabla\psi \|_{L^{\infty}_{t}L^{2}}^{2}+\|\psi\|_{L^{\infty}_{t}L^{2}}^{2}\right)\|\psi \|_{L^{4}_{t}L^{4}}.\]
By Lemma 3.2, we deduce that
\[\|\nabla\Psi_{1}(\psi)\|_{L^{\infty}_{t}L^{2}}+\|\nabla\Psi_{1}\|_{L^{4}_{t}L^ {4}}\lesssim\|\nabla\psi_{0}\|_{X}+(T+T^{\frac{3}{4}})\|\psi\|_{\mathcal{H}}^ {3}.\]
Let us give the estimate of \(\sqrt{\ln(1+|x|)}\Psi_{1}(\psi)\). It holds
\[\sqrt{1+\ln(1+|x|)}\Psi_{1}(\psi)=e^{it\mathcal{L}}\sqrt{1+\ln(1+|x|)}\psi_{0 }+\frac{i\gamma}{2\pi}\int_{0}^{t}e^{i(t-s)\mathcal{L}}\sqrt{1+\ln(1+|x|)}P \psi ds+M_{1},\]
where
\[M_{1}:=[\sqrt{1+\ln(1+|x|)},e^{it\mathcal{L}}]\psi_{0}+\frac{i\gamma}{2\pi} \int_{0}^{t}[\sqrt{1+\ln(1+|x|)},e^{i(t-s)\mathcal{L}}]P\psi ds.\]
Letting \(W=\sqrt{1+\ln(1+|x|)}\), we have
\[\|M_{1}\|_{L^{\infty}_{t}L^{2}}+\|M_{1}\|_{L^{4}_{t}L^{4}} \lesssim T\|\psi_{0}\|_{X}+T\|(1+\nabla)(P\psi)\|_{L^{1}_{t}L^{2}}\] \[\lesssim T\|\psi_{0}\|_{X}+T(T+T^{\frac{3}{4}})\|\psi\|_{\mathcal{H}}^{3},\]
and
\[\|P(W\psi)\|_{L^{1}_{t}L^{2}} \lesssim (T\|W\psi\|_{L^{\infty}_{t}L^{2}}+T^{\frac{3}{4}}\|W\psi\|_{L^{ 4}_{t}L^{4}})\|W\psi\|_{L^{\infty}_{t}L^{2}}\] \[\lesssim (T+T^{\frac{3}{4}})\|\psi\|_{\mathcal{H}}^{3}.\]
Finally, we estimate \(\Psi_{2}(\psi)\). By Lemma 3.2, we can deduce that
\[\|\Psi_{2}(\psi)\|_{L^{\infty}_{t}H^{1}}\lesssim\|\psi_{0}\|_{X}+\|f(\psi)\|_{ L^{1}_{t}H^{1}}.\]
By the Holder inequality, for any \(\varepsilon>0\), we have
\[\left\|e^{4\pi(1+\varepsilon)|\psi|^{2}}-1\right\|_{L^{\frac{4}{3}}_{t}L^{4}}\lesssim\left\|e^{3\pi(1+\varepsilon)\|\psi\|^{2}_{L^{\infty}}}\right\|_{L^{\frac{4}{3}}_{t}}\left\|e^{4\pi(1+\varepsilon)|\psi|^{2}}-1\right\|^{\frac{1}{4}}_{L^{\infty}_{t}L^{1}}.\]
Let \(\|\nabla\psi\|^{2}_{L^{\infty}_{t}L^{2}}<1\) and take \(\varepsilon>0\) small such that
\[(1+\varepsilon)\|\nabla\psi\|_{L^{\infty}_{t}L^{2}}<1.\]
By using Moser-Trudinger inequality, we have
\[\int_{\mathbb{R}^{2}}(e^{4\pi(1+\varepsilon)|\psi|^{2}}-1)dx\lesssim\int_{\mathbb{R}^{2}}\left(e^{4\pi(1+\varepsilon)\|\nabla\psi\|^{2}_{L^{\infty}_{t}L^{2}}\left(\frac{|\psi|}{\|\nabla\psi\|_{L^{2}}}\right)^{2}}-1\right)dx\lesssim\|\psi\|^{2}_{L^{2}}\lesssim 1.\]
By Lemma 3.5, for any \(\delta>\frac{1}{\pi}\) and \(0<\zeta\leq 1\), we get
\[e^{4\pi(1+\varepsilon)\|\psi\|^{2}_{L^{\infty}}}\leq\left(C_{\delta}+2\sqrt{\frac{2}{\zeta}}\frac{\|\psi\|_{C^{\frac{1}{2}}}}{\|\psi\|_{\zeta}}\right)^{\delta 4\pi(1+\varepsilon)\|\psi\|^{2}_{\zeta}}.\]
Since
\[\|\psi\|^{2}_{\zeta}=\zeta^{2}\|\psi\|^{2}_{L^{2}}+\|\nabla\psi\|^{2}_{L^{2}}< 1+\zeta^{2}\|\psi\|^{2}_{L^{\infty}_{t}H^{1}},\]
we may take \(0<\zeta,\varepsilon\) close to zero, and \(0<\alpha<4\) close to \(4\), such that \(4\pi(1+\varepsilon)\|\psi\|^{2}_{\zeta}<4\pi\). Thus, for \(\delta>\frac{1}{\pi}\) close to \(\frac{1}{\pi}\),
\[e^{4\pi(1+\varepsilon)\|\psi\|^{2}_{L^{\infty}}}\lesssim(1+\|\psi\|_{C^{\frac{1}{2}}})^{\alpha}\lesssim 1+\|\psi\|^{\alpha}_{W^{1,4}}.\]
So we have
\[\left\|e^{4\pi(1+\varepsilon)|\psi|^{2}}-1\right\|_{L^{\frac{4}{3}}_{t}L^{4}} \lesssim \left\|e^{3\pi(1+\varepsilon)\|\psi\|^{2}_{L^{\infty}}}\right\|_{L^{\frac{4}{3}}_{t}}\left\|e^{4\pi(1+\varepsilon)|\psi|^{2}}-1\right\|^{\frac{1}{4}}_{L^{\infty}_{t}L^{1}}\] \[\lesssim \left\|e^{3\pi(1+\varepsilon)\|\psi\|^{2}_{L^{\infty}}}\right\|_{L^{\frac{4}{3}}_{t}}\] \[\lesssim \left\|1+\|\psi\|^{\alpha}_{W^{1,4}}\right\|^{\frac{3}{4}}_{L^{1}_{t}}\] \[\lesssim T^{\frac{3}{4}}+T^{\frac{3}{4}(1-\frac{\alpha}{4})}\|\psi\|^{\frac{3\alpha}{4}}_{L^{4}_{t}W^{1,4}}.\]
By condition \((f_{7})\), Holder inequality and above estimates, we have
\[\|f(\psi)\|_{L^{1}_{t}L^{2}} \lesssim \left\|\psi(e^{4\pi(1+\varepsilon)|\psi|^{2}}-1)\right\|_{L^{1}_{t}L^{2}}\] \[\lesssim \|\psi\|_{L^{4}_{t}L^{4}}\left\|e^{4\pi(1+\varepsilon)|\psi|^{2}}-1\right\|_{L^{\frac{4}{3}}_{t}L^{4}}\] \[\lesssim \|\psi\|_{\mathcal{H}}\left(T^{\frac{3}{4}}+\|\psi\|^{\frac{3}{4}\alpha}_{\mathcal{H}}T^{\frac{3}{4}(1-\frac{\alpha}{4})}\right),\]
and
\[\|\nabla f(\psi)\|_{L^{1}_{t}L^{2}_{x}} \lesssim \left\|\nabla\psi(|\psi|+e^{4\pi(1+\varepsilon)|\psi|^{2}}-1) \psi\right\|_{L^{1}_{t}L^{2}}\] \[\lesssim \left\|\nabla\psi|\psi|^{2}\right\|_{L^{1}_{t}L^{2}}+\left\| \nabla\psi(e^{4\pi(1+\varepsilon)|\psi|^{2}}-1)\psi\right\|_{L^{1}_{t}L^{2}}\] \[\lesssim \|\psi\|_{L^{\infty}_{t}H^{1}}\left\|\nabla\psi\right\|_{L^{4}_{ t}L^{4}}\left(\|\psi\|_{L^{\infty}_{t}H^{1}}\,T^{\frac{3}{4}}+\left\|e^{4\pi(1+ \varepsilon)|\psi|^{2}}-1\right\|_{L^{\frac{4}{3}}_{t}L^{4+\varepsilon}}\right)\] \[\lesssim \|\psi\|^{2}_{\mathcal{H}}\left(\|\psi\|_{\mathcal{H}}\,T^{\frac{ 3}{4}}+T^{\frac{3}{4}}+\|\psi\|^{\frac{3}{4}\alpha}_{\mathcal{H}}\,T^{\frac{3}{ 4}(1-\frac{\alpha}{4})}\right).\]
Then we get
\[\|\Psi_{2}(\psi)\|_{L^{\infty}_{t}H^{1}}\lesssim\|\psi_{0}\|_{X}+\|\psi\|_{\cal H }\left(T^{\frac{3}{4}}+\|\psi\|_{\cal H}^{\frac{3}{4}\alpha}\,T^{\frac{3}{4}(1- \frac{\alpha}{4})}\right)+\|\psi\|_{\cal H}^{2}\left(\|\psi\|_{\cal H}\,T^{\frac {3}{4}}+T^{\frac{3}{4}}+\|\psi\|_{\cal H}^{\frac{3}{4}\alpha}\,T^{\frac{3}{4}(1 -\frac{\alpha}{4})}\right).\]
Let us estimate the term \(\sqrt{\ln(1+|x|)}\Psi_{2}(\psi)\). It holds
\[\sqrt{1+\ln(1+|x|)}\Psi_{2}(\psi)=e^{it{\cal L}}\sqrt{1+\ln(1+|x|)}\psi_{0}+i \int_{0}^{t}e^{i(t-s){\cal L}}\sqrt{1+\ln(1+|x|)}f(\psi)ds+M_{2},\]
where
\[M_{2}:=[\sqrt{1+\ln(1+|x|)},e^{it{\cal L}}]\psi_{0}+i\int_{0}^{t}[\sqrt{1+\ln( 1+|x|)},e^{i(t-s){\cal L}}]f(\psi)ds.\]
Letting \(W=\sqrt{1+\ln(1+|x|)}\), we have
\[\|M_{2}\|_{L^{\infty}_{t}L^{2}}+\|M_{2}\|_{L^{4}_{t}L^{4}} \lesssim T\|\psi_{0}\|_{X}+T\|(1+\nabla)(f(\psi))\|_{L^{1}_{t}L^{2}}\] \[\lesssim T\left\|\psi\right\|_{\cal H}\left(T^{\frac{3}{4}}+\|\psi\|_{ \cal H}^{\frac{3}{4}\alpha}\,T^{\frac{3}{4}(1-\frac{\alpha}{4})}\right)\] \[+\left\|\psi\right\|_{\cal H}^{2}\left(\|\psi\|_{\cal H}\,T^{ \frac{3}{4}}+T^{\frac{3}{4}}+\|\psi\|_{\cal H}^{\frac{3}{4}\alpha}\,T^{\frac{3 }{4}(1-\frac{\alpha}{4})}\right),\]
and
\[\|f(\psi)W\|_{L^{1}_{t}L^{2}} \lesssim \left\|\psi\sqrt{1+\ln(1+|x|)}(e^{4\pi(1+\varepsilon)|\psi|^{2}} -1)\right\|_{L^{1}_{t}L^{2}}\] \[\lesssim \left\|\psi\sqrt{1+\ln(1+|x|)}\right\|_{L^{4}_{t}L^{4}}\left\|e^{ 4\pi(1+\varepsilon)|\psi|^{2}}-1\right\|_{L^{\frac{4}{3}}_{t}L^{4}}\] \[\lesssim \|\psi\|_{\cal H}\left(T^{\frac{3}{4}}+\|\psi\|_{\cal H}^{\frac{ 3}{4}\alpha}\,T^{\frac{3}{4}(1-\frac{\alpha}{4})}\right).\]
We can conclude that
\[\|\Psi(\psi)\|_{\cal H} \lesssim \|\psi_{0}\|_{X}+(T+T^{\frac{3}{4}})\|\psi\|_{\cal H}^{3}+\|\psi \|_{\cal H}\left(T^{\frac{3}{4}}+\|\psi\|_{\cal H}^{\frac{3}{4}\alpha}\,T^{ \frac{3}{4}(1-\frac{\alpha}{4})}\right)\] \[+\left\|\psi\right\|_{\cal H}^{2}\left(\|\psi\|_{\cal H}\,T^{ \frac{3}{4}}+T^{\frac{3}{4}}+\|\psi\|_{\cal H}^{\frac{3}{4}\alpha}\,T^{\frac{3 }{4}(1-\frac{\alpha}{4})}\right).\]
Thus, if we take \(R\geq 2\|\psi_{0}\|_{X}\), then there exists \(T>0\) such that \(\Psi\) maps \({\cal H}_{R}\) into itself. A similar argument shows that \(\Psi\) is a contraction on \({\cal H}_{R}\), and hence it has a unique fixed point in this space.
## 4 The ground state standing waves
**Lemma 4.1**: _For any \(\rho>0,\) there holds \(S(c)\cap{\cal B}_{\rho}\neq\emptyset\) if \(0<c<\rho,\) where \({\cal B}_{\rho}\) is as in (7)._
**Proof.** Let \(\phi(x)=e^{-\frac{1}{2}|x|^{2}}.\) Then a direct calculation shows that
\[\|\nabla\phi\|_{L^{2}}^{2}=\|\phi\|_{L^{2}}^{2}=\pi.\]
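For the reader's convenience, the computation is as follows: since \(\nabla\phi(x)=-x\,e^{-\frac{1}{2}|x|^{2}}\), passing to polar coordinates gives

\[\|\phi\|_{L^{2}}^{2}=\int_{\mathbb{R}^{2}}e^{-|x|^{2}}dx=2\pi\int_{0}^{\infty}re^{-r^{2}}dr=\pi,\qquad\|\nabla\phi\|_{L^{2}}^{2}=\int_{\mathbb{R}^{2}}|x|^{2}e^{-|x|^{2}}dx=2\pi\int_{0}^{\infty}r^{3}e^{-r^{2}}dr=\pi.\]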
Then for \(0<c<\rho,\) we have \(\bar{\phi}:=\sqrt{\frac{c}{\pi}}\phi\in S(c)\cap{\cal B}_{\rho}.\) We complete the proof.
**Lemma 4.2**: _Assume that conditions \((f_{1})\) and \((f_{5})\) hold and \(F(t)\geq 0\) for \(t>0.\) Then for any \(0<\rho<1,\) we have_
\[\gamma_{c}^{\rho}=\inf_{u\in S(c)\cap\mathcal{B}_{\rho}}J(u)>-\infty\text{ if }0<c<\rho.\]
_Furthermore, there holds_
\[\gamma_{c}^{\rho}\leq\frac{1}{2}c+\frac{\sqrt{\pi}c^{3}}{4}.\]
**Proof.** By conditions \((f_{1})\) and \((f_{5}),\) for any \(\xi>0\) and for fixed \(q>2,\) there exists a constant \(C_{1}=C_{1}(\xi,\alpha,q)>0\) such that
\[F(t)\leq\xi|t|^{2}+C_{1}|t|^{q}(e^{\alpha t^{2}}-1),\text{ }\forall t\in \mathbb{R},\]
which shows that
\[\int_{\mathbb{R}^{2}}F(u)dx\leq\xi\int_{\mathbb{R}^{2}}|u|^{2}dx+C_{1}\int_{ \mathbb{R}^{2}}|u|^{q}(e^{\alpha u^{2}}-1)dx. \tag{19}\]
Let \(u\in S(c)\cap\mathcal{B}_{\rho}\) with \(0<\rho<1.\) Then we can choose \(\alpha>4\pi\) close to \(4\pi\) and \(\bar{\eta}>1\) close to \(1\) such that \(\alpha\bar{\eta}\rho<4\pi.\) By Lemma 2.5, the Holder inequality and the fact that
\[(e^{s}-1)^{t}\leq e^{st}-1\text{ for }s\geq 0\text{ and }t>1,\]
we have
\[\int_{\mathbb{R}^{2}}|u|^{q}(e^{\alpha u^{2}}-1)dx \leq \left(\int_{\mathbb{R}^{2}}|u|^{q\eta}dx\right)^{1/\eta}\left(\int_{ \mathbb{R}^{2}}(e^{\alpha u^{2}}-1)^{\bar{\eta}}dx\right)^{1/\bar{\eta}} \tag{20}\] \[\leq \|u\|_{L^{q\eta}}^{q}\left[\int_{\mathbb{R}^{2}}\left(e^{\alpha\bar{ \eta}\rho\left(u/\|\nabla u\|_{L^{2}}\right)^{2}}-1\right)dx\right]^{1/\bar{ \eta}}\] \[\leq C_{2}\|u\|_{L^{q\eta}}^{q}\]
for some \(C_{2}=C_{2}(\alpha,\bar{\eta},\rho)>0,\) where \(\eta:=\frac{\bar{\eta}}{\bar{\eta}-1}>1.\) Thus, it follows from (19) and (20) that there exists a constant \(K_{1}=C_{1}C_{2}>0\) such that
\[\int_{\mathbb{R}^{2}}F(u)dx\leq\xi\|u\|_{L^{2}}^{2}+K_{1}\|u\|_{L^{q\eta}}^{q}. \tag{21}\]
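The elementary inequality \((e^{s}-1)^{t}\leq e^{st}-1\) used in (20) can be verified directly (our argument): for fixed \(t>1\), the function \(g(s):=e^{st}-1-(e^{s}-1)^{t}\) satisfies \(g(0)=0\) and

\[g^{\prime}(s)=t\left[e^{st}-e^{s}(e^{s}-1)^{t-1}\right]\geq t\left[e^{st}-e^{s}\,e^{s(t-1)}\right]=0\quad\mbox{for }s\geq 0,\]

so that \(g(s)\geq 0\) for all \(s\geq 0\).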
Since \(V_{1}(u)\geq 0,\) by (9), (12) and (21), we have
\[J(u) = \frac{1}{2}A(u)+\frac{1}{4}V(u)-\int_{\mathbb{R}^{2}}F(u)dx \tag{22}\] \[\geq \frac{1}{2}A(u)-\frac{1}{4}V_{2}(u)-\xi\|u\|_{L^{2}}^{2}-K_{1}\|u \|_{L^{q\eta}}^{q}\] \[\geq \frac{1}{2}A(u)-\frac{1}{4}Kc^{\frac{3}{2}}A(u)^{\frac{1}{2}}- \xi c-K_{1}(c\mathcal{S}_{q\eta})^{\frac{1}{\eta}}A(u)^{\frac{q\eta-2}{2\eta}},\]
which implies that \(\gamma_{c}^{\rho}=\inf_{u\in S(c)\cap\mathcal{B}_{\rho}}J(u)>-\infty.\)
Moreover, it follows from (10) and the fact that \(F(t)\geq 0\) for \(t>0\) that
\[J(\bar{\phi}) \leq \frac{1}{2}A(\bar{\phi})+\frac{1}{4}V_{1}(\bar{\phi}) \tag{23}\] \[= \frac{1}{2}A(\bar{\phi})+\frac{1}{2}\|\bar{\phi}\|_{*}^{2}c^{2}\] \[\leq \frac{1}{2}c+\frac{\sqrt{\pi}c^{3}}{4}.\]
Hence, we have
\[\gamma_{c}^{\rho}\leq\frac{1}{2}c+\frac{\sqrt{\pi}c^{3}}{4}.\]
We complete the proof.
**Lemma 4.3**: _Assume that conditions \((f_{1})\) and \((f_{5})\) hold and \(F(t)\geq 0\) for \(t>0.\) For any \(0<\rho<1,\) if \(S(c)\cap({\cal B}_{\rho}\backslash{\cal B}_{b\rho})\neq\emptyset,\) then there exists \(c_{1}>0\) such that for any \(0<c<c_{1}\),_
\[\inf_{u\in S(c)\cap{\cal B}_{a\rho}}J(u)<\inf_{u\in S(c)\cap({\cal B}_{\rho} \backslash{\cal B}_{b\rho})}J(u),\]
_where \(0<a<b<1\)._
**Proof.** For any \(\rho>0\), it follows from Lemma 4.1 that \(S(c)\cap{\cal B}_{a\rho}\neq\emptyset\) for \(0<c<a\rho.\) By (23), for \(0<c<a\rho,\) we have
\[J(\bar{\phi}) \leq \frac{1}{2}A(\bar{\phi})+\frac{1}{4}V_{1}(\bar{\phi})\] \[< \frac{1}{2}a\rho+\frac{\sqrt{\pi}c^{3}}{4},\]
which implies that
\[\inf_{u\in S(c)\cap{\cal B}_{a\rho}}J(u)<\frac{1}{2}a\rho+\frac{\sqrt{\pi}c^{3 }}{4}. \tag{24}\]
On the other hand, for any \(u\in S(c)\cap({\cal B}_{\rho}\backslash{\cal B}_{b\rho}),\) by (22) one has
\[J(u) \geq \frac{1}{2}A(u)-\frac{1}{4}Kc^{\frac{3}{2}}A(u)^{\frac{1}{2}}- \xi c-K_{1}(c{\cal S}_{q\eta})^{\frac{1}{\eta}}A(u)^{\frac{q\eta-2}{2\eta}}\] \[\geq \frac{1}{2}b\rho-\frac{\sqrt{\rho}}{4}Kc^{\frac{3}{2}}-\xi c-K_{ 1}(c{\cal S}_{q\eta})^{\frac{1}{\eta}}\rho^{\frac{q\eta-2}{2\eta}},\]
leading to
\[\inf_{u\in S(c)\cap({\cal B}_{\rho}\backslash{\cal B}_{b\rho})}J(u)\geq\frac{ 1}{2}b\rho-\frac{\sqrt{\rho}}{4}Kc^{\frac{3}{2}}-\xi c-K_{1}(c{\cal S}_{q\eta} )^{\frac{1}{\eta}}\rho^{\frac{q\eta-2}{2\eta}}. \tag{25}\]
We note that there exists a constant \(c_{0}=c_{0}(a,b,\rho)>0\) such that for \(0<c<c_{0},\)
\[\frac{\sqrt{\pi}}{4}c^{3}+\frac{\sqrt{\rho}K}{4}c^{\frac{3}{2}}+\xi c+K_{1}(c {\cal S}_{q\eta})^{\frac{1}{\eta}}\rho^{\frac{q\eta-2}{2\eta}}<\frac{1}{2}(b- a)\rho. \tag{26}\]
Thus, it follows from (24)-(26) that for \(0<c<c_{1}:=\min\{a\rho,c_{0}\},\)
\[\inf_{u\in S(c)\cap{\cal B}_{a\rho}}J(u)<\inf_{u\in S(c)\cap({\cal B}_{\rho} \backslash{\cal B}_{b\rho})}J(u).\]
We complete the proof.
**Now we are ready to prove Theorem 1.2:** For any \(0<\rho<1\), let \(\{u_{n}\}\subset S(c)\cap{\cal B}_{\rho}\) be a minimizing sequence for \(\gamma_{c}^{\rho}\). It follows from Lemma 2.7 that \(\{u_{n}\}\) is bounded in \(X\) for \(0<c<1-\rho\). Up to a subsequence, we can assume that \(u_{n}\rightharpoonup u_{c}\) in \(X.\) From Lemma 2.1\((i)\), it follows that \(u_{c}\in S(c)\). Moreover, we have
\[A(u_{c})\leq\liminf_{n\to\infty}A(u_{n})\leq\rho,\]
leading to \(u_{c}\in{\cal B}_{\rho}\). Hence, we have \(u_{c}\in S(c)\cap{\cal B}_{\rho}.\)
By Lemma 2.1\((iii)-(iv),\) we obtain that
\[\lim_{n\to\infty}V_{2}(u_{n})=V_{2}(u_{c})\mbox{ and }V_{1}(u_{c})\leq\liminf_{n \to\infty}V_{1}(u_{n}), \tag{27}\]
respectively. For \(0<c<1-\rho\), it follows from Lemma 2.6 that
\[\lim_{n\to\infty}\int_{{\mathbb{R}}^{2}}F(u_{n})dx=\int_{{\mathbb{R}}^{2}}F(u _{c})dx. \tag{28}\]
Using (27) and (28) gives
\[\gamma_{c}^{\rho}=\lim_{n\to\infty}J(u_{n})\geq J(u_{c})\geq\gamma_{c}^{\rho},\]
which implies that \(J(u_{c})=\gamma_{c}^{\rho}.\)
Since \(J(u_{n})\to J(u_{c})\) and \(V_{2}(u_{n})\to V_{2}(u_{c})\), together with (28) again, we get
\[\frac{1}{2}[A(u_{n})-A(u_{c})]+\frac{1}{4}[V_{1}(u_{n})-V_{1}(u_{c})]=o(1). \tag{29}\]
Taking the \(\liminf\) in (29), we have
\[\frac{1}{2}[\liminf_{n\to\infty}A(u_{n})-A(u_{c})]+\frac{1}{4}[\liminf_{n\to \infty}V_{1}(u_{n})-V_{1}(u_{c})]\leq 0.\]
Using the weak lower semicontinuity of \(A(u)\) and \(V_{1}(u)\), we deduce that
\[\liminf_{n\to\infty}A(u_{n})=A(u_{c})\mbox{ and }\liminf_{n\to\infty}V_{1}(u_{n})=V_{ 1}(u_{c}).\]
Similarly, taking the \(\limsup\) in (29), we get
\[\limsup_{n\to\infty}A(u_{n})=A(u_{c})\mbox{ and }\limsup_{n\to\infty}V_{1}(u_{n})=V_{ 1}(u_{c}).\]
Hence, we obtain that \(A(u_{n})\to A(u_{c})\) and \(V_{1}(u_{n})\to V_{1}(u_{c})\). This shows that \(u_{n}\to u_{c}\) in \(H^{1}({\mathbb{R}}^{2})\).
Next, we claim that \(\|u_{n}-u_{c}\|_{*}\to 0\) as \(n\to\infty\). By Lemma 2.2, we only need to prove that
\[B_{1}(u_{n}^{2},(u_{n}-u_{c})^{2})\to 0\mbox{ as }n\to\infty. \tag{30}\]
Indeed, we have
\[B_{1}(u_{n}^{2},(u_{n}-u_{c})^{2})=V_{1}(u_{n})-2B_{1}(u_{n}^{2},(u_{n}-u_{c} )u_{c})-B_{1}(u_{n}^{2},u_{c}^{2}). \tag{31}\]
Since \(\{u_{n}\}\) is bounded in \(X\) and \(u_{n}\rightharpoonup u_{c}\) in \(X\), it follows from Lemma 2.3 that
\[B_{1}(u_{n}^{2},(u_{n}-u_{c})u_{c})\to 0\mbox{ as }n\to\infty. \tag{32}\]
Since \(u_{n}\to u_{c}\) a.e. in \(\mathbb{R}^{2}\), using Fatou's Lemma gives
\[V_{1}(u_{c})\leq\liminf_{n\to\infty}B_{1}(u_{n}^{2},u_{c}^{2}). \tag{33}\]
Thus, by (31)-(33) one has
\[\limsup_{n\to\infty}B_{1}(u_{n}^{2},(u_{n}-u_{c})^{2})\leq\limsup_{n\to\infty }V_{1}(u_{n})-\liminf_{n\to\infty}B_{1}(u_{n}^{2},u_{c}^{2})\leq\limsup_{n\to \infty}V_{1}(u_{n})-V_{1}(u_{c}),\]
which implies that (30) holds, since \(B_{1}(u_{n}^{2},(u_{n}-u_{c})^{2})\geq 0\) and \(V_{1}(u_{n})\to V_{1}(u_{c})\). Hence, \(u_{n}\to u_{c}\) in \(X\).
Finally, it follows from Lemma 4.3 that \(u_{c}\not\in S(c)\cap\partial\mathcal{B}_{\rho}\), where \(\partial\mathcal{B}_{\rho}:=\{u\in X\ |\ A(u)=\rho\}\); hence \(A(u_{c})<\rho\), and \(u_{c}\) is indeed a critical point of \(J|_{S(c)}\). Therefore, for \(0<c<c_{*}:=\min\{c_{1},1-\rho\}\), there exists a Lagrange multiplier \(\lambda_{c}\in\mathbb{R}\) such that \((u_{c},\lambda_{c})\) is a couple of weak solutions to problem \((SP_{c}).\) We complete the proof.
**Next, we give the proof of Theorem 1.4:** Motivated by [6], we argue by contradiction and assume that there exists \(v_{c}\in S(c)\) such that
\[J|^{\prime}_{S(c)}(v_{c})=0\mbox{ and }J(v_{c})<\gamma_{c}^{\rho}.\]
Then, \(v_{c}\) is a weak solution of the equation
\[-\Delta v_{c}+\bar{\lambda}v_{c}+(\log|\cdot|*v_{c}^{2})v_{c}=f(v_{c}),\]
for some \(\bar{\lambda}\in\mathbb{R}\). By Lemma 2.8, we have
\[Q(v_{c})=A(v_{c})-\frac{1}{4}\|v_{c}\|_{L^{2}}^{4}+\int_{\mathbb{R}^{2}}(2F(v _{c})-f(v_{c})v_{c})dx=0. \tag{34}\]
Moreover, from Lemma 4.2 it follows that
\[J(v_{c})<\frac{1}{2}c+\frac{\sqrt{\pi}c^{3}}{4}. \tag{35}\]
Thus, by Lemma 2.10, (34) and (35), we get
\[\frac{1}{2}c+\frac{\sqrt{\pi}c^{3}}{4} > J(v_{c}) \tag{36}\] \[\geq \frac{1}{p-2}Q(v_{c})+\frac{p-4}{2(p-2)}A(v_{c})-\frac{K}{4}\|v_ {c}\|_{L^{2}}^{3}A(v_{c})^{1/2}+\frac{1}{4(p-2)}\|v_{c}\|_{L^{2}}^{4}.\] \[= \frac{p-4}{2(p-2)}A(v_{c})-\frac{K}{4}\|v_{c}\|_{L^{2}}^{3}A(v_{ c})^{1/2}+\frac{1}{4(p-2)}\|v_{c}\|_{L^{2}}^{4}\] \[\geq \frac{p-4}{2(p-2)}A(v_{c})-\frac{K}{4}c^{3/2}A(v_{c})^{1/2}+\frac {1}{4(p-2)}c^{2},\]
which implies that there exists \(0<\bar{c}_{*}\leq c_{*}\) such that \(A(v_{c})<\rho\) for \(0<c<\bar{c}_{*}\). Hence, we have \(v_{c}\in\mathcal{B}_{\rho}\) and thus \(J(v_{c})\geq\gamma_{c}^{\rho}\), which contradicts \(J(v_{c})<\gamma_{c}^{\rho}.\) This shows that \(u_{c}\) is a ground state of problem \((SP_{c})\) with some \(\lambda_{c}\in\mathbb{R}\).
Finally, similar to (36), we have \(A(u_{c})\to 0\) as \(c\to 0\), and
\[\gamma_{c}^{\rho} = J(u_{c})\geq\frac{p-4}{2(p-2)}A(u_{c})-\frac{K}{4}c^{3/2}A(u_{c} )^{1/2}+\frac{1}{4(p-2)}c^{2} \tag{37}\] \[\geq -\frac{K^{2}(p-2)}{32(p-4)}c^{3}+\frac{1}{4(p-2)}c^{2}\] \[> 0,\]
provided that
\[0<c<\tilde{c}_{*}:=\min\left\{c_{*},\frac{8(p-4)}{K^{2}(p-2)^{2}}\right\}.\]
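For completeness, the second inequality in (37) is obtained (our computation) by minimizing the right-hand side with respect to \(s:=A(u_{c})^{1/2}\geq 0\):

\[\frac{p-4}{2(p-2)}s^{2}-\frac{K}{4}c^{3/2}s\geq-\frac{\left(\frac{K}{4}c^{3/2}\right)^{2}}{4\cdot\frac{p-4}{2(p-2)}}=-\frac{K^{2}(p-2)}{32(p-4)}c^{3},\]

and the resulting lower bound \(-\frac{K^{2}(p-2)}{32(p-4)}c^{3}+\frac{1}{4(p-2)}c^{2}\) is positive precisely when \(c<\frac{8(p-4)}{K^{2}(p-2)^{2}}\), which explains the choice of \(\tilde{c}_{*}\).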
Moreover, by Lemma 4.2 and (37) one has \(\gamma_{c}^{\rho}\to 0\) as \(c\to 0.\) We complete the proof.
**At the end of this section, we give the proof of Theorem 1.5:** Following the classical arguments of Cazenave and Lions [12], we argue by contradiction. Assume that there exist an \(\varepsilon_{0}>0\), a sequence of initial data \(\{u_{n}^{0}\}\subset X\) and a time sequence \(\{t_{n}\}\subset\mathbb{R}^{+}\) such that the unique solution \(u_{n}\) of system (1) with initial data \(u_{n}^{0}=u_{n}(\cdot,0)\) satisfies
\[\mbox{dist}_{X}(u_{n}^{0},\mathcal{M}_{c}^{\rho})<\frac{1}{n}\mbox{ and }\mbox{dist}_{X}(u_{n}(\cdot,t_{n}),\mathcal{M}_{c}^{\rho})\geq \varepsilon_{0}.\]
Without loss of generality, we may assume that \(\{u_{n}^{0}\}\subset S(c)\). Since \(\mbox{dist}_{X}(u_{n}^{0},\mathcal{M}_{c}^{\rho})\to 0\) as \(n\to\infty\), the conservation laws of the energy and mass imply that \(\{u_{n}(\cdot,t_{n})\}\) is a minimizing sequence for \(\gamma_{c}^{\rho}\) provided \(u_{n}(\cdot,t_{n})\in\mathcal{B}_{\rho}\). Indeed, if \(u_{n}(\cdot,t_{n})\in X\backslash\mathcal{B}_{\rho}\), then by continuity there exists \(\bar{t}_{n}\in[0,t_{n})\) such that \(u_{n}(\cdot,\bar{t}_{n})\in\partial\mathcal{B}_{\rho}\). Hence, by Lemma 4.3 one has
\[J(u_{n}(\cdot,\bar{t}_{n}))\geq\inf_{u\in S(c)\cap\partial\mathcal{B}_{\rho}}J(u)\geq\inf_{u\in S(c)\cap(\mathcal{B}_{\rho}\backslash\mathcal{B}_{b\rho})}J(u)>\inf_{u\in S(c)\cap\mathcal{B}_{a\rho}}J(u)\geq\inf_{u\in S(c)\cap\mathcal{B}_{\rho}}J(u)=\gamma_{c}^{\rho},\]
which contradicts the fact that, by conservation of energy, \(J(u_{n}(\cdot,\bar{t}_{n}))=J(u_{n}^{0})\to\gamma_{c}^{\rho}\) as \(n\to\infty\). Therefore, \(\{u_{n}(\cdot,t_{n})\}\) is a minimizing sequence for \(\gamma_{c}^{\rho}\). Then there exists \(v_{0}\in\mathcal{M}_{c}^{\rho}\) such that, up to a subsequence, \(u_{n}(\cdot,t_{n})\to v_{0}\) in \(X\), which contradicts \(\mbox{dist}_{X}(u_{n}(\cdot,t_{n}),\mathcal{M}_{c}^{\rho})\geq\varepsilon_{0}.\) We complete the proof.
## 5 The high-energy standing waves
First of all, we prove that the energy functional \(J\) on \(S(c)\) possesses a kind of mountain-pass geometrical structure. For each \(u\in H^{1}(\mathbb{R}^{2})\backslash\{0\}\) and \(t>0\), we set
\[u_{t}(x):=tu(tx)\mbox{ for all }x\in\mathbb{R}^{2}.\]
Then we have the following result.
**Lemma 5.1**: _Assume that conditions \((f_{1})-(f_{2})\) and \((f_{4})\) hold. Let \(u\in S(c)\) be arbitrary but fixed. Then the following statements are true: \((i)\)\(A(u_{t})\to 0\) and \(J(u_{t})\to+\infty\) as \(t\to 0;\) \((ii)\)\(A(u_{t})\to+\infty\) and \(J(u_{t})\to-\infty\) as \(t\to+\infty.\)_
**Proof.** A direct calculation shows that
\[\int_{\mathbb{R}^{2}}|u_{t}|^{2}dx=\int_{\mathbb{R}^{2}}|u|^{2}dx=c,\ A(u_{t})=t^ {2}A(u),\ V(u_{t})=V(u)-c^{2}\ln t, \tag{38}\]
and
\[\int_{\mathbb{R}^{2}}|u_{t}|^{r}dx=t^{r-2}\int_{\mathbb{R}^{2}}|u|^{r}dx\ \mbox{for}\ r>2. \tag{39}\]
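For the reader's convenience, the last identity in (38) follows from the change of variables \(x^{\prime}=tx,\ y^{\prime}=ty\):

\[V(u_{t})=t^{4}\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln|x-y|\,u^{2}(tx)u^{2}(ty)dxdy=\int_{\mathbb{R}^{2}}\int_{\mathbb{R}^{2}}\ln\frac{|x^{\prime}-y^{\prime}|}{t}\,u^{2}(x^{\prime})u^{2}(y^{\prime})dx^{\prime}dy^{\prime}=V(u)-c^{2}\ln t,\]

where we used \(\int_{\mathbb{R}^{2}}u^{2}dx=c.\)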
Clearly,
\[A(u_{t})\to 0\ \mbox{and}\ \|u_{t}\|_{L^{r}}^{r}\to 0\ \mbox{as}\ t\to 0. \tag{40}\]
Then, there exist \(t_{0}>0\) and \(0<m<1\) such that
\[A(u_{t})\leq m,\ \forall t\in(0,t_{0}].\]
Similar to the argument in Lemma 4.2, by conditions \((f_{1})-(f_{2})\), for any \(\xi>0\) and for fixed \(q>2\), there exists a constant \(K_{2}=K_{2}(\xi,\alpha,q,c)>0\) such that
\[\left|\int_{\mathbb{R}^{2}}F(u_{t})dx\right|\leq\xi\int_{\mathbb{R}^{2}}|u_{t} |^{\tau+1}dx+K_{2}\left(\int_{\mathbb{R}^{2}}|u_{t}|^{q\eta}dx\right)^{1/\eta },\ \forall t\in(0,t_{0}],\]
where \(\eta=\frac{\bar{\eta}}{\bar{\eta}-1}\) with \(\bar{\eta}>1\) close to \(1\). Together with (39), we have
\[\int_{\mathbb{R}^{2}}F(u_{t})dx\to 0\ \mbox{as}\ t\to 0. \tag{41}\]
Moreover, it follows from (38) that
\[V(u_{t})=V(u)-c^{2}\ln t\to+\infty\ \mbox{as}\ t\to 0. \tag{42}\]
Hence, by (40)-(42), one has
\[J(u_{t})\to+\infty\ \mbox{as}\ t\to 0.\]
On the other hand, it is clear that \(A(u_{t})\to+\infty\) as \(t\to+\infty\), and it follows from condition \((f_{4})\) that
\[J(u_{t}) = \frac{1}{2}A(u_{t})+\frac{1}{4}V(u_{t})-\int_{\mathbb{R}^{2}}F(u _{t})dx\] \[\leq \frac{t^{2}}{2}A(u)+\frac{1}{4}V(u)-\frac{c^{2}\ln t}{4}-\theta t ^{p-2}\int_{\mathbb{R}^{2}}|u|^{p}dx\] \[\to -\infty\ \mbox{as}\ t\to+\infty,\]
since \(p>4\). We complete the proof.
By Lemma 5.1, there exists \(t_{1}>>1\) such that \(w_{c}:=(u_{c})_{t_{1}}\in S(c)\backslash{\cal B}_{\rho}\) and \(J(w_{c})<0\), where \(u_{c}\) is the ground state obtained in Theorem 1.4 with \(J(u_{c})>0\) for \(0<c<\tilde{c}_{*}.\) Then, following the idea of Jeanjean [24], the energy functional \(J\) has the mountain-pass geometry on \(S(c)\). Define a set of paths
\[\Gamma:=\{h\in C([0,1],S(c))\ |\ h(0)=u_{c},h(1)=w_{c}\}\]
and a minimax value
\[m(c):=\inf_{h\in\Gamma}\max_{\tau\in[0,1]}J(h(\tau)).\]
Clearly, \(\Gamma\neq\emptyset\) and
\[\max_{\tau\in[0,1]}J(h(\tau))>\max\left\{J(u_{c}),J(w_{c})\right\}>0\text{ for }0<c<\tilde{c}_{*}.\]
Next, we introduce an auxiliary functional \(\tilde{J}:S(c)\times\mathbb{R}\rightarrow\mathbb{R}\) given by \((u,l)\to J(\psi(u,l))\), where \(\psi(u,l):=lu(lx)\). To be precise, we have
\[\tilde{J}(u,l) = J(\psi(u,l))\] \[= \frac{l^{2}}{2}\|\nabla u\|_{2}^{2}+\frac{1}{4}(V(u)-c^{2}\ln l)- \frac{1}{l^{2}}\int_{\mathbb{R}^{2}}F(lu)dx.\]
Define a set of paths
\[\tilde{\Gamma}:=\left\{\tilde{h}\in C([0,1],S(c)\times\mathbb{R})\ |\ \tilde{h}(0)=(u_{c},1)\text{ and } \tilde{h}(1)=(w_{c},1)\right\}\]
and a minimax value
\[\tilde{m}(c):=\inf_{\tilde{h}\in\tilde{\Gamma}}\max_{0\leq t\leq 1}\tilde{J}(\tilde{h} (t)).\]
We now claim that \(\tilde{m}(c)=m(c)\). In fact, it follows immediately from the definitions of \(\tilde{m}(c)\) and \(m(c)\) along with the fact that the maps
\[\chi:\Gamma\rightarrow\tilde{\Gamma}\text{ by }h\rightarrow\chi(h):=(h,1)\]
and
\[\Upsilon:\tilde{\Gamma}\rightarrow\Gamma\text{ by }\tilde{h}\rightarrow\Upsilon( \tilde{h}):=\psi\circ\tilde{h}\]
satisfy
\[\tilde{J}(\chi(h))=J(h)\text{ and }J(\Upsilon(\tilde{h}))=\tilde{J}(\tilde{h}).\]
Denote \(\left\|r\right\|_{\mathbb{R}}=|r|\) for \(r\in\mathbb{R}\), \(H:=X\times\mathbb{R}\) endowed with the norm \(\|\cdot\|_{H}^{2}=\|\cdot\|_{X}^{2}+\|\cdot\|_{\mathbb{R}}^{2}\) and \(H^{-1}\) the dual space of \(H\). By Jeanjean [24], we have the following lemma.
**Lemma 5.2**: _([24, Lemma 2.3])Let \(\varepsilon>0\). Assume that \(\tilde{h}_{0}\in\tilde{\Gamma}\) satisfies \(\max_{0\leq t\leq 1}\tilde{J}(\tilde{h}_{0}(t))\leq\tilde{m}(c)+\varepsilon.\) Then there exists a couple of \((u_{0},l_{0})\in S(c)\times\mathbb{R}\) such that \((i)\ \tilde{J}(u_{0},l_{0})\in[\tilde{m}(c)-\varepsilon,\tilde{m}(c)+\varepsilon];\)\((ii)\ \min_{0\leq t\leq 1}\|(u_{0},l_{0})-\tilde{h}_{0}(t)\|_{X}\leq\sqrt{\varepsilon};\)\((iii)\ \|(\tilde{J}|_{S(c)\times\mathbb{R}})^{\prime}(u_{0},l_{0})\|_{H^{-1}}\leq 2 \sqrt{\varepsilon},\) i.e. \(|\left<\tilde{J}^{\prime}(u_{0},l_{0}),z\right>_{H^{-1}\times H}|\leq 2 \sqrt{\varepsilon}\|z\|_{H}\) holds for all_
\[z\in\tilde{T}_{(u_{0},l_{0})}:=\left\{(z_{1},z_{2})\in H\ |\ \left<u_{0},z_{1} \right>=0\right\}.\]
By virtue of Lemma 5.2, we establish the following result.
**Lemma 5.3**: _Assume that conditions \((f_{1})-(f_{2}),(f_{4})\) and \((f_{6})\) hold. Then there exists a sequence \(\{u_{n}\}\subset S(c)\) such that_
\[J(u_{n})\to m(c),\quad(J|_{S(c)})^{\prime}(u_{n})\to 0\quad\mbox{and}\ Q(u_{n}) \to 0\ \mbox{as}\ n\to\infty.\]
**Proof.** The proof is similar to that of [24, pp. 1643-1645], so we omit it here.
**Lemma 5.4**: _Assume that conditions \((f_{1})-(f_{2}),(f_{4})\) and \((f_{6})\) hold. Then there exist two constants \(0<\hat{c}_{*}\leq\tilde{c}_{*}\) and \(\theta_{0}>0\) such that for \(0<c<\hat{c}_{*}\) and \(\theta>\theta_{0},\)_
\[m(c)<\frac{(p-4)(1-c)+c^{2}}{4(p-2)}.\]
**Proof.** Set
\[h(s)=\left(1-s+st_{1}\right)u_{c}(\left(1-s+st_{1}\right)x)\ \mbox{for}\ s\in[0,1].\]
Clearly, \(h(s)\in\Gamma\). By condition \((f_{4})\), we have
\[m(c) \leq \max_{s\in[0,1]}J(h(s)) \tag{43}\] \[\leq \max_{t\in[1,t_{1}]}\left[\frac{t^{2}}{2}A(u_{c})+\frac{1}{4}V(u_ {c})-\frac{c^{2}}{4}\ln t-\theta t^{p-2}\|u_{c}\|_{L^{p}}^{p}\right]\] \[\leq \max_{t>0}\left[\frac{t^{2}}{2}A(u_{c})-\theta t^{p-2}\|u_{c}\|_{ L^{p}}^{p}\right]+\frac{1}{2}\|u_{c}\|_{*}^{2}c^{2}\] \[= \frac{p-4}{2(p-2)}\left(\frac{\theta(p-2)\|u_{c}\|_{L^{p}}^{p}}{A (u_{c})}\right)^{2/(4-p)}A(u_{c})+\frac{1}{2}\|u_{c}\|_{*}^{2}c^{2}.\]
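The equality in the last step of (43) uses the elementary maximization of \(g(t)=\frac{t^{2}}{2}A(u_{c})-\theta t^{p-2}\|u_{c}\|_{L^{p}}^{p}\) over \(t>0\); writing \(A:=A(u_{c})\), \(B:=\|u_{c}\|_{L^{p}}^{p}\) and using \(p>4\), a direct computation gives
\[g^{\prime}(t_{*})=At_{*}-\theta(p-2)Bt_{*}^{p-3}=0\ \Longrightarrow\ t_{*}^{p-4}=\frac{A}{\theta(p-2)B},\qquad\theta Bt_{*}^{p-2}=\frac{A}{p-2}\,t_{*}^{2},\]
\[\max_{t>0}g(t)=g(t_{*})=\Big{(}\frac{1}{2}-\frac{1}{p-2}\Big{)}At_{*}^{2}=\frac{p-4}{2(p-2)}\left(\frac{\theta(p-2)B}{A}\right)^{2/(4-p)}A.\]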
Moreover, we note that there exists a constant \(0<\hat{c}_{*}\leq\tilde{c}_{*}\) such that for \(0<c<\hat{c}_{*}\),
\[1-c+\frac{1-2(p-2)\|u_{c}\|_{*}^{2}}{p-4}c^{2}>0. \tag{44}\]
Then it follows from (43) and (44) that
\[m(c)<\frac{(p-4)(1-c)+c^{2}}{4(p-2)},\]
for \(0<c<\hat{c}_{*}\) and
\[\theta>\theta_{0}:=\left(\frac{A(u_{c})}{(p-2)\|u_{c}\|_{L^{p}}^{p}}\right) \left[\frac{2A(u_{c})}{1-c+\frac{1-2(p-2)\|u_{c}\|_{*}^{2}}{p-4}c^{2}}\right]^ {(p-4)/2}.\]
We complete the proof.
**Lemma 5.5**: _Assume that conditions \((f_{1})-(f_{2}),(f_{4})\) and \((f_{6})\) hold. Let \(\{u_{n}\}\subset S(c)\) be a (PS)-sequence for the energy functional \(J\) at the level \(m(c)\) with \(Q(u_{n})=o(1)\). Then there exists a positive constant \(c^{*}\leq\hat{c}_{*}\) such that for \(0<c<c^{*}\) and \(\theta>\theta_{0},\)_
\[\limsup_{n\to\infty}A(u_{n})<1-c.\]
**Proof.** By Lemmas 2.10 and 5.4, we have
\[\frac{(p-4)(1-c)+c^{2}}{4(p-2)}+o(1) > m(c)+o(1)=J(u_{n})\] \[\geq \frac{p-4}{2(p-2)}A(u_{n})-\frac{K}{4}c^{3/2}A(u_{n})^{1/2}+\frac{1 }{4(p-2)}c^{2}+o(1),\]
which implies that there exists a positive constant \(c^{*}\leq\hat{c}_{*}\) such that for \(0<c<c^{*}\) and \(\theta>\theta_{0}\),
\[\limsup_{n\to\infty}A(u_{n})<1-c.\]
We complete the proof.
**Lemma 5.6**: _Assume that conditions \((f_{1})-(f_{2}),(f_{4})\) and \((f_{6})\) hold. Let \(\{u_{n}\}\subset S(c)\) be a (PS)-sequence for the energy functional \(J\) at level \(m(c)\) with \(Q(u_{n})=o(1)\). Then, up to a subsequence, \(u_{n}\to\bar{u}_{c}\) in \(X\). In particular, \(\bar{u}_{c}\) is a critical point of \(J\) restricted to \(S(c).\)_
**Proof.** Let \(\{u_{n}\}\subset S(c)\) be a (PS)-sequence for \(J\) at level \(m(c)\) with \(Q(u_{n})=o(1).\) Then it follows from Lemmas 2.7 and 5.5 that \(\{u_{n}\}\) is bounded in \(X.\) Passing to a subsequence if necessary, there exists \(\bar{u}_{c}\in X\) such that \(u_{n}\rightharpoonup\bar{u}_{c}\) weakly in \(X\), \(u_{n}\to\bar{u}_{c}\) in \(L^{r}(\mathbb{R}^{2})\) for all \(r\in[2,\infty)\) by Lemma 2.1\((i)\) and \(u_{n}\to\bar{u}_{c}\) a.e. in \(\mathbb{R}^{2}\). Clearly, \(\bar{u}_{c}\neq 0.\) By the Lagrange multipliers rule, there exists \(\lambda_{n}\in\mathbb{R}\) such that for every \(\varphi\in X\),
\[\int_{\mathbb{R}^{2}}\nabla u_{n}\nabla\varphi dx+\lambda_{n}\int_{\mathbb{R} ^{2}}u_{n}\varphi dx+\left[V_{1}^{\prime}(u_{n})-V_{2}^{\prime}(u_{n})\right] \varphi-\int_{\mathbb{R}^{2}}f(u_{n})\varphi dx=o(1)\|\varphi\|. \tag{45}\]
This shows that
\[\lambda_{n}c:=-A(u_{n})-V(u_{n})+\int_{\mathbb{R}^{2}}f(u_{n})u_{n}dx+o(1). \tag{46}\]
Similar to (21), it follows from (9) and Lemma 2.7 that \(\int_{\mathbb{R}^{2}}f(u_{n})u_{n}dx\) is bounded. Moreover, we obtain that \(V_{1}(u_{n})\) is bounded by (10) and \(V_{2}(u_{n})\) is bounded by (12), respectively. Thus, from (46) it follows that \(\{\lambda_{n}\}\subset\mathbb{R}\) is bounded; up to a subsequence, we may assume that \(\lambda_{n}\to\bar{\lambda}\in\mathbb{R}\) as \(n\to\infty.\)
Next, we prove that \(u_{n}\to\bar{u}_{c}\) strongly in \(X\), which will thus imply that \(\bar{u}_{c}\) is a critical point of \(J\) restricted to \(S(c)\). By (45), we know that \(\bar{u}_{c}\) is a weak solution to Eq. (13), which indicates that
\[Q(\bar{u}_{c})=A(\bar{u}_{c})-\frac{1}{4}\|\bar{u}_{c}\|_{L^{2}}^{4}+\int_{ \mathbb{R}^{2}}(2F(\bar{u}_{c})-f(\bar{u}_{c})\bar{u}_{c})dx=0 \tag{47}\]
by Lemma 2.8. Now, by \(Q(u_{n})=o(1)\) and (47), we have
\[A(u_{n})-\frac{1}{4}\|u_{n}\|_{L^{2}}^{4}+\int_{\mathbb{R}^{2}}( 2F(u_{n})-f(u_{n})u_{n})dx \tag{48}\] \[= A(\bar{u}_{c})-\frac{1}{4}\|\bar{u}_{c}\|_{L^{2}}^{4}+\int_{ \mathbb{R}^{2}}(2F(\bar{u}_{c})-f(\bar{u}_{c})\bar{u}_{c})dx+o(1).\]
Moreover, by Lemma 2.6, we have
\[F(u_{n})\to F(\bar{u}_{c})\ \mbox{and}\ f(u_{n})u_{n}\to f(\bar{u}_{c})\bar{u}_{c}\ \mbox{in}\ L^{1}(\mathbb{R}^{2}). \tag{49}\]
Thus, it follows from (48) and (49) that \(A(u_{n})\to A(\bar{u}_{c})\), where we have also used the fact of \(u_{n}\to\bar{u}_{c}\) in \(L^{2}(\mathbb{R}^{2})\).
Since \(A(u_{n})\to A(\bar{u}_{c})\) and \(u_{n}\to\bar{u}_{c}\) in \(L^{r}(\mathbb{R}^{2})\) for all \(r\in[2,+\infty)\), by choosing \(\varphi=u_{n}-\bar{u}_{c}\) in (45), one has
\[o(1)=o(1)+\frac{1}{4}\left[V_{1}^{\prime}(u_{n})(u_{n}-\bar{u}_{c})-V_{2}^{ \prime}(u_{n})(u_{n}-\bar{u}_{c})\right]-\int_{\mathbb{R}^{2}}f(u_{n})(u_{n}- \bar{u}_{c})dx.\]
Moreover, we have
\[|V_{2}^{\prime}(u_{n})(u_{n}-\bar{u}_{c})|\leq C_{1}\|u_{n}\|_{L^{\frac{8}{3}} }^{3}\|u_{n}-\bar{u}_{c}\|_{L^{\frac{8}{3}}}\to 0,\]
and
\[\left|\int_{\mathbb{R}^{2}}f(u_{n})(u_{n}-\bar{u}_{c})dx\right|\leq\varepsilon \|u_{n}\|_{L^{2r}}^{\tau}\|u_{n}-\bar{u}_{c}\|_{L^{2}}+K_{\varepsilon}\|u_{n} \|_{L^{r(q-1)}}^{q-1}\|u_{n}-\bar{u}_{c}\|_{L^{r}}\to 0,\]
and
\[|V_{1}^{\prime}(u_{n})(u_{n}-\bar{u}_{c})|=B_{1}(u_{n}^{2},u_{n}(u_{n}-\bar{u}_{c}))=B_{1}(u_{n}^{2},(u_{n}-\bar{u}_{c})^{2})+B_{1}(u_{n}^{2},\bar{u}_{c}(u_{n}-\bar{u}_{c}))\]
with \(B_{1}(u_{n}^{2},\bar{u}_{c}(u_{n}-\bar{u}_{c}))\to 0\) as \(n\to\infty\) by Lemma 2.3. Hence, we get
\[o(1)=o(1)+B_{1}(u_{n}^{2},(u_{n}-\bar{u}_{c})^{2}),\]
which implies that \(B_{1}(u_{n}^{2},(u_{n}-\bar{u}_{c})^{2})\to 0\) as \(n\to\infty\), together with Lemma 2.2, leading to \(\|u_{n}-\bar{u}_{c}\|_{*}\to 0\) as \(n\to\infty\). Therefore, we deduce that \(\|u_{n}-\bar{u}_{c}\|_{X}\to 0\) as \(n\to\infty\). We complete the proof.
**We are ready to prove Theorem 1.6:** By Lemmas 5.3 and 5.4, for \(0<c<\hat{c}_{*}\) and \(\theta>\theta_{0}\), there exists a bounded Palais-Smale sequence \(\{u_{n}\}\subset S(c)\) for \(J\) at level \(m(c)\). Then, it follows from Lemma 5.6 that \(u_{n}\to\bar{u}_{c}\) in \(X\) and \(\bar{u}_{c}\) is a critical point for \(J\) restricted to \(S(c)\), which shows that \(\bar{u}_{c}\) is a mountain pass solution of problem \((SP_{c})\) satisfying
\[J(u_{c})<J(\bar{u}_{c})=m(c).\]
We complete the proof.
## 6 Acknowledgments
J. Sun was supported by the National Natural Science Foundation of China (Grant No. 11671236) and Shandong Provincial Natural Science Foundation (Grant No. ZR2020JQ01).
|
2302.02931 | Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group
Shifts | Training machine learning models robust to distribution shifts is critical
for real-world applications. Some robust training algorithms (e.g., Group DRO)
specialize to group shifts and require group information on all training
points. Other methods (e.g., CVaR DRO) that do not need group annotations can
be overly conservative, since they naively upweight high loss points which may
form a contrived set that does not correspond to any meaningful group in the
real world (e.g., when the high loss points are randomly mislabeled training
points). In this work, we address limitations in prior approaches by assuming a
more nuanced form of group shift: conditioned on the label, we assume that the
true group function (indicator over group) is simple. For example, we may
expect that group shifts occur along low bitrate features (e.g., image
background, lighting). Thus, we aim to learn a model that maintains high
accuracy on simple group functions realized by these low bitrate features, that
need not spend valuable model capacity achieving high accuracy on contrived
groups of examples. Based on this, we consider the two-player game formulation
of DRO where the adversary's capacity is bitrate-constrained. Our resulting
practical algorithm, Bitrate-Constrained DRO (BR-DRO), does not require group
information on training samples yet matches the performance of Group DRO on
datasets that have training group annotations and that of CVaR DRO on
long-tailed distributions. Our theoretical analysis reveals that in some
settings BR-DRO objective can provably yield statistically efficient and less
conservative solutions than unconstrained CVaR DRO. | Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine | 2023-02-06T17:07:16Z | http://arxiv.org/abs/2302.02931v2 | # Bitrate-Constrained DRO: Beyond Worst Case Robustness to Unknown Group Shifts
###### Abstract
Training machine learning models robust to distribution shifts is critical for real-world applications. Some robust training algorithms (_e.g.,_ Group DRO) specialize to group shifts and require group information on all training points. Other methods (_e.g.,_ CVaR DRO) that do not need group annotations can be overly conservative, since they naively upweight high loss points which may form a contrived set that does not correspond to any meaningful group in the real world (_e.g.,_ when the high loss points are randomly mislabeled training points). In this work, we address limitations in prior approaches by assuming a more nuanced form of group shift: conditioned on the label, we assume that the true group function (indicator over group) is simple. For example, we may expect that group shifts occur along low bitrate features (_e.g.,_ image background, lighting). Thus, we aim to learn a model that maintains high accuracy on simple group functions realized by these low bitrate features, that need not spend valuable model capacity achieving high accuracy on contrived groups of examples. Based on this, we consider the two-player game formulation of DRO where the adversary's capacity is bitrate-constrained. Our resulting practical algorithm, Bitrate-Constrained DRO (BR-DRO), does not require group information on training samples yet matches the performance of Group DRO on datasets that have training group annotations and that of CVaR DRO on long-tailed distributions. Our theoretical analysis reveals that in some settings BR-DRO objective can provably yield statistically efficient and less conservative solutions than unconstrained CVaR DRO.
## 1 Introduction
Machine learning models may perform poorly when tested on distributions that differ from the training distribution. A common form of distribution shift is _group_ shift, where the source and target differ only in the marginal distribution over finite groups or sub-populations, with no change in group conditionals [18, 44] (_e.g.,_ when the groups are defined by spurious correlations and the target distribution upsamples the group where the correlation is absent [49]).
Prior works consider various approaches to address group shift. One solution is to ensure robustness to worst case shifts using distributionally robust optimization (DRO) [4, 7, 17], which considers a two-player game where a learner minimizes risk on distributions chosen by an adversary from a predefined uncertainty set. As the adversary is only constrained to propose distributions that lie within an f-divergence based uncertainty set, DRO often yields overly conservative (pessimistic) solutions [26] and can suffer from statistical
challenges [18]. This is mainly because DRO upweights high loss points that may not form a meaningful group in the real world, and may even be _contrived_ if the high loss points simply correspond to randomly mislabeled examples in the training set. Methods like Group DRO [49] avoid overly pessimistic solutions by assuming knowledge of group membership for each training example. However, these group-based methods provide no guarantees on shifts that deviate from the predefined groups (_e.g.,_ when there is a new group), and are not applicable to problems that lack group knowledge. In this work, we therefore ask: _Can we train non-pessimistic robust models without access to group information on training samples?_
We address this question by considering a more nuanced assumption on the structure of the underlying groups. We assume that, conditioned on the label, group boundaries are realized by high-level features that depend on a small set of underlying factors (_e.g.,_ background color, brightness). This leads to simpler group functions with large margin and simple decision boundaries between groups (Figure 1_(left)_). Invoking the principle of minimum description length [21], restricting our adversary to functions that satisfy this assumption corresponds to a bitrate constraint. In DRO, the adversary upweights points with higher losses under the current learner, which in practice often correspond to examples that belong to a rare group, contain complex patterns, or are mislabeled [14, 58]. Restricting the adversary's capacity prevents it from upweighting individual hard or mislabeled examples (as they cannot be identified with simple features), and biases it towards identifying erroneous data points misclassified by simple features. This also complements the failure mode of neural networks trained with stochastic gradient descent (SGD) that rely on simple spurious features which correctly classify points in the _majority_ group but may fail on _minority_ groups [10].
The main contribution of this paper is Bitrate-Constrained DRO (BR-DRO), a supervised learning procedure that provides robustness to distribution shifts along groups realized by simple functions. Despite not using group information on training examples, we demonstrate that BR-DRO can match the performance of methods requiring them. We also find that BR-DRO is more successful in identifying true minority training points, compared to unconstrained DRO. This indicates that not optimizing for performance on contrived worst-case shifts can reduce the pessimism inherent in DRO. It further validates: (i) our assumption on the simple nature of group shift; and (ii) that our bitrate constraint meaningfully structures the uncertainty set to be robust to such shifts. As a consequence of the constraint, we also find that BR-DRO is robust to random noise in the training data [55], since it cannot form "groups" entirely based on randomly mislabeled points with low bitrate features. This is in contrast with existing methods that use the learner's training error to up-weight arbitrary sets of difficult training points [_e.g.,_ 32, 36], which we show are highly susceptible to label noise (see Figure 1_(right)_). Finally, we theoretically analyze our approach--characterizing how the degree of constraint on the adversary can effect worst risk estimation and excess risk (pessimism) bounds, as well as convergence rates for specific online solvers.
Figure 1: **Bitrate-Constrained DRO**: A method that assumes group shifts along low-bitrate features, and restricts the adversary appropriately so that the solution found is less pessimistic and more robust to unknown group shifts. Our method is also robust to training noise. _(Left)_ In Waterbirds [59], the spurious feature background is a large margin simple feature that separates the _majority_ and _minority_ points in each class. _(Right)_ Prior works [32, 36] that upweight arbitrary points with high losses force the model to memorize noisy mislabeled points while our method is robust to noise and only upweights the true minority group without any knowledge of its identity (see Section 6.2).
Related Work
Prior works in robust ML [e.g., 20, 33, 34] address various forms of adversarial or structured shifts. We specifically review prior work on robustness to group shifts. While those based on DRO optimize for worst-case shifts in an explicit uncertainty set, the robust set is implicit for some others, with most using some form of importance weighting.
**Distributionally robust optimization (DRO).** DRO methods generally optimize for worst-case performance on joint \((\mathbf{x},\mathrm{y})\) distributions that lie in an \(f\)-divergence ball (uncertainty set) around the training distribution [7, 8, 9, 17, 19, 41, 46]. Hu et al. [26] highlights that the conservative nature of DRO may lead to degenerate solutions when the unrestricted adversary uniformly upweights all misclassified points. Sagawa et al. [49] proposes to address this by limiting the adversary to shifts that only differ in marginals over predefined groups. However, in addition to it being difficult to obtain this information, Kearns et al. [28] raise "gerrymandering" concerns with notions of robustness that fix a small number of groups apriori. While they propose a solution that looks at exponentially many subgroups defined over protected attributes, our method does not assume access to such attributes and aims to be fair on them as long as they are realized by simple functions. Finally, Zhai et al. [65] avoid conservative solutions by solving the DRO objective over randomized predictors learned through boosting. We consider deterministic and over-parameterized learners and instead constrain the adversary's class.
**Constraining the DRO uncertainty set.** In the marginal DRO setting, Duchi et al. [18] limit the adversary via easier-to-control reproducing kernel hilbert spaces (RKHS) or bounded Holder continuous functions [35, 62]. While this reduces the statistical error in worst risk estimation, the size of the uncertainty set (scales with the data) remains too large to avoid cases where an adversary can re-weight mislabeled and hard examples from the majority set [14]. In contrast, we restrict the adversary even for large datasets where the estimation error would be low, as this would reduce excess risk when we only care about robustness to rare sub-populations defined by simple functions. Additionally, while their analysis and method prefers the adversary's objective to have a strong dual, we show empirical results on real-world datasets and generalization bounds where the adversary's objective is not necessarily convex.
**Robustness to group shifts without demographics.** Recent works [5, 16, 54] that aim to achieve group robustness without access to group labels employ various heuristics where the robust set is implicit while others require data from multiple domains [3, 64] or the ability to query test samples [31]. Liu et al. [36] use training losses for a heavily regularized model trained with empirical risk minimization (ERM) to directly identify minority data points with higher losses and re-train on the dataset that up-weights the identified set. Nam et al. [42] take a similar approach. Other methods [27] propose simple baselines that subsample the majority class in the absence of group demographics and the majority group in its presence. Hashimoto et al. [23] find DRO over a \(\chi^{2}\)-divergence ball can reduce the otherwise increasing disparity of per-group risks in a dynamical system. Since it does not use features to upweight points (as BR-DRO does), it is vulnerable to label noise. The same can be said about some other works (_e.g.,_[36, 42]).
**Importance weighting in deep learning.** Finally, numerous works [17, 32, 34, 44] enforce robustness by re-weighting losses on individual data points. Recent investigations [13, 38, 56] reveal that such objectives have little impact on the learned solution in interpolation regimes. One way to avoid this pitfall is to train with heavily regularized models [49, 50] and employ early stopping. Another way is to subsample certain points, as opposed to up-weighting [27]. In this work, we use both techniques while training our objective and the baselines, ensuring that the regularized class is robust to shifts under misspecification [62].
Preliminaries
We introduce the notation we use in the rest of the paper and describe the DRO problem. In the following section, we will formalize our assumptions on the nature of the shift before introducing our optimization objective and algorithm.
**Notation.** With covariates \(\mathcal{X}\subset\mathbb{R}^{d}\) and labels \(\mathcal{Y}\), the given source \(P\) and unknown true target \(Q_{0}\) are measures over the measurable space \((\mathcal{X}\times\mathcal{Y},\Sigma)\) and have densities \(p\) and \(q_{0}\) respectively (w.r.t. base measure \(\mu\)). The learner's choice is a hypothesis \(h:\mathcal{X}\mapsto\mathcal{Y}\) in class \(\mathcal{H}\subset L^{2}(P)\), and the adversary's action in standard DRO is a target distribution \(Q\) in set \(\mathcal{Q}_{P,\kappa}\coloneqq\{Q:Q\ll P,\,D_{f}(Q\,||\,P)\leq\kappa\}\). Here, \(D_{f}\) is the \(f\)-divergence between \(Q\) and \(P\) for a convex function \(f\)1 with \(f(1)=0\). An equivalent action space for the adversary is the set of re-weighting functions:
Footnote 1: For _e.g.,_\(\mathrm{KL}(Q\,||\,P)\) can be derived with \(f(x)=x\log x\) and for Total Variation \(f(x)=|x-1|/2\).
\[\mathcal{W}_{P,\kappa}=\{w:\mathcal{X}\times\mathcal{Y}\mapsto\mathbb{R}:\ w \text{ is measurable under }P,\;\mathbb{E}_{P}[w]=1,\;\mathbb{E}_{P}f(w)\leq\kappa\} \tag{1}\]
For a convex loss function \(l:\mathcal{Y}\times\mathcal{Y}\mapsto\mathbb{R}_{+}\), we denote \(l(h)\) as the function over \((\mathbf{x},\mathrm{y})\) that evaluates \(l(h(\mathbf{x}),\mathrm{y})\), and use \(l_{0-1}\) to denote the loss function \(\mathbb{1}(h(\mathbf{x})\neq\mathrm{y})\). Given either distribution \(Q\in\mathcal{Q}_{P,\kappa}\), or a re-weighting function \(w\in\mathcal{W}_{P,\kappa}\), the risk of a learner \(h\) is:
\[R(h,Q)=\mathbb{E}_{Q}\;[l(h)]\hskip 28.452756ptR(h,w)=\mathbb{E}_{(\mathbf{x}, \mathrm{y})\sim P}\;[l(h(\mathbf{x}),\mathrm{y})\cdot w(\mathbf{x},\mathrm{y} )]=\langle l(h),\;w\rangle_{P} \tag{2}\]
Note the overload of notation for \(R(h,\cdot)\). If the adversary is stochastic it picks a mixed action \(\delta\in\Delta(\mathcal{W}_{P,\kappa})\), which is the set of all distributions over \(\mathcal{W}_{P,\kappa}\). Whenever it is clear, we drop \(P,\kappa\).
**Unconstrained DRO [7].** This is a min-max optimization problem understood as a two-player game, where the learner chooses a hypothesis to minimize risk on the worst distribution that the adversary can choose from its set. Formally, this is given by Equation 3. The first equivalence is clear from the definitions; for the second, since \(R(h,Q)\) is linear in \(Q\), the supremum over \(\Delta(\mathcal{W}_{P,\kappa})\) is a Dirac delta over the best weighting in \(\mathcal{W}_{P,\kappa}\). In the next section, we will see how a bitrate-constrained adversary can only pick certain actions from \(\Delta(\mathcal{W}_{P,\kappa})\).
\[\inf_{h\in\mathcal{H}}\;\sup_{Q\in\mathcal{Q}_{P,\kappa}}\;R(h,Q) \;\;\equiv\;\;\inf_{h\in\mathcal{H}}\;\sup_{w\in\mathcal{W}_{P,\kappa}}\;R(h,w) \;\;\equiv\;\;\inf_{h\in\mathcal{H}}\;\sup_{\delta\in\Delta(\mathcal{W}_{P, \kappa})}\;\mathbb{E}_{w\sim\delta}\,[R(h,w)] \tag{3}\]
**Group Shift.** While the DRO framework in Section 3 is broad and addresses any unstructured shift, we focus on the specific case of group shift. First, for a given pair of measures \(P,Q\) we define what we mean by the group structure \(\mathcal{G}_{P,Q}\) (Definition 3.1). Intuitively, it is a set of sub-populations along which the distribution shifts, defined in a way that makes them uniquely identifiable. For _e.g.,_ in the Waterbirds dataset (Figure 1), there are four groups given by combinations of (label, background). Corollary 3.2 follows immediately from the definition of \(\mathcal{G}_{P,Q}\). Using this definition, the standard group shift assumption [49] can be formally re-stated as Assumption 3.3.
**Definition 3.1** (group structure \(\mathcal{G}_{P,Q}\)).: _For \(Q\ll P\) the group structure \(\mathcal{G}_{P,Q}\)=\(\{G_{k}\}_{k=1}^{K}\) is the smallest finite set of disjoint groups \(\{G_{k}\}_{k=1}^{K}\) s.t. \(Q(\cup_{k=1}^{K}G_{k})\)=\(1\) and \(\forall k\) (i) \(G_{k}\in\Sigma\), \(Q(G_{k})>0\) and (ii) \(p(\mathbf{x},\mathrm{y}\mid G_{k})=q(\mathbf{x},\mathrm{y}\mid G_{k})>0\;a.e.\) in \(\mu\). If such a structure exists then \(\mathcal{G}_{P,Q}\) is well defined._
**Corollary 3.2** (uniqueness of \(\mathcal{G}_{P,Q}\)).: \(\forall P,Q\)_, the structure \(\mathcal{G}(P,Q)\) is unique if it is well defined._
**Assumption 3.3** (standard group shift).: _There exists a well-defined group structure \(\mathcal{G}_{P,Q_{0}}\) s.t. target \(Q_{0}\) differs from \(P\) only in terms of marginal probabilities over all \(G\in\mathcal{G}_{P,Q_{0}}\)._
Bitrate-Constrained DRO
We begin with a note on the expressivity of the adversary in Unconstrained DRO and formally introduce the assumption we make on the nature of shift. Then, we build intuition for why unconstrained adversaries fail but restricted ones do better under our assumption. Finally, we state our main objective and discuss a specific instance of it.
#### How expressive is unconstrained adversary?
Note that the set \(\mathcal{W}_{P,\kappa}\) includes all measurable functions (under \(P\)) such that the re-weighted distribution is bounded in \(f\)-divergence (by \(\kappa\)). While prior works [17, 52] shrink \(\kappa\) to construct confidence intervals, this _only controls_ the total mass that can be moved between measurable sets \(G_{1},G_{2}\in\Sigma\), but _does not restrict_ the choice of \(G_{1}\) and \(G_{2}\) itself. As noted by Hu et al. [26], such an adversary is highly expressive, and optimizing for the worst case only leads to the solution of empirical risk minimization (ERM) under \(l_{0-1}\) loss. Thus, we can conclude that DRO recovers degenerate solutions because the worst target in \(\mathcal{W}_{P,\kappa}\) lies far from the subspace of naturally occurring targets. Since it is hard to precisely characterize natural targets we make a nuanced assumption: the target \(Q_{0}\) only upsamples those rare subpopulations that are misclassified by simple features. We state this formally in Assumption 4.2 after we define the bitrate-constrained function class \(\mathcal{W}(\gamma)\) in Definition 4.1.
**Definition 4.1**.: _A function class \(\mathcal{W}(\gamma)\) is bitrate-constrained if there exists a data independent prior \(\pi\), s.t. \(\mathcal{W}(\gamma)=\{\mathbb{E}[\delta]:\;\delta\in\Delta(\mathcal{W}),\; \text{KL}(\delta\;||\;\pi)\leq\gamma\}\)._
**Assumption 4.2** (simple group shift).: _Target \(Q_{0}\) satisfies Assumption 3.3 (group shift) w.r.t. source \(P\). Additionally, For some prior \(\pi\) and a small \(\gamma^{*}\), the re-weighting function \(q_{0}/p\) lies in a bitrate-constrained class \(\mathcal{W}(\gamma^{*})\). In other words, for every group \(G\in\mathcal{G}(P,Q_{0})\), \(\exists w_{G}\in\mathcal{W}(\gamma^{*})\) s.t. \(\mathbb{1}((\mathbf{x},\mathrm{y})\in G)=w_{G}\) a.e.. We refer to such a \(G\) as a **simple group** that is realized in \(\mathcal{W}(\gamma^{*})\)._
Under the principle of minimum description length [21] any deviation from the prior (_i.e.,_\(\text{KL}(\delta\;||\;\pi)\)) increases the _description length_ of the encoding \(\delta\in\Delta(\mathcal{W})\), thus we refer to \(\mathcal{W}(\gamma)\) as being _bitrate-constrained_ in the sense that it contains functions (means of distributions) that can be described with a limited number of bits given the prior \(\pi\). See Appendix A.3 for an example of a bitrate-constrained class of functions. Next we present arguments for why identifiability of simple (satisfy Assumption 4.2) minority groups can be critical for robustness.
#### Neural networks can perform poorly on simple minorities.
For a fixed target \(Q_{0}\), let's say there exist two groups: \(G_{\min}\) and \(G_{\max}\in\mathcal{G}(P,Q_{0})\) such that \(P(G_{\min})\ll P(G_{\max})\). By Assumption 4.2, both \(G_{\min}\) and \(G_{\max}\) are simple (realized in \(\mathcal{W}(\gamma^{*})\)), and are thus separated by some simple feature. The learner's class \(\mathcal{H}\) is usually a class of overparameterized neural networks. When trained with stochastic gradient descent (SGD), these are biased towards learning simple features that classify a majority of the data [53, 56]. Thus, if the simple feature separating \(G_{\min}\) and \(G_{\max}\) itself correlates with the label \(y\) on \(G_{\max}\), then neural networks would fit on this feature. This is precisely the case in the Waterbirds example, where the groups are defined by whether the simple feature background correlates with the label (Figure 1). Thus our assumption on the nature of the shift complements the tendency of neural networks to perform poorly on simple minorities.
#### The bitrate constraint helps identify simple unfair minorities in \(\mathcal{G}(P,Q_{0})\).
Any method that aims to be robust on \(Q_{0}\) must up-weight data points from \(G_{\min}\) but without knowing its identity. Since the unconstrained adversary upsamples any group of data points with high loss and low probability, it cannot distinguish between a rare group that is realized by simple functions in \(\mathcal{W}(\gamma^{*})\) and a rare group of examples that share no feature in common or may even be mislabeled. On the other hand, the group of mislabeled examples cannot be separated from the rest by functions in \(\mathcal{W}(\gamma^{*})\). Thus, a bitrate-constrained adversary can only identify simple groups and upsamples those that incur high losses - possibly due to the simplicity bias of neural networks.
BR-DRO objective. According to Assumption 4.2, there cannot exist a target \(Q_{0}\) such that the minority \(G_{\min}\in\mathcal{G}(P,Q_{0})\) is not realized in the bitrate-constrained class \(\mathcal{W}(\gamma^{*})\). Thus, by constraining our adversary to a class \(\mathcal{W}(\gamma)\) (for some \(\gamma\) that is user defined), we can possibly evade issues emerging from optimizing for performance on mislabeled or hard examples, even if they were rare. This gives us the objective in Equation 4 where the equalities hold from the linearity of \(\langle\cdot,\cdot\rangle\) and Definition 4.1.
\[\inf_{h\in\mathcal{H}}\sup_{\begin{subarray}{c}\delta\in\Delta( \mathcal{W})\\ \text{KL}(\delta\;\|\;\pi)\leq\gamma\end{subarray}}\mathbb{E}_{w\sim\delta}R(h, w)\;=\;\inf_{\begin{subarray}{c}\delta\in\Delta(\mathcal{W})\\ \text{KL}(\delta\;\|\;\pi)\leq\gamma\end{subarray}}\langle l(h),\mathbb{E}_{ \delta}[w]\rangle_{P}\;\;=\;\inf_{h\in\mathcal{H}}\sup_{w\in\mathcal{W}(\gamma )}R(h,w) \tag{4}\]
BR-DRO **in practice.** We parameterize the learner \(\mathbf{\theta}_{h}\in\Theta_{h}\) and adversary \(\mathbf{\theta}_{w}\in\Theta_{w}\) as neural networks2. In practice, we implement the adversary either as a one hidden layer variational information bottleneck (VIB) [2], where the Kullback-Leibler (KL) constraint on the latent variable \(\mathbf{z}\) (output of VIB's hidden layer) directly constrains the bitrate; or as an \(l_{2}\) norm constrained linear layer. The objective for the VIB (\(l_{2}\)) version is obtained by setting \(\beta_{\text{vib}}\neq 0\) (\(\beta_{l_{2}}\neq 0\)) in Equation 5 below. See Appendix A.2 for details. Note that the objective in Equation 5 is no longer convex-concave and can have multiple local equilibria or stationary points [39]. The adversary's objective also does not have a strong dual that can be solved through conic programs--a standard practice in DRO literature [43]. Thus, we provide an algorithm where both learner and adversary optimize BR-DRO iteratively through stochastic gradient ascent/descent (Algorithm 1 in Appendix A.1).
Footnote 2: We use \(\theta_{h},\theta_{w}\) and \(l(\theta_{h})\) to denote \(w(\mathbf{\theta}_{w};(\mathbf{x},\mathbf{y})),h(\mathbf{\theta}_{h};\mathbf{x})\) and \(l(h(\mathbf{\theta}_{h};\mathbf{x}),\mathbf{y})\) respectively.
\[\min_{\mathbf{\theta}_{h}\in\Theta_{h}}\langle l(\mathbf{\theta}_{h}), \mathbf{\theta}_{w}^{*}\rangle_{P}\quad\text{ s.t. }\quad\mathbf{\theta}_{w}^{*}=\operatorname*{arg\,max}_{\mathbf{\theta}_{w}\in\Theta_{ w}}\;\;L_{\text{adv}}(\mathbf{\theta}_{w};\mathbf{\theta}_{h},\beta_{\text{vib}}, \beta_{l_{2}},\eta) \tag{5}\] \[L_{\text{adv}}(\mathbf{\theta}_{w};\mathbf{\theta}_{h},\beta_{\text{vib }},\beta_{l_{2}},\eta)=\langle l(\mathbf{\theta}_{h})-\eta,\mathbf{\theta}_{w}\rangle_{ P}-\beta_{\text{vib}}\;\mathbb{E}_{P}\text{KL}(p(\mathbf{z}\;|\;\mathbf{x};\mathbf{ \theta}_{w})\;||\;\mathcal{N}(\mathbf{0},\mathbf{I}_{\text{d}}))-\beta_{ \mathbf{l_{2}}}\|\mathbf{\theta}_{\mathbf{w}}\|_{\mathbf{2}}^{2}\]
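For concreteness, the adversary's objective \(L_{\text{adv}}\) in Equation 5 (shown here only for the \(l_{2}\) variant, i.e. \(\beta_{\text{vib}}=0\)) can be written in a few lines. The sketch below is illustrative only, uses PyTorch-style names of our own choosing, and is not taken from the released implementation.

```python
import torch

def adversary_objective(weights, losses, eta, adv_params, beta_l2):
    """L_adv of Eq. (5), l2 variant:  <l(theta_h) - eta, w>_P  -  beta_l2 * ||theta_w||_2^2.

    weights: per-example adversary outputs w(x, y) in [0, 1], shape (n,)
    losses:  per-example learner losses l(h(x), y), detached from the learner graph, shape (n,)
    """
    reweighted = torch.mean((losses - eta) * weights)      # empirical <l - eta, w>_P
    l2_penalty = sum((p ** 2).sum() for p in adv_params)   # ||theta_w||_2^2
    return reweighted - beta_l2 * l2_penalty               # the adversary ascends this
```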
**Training.** For each example, the adversary takes as input: (i) the last layer output of the current learner's feature network; and (ii) the input label. The adversary then outputs a weight (in \([0,1]\)). The idea of applying the adversary directly on the learner's features (instead of the original input) is based on recent literature [29, 48] that suggests re-training the prediction head is sufficient for robustness to shifts. The adversary tries to maximize weights on examples with value \(\geq\eta\) (hyperparameter) and minimize on others. For the learner, in addition to the example it takes as input the adversary assigned weight for that example from the previous round and uses it to reweigh its loss in a minibatch. Both players are updated in a round (Algorithm 1).
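A minimal sketch of one such round is given below; it is our own schematic rendering of this description rather than the authors' code, and `learner.features`, `learner.head`, and the optimizer arguments are hypothetical names.

```python
import torch
import torch.nn.functional as F

def br_dro_round(learner, adversary, x, y, opt_h, opt_w, eta, beta_l2, n_classes):
    feats = learner.features(x)                        # last-layer features of the learner
    losses = F.cross_entropy(learner.head(feats), y, reduction="none")

    # adversary input: detached learner features concatenated with the (one-hot) label
    adv_in = torch.cat([feats.detach(), F.one_hot(y, n_classes).float()], dim=1)
    w = torch.sigmoid(adversary(adv_in)).squeeze(-1)   # per-example weights in [0, 1]

    # adversary step: ascend L_adv (push weight onto examples with loss >= eta)
    l2 = sum((p ** 2).sum() for p in adversary.parameters())
    adv_obj = torch.mean((losses.detach() - eta) * w) - beta_l2 * l2
    opt_w.zero_grad(); (-adv_obj).backward(); opt_w.step()

    # learner step: descend the loss reweighted by the (detached) adversary weights
    learner_obj = torch.mean(w.detach() * losses)
    opt_h.zero_grad(); learner_obj.backward(); opt_h.step()
    return learner_obj.item()
```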
## 5 Theoretical Analysis
The main objective of our analysis of BR-DRO is to show how adding a bitrate constraint on the adversary can: (i) give us tighter statistical estimates of the worst risk; and (ii) control the pessimism (excess risk) of the learned solution. First, we provide worst risk generalization guarantees using the PAC-Bayes framework [15], along with a result for a kernel adversary. Then, we provide convergence rates and pessimism guarantees for the solution found by our online solver for a specific instance of \(\mathcal{W}(\gamma)\). For both these, we analyze the constrained form of the conditional value at risk (CVaR) DRO objective [32] below.
**Bitrate-Constrained CVaR DRO.** When the uncertainty set \(\mathcal{Q}\) is defined by the set of all distributions \(Q\) that have bounded likelihood _i.e., \(\|q/p\|_{\infty}\leq 1/\alpha_{0}\)_, we recover the original CVaR DRO objective [19]. The bitrate-constrained version of CVaR DRO is given in Equation 6 (see Appendix C for derivation). Note that, slightly different from Section 3, we define \(\mathcal{W}\) as the set of all measurable functions \(w\): \(\mathcal{X}\times\)\(\mathcal{Y}\mapsto[0,1]\), since the other convex restrictions in Equation 1 are handled by dual variable \(\eta\). As in Section 4, \(\mathcal{W}(\gamma)\) is derived from \(\mathcal{W}\) using Definition 4.1. In Equation 6, if we replace the bitrate-constrained class \(\mathcal{W}(\gamma)\) with
the unrestricted \(\mathcal{W}\) then we recover the variational form of unconstrained CVaR DRO in Duchi et al. [17].
\[\mathcal{L}^{*}_{\text{cvar}}(\gamma)=\inf_{h\in\mathcal{H},\eta\in\mathbb{R}} \sup_{w\in\mathcal{W}(\gamma)}R(h,\eta,w)\;\;\text{where},\;R(h,\eta,w)=(1/ \alpha_{0})\langle l(h)-\eta,w\rangle_{P}+\eta \tag{6}\]
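To build intuition for Equation 6: with the unconstrained class \(\mathcal{W}\), the inner supremum followed by the minimization over \(\eta\) reduces to the familiar CVaR, i.e. (up to discretization) the average loss over the worst \(\alpha_{0}\)-fraction of examples. A small NumPy illustration of this unconstrained special case (ours, for exposition only):

```python
import numpy as np

def cvar_dual_objective(losses, alpha0, eta):
    """Eq. (6) with the optimal unconstrained weights w = 1{l >= eta}:
    (1/alpha0) * E[(l - eta)_+] + eta."""
    losses = np.asarray(losses, dtype=float)
    return np.mean(np.maximum(losses - eta, 0.0)) / alpha0 + eta

def cvar_unconstrained(losses, alpha0):
    """Minimum over eta of the dual objective; empirically this is (approximately)
    the mean loss over the worst ceil(alpha0 * n) examples."""
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(np.ceil(alpha0 * len(losses))))
    return np.sort(losses)[-k:].mean()
```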
**Worst risk estimation bounds for BR-DRO.** Since we are only given a finite sampled dataset \(\mathcal{D}\sim P^{n}\), we solve the objective in Equation 6 using the empirical distribution \(\hat{P}_{n}\). We denote the plug-in estimates as \(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D}\). This incurs an estimation error for the true worst risk. But when we restrict our adversary to \(\Delta(\mathcal{W},\gamma)\), for a fixed learner \(h\) we reduce the worst-case risk estimation error which scales with the bitrate \(\text{KL}(\cdot\;||\;\pi)\) of the solution (deviation from prior \(\pi\)). Expanding this argument to every learner in \(\mathcal{H}\), with high probability we also reduce the estimation error for the worst risk of \(\hat{h}^{\gamma}_{D}\). Theorem 5.1 states this generalization guarantee more precisely.
**Theorem 5.1** (worst-case risk generalization).: _With probability \(\geq 1-\delta\) over \(\mathcal{D}\sim P^{n}\), the worst bitrate-constrained \(\alpha_{0}\)-CVaR risk for \(\hat{h}^{\gamma}_{D}\) can be upper bounded by the following oracle inequality:_
\[\sup_{w\in\mathcal{W}(\gamma)}R(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D},w )\;\lesssim\;\mathcal{L}^{*}_{\text{cvar}}(\gamma)+\frac{M}{\alpha_{0}}\sqrt {\left(\gamma+\log\left(\frac{1}{\delta}\right)+(d+1)\log\left(\frac{L^{2}n}{ \gamma}\right)+\log n\right)/(2n-1)},\]
_when \(l(\cdot,\cdot)\) is \([0,M]\)-bounded, \(L\)-Lipschitz and \(\mathcal{H}\) is parameterized by convex set \(\Theta\subset\mathbb{R}^{d}\)._
_Proof._ _See Appendix C.3._
Informally, Theorem 5.1 tells us that bitrate-constraint \(\gamma\) gracefully controls the estimation error \(\mathcal{O}(\sqrt{(\gamma+\mathcal{C}(\mathcal{H}))/n})\) (where \(\mathcal{C}(\mathcal{H})\) is a complexity measure) if we know that Assumption 4.2 is satisfied. While this only tells us that our estimator is consistent with \(\mathcal{O}_{p}(1/\sqrt{n})\), the estimate may itself be converging to a degenerate predictor, _i.e.,_\(\mathcal{L}^{*}_{\text{cvar}}(\gamma)\) may be very high. For example, if the adversary can cleanly separate mislabeled points even after the bitrate constraint, then presumably these noisy points with high losses would be the ones mainly contributing to the worst risk, and up-weighting these points would result in a learner that has memorized noise. Thus, it becomes equally important for us to analyze the excess risk (or the pessimism) for the learned solution. Since this is hard to study for any arbitrary bitrate-constrained class \(\mathcal{W}(\gamma)\), we shall do so for the specific class of reproducing kernel Hilbert space (RKHS) functions.
**Special case of bounded RKHS.** Let us assume there exists a prior \(\Pi\) such that \(\mathcal{W}(\gamma)\) in Definition 4.1 is given by an RKHS induced by Mercer kernel \(k:\mathcal{X}\times\mathcal{X}\mapsto\mathbb{R}\), s.t. the eigenvalues of the kernel operator decay polynomially, _i.e.,_\(\mu_{j}\lesssim j^{-2/\gamma}\) (\(\gamma<2\)). Then, if we solve for \(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D}\) by doing kernel ridge regression over norm bounded (\(\|f\|_{\mathcal{W}(\gamma)}{\leq}B\leq 1\)) smooth functions \(f\) then we can control: (i) the pessimism of the learned solution; and (ii) the generalization error (Theorem 5.2). Formally, we refer to pessimism for estimates \(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D}\) as excess risk defined as:
\[\text{excess risk}\coloneqq\sup_{w\in\mathcal{W}(\gamma)}|\inf_{h,\eta}R(h, \eta,w)-R(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D},w)|. \tag{7}\]
**Theorem 5.2** (bounded RKHS).: _For \(l,\mathcal{H}\) in Theorem 5.1, and for \(\mathcal{W}(\gamma)\) described above \(\exists\gamma_{0}\) such that for all sufficiently bitrate-constrained \(\mathcal{W}(\gamma)\) i.e., \(\gamma\leq\gamma_{0}\), w.h.p. \(1-\delta\) worst risk generalization error:_
\[\sup_{w\in\mathcal{W}(\gamma)}R(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D}, w)\lesssim(1/n)\left(\log(1/\delta)+(d+1)\log(nB^{-\gamma}L^{\gamma/2})\right)\]
_and the excess risk is \(\mathcal{O}(B)\) for \(\hat{h}^{\gamma}_{D},\hat{\eta}^{\gamma}_{D}\) defined above._
_Proof._ _See Appendix C.4._
Thus, in the setting described above we have shown how bitrate-constraints given indirectly by \(\gamma,R\) can control both the pessimism and statistical estimation errors. Here, we directly analyzed the estimates
but did not describe the specific algorithm used to solve the objective in Equation 6 with \(\hat{P}_{n}\). Now, we look at an iterative online algorithm to solve the same objective and see how bitrate-constraints can also influence convergence rates in this setting.
**Convergence and excess risk analysis for an online solver.** In the following, we provide an algorithm to solve the objective in Equation 6 and analyze how the bitrate constraint impacts the solver and the solution. For convex losses, the min-max objective in Equation 6 has a unique solution and this matches the unique Nash equilibrium for the generic online algorithm (game) we describe (Lemma 5.3). The algorithm is as follows: Consider a two-player zero-sum game where the learner uses a no-regret strategy to first play \(h\in\mathcal{H},\eta\in\mathbb{R}\) to minimize \(\mathbb{E}_{w\sim\delta}R(h,\eta,w)\). Then, the adversary plays a follow-the-regularized-leader (FTRL) strategy to pick a distribution \(\delta\in\Delta(\mathcal{W}(\gamma))\) to maximize the same. Our goal is to analyze the bitrate-constraint \(\gamma\)'s effect on the above algorithm's convergence rate and the pessimistic nature of the solution found. For this, we need to first characterize the bitrate-constrained class \(\mathcal{W}(\gamma)\). If we assume there exists a prior \(\Pi\) such that \(\mathcal{W}(\gamma)\) is a Vapnik-Chervonenkis (VC) class of dimension \(O(\gamma)\), then in Theorem 5.4, we see that the iterates of our algorithm converge to the equilibrium (solution) in \(\mathcal{O}(\sqrt{\gamma\log n/T})\) steps. Clearly, the degree of bitrate constraint can significantly impact the convergence rate for a generic solver that solves the constrained DRO objective. Theorem 5.4 also bounds the excess risk (Equation 7) on \(\hat{P}_{n}\).
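To make the adversary's FTRL strategy concrete, suppose (purely for illustration) that \(\mathcal{W}(\gamma)\) is replaced by a finite set of candidate re-weighting functions \(\{w_{1},\dots,w_{K}\}\). FTRL with a negative-entropy regularizer over the simplex is then the familiar multiplicative-weights update; the sketch below is ours and only meant to convey the update rule.

```python
import numpy as np

def ftrl_adversary_update(cumulative_payoffs, step_size):
    """FTRL with negative entropy over a finite candidate set {w_1, ..., w_K}:
    delta_{t+1}(w_k) is proportional to exp(step_size * sum_{s<=t} R(h_s, eta_s, w_k))."""
    z = step_size * np.asarray(cumulative_payoffs, dtype=float)
    z -= z.max()                     # subtract the max for numerical stability
    delta = np.exp(z)
    return delta / delta.sum()
```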
**Lemma 5.3** (Nash equilibrium).: _For strictly convex \(l(h)\), \(l(h)\in[0,M]\), the objective in Equation 6 has a unique solution which is also the Nash equilibrium of the game above when played over compact sets \(\mathcal{H}\times[0,M]\), \(\Delta(\mathcal{W},\gamma)\). We denote this equilibrium as \(h^{*}_{D}(\gamma),\eta^{*}_{D}(\gamma),\delta^{*}_{D}(\gamma)\)._
**Theorem 5.4**.: _At time step \(t\), if the learner plays \((h_{t},\eta_{t})\) with no-regret and the adversary plays \(\delta_{t}\) with FTRL strategy that uses a negative entropy regularizer on \(\delta\) then average iterates \((\bar{h}_{T},\bar{\eta}_{T},\bar{\delta}_{T})=(1/T)\sum_{t=1}^{T}(h_{t},\eta_{ t},\delta_{t})\) converge to the equilibrium \((h^{*}_{D}(\gamma),\eta^{*}_{D}(\gamma),\delta^{*}_{D}(\gamma))\) at rate \(\mathcal{O}(\sqrt{\gamma\log n/T})\). Further the excess risk defined above is \(\mathcal{O}((M/\alpha_{0})\left(1-\frac{1}{n^{\gamma}}\right))\)._
_Proof._ _See Appendix C.6._
## 6 Experiments
Our experiments aim to evaluate the performance of BR-DRO and compare it with ERM and group shift robustness methods that do not require group annotations for training examples. We conduct empirical analyses along the following axes: (i) worst group performance on datasets that exhibit known spurious correlations; (ii) robustness to random label noise in the training data; (iii) average performance on hybrid covariate shift datasets with unspecified groups; and (iv) accuracy in identifying minority groups. See Appendix B for additional experiments and details3.
Footnote 3: The code used in our experiments can be found at [https://github.com/ars22/bitrate_DRO](https://github.com/ars22/bitrate_DRO).
**Baselines.** Since our objective is to be robust to group shifts without group annotations on training examples, we explore baselines that either optimize for the worst minority group (CVaR DRO [32]) or use training losses to identify specific minority points (LfF [42], JTT [36]). Group DRO [49] is treated as an oracle. We also compare with the simple re-weighting baseline (RWY) proposed by Idrissi et al. [27].
**Implementation details.** We train using Resnet-50 [24] for all methods and datasets except CivilComments, where we use BERT [63]. For our VIB adversary, we use a 1-hidden layer neural network encoder and decoder (one for each label). As mentioned in Section 4, the adversary takes as input the learner model's features and the true label to generate weights. All implementation and design choices for baselines were adopted directly from Idrissi et al. [27], Liu et al. [36]. We provide model selection methodology and other details in Appendix B.
**Datasets.** For experiments in the known groups and label noise settings we use: (i) Waterbirds [59] (background is spurious), CelebA [37] (binary gender is spuriously correlated with label "blond"); and CivilComments (WILDS) [11] where the task is to predict "toxic" texts and there are 16 predefined groups [30]. We use FMoW and Camelyon17 [30] to test methods on datasets that do not have explicit group shifts. In FMoW the task is to predict land use from satellite images, where the training/test sets comprise data from before/after 2013. The test set involves both subpopulation shifts over regions (_e.g.,_ Africa, Asia) and domain generalization over time (year). Camelyon17 presents a domain generalization problem where the task is to detect tumors in tissue slides from different sets of hospitals in the train and test sets.
### Is BR-DRO robust to group shifts without training data group annotations?
Table 1 compares the average and worst group accuracy for BR-DRO with ERM and four group shift robustness baselines: JTT, LfF, RWY, and CVaR DRO. First, we see that unconstrained CVaR DRO underperforms other heuristic algorithms. This matches the observation made by Liu et al. [36]. Next, we see that adding bitrate constraints on the adversary via a KL term or \(l_{2}\) penalty significantly improves the performance of BR-DRO (VIB) or BR-DRO (\(l_{2}\)), which now matches the best performing baseline (JTT). Thus, we see the less conservative nature of BR-DRO allows it to recover a large portion of the performance gap between Group DRO and CVaR DRO. Indirectly, this partially validates our Assumption 4.2, which states that the minority group is identified by a low bitrate adversary class. In Section 6.4 we discuss exactly what fraction of the minority group is identified, and the role played by the strength of the bitrate constraint.
### Br-DRO is more robust to random label noise
Several methods for group robustness (_e.g.,_ CVaR DRO, JTT) are based on the idea of up weighting points with high training losses. The goal is to obtain a learner with matching performance on every (small) fraction of points in the dataset. However, when training data has mislabeled examples, such an approach will likely yield degenerate solutions. This is because the adversary directly upweights any example where the learner has high loss, including datapoints with incorrect labels. Hence, even if the learner's prediction matches the (unknown) true label, this formulation would force the learner to memorize incorrect labelings at the expense of learning the true underlying function. On the other hand, if the adversary is sufficiently bitrate constrained, it cannot upweight the arbitrary set of randomly mislabeled points, as this would require it to memorize those points. Our Assumption 4.2 also dictates that the distribution shift would not upsample such high bitrate noisy examples. Thus, our constraint on the adversary ensures BR-DRO is robust to label noise in the training data and our assumption on the target distribution retains its robustness to test time distribution shifts.
\begin{table}
\begin{tabular}{r|c c c c c c} & \multicolumn{2}{c}{Waterbirds} & \multicolumn{2}{c}{CelebA} & \multicolumn{2}{c}{CivilComments} \\ Method & Avg & WG & Avg & WG & Avg & WG \\ \hline ERM & 97.1 (0.1) & 71.0 (0.4) & 95.4 (0.2) & 46.9 (1.0) & 92.3 (0.2) & 57.2 (0.9) \\ LfF [42] & 90.7 (0.2) & 77.6 (0.5) & 85.3 (0.2) & 77.4 (0.7) & 92.4 (0.1) & 58.9 (1.1) \\ RWY [27] & 93.7 (0.3) & 85.8 (0.5) & 84.9 (0.2) & 80.4 (0.3) & 91.7 (0.2) & 67.7 (0.7) \\ JTT [36] & 93.2 (0.2) & 86.6 (0.4) & 87.6 (0.2) & 81.3 (0.5) & 90.8 (0.3) & 69.4 (0.8) \\ CVaR DRO [32] & 96.3 (0.2) & 75.5 (0.4) & 82.2 (0.3) & 64.7 (0.6) & 92.3 (0.2) & 60.2 (0.8) \\ \hline BR-DRO (VIB) (ours) & 94.1 (0.2) & 86.3 (0.3) & 86.7 (0.2) & 80.9 (0.4) & 90.5 (0.2) & 68.7 (0.9) \\ BR-DRO (\(l_{2}\)) (ours) & 93.8 (0.2) & 86.4 (0.3) & 87.7 (0.3) & 80.4 (0.6) & 91.0 (0.3) & 68.9 (0.7) \\ \hline Group DRO [49] & 93.2 (0.3) & 91.1 (0.3) & 92.3 (0.3) & 88.4 (0.6) & 88.5 (0.3) & 70.0 (0.5) \\ \end{tabular}
\end{table}
Table 1: BR-DRO **recovers worst group performance gap between CVaR DRO and Group DRO:** On Waterbirds, CelebA and CivilComments we report test average (Avg) and test worst group (WG) accuracies for BR-DRO and baselines. In (\(\cdot\)) we report the standard error of the mean accuracy across five runs.
In Figure 2 (_right_) we highlight this failure mode of unconstrained up-weighting methods in contrast to BR-DRO. We first induce random label noise [14] of varying degrees into the Waterbirds and CelebA training sets. Then we run each method and compare worst group performance. In the absence of noise we see that the performance of JTT is comparable with BR-DRO, if not slightly better (Table 1). Thus, both BR-DRO and JTT perform reasonably well in identifying and upsampling the simple minority group in the absence of noise. In its presence, BR-DRO significantly outperforms JTT and other approaches on both Waterbirds and CelebA, as it only upsamples the minority examples misclassified by simple features, ignoring the noisy examples for the reasons above. To further verify our claims, we set up a noisily labeled synthetic dataset (see Appendix B for details). In Figure 2 (_left_) we plot training samples as well as the solutions learned by BR-DRO and JTT on synthetic data. In Figure 1 (_right_) we also plot exactly which points are upweighted by BR-DRO and JTT. Using both figures, we note that JTT mainly upweights the noisy points (in red) and memorizes them using \(\mathbf{x}_{\text{noise}}\). Without any weights on the minority, it memorizes those points as well and learns a component along the spurious feature. On the contrary, when we restrict the adversary with BR-DRO to be sparse (\(l_{1}\) penalty), it only upweights minority samples, since no sparse predictor can separate noisy points in the data. Thus, the learner can no longer memorize the upweighted minority and we recover the robust predictor along the core feature.
### How does BR-DRO perform on more general covariate shifts?
In Table 2 we report the average test accuracies for BR-DRO and baselines on the hybrid dataset FMoW and domain generalization dataset Camelyon17. Given its hybrid nature, on FMoW we also report worst region accuracy. First, we note that on these datasets group shift robustness baselines do not do better than ERM. Some are either too pessimistic (_e.g.,_ CVaR DRO), or require heavy assumptions (_e.g.,_ Group DRO) to be robust to domain generalization. This is also noted by Gulrajani and Lopez-Paz [22]. Next, we see
\begin{table}
\begin{tabular}{r|c c|c} Method & \multicolumn{2}{c|}{FMoW} & \multicolumn{1}{c}{Camelyon17} \\ & \multicolumn{1}{c}{Avg} & W-Reg & Avg \\ \hline ERM & 53.3 (0.1) & 32.4 (0.3) & 70.6 (1.6) \\ JTT [36] & 52.1 (0.1) & 31.8 (0.2) & 66.3 (1.3) \\ LfF [42] & 49.6 (0.2) & 31.0 (0.3) & 65.8 (1.2) \\ RWY [27] & 50.8 (0.1) & 30.9 (0.2) & 69.9 (1.3) \\ Group DRO [49] & 51.9 (0.2) & 30.4 (0.3) & 68.5 (0.9) \\ CVaR DRO [32] & 51.5 (0.1) & 31.0 (0.3) & 66.8 (1.3) \\ \hline BR-DRO (VIB) (ours) & 52.0 (0.2) & 31.8 (0.2) & 70.4 (1.5) \\ BR-DRO (\(l_{2}\)) (ours) & 53.1 (0.1) & 32.3 (0.2) & 71.2 (1.0) \\ \end{tabular}
\end{table}
Table 2: BR-DRO **does better than Group DRO and other baselines on two WILDS datasets where the precise nature of shift is unknown:** Average (Avg) and worst region (W-Reg for FMoW) test accuracies on Camelyon17 and FMoW. In (\(\cdot\)) we report the standard error of the mean accuracy across five runs.
Figure 2: (_Left_) **Visualization (2d) of noisy synthetic data and learned predictors:** We compare the decision boundaries (projected onto core and spurious features) learned by JTT with BR-DRO when the adversary is restricted to a sparse predictor. While our method recovers the core feature the baselines memorize the minority points. (_Right_) BR-DRO **is robust to random label noise in training data:** Across varying levels of the fraction of noise in training data we compare performance of BR-DRO with ERM and methods (JTT, CVaR DRO) that naively up weight high loss datapoints.
that BR-DRO (\(l_{2}\) version) does better than other group shift baselines on both worst region and average accuracy, and matches ERM performance on Camelyon17. One explanation could be that even though these datasets test models on new domains, there may be some latent groups defining these domains that are simple and form a part of a latent subpopulation shift. Investigating this claim further is a promising line of future work.
### What fraction of minority is recovered by BR-DRO?
We claim that our less pessimistic objective can more accurately recover (upsample) the true minority group if indeed the minority group is simple (see Assumption 4.2 for our definition of simple). In this section, we aim to verify this claim. If we treat examples in the top 10% (chosen for post hoc analysis) fraction of examples as our predicted minorities, we can check precision and recall of this decision on the Waterbirds and CelebA datasets. Figure 3 plots these metrics at each training epoch for BR-DRO (with varying \(\beta_{\text{vib}}\)), JTT and CVaR DRO. Precision of the random baseline tells us the true fraction of minority examples in the data. First we note that BR-DRO consistently performs much better on this metric than unconstrained CVaR DRO. In fact, as we reduce strength of \(\beta_{\text{vib}}\) we recover precision/recall close to the latter. This controlled experiment shows that the bitrate constraint is helpful (and very much needed) in practice to identify rare simple groups. In Figure 3 we observe that asymptotically, the precision of BR-DRO is better than JTT on both datasets, while the recall is similar. Since importance weighting has little impact in later stages with exponential tail losses [13, 56], other losses (_e.g.,_ polytail Wang et al. [61]) may further improve the performance of BR-DRO as it gets better at identifying the minority classes when trained longer.
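The bookkeeping behind these precision/recall curves is simple; a sketch of how one might compute them (our notation, assuming per-example adversary weights and held-out minority annotations are available) is:

```python
import numpy as np

def minority_precision_recall(adv_weights, is_minority, top_frac=0.10):
    """Treat the top `top_frac` fraction of adversary-weighted examples as predicted
    minorities and compare them against the true minority-group annotations."""
    adv_weights = np.asarray(adv_weights, dtype=float)
    is_minority = np.asarray(is_minority, dtype=bool)
    k = max(1, int(top_frac * len(adv_weights)))
    predicted = np.zeros(len(adv_weights), dtype=bool)
    predicted[np.argsort(adv_weights)[-k:]] = True   # highest-weight examples
    tp = np.sum(predicted & is_minority)
    return tp / predicted.sum(), tp / max(1, is_minority.sum())
```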
## 7 Conclusion
In this paper, we proposed a method for making machine learning models more robust. While prior methods optimize robustness on a per-example or per-group basis, our work focuses on features. In doing so, we avoid requiring group annotations on training samples, but also avoid the excessively conservative solutions that might arise from CVaR DRO with fully unconstrained adversaries. Our results show that our method avoids learning spurious features, is robust to noise in the training labels, and does better on other forms of covariate shifts compared to prior approaches. Our theoretical analysis also highlights other provable benefits in some settings like reduced estimation error, lower excess risk and faster convergence rates for certain solvers.
**Limitations.** While our method lifts the main limitation of Group DRO (access to training group annotations), it does so at the cost of increased complexity. Further, to tune hyperparameters, like prior work we assume access to some group annotations on a validation set but also get decent performance (on
Figure 3: By considering the fraction of points upweighted by our adversary (top 10%) as the positive class we analyze the precision and recall of this class with respect to the minority group, and do the same for JTT, a random baseline, and CVaR DRO. BR-DRO achieves the highest precision and matches the recall of JTT asymptotically. We also find that increasing the bitrate constraint \(\beta_{\text{vib}}\) helps improve precision/recall.
some datasets) with only a balanced validation set (see Appendix B). Adapting group shift methods to more generic settings remains an important and open problem.
**Acknowledgement.** The authors would like to thank Tian Li, Saurabh Garg at Carnegie Mellon University, and Yoonho Lee at Stanford University for helpful feedback and discussion.
|
2305.18716 | Cosmology in nonlocal gravity | In this chapter we review the recent developments of realizing $R^2$-like
inflation in the framework of a most general UV nonlocal extension of
Einstein's general theory of relativity (GR). It is a well-motivated robust
approach towards quantum gravity. In the past decades, nonlocal gravitational
theories which are quadratic in curvature have been understood to be ghost-free
and super-renormalizable around maximally symmetric spacetimes. However, in the
context of early Universe cosmology we show that one must go beyond the
quadratic curvature nonlocal gravity in order to achieve a consistent
ghost-free framework of Universe evolution from quasi de Sitter to Minkowski
spacetime. In this regard, we discuss a construction of a most general nonlocal
gravity action that leads to $R^2$-like inflation and discuss the corresponding
observational predictions for the scalar and tensor spectral tilts,
tensor-to-scalar ratio, and the primordial non-Gaussianities. We present an
analysis of how the nonlocal inflationary cosmology goes beyond the established
notions of effective field theories of inflation. Finally, we comment on some
open questions and prospects of higher curvature nonlocal gravity on its way of
achieving the UV completion. | Alexey S. Koshelev, K. Sravan Kumar, Alexei A. Starobinsky | 2023-05-30T03:44:55Z | http://arxiv.org/abs/2305.18716v1 | # Cosmology in nonlocal gravity
###### Abstract
In this chapter we review the recent developments of realizing \(R^{2}\)-like inflation in the framework of a most general UV nonlocal extension of Einstein's general theory of relativity (GR). It is a well-motivated robust approach towards quantum gravity. In the past decades, nonlocal gravitational theories which are quadratic in curvature have been understood to be ghost-free and super-renormalizable around maximally symmetric spacetimes. However, in the context of early Universe cosmology we show that one must go beyond the quadratic curvature nonlocal gravity in order to achieve a consistent ghost-free framework of Universe evolution from quasi de Sitter to Minkowski spacetime. In this regard, we discuss a construction of a most general nonlocal gravity action that leads to \(R^{2}\)-like inflation and discuss the corresponding observational predictions for the scalar and tensor spectral tilts, tensor-to-scalar ratio, and the primordial non-Gaussianities. We present an analysis of how the nonlocal inflationary cosmology goes beyond the established notions of effective field theories of inflation. Finally, we comment on some open questions and prospects of higher curvature nonlocal gravity on its way of achieving the UV completion.
Keywords: Models of quantum gravity, nonlocality and inflationary cosmology
Footnote †: corresponding author
Alexey S. Koshelev School of Physical Science and Technology, ShanghaiTech University, 201210 Shanghai, China
Departamento de Fisica, Centro de Matematica e Aplicacoes (CMA-UBI), Universidade da Beira Interior, 6200 Covilha, Portugal
e-mail: [email protected]
K. Sravan Kumar Institute of Cosmology & Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX, United Kingdom
e-mail: [email protected]
Alexei A. Starobinsky L. D. Landau Institute for Theoretical Physics RAS, Chernogolovka, Moscow region 142432, Russian Federation Kazan Federal University, Kazan 420008, Republic of Tatarstan, Russian Federation
e-mail: [email protected] |
2304.00270 | A coupled magneto-structural continuum model for multiferroic
$\mathrm{BiFeO}_3$ | A continuum approach to study magnetoelectric multiferroic $\mathrm{BiFeO}_3$
(BFO) is proposed. Our modeling effort marries the ferroelectric (FE) phase
field method and micromagnetic simulations in order to describe the entire
multiferroic order parameter sector (polarization, oxygen antiphase tilts,
strain, and magnetism) self-consistently on the same time and length scale. In
this paper, we discuss our choice of ferroelectric and magnetic energy terms
and demonstrate benchmarks against known behavior. We parameterize the lowest
order couplings of the structural distortions against previous predictions from
density functional theory calculations giving access to simulations of the FE
domain wall (DW) topology. This allows us to estimate the energetic hierarchy
and thicknesses of the numerous structural DWs. We then extend the model to the
canted antiferromagnetic order and demonstrate how the ferroelectric domain
boundaries influence the resulting magnetic DWs. We also highlight some
capabilities of this model by providing two examples relevant for applications.
We demonstrate spin wave transmission through the multiferroic domain
boundaries which identify rectification in qualitative agreement with recent
experimental observations. As a second example of application, we model
fully-dynamical magnetoelectric switching, where we find a sensitivity on the
Gilbert damping with respect to switching pathways. We envision that this
modeling effort will set the basis for further work on properties of arbitrary
3D nanostructures of BFO (and related multiferroics) at the mesoscale. | John Mangeri, Davi Rodrigues, Sudipta Biswas, Monica Graf, Olle Heinonen, Jorge Íñiguez | 2023-04-01T09:30:24Z | http://arxiv.org/abs/2304.00270v1 | # A coupled magneto-structural continuum model for multiferroic BiFeO\({}_{3}\)
###### Abstract
A continuum approach to study magnetoelectric multiferroic BiFeO\({}_{3}\) (BFO) is proposed. Our modeling effort marries the ferroelectric (FE) phase field method and micromagnetic simulations in order to describe the entire multiferroic order parameter sector (polarization, oxygen antiphase tilts, strain, and magnetism) self-consistently on the same time and length scale. In this paper, we discuss our choice of ferroelectric and magnetic energy terms and demonstrate benchmarks against known behavior. We parameterize the lowest order couplings of the structural distortions against previous predictions from density functional theory calculations giving access to simulations of the FE domain wall (DW) topology. This allows us to estimate the energetic hierarchy and thicknesses of the numerous structural DWs. We then extend the model to the canted antiferromagnetic order and demonstrate how the ferroelectric domain boundaries influence the resulting magnetic DWs. We also highlight some capabilities of this model by providing two examples relevant for applications. We demonstrate spin wave transmission through the multiferroic domain boundaries which identify rectification in qualitative agreement with recent experimental observations. As a second example of application, we model fully-dynamical magnetoelectric switching, where we find a sensitivity on the Gilbert damping with respect to switching pathways. We envision that this modeling effort will set the basis for further work on properties of arbitrary 3D nanostructures of BFO (and related multiferroics) at the mesoscale.
## I Introduction
The phenomenological description of ferroic phase transitions is characterized by the onset of one or more order parameters below a critical temperature. In the case of ferroelectric materials, the order parameter is an electric dipole condensed from unstable phonon modes [1; 2]. For ferromagnets, a net nonzero magnetization arises as ordering dominates thermal spin fluctuations below the Curie point [3]. In both cases, the theoretical portrayal of a single order parameter (and its conjugate electric or magnetic field) has been quite successful in illustrating and driving interest in a plethora of functional materials properties of technological relevance.
Multiferroics are compounds where multiple order parameters coexist and are coupled together in non-trivial ways. Magnetoelectric (ME) multiferroics exhibit ferroelectricity along with a magnetic ordering (which can be ferromagnetic [4], antiferromagnetic [5], ferrimagnetic [6], helimagnetic [7], etc.). In the context of applications for electronics, these types of structures are very promising since the coupling can provide a pathway to controlling the magnetic (electric) state with an electric (magnetic) field [5; 8; 9]. Or it is proposed that this coupling can give rise to new properties not present in either ferroelectric or magnetic state alone [8]. For most ME multiferroics however, this intrinsic coupling can be quite weak leading to an interest in searching for materials candidates where this is not the case.
A particular ME multiferroic, the perovskite BiFeO\({}_{3}\) (BFO), has been demonstrated to host appreciable spin-orbit coupling between its ferroelectric (FE) and antiferromagnetic (AFM) ordering. In bulk, BFO undergoes a phase transition to a rhombohedral ferroelectric phase upon cooling below 1100 K [10; 11] along with a Neel temperature of around 640 K resulting in collinear G-type AFM order [10]. Due to its high transition temperatures, it is a promising material for applications at ambient conditions. In BFO, the polarization \(\mathbf{P}\) displays an 8-fold symmetry of domain states aligned along the pseudocubic [111] or equivalent directions. The rhombohedral polar distortion (displacement of the Bi\({}^{3+}\) and Fe\({}^{3+}\) atoms relative to the oxygen atoms) is also accompanied by a spontaneous antiphase tilting of the FeO\({}_{6}\) octahedral oxygen cages about the polar axis. As such, the presence of the antiphase tilts at adjacent iron sites underpin an antisymmetric Dzyaloshinskii-Moriya interaction (DMI) which causes a canting of the anti-aligned Fe spins [12; 13]. Therefore, BFO displays a weak net ferromagnetic moment \(\mathbf{M}\) due to _noncollinearity_ in its magnetic structure. In many samples or in bulk, this canted moment forms a long-period cycloid with a period of around 64 nm [14; 15; 16; 17].
Due to its exceptional properties, BFO has been proposed to be used in a number of novel device concepts including beyond-CMOS logic gates [18; 19], tunneling magnetoresistance/stamtric van der Waals [20; 21; 22], THz radiation emitters [23; 24], enhanced piezoelectric elements [25; 26], ultrafast acoustic modulators [27], or linear electrooptical components [28; 29]. As miniaturization is a significant concern for next generation device proposals, the thicknesses of these ME films synthesized for the aforementioned applications are in the range of a few 10s of nm to a few \(\mu\)m's [16].
As highlighted in recent work [30; 31], the observed spin cycloid abruptly changes propagation direction at the FE domain walls (DWs) indicating its strong coupling to the polar order. Local measurement techniques suggest that the 109\({}^{\circ}\)-71\({}^{\circ}\)-109\({}^{\circ}\) sequence of FE DWs display a Bloch-like character with \(\mathbf{P}\) rotating across the DW with some sense of chirality [31; 32] leading to open questions as to the driving force of this phenomena as well as if the ME coupling can also yield chiral magnetic textures at these DWs. Additionally, there have been other experimental observations of unexplained mesoscopic phenomena in BFO. Piezoforce microscopy measurements have revealed metastable states in epitaxial thin films where instead of the 8-fold possibility of domain orientations, there are 12 which also display an appreciable population of charged domain boundaries which are controllable by electric field cycling [33].
A sought-after property of ME multiferroics is the ability to deterministically switch the magnetization with electric fields [5]. Due to the time and length scales involved in the practical implementations of ME switching, the dynamics of the coupled polar-magnetic texture is unclear. Supporting theory utilizing atomistic methods can become computationally intractable due to too many atoms in the simulation box or a difficulty of modeling real interfacial or time-dependent phenomena. As such, these methodologies can be difficult to implement to investigate the aforementioned experimentally relevant scenarios.
In order to investigate the mesoscopic picture of ME multiferroics taking into account both the ferroelectric physics and the micromagnetic formalism to describe the AFM behavior [34], we are motivated to develop a continuum model of BFO and its nanostructures. The goal is to coarse-grain the materials physics into a predictive capability for large length and time scales in a single calculation. While the phase field method has been particularly useful in understanding the ferroelectric domain topology and its response to external stimuli in BFO [35; 36], a natural forward progression is to extend this type of continuum modeling to the spins in the material with micromagnetic simulations [37; 38]. This would give access to new information about the collective spin excitations in the presence of (and coupled to) the topological defects (for example its domain walls or the recently experimentally resolved solitons [39] in BFO).
To explore these questions in this work, we propose a coupled multiferroic continuum model that marries the well-known FE phase field and micromagnetism self-consistently on the same time and length-scale. In Sections II.1 and II.2, we report a comprehensive description of the relevant governing equations and energy terms for the lattice contribution. We study the FE DWs in Section II.4 and establish our predictions of \(\mathbf{P}\) order parameter profiles (including also the spontaneous octahedral tilt and strain fields) for a number of different low-energy DWs in BFO. This allows us to parameterize the model-specific gradient coefficients by comparing to density functional theory (DFT) calculations [40]. Good agreement is demonstrated with respect to the energy hierarchy of the different low-energy DWs. We also report our model's predictions of Bloch rotational components, residual strain fields, and thicknesses of different DW types.
In Sections II.5, II.6, and II.7 we expand the model to include the magnetic order. We simulate the magnetic ground states in the presence of homogeneous and inhomogeneous structural order building on the results from the previous section. We evaluate the influence of different types of polar domain boundaries also yielding estimates of the DW thicknesses, topology, and energies of the magnetic texture. Then in Section III, we provide two illustrative examples of the capabilities of our simulations: (i) spin-wave transport through the multiferroic DW boundaries highlighting their rectifying nature; (ii) fully-coupled _dynamical_ switching of the magnetization order with a time-dependent electric field through the ME effect demonstrating a non-trivial sensitivity on physical parameters. While our model (and the examples provided) is certainly not exhaustive, we hope that this work will set the basis for further studies on properties of arbitrary 3D BFO nanostructures (and related multiferroics) at the continuum approximation of theory.
## II Multiferroic continuum model
We consider a zero-temperature-limit free energy density functional defined as a sum of a Landau-type energy density from the structural distortions of the lattice (\(f_{\mathrm{latt}}\)), the magnetic energy density due to the spin subsystem (\(f_{\mathrm{sp}}\)), and the magnetostructural coupling (\(f_{\mathrm{MP}}\)) in single crystal BFO,
\[f=f_{\mathrm{latt}}(\mathbf{P},\mathbf{A},\mathbf{\varepsilon})+f_{\mathrm{sp}}( \mathbf{L},\mathbf{m})+f_{\mathrm{MP}}(\mathbf{L},\mathbf{m},\mathbf{P}, \mathbf{A}), \tag{1}\]
where lower case \(f\) denotes a free energy _density_. In our continuum description, we need some formal definitions of the order parameters. The electric polarization \(\mathbf{P}\) is connected to the displacement of Bi\({}^{3+}\) and Fe\({}^{3+}\) atoms relative to the oxygen anions. The vector \(\mathbf{A}\) describes the rotations of the FeO\({}_{6}\) cages where the antiphase correlation between adjacent unit cells is implicitly assumed. The spontaneous homogeneous strain that arises below the phase transition is the rank two tensor \(\mathbf{\varepsilon}\) with symmetric components \(\varepsilon_{ij}=\varepsilon_{ji}\),
\[\varepsilon_{ij}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+ \frac{\partial u_{j}}{\partial x_{i}}\right), \tag{2}\]
where the variable \(u_{i}\) is the component of the elastic displacement vector \(\mathbf{u}\) which is solved for in our problem setup.
For the spin system, BFO is an antiferromagnet with anti-aligned spins at first-neighboring Fe sites (G-type) leading to two distinct sublattices \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\). The quantity \(\mathbf{L}\) is the AFM Neel vector which we define as \(\mathbf{L}=(\mathbf{m}_{1}-\mathbf{m}_{2})/2\). Additionally, we have the total magnetic moment \(\mathbf{m}=(\mathbf{m}_{1}+\mathbf{m}_{2})/2\) which accounts for the weak nonvanishing magnetization that arises due to the DMI. The quantities \(\mathbf{L}\) and \(\mathbf{m}\) are constrained such that \(|\mathbf{L}|+|\mathbf{m}|=1\) with, in general, \(|\mathbf{L}|\gg|\mathbf{m}|\) and \(\mathbf{L}\cdot\mathbf{m}=0\) reflecting the presence of a strong AFM coupling between the sublattices but with a weak noncollinearity in \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\). The total weak magnetization can be computed as \(\mathbf{M}=M_{s}\mathbf{m}\) where \(M_{s}\) is the saturation magnetization density of the Fe sublattice (\(4.0\mu\)B/Fe) [41; 42; 43].
### Lattice energy
We define the free energy density corresponding to the structural distortions of the lattice as \(f_{\text{latt}}\),
\[f_{\text{latt}}=f_{P}+f_{A}+f_{AP}+f_{P\varepsilon}+f_{A\varepsilon}+f_{\varepsilon}+f_{\nabla P}+f_{\nabla A}. \tag{3}\]
The energy expansion of \(f_{P},f_{A}\) and \(f_{AP}\) contains only the terms allowed by symmetry to the fourth order [44],
\[f_{\text{P}}=A_{P}\left(P_{x}^{2}+P_{y}^{2}+P_{z}^{2}\right)+B_{P}\left(P_{x}^{2}+P_{y}^{2}+P_{z}^{2}\right)^{2}+C_{P}\left(P_{x}^{2}P_{y}^{2}+P_{y}^{2}P_{z}^{2}+P_{x}^{2}P_{z}^{2}\right), \tag{4}\]
\[f_{\text{A}}=A_{A}\left(A_{x}^{2}+A_{y}^{2}+A_{z}^{2}\right)+B_{A}\left(A_{x}^{2}+A_{y}^{2}+A_{z}^{2}\right)^{2}+C_{A}\left(A_{x}^{2}A_{y}^{2}+A_{y}^{2}A_{z}^{2}+A_{x}^{2}A_{z}^{2}\right).\]
and
\[f_{\text{PA}}=B_{PA}\left(P_{x}^{2}+P_{y}^{2}+P_{z}^{2}\right)\left(A_{x}^{2}+A_{y}^{2}+A_{z}^{2}\right)+C_{PA}\left(P_{x}^{2}A_{x}^{2}+P_{y}^{2}A_{y}^{2}+P_{z}^{2}A_{z}^{2}\right)+C^{\prime}_{PA}\left(P_{x}P_{y}A_{x}A_{y}+P_{y}P_{z}A_{y}A_{z}+P_{x}P_{z}A_{x}A_{z}\right). \tag{5}\]
Additionally, the elastic, electrostrictive (\(\mathbf{P}\)-\(\varepsilon\)), and rotostrictive (\(\mathbf{A}\)-\(\varepsilon\)) energies are included as
\[f_{\varepsilon}=\frac{1}{2}C_{11}\left(\varepsilon_{xx}^{2}+\varepsilon_{yy}^{2}+\varepsilon_{zz}^{2}\right)+C_{12}\left(\varepsilon_{xx}\varepsilon_{yy}+\varepsilon_{yy}\varepsilon_{zz}+\varepsilon_{xx}\varepsilon_{zz}\right)+\frac{1}{2}C_{44}\left(\varepsilon_{xy}^{2}+\varepsilon_{yz}^{2}+\varepsilon_{xz}^{2}\right), \tag{6}\]
\[f_{P\varepsilon}=q_{11}\left(\varepsilon_{xx}P_{x}^{2}+\varepsilon_{yy}P_{y}^{2}+\varepsilon_{zz}P_{z}^{2}\right)+q_{12}\left[\varepsilon_{xx}\left(P_{y}^{2}+P_{z}^{2}\right)+\varepsilon_{yy}\left(P_{x}^{2}+P_{z}^{2}\right)+\varepsilon_{zz}\left(P_{x}^{2}+P_{y}^{2}\right)\right]+q_{44}\left(\varepsilon_{yz}P_{y}P_{z}+\varepsilon_{xz}P_{x}P_{z}+\varepsilon_{xy}P_{x}P_{y}\right),\]
and
\[f_{A\varepsilon}=r_{11}\left(\varepsilon_{xx}A_{x}^{2}+\varepsilon _{yy}A_{y}^{2}+\varepsilon_{zz}A_{z}^{2}\right) \tag{7}\] \[+r_{12}\left[\varepsilon_{xx}\left(A_{y}^{2}+A_{z}^{2}\right)+ \varepsilon_{yy}\left(A_{x}^{2}+A_{z}^{2}\right)+\varepsilon_{zz}\left(A_{x}^{ 2}+A_{y}^{2}\right)\right]\] \[+r_{44}\left(\varepsilon_{yz}A_{y}A_{z}+\varepsilon_{xz}A_{z}A_{ x}+\varepsilon_{xy}A_{x}A_{y}\right)\]
respectively. Finally, to evaluate inhomogeneous phases (i.e DWs), we include the lowest-order Lifshitz invariants [45; 46; 47] for the structural distortions to Eq. (3),
\[f_{\nabla\mathbf{P}}=\frac{G_{11}}{2}\left(P_{x,x}^{2}+P_{y,y}^{2 }+P_{z,z}^{2}\right) \tag{8}\] \[+G_{12}\left(P_{x,x}P_{y,y}+P_{y,y}P_{z,z}+P_{x,x}P_{z,z}\right)\] \[+\frac{G_{44}}{2}\left[\left(P_{x,y}+P_{y,x}\right)^{2}+\left(P_ {y,z}+P_{z,y}\right)^{2}+\left(P_{x,z}+P_{z,x}\right)^{2}\right]\]
and
\[f_{\nabla\mathbf{A}}=\frac{H_{11}}{2}\left(A_{x,x}^{2}+A_{y,y}^{ 2}+A_{z,z}^{2}\right) \tag{9}\] \[+H_{12}\left(A_{x,x}A_{y,y}+A_{y,y}A_{z,z}+A_{x,x}A_{z,z}\right)\] \[+\frac{H_{44}}{2}\left[\left(A_{x,y}+A_{y,x}\right)^{2}+\left(A_ {y,z}+A_{z,y}\right)^{2}+\left(A_{x,z}+A_{z,x}\right)^{2}\right]\]
for both the \(\mathbf{P}\) and \(\mathbf{A}\) order parameters respectively. A comma in the subscript denotes a partial derivative with respect to the specified spatial directions. The bulk homogeneous contribution to the energy (i.e. the terms _not_ involving \(f_{\nabla P}\) and \(f_{\nabla A}\)) has been previously parameterized with DFT calculations [44]; we refer the reader to this publication for the relevant coefficients.
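For illustration, a minimal numerical sketch of the homogeneous (bulk) part of the potential is given below; the coefficient values are placeholders only, since the actual parameterization is taken from Ref. [44] and is not reproduced here.

```python
# Minimal sketch of evaluating f_P + f_A + f_PA (Eqs. (4)-(5)) for uniform P and A.
# The numbers in `coeffs` are illustrative placeholders, not the fitted values of Ref. [44].
import numpy as np

def f_bulk(P, A, coeffs):
    Px, Py, Pz = P
    Ax, Ay, Az = A
    c = coeffs
    P2, A2 = Px**2 + Py**2 + Pz**2, Ax**2 + Ay**2 + Az**2
    f_P = c["A_P"]*P2 + c["B_P"]*P2**2 + c["C_P"]*(Px**2*Py**2 + Py**2*Pz**2 + Px**2*Pz**2)
    f_A = c["A_A"]*A2 + c["B_A"]*A2**2 + c["C_A"]*(Ax**2*Ay**2 + Ay**2*Az**2 + Ax**2*Az**2)
    f_PA = (c["B_PA"]*P2*A2
            + c["C_PA"]*(Px**2*Ax**2 + Py**2*Ay**2 + Pz**2*Az**2)
            + c["Cp_PA"]*(Px*Py*Ax*Ay + Py*Pz*Ay*Az + Px*Pz*Ax*Az))
    return f_P + f_A + f_PA

# Placeholder coefficients (illustrative only).
coeffs = dict(A_P=-1.0, B_P=0.5, C_P=0.2, A_A=-1.0, B_A=0.5, C_A=0.2,
              B_PA=0.1, C_PA=-0.05, Cp_PA=0.02)
print(f_bulk(P=np.array([0.5, 0.5, 0.5]), A=np.array([7.7, 7.7, 7.7]), coeffs=coeffs))
```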
However, in the case of the gradient energy, the set of coefficients \(\{G_{ij},\,H_{ij}\}\) are difficult to obtain directly from DFT (see for example the approach outlined in Refs. [48; 49; 50]) - so we employ a fitting procedure in Sec. II.4 to evaluate them. We should emphasize that if a different bulk homogeneous phenomenological potential is used (i.e. Refs. [51; 36; 52]), then the gradient coefficients obtained would be different since they depend strongly on the energetics of the order parameters in the vicinity of the DW.
### Governing equations
To find the polar ground states, we evolve the coupled time dependent Landau-Ginzburg (TDLG) equations,
\[\frac{\partial\mathbf{P}}{\partial t}=-\Gamma_{P}\frac{\delta f_{\text{latt}}}{ \delta\mathbf{P}} \tag{10}\]
and
\[\frac{\partial\mathbf{A}}{\partial t}=-\Gamma_{A}\frac{\delta f_{\text{latt}}}{ \delta\mathbf{A}} \tag{11}\]
along with satisfying the stress-divergence equation for mechanical equilibrium,
\[\sum_{j=x,y,z}\frac{\partial\sigma_{ij}}{\partial x_{j}}=0, \tag{12}\]
where \(\sigma_{ij}=\sigma_{ji}=\partial f_{\rm latt}/\partial\varepsilon_{ij}\) is the elastic stress of the material. We write the components of \(\sigma_{ij}\) as
\[\sigma_{ij}=\sum_{k,l=x,y,z}C_{ijkl}\left(\varepsilon_{kl}+\varepsilon_{kl}^{ \rm eig}\right) \tag{13}\]
where \(\varepsilon_{kl}\) is the elastic strain from Eq. (2) and the eigenstrain is related to the spontaneous strain via,
\[\varepsilon_{ij}^{\rm eig}=\sum_{k,l=x,y,z}\left(Q_{ijkl}P_{k}P_{l}+R_{ijkl}A_ {k}A_{l}\right), \tag{14}\]
where \(Q_{ijkl}\) and \(R_{ijkl}\) are the electrostrictive and rotostrictive coefficients. These are related to our free energy density coefficients \(q_{ijkl}\) and \(r_{ijkl}\) as (in Voigt notation),
\[Q_{11}=\frac{1}{3}\left(\frac{2\left(q_{11}-q_{12}\right)}{C_{11}-C_{12}}+ \frac{q_{11}+2q_{12}}{C_{11}+2C_{12}}\right), \tag{15}\]
\[Q_{12}=\frac{1}{3}\left(-\frac{q_{11}-q_{12}}{C_{11}-C_{12}}+\frac{q_{11}+2q_ {12}}{C_{11}+2C_{12}}\right), \tag{16}\]
and
\[Q_{44}=\frac{q_{44}}{4C_{44}}, \tag{17}\]
with similar definitions for the quantities involving \(R_{ijkl}\). We also investigate electrostatic phenomena in our model through the Poisson equation,
\[\epsilon_{b}\nabla^{2}\Phi_{\rm E}=\nabla\cdot{\bf P}, \tag{18}\]
where \(\Phi_{\rm E}\) is the electrostatic potential which defines the electric field \({\bf E}=-\nabla\Phi_{\rm E}\) in the usual way. The parameter \(\epsilon_{b}=30\,\epsilon_{0}\) is the relative background dielectric constant [53]. Eq. (18) is solved at every time step of the evolution of Eq. (10) and (11). In Sec. II.4 we are searching for the local minima due to the relaxation dynamics of Eq. (10) and (11) and as such the time relaxation constants \(\Gamma_{P}\) and \(\Gamma_{A}\) are set to unity.
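A minimal sketch of this relaxation scheme for a single homogeneous cell is shown below; the toy energy, coefficients, and finite-difference variational derivatives are illustrative stand-ins, and the full calculation additionally satisfies Eqs. (12) and (18) at each time step.

```python
# Minimal sketch: forward-Euler relaxation of Eqs. (10)-(11) for a homogeneous cell,
# with the derivatives of an illustrative bulk energy evaluated by central differences.
import numpy as np

def f_bulk(P, A):
    # Illustrative double-well potentials with a biquadratic P-A coupling (not the BFO fit).
    P2, A2 = np.dot(P, P), np.dot(A, A)
    return -1.0*P2 + 0.5*P2**2 - 1.0*A2 + 0.5*A2**2 + 0.1*P2*A2

def num_grad(f, x, *args, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += h; xm[i] -= h
        g[i] = (f(xp, *args) - f(xm, *args)) / (2*h)
    return g

P = np.array([0.1, 0.2, 0.3]); A = np.array([0.3, 0.2, 0.1])
gamma_P = gamma_A = 1.0; dt = 1e-2
for step in range(5000):
    dP = -gamma_P * num_grad(f_bulk, P, A)                      # dP/dt = -Gamma_P df/dP
    dA = -gamma_A * num_grad(lambda a, p: f_bulk(p, a), A, P)   # dA/dt = -Gamma_A df/dA
    P, A = P + dt*dP, A + dt*dA
print(P, A, f_bulk(P, A))
```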
To enforce periodicity on the strain tensor components in our representative volume element that includes DWs, we separate the strain fields calculated from Eq. (2) and (12) into homogeneous (global) and inhomogeneous (local) parts. This is done utilizing the method formulated by Biswas and co-workers in Ref. [[54]] which relaxes the stress components along the periodic directions and thus allows corresponding deformation to occur. Here, the homogeneous contribution of the total strain obeys the following integrated quantity at every time step of the relaxation,
\[\int\limits_{V}d^{3}{\bf r}\;\sigma_{ij}^{\rm total}=0, \tag{19}\]
where \(V\) is the volume of our simulation containing the DW profiles. The total stress tensor, \(\sigma_{ij}^{\rm total}\), is calculated from the sum of homogeneous, inhomogeneous, and eigenstrain components \(\varepsilon_{ij}^{\rm total}=\varepsilon_{ij}^{\rm inhom}+\varepsilon_{ij}^{ \rm hom}+\varepsilon_{ij}^{\rm eig}\) for all _periodic_ directions \(i\) and corresponding periodic component \(j\) at every time step of the simulation.
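In the simplified case where all directions are periodic and the elastic tensor is uniform, the condition of Eq. (19) reduces to choosing the homogeneous strain that cancels the volume average of the inhomogeneous and eigenstrain contributions; a minimal sketch (with illustrative arrays in place of the finite-element fields) is given below.

```python
# Minimal sketch of the homogeneous-strain correction implied by Eq. (19), assuming a
# fully periodic cell and a spatially uniform elastic tensor, so that <sigma_total> = 0
# reduces to eps_hom = -<eps_inhom + eps_eig>. Arrays are illustrative stand-ins.
import numpy as np

def homogeneous_strain(eps_inhom, eps_eig):
    """eps_* have shape (N_points, 3, 3); returns the 3x3 homogeneous strain."""
    return -np.mean(eps_inhom + eps_eig, axis=0)

rng = np.random.default_rng(1)
eps_inhom = 1e-3 * rng.standard_normal((1000, 3, 3))
eps_eig = np.broadcast_to(np.diag([1.3e-2, 1.3e-2, 1.3e-2]), (1000, 3, 3))
eps_hom = homogeneous_strain(eps_inhom, eps_eig)
# Check: the average total strain (hence the average stress for uniform C) vanishes.
print(np.allclose(np.mean(eps_inhom + eps_eig, axis=0) + eps_hom, 0.0))
```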
### Numerical implementation
Equations (10), (11), (12), (18), and (19) are cast into their weak formulation sufficient for the finite element analysis. Our method uses linear Lagrange shape functions for the coupled variable system. The finite element mesh spacing is selected to be \(\Delta x\approx 0.1\) nm for all calculations in this work. This small mesh spacing helps resolve the thin DWs in BFO to smoothness which are discussed extensively in Section II.4 and II.7. We implement Newmark-beta time integration [55] with convergence between time steps achieved when the nonlinear residuals calculated during the Newton-Raphson iteration (with block Jacobi preconditioning) have been reduced by \(10^{-8}\) relative tolerance. If convergence is not obtained, we use adaptive time stepping with reduction factor of 0.5. The finite element method (FEM) implementation of this work is available within Ferret[56] which is an add-on module for the open source Multiphysics Object Oriented Simulation Environment (MOOSE) framework [57].
In the absence of order parameter gradients, the homogeneous FE states of \({\bf P}\) parallel to \({\bf A}\) which we denote as \({\bf P}\uparrow\uparrow{\bf A}\) can be obtained numerically. To perform this calculation, we evolve Eq. (10) and (11) simultaneously solving Eq. (12) (at every time step) until the relative change in total volume integrated energy density \(F\) between adjacent time steps is less than \(5\times 10^{-7}\) eV/s. The bulk potential predicts the spontaneous values of the order parameters upon minimization that are \(P_{s}=|{\bf P}|=0.945\;{\rm C}/{\rm m}^{2}\) and \(A_{s}=|{\bf A}|=13.398^{\circ}\). The spontaneous normal and shear strains that correspond to these values are \(\varepsilon_{n}=\varepsilon_{ii}=1.308\times 10^{-2}\) and \(\varepsilon_{s}=\varepsilon_{ij}=2.95\times 10^{-3}\) for \(i\neq j\) in agreement with Ref. [[44]]. The free energy density of the ground state given by Eq. (3) is -15.5653 eV \(\cdot\) nm\({}^{-3}\). The energy functional used also describes identical energy minima when \({\bf P}\uparrow\downarrow{\bf A}\) (which is equivalent to a \(180^{\circ}\) phase reversal of the tilt field). Since the rotostrictive strains defined in Eq. (14) are invariant upon full reversal of \({\bf A}\), then these numbers are left unchanged. In the next Section II.4 we evaluate the inhomogeneous textures of the DWs and parameterize the gradient coefficients \(\{G_{ij},H_{ij}\}\) used in our model.
### Calculation of gradient coefficients
In order to study the domain wall topology involving spatial variations of \({\bf P}\), \({\bf A}\), and strain, a good parameter set estimate of the gradient coefficients \((G_{11},H_{11},...)\) of Eq. (8) and Eq. (9) is needed. To achieve this, we consult DFT calculations reported by Dieguez and co-workers in Ref. [40]. It was shown that an assortment of metastable states is allowed in BFO and that this zoology of different DW types forms an energy hierarchy. Due to electrostatic compatibility, this collection of states has specific requirements on the components of the order parameters that modulate across the domain boundary. For example, the lowest energy configurations
which we denote (see Table 1) as 2/1(100) and 3/0(110) are the 109\({}^{\circ}\) and 180\({}^{\circ}\) DWs respectively. In this notation, it is indicated that, for the 2/1 DW, two components of \(\mathbf{P}\) and one component of \(\mathbf{A}\) switch sign across the boundary whose plane normal is (100), whereas for the 3/0 DW, \(\mathbf{P}\) undergoes a full reversal where \(\mathbf{A}\) is unchanged across the (110)-oriented boundary plane. We label the pairs of the domains characterizing the DW as \(\mathbf{P}^{\mathrm{I}}/\mathbf{A}^{\mathrm{I}}\) and \(\mathbf{P}^{\mathrm{II}}/\mathbf{A}^{\mathrm{II}}\) in this table. This determines which terms in Eq. (8) and (9) are primary contributions to the DW energy. This is particularly advantageous as it has allowed us to separate the computation of specific DWs in the analysis of fitting the gradient coefficients to the DFT results.
To obtain the (100)- or (110)-oriented DWs within our phase field scheme, we choose an initial condition for the components of the order parameters to be a \(\sin(\mathrm{x})\) or \(\sin(\mathrm{x}+\mathrm{y})\) profile respectively. We then relax Eq. (10), and (11) until convergence along with satisfying the conditions of mechanical equilibrium of Eq. (12) at every time step. The periodic boundary conditions on the components of \(\mathbf{P},\mathbf{A}\), and \(\mathbf{u}\) for (100)- or (110)-oriented domain walls are enforced along the [100] and [110] directions respectively. We compute the DW energy with
\[F_{\mathrm{DW}}=\frac{F-F_{0}}{N\cdot S} \tag{20}\]
where \(F_{0}\) is the corresponding monodomain energy from Eq. (3) integrated over the computational volume \(V\). The energy \(F\) is computed from the solution that contains the DW profile. The number of DWs in the simulation box is \(N\) and \(S\) the surface area of the DW plane. We find convergence on the computed energies within 1 mJ/m\({}^{2}\) provided that the DW-DW distances are greater than 30 nm due to long-range strain interactions. For fourth-order thermodynamic potentials, a fit function of the form \(W_{k}\tanh\left[\left(r-r_{0}\right)/t_{k}\right]\) is sufficient to fit the evolution of order parameters that switch across the DW [58] where \(W_{k}\) is the value of the switched spontaneous order parameters far from a DW plane localized at \(r_{0}\) and \(t_{k}\) corresponds to the thickness of the polar or octahedral tilt parameters for \(k=P,A\) respectively.
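A minimal post-processing sketch of Eq. (20) and of the tanh fit used to extract the DW thickness is shown below; the synthetic profile and array names are illustrative and do not come from the FEM solver.

```python
# Minimal sketch: DW energy from Eq. (20) and a tanh fit of a switched order-parameter
# component to extract the wall thickness 2t_k.
import numpy as np
from scipy.optimize import curve_fit

def dw_energy(F_total, F_mono, n_walls, area):
    """Eq. (20): excess energy per unit wall area."""
    return (F_total - F_mono) / (n_walls * area)

def tanh_profile(r, W, r0, t):
    return W * np.tanh((r - r0) / t)

# Synthetic 2/1-like profile of a switched P component across a wall at r0 = 0 nm.
r = np.linspace(-2.0, 2.0, 401)                       # nm
P_y = 0.55 * np.tanh(r / 0.04) + 0.005*np.random.default_rng(2).standard_normal(r.size)
popt, _ = curve_fit(tanh_profile, r, P_y, p0=(0.5, 0.0, 0.1))
W_fit, r0_fit, t_fit = popt
print(f"fitted spontaneous value {W_fit:.3f} C/m^2, wall thickness 2t = {2*t_fit:.3f} nm")
```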
As a first example, consider the lowest energy DW predicted by DFT, the so-called 109\({}^{\circ}\) 2/1 (100) DW which is indeed frequently observed in thin film samples of BFO [19; 59]. The primary gradient coefficients governing the energy of the wall are the \(H_{11}\) and \(G_{44}\) coefficients owing to the fact that \(A_{x,x}\), \(P_{y,x}\), and \(P_{z,x}\) are nonzero (see Table 1). The resulting DW profile for the 2/1 (100) wall is presented in Fig. 1(a). The profile is a smooth rotation of both \(A_{x}\) and \(P_{y}=P_{z}\) across the wall region. The inset on the left reveals that the _non-switching_ component \(P_{x}\) experiences a slight decrease (\(\approx-3\%\)) at the wall. The quantitative value of the modulation of the non-switched component is consistent with DFT results of the same DW type [60].
The small change of \(P_{x}\) corresponds with a built-in \(\Phi_{\mathrm{E}}\) shown in the right inset panel which is of comparable order (\(\approx 10\) mV) to those estimated from DFT [60]. Fitting the \(\mathbf{P}\)-\(\mathbf{A}\) profile shows that the DW is quite thin (thickness \(2t_{P}\approx 0.08\) nm). Hence, we obtain DWs with marked Ising character. We provide an energy profile scan across the primary coefficients \(H_{11}\) and \(G_{44}\) in (b). The dashed white line outlines the predictions from DFT results in Ref. [40]. We also should mention that the dependence on other coefficients is quite weak due to the relatively small gradients in non-switching components. These calculations (and others not shown here) reveal that the choice of \(\{G_{ij},H_{ij}\}\) is not unique, i.e. one can find the same DW energies (with very similar profiles) for different combinations of the primary coefficients. Therefore, it is necessary to visit other DW configurations to constrain the values of the entire set.
Figure 1: (a) \(\mathbf{P}\) and \(\mathbf{A}\) 2/1 (100)-oriented DW profile. The left inset shows the \(x\)-component of \(\mathbf{P}\) decreasing across the DW, while the right inset demonstrates the built-in \(\Phi_{\mathrm{E}}\) (in mV) arising from this small rotation. (b) Energy surface as a function of the primary gradient coefficients \(H_{11}\) and \(G_{44}\) with a DW-DW distance of \(\approx 160\) nm. For panel (a), the solution coincides with our best estimates of \(G_{44}\) and \(H_{11}\) (listed in Table 2).
Next, we present **P-A** profiles of three higher energy (100)-oriented domain walls (1/1, 0/3, and 2/2) in Fig. 2(a), (b), and (c), respectively. These three calculations correspond to those using our best estimates of the gradient coefficients \(\{G_{ij},H_{ij}\}\) in Table 2. In all three cases, we find the presence of small changes in the non-switching components of the order parameters shown in circles for \(A_{k}\) and diamonds for \(P_{k}\). For example, in the 71\({}^{\circ}\) 1/1 DW shown in Fig. 2(a), \(P_{y}\) (in red), which does not change sign, grows at the DW by about 15%. This is in contrast to the \(P_{x}\) component (in blue) which only grows by 2.5%, demonstrating the influence of the weak built-in field which reduces the magnitude of this component to keep this wall neutral. Similar changes on the order of about 10% are also seen in the \(A_{x}=A_{y}\) components shown in blue. This DW-induced change in **P** seems to
\begin{table}
\begin{tabular}{c|c c|c c c c|c c c|c c} \(\mathbf{P}^{\mathrm{I}}/\mathbf{A}^{\mathrm{I}}\) & Type & DW & \(\mathbf{P}^{\mathrm{II}}/\mathbf{A}^{\mathrm{II}}\) & \(P_{i,j}\) & \(G_{ij}\) & \(A_{i,j}\) & \(H_{ij}\) & \(2t_{P}\) & \(2t_{A}\) & \(F_{\mathrm{DW}}^{\mathrm{(DFT)}}\) & \(F_{\mathrm{DW}}^{\mathrm{(FEM)}}\) \\ \hline
[111]/[111] & 0/0 & - & [111]/[111] & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) & \(-\) \\
[111]/[111] & 0/3 & (100) & [111]/[111] & \(-\) & \(-\) & \(A_{x,x},A_{y,x},A_{z,x}\) & \(H_{44},H_{11}\) & \(-\) & 0.39 & 227 & 293 \\
[111]/[111] & 1/1 & (100) & [111]/[111] & \(P_{x,x}\) & \(G_{44}\) & \(A_{x,x}\) & \(H_{44}\) & 0.33 & 0.52 & 151 & 162 \\
[111]/[111] & 1/2 & (100) & [111]/[111] & \(P_{x,x}\) & \(G_{11}\) & \(A_{y,x},A_{z,x}\) & \(H_{44}\) & 0.25 & 0.25 & 147 & 159 \\
[111]/[111] & 2/1 & (100) & [111]/[111] & \(P_{y,x},P_{z,x}\) & \(G_{44}\) & \(A_{x,x}\) & \(H_{11}\) & 0.08 & 0.06 & 62 & 60 \\
[111]/[111] & 2/2 & (100) & [111]/[111] & \(P_{y,x},P_{z,x}\) & \(G_{44}\) & \(A_{y,x},A_{z,x}\) & \(H_{44}\) & 0.42 & 0.34 & 319 & 314 \\
[111]/[111] & 3/0 & (110) & [111]/[111] & \(P_{x,x}P_{y,x},P_{z,x}\) & \(G_{11},G_{12}\) & \(-\) & \(-\) & 0.28 & \(-\) & 74 & 78 \\ & & & & & \(P_{x,y}P_{y,y},P_{z,y}\) & \(G_{44}\) & & & & & & \\
[111]/[111] & 3/3 & (110) & [111]/[111] & \(P_{x,x}P_{y,x},P_{z,x}\) & \(G_{11},G_{12}\) & \(A_{x,x},A_{y,x},A_{z,x}\) & \(H_{11},H_{12}\), & 0.22 & 0.33 & 255 & 263 \\ & & & & & \(P_{x,y}P_{y,y},P_{z,y}\) & \(G_{44}\) & \(A_{x,y},A_{y,y},A_{z,y}\) & \(H_{44}\) & & & & \\ \end{tabular}
\end{table}
Table 1: Types of (100)- and (110)-oriented domain walls, their primary derivatives and corresponding gradient coefficients, and comparison of energies calculated from DFT[40] with those in this work. Adjacent domain configurations for **P** and **A** utilize the I and II superscript notation as discussed in the main text. Energy is presented in mJ/m\({}^{2}\) and DW thicknesses (\(2t_{k}\), \(k=P,A\)) are given in nm.
Figure 2: **P-A** profiles in arclengths perpendicular to the (100)-oriented DW plane for the (a) 1/1 (71\({}^{\circ}\)), (b) 0/3 (180\({}^{\circ}\) in **A**), and (c) 2/2 (109\({}^{\circ}\)) type boundaries. Below, in (d), (e), and (f), are the spontaneous strain fields for the normal and shear components along the same arclength. Far from the DW, the solutions converge to the values (\(P_{s},A_{s},\varepsilon_{s},\varepsilon_{n}\)) of the ground state.
be the largest in the 0/3-type DW shown in (b). Due to the influence of built-in electric fields from the solution of the Poisson equation (and our best estimates of the anisotropic gradient coefficients), the value of the \(P_{x}\) component grows by about 5% whereas the \(P_{y}=P_{z}\) components _diminish_ by almost -35% (shown in black). Again, we also find changes in the non-switching components in the \(109^{\circ}\)\(2/2\) wall, with \(P_{x}\) (blue diamonds) growing by about 2%; by contrast, \(A_{x}\) changes by \(-6.4^{\circ}\) (blue circles).
In panels (d), (e), and (f) of Fig. 2, we depict the corresponding spontaneous strain profiles corresponding to the cases in panels (a), (b), and (c) respectively. Importantly, far from the DW plane, the spontaneous values of the normal (triangles) and shear (squares) components of the strain converge to their respective values of the single domain state. However, the strained state of the DW causes various components of \(\varepsilon_{ij}\) to grow or depress by large percentages to accommodate the electro- and rotostrictive coupling intrinsic in this structure. In the case of the 1/1 DW in (d), the value of the \(\varepsilon_{zz}\) (in black) shrinks until eventually changing sign (smoothly) at the domain boundary. For the 2/2 DW, there is a large tensile strain in \(\varepsilon_{xx}\) (in blue) growing by about a factor of three across the wall.
Also presented in Table 1 are the DW thicknesses associated with the corresponding order parameters; note that the thicknesses for \(\mathbf{P}\) and \(\mathbf{A}\) differ. This arises because our resulting fit parameters are anisotropic (i.e., \(H_{11}\ll H_{44}\)) and because of the growth/decrease of the non-switching components of \(\mathbf{P}\) and \(\mathbf{A}\) due to the roto- and electrostrictive coupling. Nevertheless, as seen in the table, the domain walls are quite thin (\(2t_{k}\approx 0.05-0.5\) nm), which agrees quite well with the available literature on BFO suggesting atomistically thin DWs [60; 61; 62; 40]. The smaller value of the DW thickness in \(\mathbf{A}\) (as compared to \(\mathbf{P}\)) also shows good qualitative agreement with measurements from experiments using Z-contrast scanning transmission electron microscopy [62].
We extend this type of analysis iteratively for the possible DWs listed in Table 1 so that we can converge our set of coefficients yielding reasonable \(F_{\mathrm{DW}}\) values comparable to DFT; importantly, capturing the energy hierarchy [60; 63; 40] predicted for the collection of walls. Our best estimates of the gradient coefficients found through our fitting procedure are presented in Table 2. We find that \(H_{11}\ll-H_{12}<H_{44}\) in agreement with similar studies on BFO [63; 36]. This is an important relationship that results from harmonic models of antiferrodistortive cubic perovskite materials which has been connected to an asymmetry in the phonon bands at the R point [64; 65; 45]. Another result from our fits is that the energy hierarchy yields \(F_{\mathrm{DW}}(109^{\circ})<F_{\mathrm{DW}}(180^{\circ})<F_{\mathrm{DW}}(71 ^{\circ})\) for the lowest energy walls [66; 36; 40; 63; 40].
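Schematically, the fitting loop can be summarized by the sketch below, in which `dw_energy_from_phase_field` is only a placeholder surrogate for the full relaxation of Eqs. (10)-(12) with a DW initial condition; it is included solely so the scan over the primary coefficients runs end to end.

```python
# Minimal sketch of the iterative fit: scan the primary gradient coefficients and keep
# the set that best reproduces the DFT wall energies of Ref. [40]. The surrogate below
# stands in for the (expensive) phase-field DW relaxation and is purely illustrative.
import itertools
import numpy as np

dft_targets = {"2/1 (100)": 62.0, "1/1 (100)": 151.0}      # mJ/m^2, from Ref. [40]

def dw_energy_from_phase_field(wall, G44, H11):
    # Placeholder surrogate: in the real workflow this is a full FEM relaxation.
    prefactor = {"2/1 (100)": 55.0, "1/1 (100)": 140.0}[wall]
    return prefactor * np.sqrt(0.5*(G44 + H11))

best = None
for G44, H11 in itertools.product(np.linspace(0.1, 2.0, 20), np.linspace(0.001, 0.5, 20)):
    err = sum((dw_energy_from_phase_field(w, G44, H11) - e)**2 for w, e in dft_targets.items())
    if best is None or err < best[0]:
        best = (err, G44, H11)
print("best (error, G44, H11):", best)
```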
### Antiferromagnetic energy terms
Now we turn to the AFM order present in BFO. To encapsulate the magnetic behavior of single crystalline BFO, we propose a continuum-approximation to the magnetic free energy density. We consider the total free energy density of the magnetic subsystem (\(f_{\mathrm{mag}}\)) to be a sum of the terms responsible for the nominally collinear AFM sublattices (\(f_{\mathrm{sp}}\)) and those producing the noncollinearity (canted magnetism) by coupling to the structural order (\(f_{\mathrm{MP}}\)). We first consider the magnetic energy due to the spin subsystem that is not coupled to the structural order,
\[\begin{split} f_{\mathrm{sp}}=& D_{e}\left(\mathbf{L}^{2}- \mathbf{m}^{2}\right)\\ +& A_{e}\left[(\nabla L_{x})^{2}+(\nabla L_{y})^{2}+( \nabla L_{z})^{2}\right]\\ +&\sum_{\eta=1}^{2}K_{1}^{c}\left(m_{\eta,x}^{2}m_{ \eta,y}^{2}m_{\eta,z}^{2}\right),\end{split} \tag{21}\]
where \(D_{e}<0\) controls the strength of the short-range superexchange energy which favors the spins to have collinear AFM ordering [67]. At our coarse-grained level of theory, we only consider the first nearest-neighbor exchange coupling which has been calculated from first-principles methods [42] to be approximately 6 meV/f.u., corresponding to \(D_{e}=-23.4\,\mathrm{meV/nm^{3}}\) in our simulations. The second term describes the AFM non-local exchange stiffness proposed in Ref. [15] with \(A_{e}=18.7\) meV/nm (or \(3\times 10^{-7}\) ergs/cm). The third term corresponds to a weak single-ion anisotropy [41] with \(K_{1}^{c}=2.2\times 10^{-3}\,\mathrm{meV/nm^{3}}\); this term reflects the cubic symmetry of the lattice and breaks the continuous degeneracy of the magnetic easy-plane into a six-fold symmetry.
The remaining terms are due to the magnetostructural coupling,
\[f_{\mathrm{MP}}=f_{\mathrm{DMI}}(\mathbf{A})+f_{\mathrm{easy}}(\mathbf{P})+ f_{\mathrm{anis}}(\mathbf{A}), \tag{22}\]
where
\[f_{\mathrm{DMI}}=D_{0}\mathbf{A}\cdot\left(\mathbf{L}\times\mathbf{m}\right), \tag{23}\]
is due to the antisymmetric DMI which acts to break the collinearity by competing energetically with the first term of Eq. (21). It should be emphasized here that the local oxygen octahedral environments of adjacent Fe atoms underpin the DMI _vector_ [68; 69; 70; 12]. Therefore, the \(\mathbf{A}\) order parameter enables the DMI coupling. Ref. [13]
\begin{table}
\begin{tabular}{c c c c c c} \(H_{11}\) & \(H_{12}\) & \(H_{44}\) & \(G_{11}\) & \(G_{12}\) & \(G_{44}\) \\
0.005 & \(-1.0\) & \(4.0\) & \(28.0\) & \(-15.0\) & \(0.5\) \\ \end{tabular}
\end{table}
Table 2: Best estimates of the six independent lowest-order Lifshitz invariant coefficients \(G_{ij}\) and \(H_{ij}\) found through our fitting procedure. Units are given in \([10^{-9}\mathrm{J}\cdot\mathrm{m}^{3}\cdot\mathrm{C}^{-2}]\) and \([10^{-9}\mathrm{J}\cdot\mathrm{m}^{3}\cdot\mathrm{deg}^{-2}]\), respectively.
provides an estimate of the DMI energy corresponding to 304 \(\mu\)eV/f.u. It should be mentioned that the weak canting between \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) arises from a competition between \(D_{e}\) and \(D_{0}\) and that different estimates of their values can provide the same degree of canting of the sublattices provided they have the same ratio \(D_{e}/D_{0}\). We come back to this in the next section.
BFO is an easy-plane antiferromagnet [13], in which the magnetic sublattices lie in the plane whose normal is defined by the direction of \(\mathbf{P}\). We include the magnetocrystalline anisotropy term [67] requisite for easy-plane AFMs as,
\[f_{\text{easy}}=\sum_{\eta=1}^{2}K_{1}\left(\mathbf{m}_{\eta}\cdot\hat{ \mathbf{P}}\right)^{2} \tag{24}\]
with the usual definition of \(K_{1}>0\) enforcing the easy-plane condition for \(\mathbf{m}_{\eta}\) with \(\eta=1,2\). Using DFT methods, Dixit and co-workers [13] determined that the relative energy difference between aligning the magnetic sublattices along \(\mathbf{P}\) or in the plane normal to \(\mathbf{P}\) is \(-2.0\) meV/f.u. Therefore, we choose \(K_{1}=31.25\) meV/nm\({}^{3}\) for our simulations.
We further couple the magnetic energy surface to the structural order by allowing the weak single-ion anisotropy to also depend on the antiphase tilts \(\mathbf{A}\)[41] through,
\[f_{\text{anis}}=\sum_{\eta=1}^{2}a|\mathbf{A}|^{2}\left(m_{\eta,x}^{2}m_{\eta,y}^{2}m_{\eta,z}^{2}\right) \tag{25}\]
which is in addition to the term in Eq. (21). The choice of \(K_{1}^{c}>0\) and \(0<a|\mathbf{A}|^{2}<K_{1}^{c}\) corresponds to a small energy barrier between the 6-fold possible orientations of the weak magnetization \(\mathbf{m}\) thus breaking the continuous degeneracy in the easy-plane. These coefficients can be obtained from DFT calculations as shown in Refs [13] and [41]. Therefore, we choose our coefficients (see Table 3) such that the relative energy density barrier for the six-fold symmetry is 0.01 meV/nm\({}^{3}\) which is a reasonable approximation based on the aforementioned works. We find no influence of this choice of coupling constant on the results presented in this manuscript. The coefficients for \(f_{\text{sp}}\) and \(f_{\text{MP}}\) are listed in Table 3.
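For reference, a minimal stand-alone evaluation of the homogeneous spin and magnetostructural terms of Eqs. (21)-(25) (with the gradient term dropped) is sketched below using the coefficients of Table 3; it is illustrative and not the FEM implementation.

```python
# Minimal sketch: homogeneous spin + magnetostructural energy density for uniform
# sublattices m1, m2 and fixed P, A. Coefficients follow Table 3 (meV/nm^3, A in degrees).
import numpy as np

coeff = dict(D_e=-23.4, D_0=0.0046, K_1=31.25, K_1c=0.0022, a=0.00015)

def f_spin(m1, m2, P, A, c=coeff):
    L, m = 0.5*(m1 - m2), 0.5*(m1 + m2)
    P_hat = P / np.linalg.norm(P)
    f_ex   = c["D_e"]*(np.dot(L, L) - np.dot(m, m))                 # superexchange
    f_dmi  = c["D_0"]*np.dot(A, np.cross(L, m))                     # DMI, Eq. (23)
    f_easy = c["K_1"]*(np.dot(m1, P_hat)**2 + np.dot(m2, P_hat)**2) # easy plane, Eq. (24)
    f_si   = (c["K_1c"] + c["a"]*np.dot(A, A)) * (np.prod(m1**2) + np.prod(m2**2))
    return f_ex + f_dmi + f_easy + f_si

# Example: antiparallel sublattices in the plane normal to P || A || [111].
P = np.array([1.0, 1.0, 1.0]); A = 13.4/np.sqrt(3.0)*np.array([1.0, 1.0, 1.0])
m1 = np.array([1.0, -1.0, 0.0])/np.sqrt(2.0)
m2 = -m1
print(f_spin(m1, m2, P, A))
```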
We should note that a long-period (\(\lambda\approx 64\) nm) cycloidal rotation of the weak magnetization is often observed in BFO samples [15; 16; 17]. It is possible to eliminate the cycloidal order by doping [72], epitaxial strain [14; 16; 73], applied electric fields [74], or by some processing techniques (i.e. via a critical film thickness) [16] during synthesis. The spin-cycloid could be incorporated into our model by including coupling terms associated with a proposed spin-current mechanism [15; 43; 75]. However, in order to provide the simplest model of the ME multiferroic effects, we have neglected them in this work.
### Micromagnetics and homogeneous spin ground states
In order to find the spin ground states in the presence of an arbitrary structural fields, we consider the Landau-Lifshitz-Bloch (LLB) equation [76] that governs the sublattices \(\mathbf{m}_{\eta}\),
\[\begin{split}\frac{d\mathbf{m}_{\eta}}{dt}&=- \frac{\gamma}{1+\alpha^{2}}\left(\mathbf{m}_{\eta}\times\mathbf{H}_{\eta} \right)\\ &-\frac{\gamma\alpha}{1+\alpha^{2}}\mathbf{m}_{\eta}\times\left( \mathbf{m}_{\eta}\times\mathbf{H}_{\eta}\right)\\ &+\frac{\gamma\tilde{\alpha}_{\parallel}}{\left(1+\alpha^{2} \right)}m_{\eta}^{2}\left[m_{\eta}^{2}-1\right]\mathbf{m}_{\eta}.\end{split} \tag{26}\]
where \(\alpha\) is the phenomenological Gilbert damping parameter and \(\gamma\) is the electronic gyromagnetic coefficient equal to \(2.2101\times 10^{5}\) rad m A\({}^{-1}\) s\({}^{-1}\). The effective fields are defined as \(\mathbf{H}_{\eta}=-\mu_{0}^{-1}M_{s}^{-1}\delta f/\delta\mathbf{m}_{\eta}\) with \(\mu_{0}\) the permeability of vacuum. The saturation magnetization density of the BFO sublattices is \(M_{s}=4.0\)\(\mu\)B/Fe [41; 42; 43]. The third term arises from the LLB approximation in the zero temperature limit, where \(\tilde{\alpha}_{\parallel}\) is a damping along the longitudinal direction of \(\mathbf{m}_{\eta}\). We implement the LLB equation as a numerical resource in order to provide a restoring force and bind the quantities \(\mathbf{m}_{\eta}\) to the unit sphere (\(|\mathbf{m}_{\eta}|=1\)). In this context, we consider our spin subsystem to be at \(T=0\) K in all results presented throughout this paper. In the Appendix, we provide a short derivation of the LLB torque in the zero temperature limit. We set \(\tilde{\alpha}_{\parallel}=10^{3}\) in all results in this work to satisfy the constraint on \(\mathbf{m}_{\eta}\).
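A minimal sketch of one explicit time step of Eq. (26) for a single sublattice is given below; the effective fields are illustrative inputs rather than variational derivatives of the full energy, and the longitudinal term is written with the sign that relaxes \(|\mathbf{m}_{\eta}|\) toward unity, consistent with its stated purpose.

```python
# Minimal sketch: explicit forward-Euler step of the sublattice dynamics of Eq. (26),
# given precomputed effective fields (here illustrative stand-ins, in A/m).
import numpy as np

GAMMA = 2.2101e5      # gyromagnetic coefficient, rad m A^-1 s^-1

def llb_step(m, H, dt, alpha=0.05, alpha_par=1.0e3):
    pre = GAMMA / (1.0 + alpha**2)
    m2 = np.dot(m, m)
    dmdt = (-pre * np.cross(m, H)                               # precession
            - pre * alpha * np.cross(m, np.cross(m, H))         # transverse damping
            - pre * alpha_par * m2 * (m2 - 1.0) * m)            # longitudinal restoring term
    return m + dt * dmdt

# Toy usage: damped precession of the two sublattices about stand-in effective fields.
m1 = np.array([1.0, 0.01, 0.0]); m1 /= np.linalg.norm(m1)
m2 = -m1.copy()
H1 = np.array([0.0, 0.0, 1.0e5]); H2 = -H1     # illustrative fields, A/m
dt = 1.0e-13                                    # s, small enough for explicit stepping
for _ in range(1000):
    m1, m2 = llb_step(m1, H1, dt), llb_step(m2, H2, dt)
print(m1, m2, np.linalg.norm(m1))
```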
To look for homogeneous spin ground states, we consider \(\alpha=0.05\) and evolve Eq. 26 (utilizing the numerical approach described in Sec. II.3) until the relative change in the total energy computed from the summation of Eq. (21) and Eq. (22) between adjacent time steps is \(\Delta F<10^{-8}\) eV/\(\mu\)s. Also, we stress that the influence of \(\tilde{\alpha}_{\parallel}\) is negligible in all results presented in this work provided that its unitless value is around \(10^{3}\) or above. To verify that our ground states predict the magnetic ordering consistent with the literature of BFO, we define two angular variables \(\phi^{\text{WFM}}=\cos^{-1}\left(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\right)\) and \(\theta_{\eta}=\cos^{-1}\left(\mathbf{m}_{\eta}\cdot\hat{\mathbf{P}}\right)\). The former tracks the degree of canting between the sublattices and the latter tracks the orientation of the magnetization with respect to \(\hat{\mathbf{P}}=\mathbf{P}/P_{s}\), the magnetic easy-plane normal. As an example, we first set \(\mathbf{P}\uparrow\uparrow\mathbf{A}\) along the \([111]\) direction
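The two diagnostic angles can be evaluated directly from the sublattice vectors; in the short sketch below the canting angle is interpreted as the deviation from perfect antiparallel alignment, so that a weakly canted pair gives a small \(\phi^{\mathrm{WFM}}\) of order one degree, as quoted in the text.

```python
# Minimal sketch of the diagnostic angles phi_WFM and theta_eta used to verify the
# orthogonal {P, m, L} ground-state arrangement.
import numpy as np

def angles(m1, m2, P):
    P_hat = P / np.linalg.norm(P)
    u1, u2 = m1/np.linalg.norm(m1), m2/np.linalg.norm(m2)
    # canting angle: deviation of the pair from perfect antiparallel alignment
    phi_wfm = np.degrees(np.pi - np.arccos(np.clip(np.dot(u1, u2), -1.0, 1.0)))
    theta_1 = np.degrees(np.arccos(np.clip(np.dot(u1, P_hat), -1.0, 1.0)))
    theta_2 = np.degrees(np.arccos(np.clip(np.dot(u2, P_hat), -1.0, 1.0)))
    return phi_wfm, theta_1, theta_2

# Example: sublattices in the easy plane normal to P = [111], canted by ~1.2 degrees.
P = np.array([1.0, 1.0, 1.0])
m1 = np.array([1.0, -1.0, 0.0]); m1 /= np.linalg.norm(m1)
c, s = np.cos(np.radians(1.2)), np.sin(np.radians(1.2))
perp = np.array([1.0, 1.0, -2.0]); perp /= np.linalg.norm(perp)
m2 = -(c*m1) + s*perp          # rotate -m1 by 1.2 degrees within the easy plane
print(angles(m1, m2, P))       # ~ (1.2, 90.0, 90.0)
```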
\begin{table}
\begin{tabular}{c c c c} \(A_{e}\) & 18.7 & \([\)meVnm\({}^{-3}]\) & Ref. [15] \\ \(D_{e}\) & -23.4 & \([\)meVnm\({}^{-3}]\) & Ref. [42] \\ \(D_{0}\) & 0.0046 & \([\)meVdeg\({}^{-1}\)nm\({}^{-3}]\) & this work \\ \(K_{1}\) & 31.25 & \([\)meVnm\({}^{-3}]\) & Ref. [13] \\ \(K_{1}^{c}\) & 0.0022 & \([\)meVnm\({}^{-3}]\) & \\ \(a\) & 0.00015 & \([\)meVdeg\({}^{-1}\)nm\({}^{-3}]\) & \\ \end{tabular}
\end{table}
Table 3: Spin free energy density materials coefficients used in this work.
to be static. The time evolution (ringdown) of Eq. (26) is highlighted in Fig. 3(a) for \(\theta_{\eta}\), showing that the sublattices have relaxed into the easy plane defined by \(\mathbf{\hat{P}}\) with \(\theta_{1}=\theta_{2}=90.0^{\circ}\). In (b) the time dependence of the canting angle \(\phi^{\mathrm{WFM}}\) during the relaxation is shown. At the conclusion of the ringdown, \(\phi^{\mathrm{WFM}}\) reaches a value of \(\approx 1.22^{\circ}\). This demonstrates that the angular quantities \(\{\theta_{\eta},\phi^{\mathrm{WFM}}\}\) detail an orthogonal system of the \(\{\mathbf{P},\mathbf{m},\mathbf{L}\}\) vectors as often discussed in the literature [5].
As a further benchmark, we probe the influence of the ratio \(D_{e}/D_{0}\) on the values of \(\phi^{\mathrm{WFM}}\). This test, shown in Fig. 4, highlights the energetic competition between the AFM superexchange and the sublattice DMI. From Ref. [42] we have \(|D_{e}|=23.4\) meV/nm\({}^{3}\), and our analysis demonstrates \(\phi^{\mathrm{WFM}}=1.22^{\circ}\) provided \(D_{0}A_{s}=0.036\) meV/nm\({}^{3}\). We thus have the weak moment \(M_{s}|\mathbf{m}|=0.03\)\(\mu\)B/Fe, which agrees well with the available literature [13; 42; 77; 78].
By setting \(\mathbf{P}\uparrow\uparrow\mathbf{A}\) along the eight polar directions possible in BFO, we can find six magnetic states for each of them. The corresponding 48 multiferroic domains are listed in Table 4. For all \(\mathbf{m}\) orientations calculated, the canted angle is precisely \(\phi^{\mathrm{WFM}}=1.22^{\circ}\). Additionally, when \(\mathbf{A}\) is reversed fully (\(\mathbf{P}\uparrow\downarrow\mathbf{A}\)), which is an acceptable ground state in our potential, the sign of \(\mathbf{m}\) will change but not the sign of the Neel vector \(\mathbf{L}\). Hence, we have a total of 96 possible domain variants. Due to the DMI these quantities listed in this table are canted slightly from their listed values (hence the use of \(\simeq\) symbol).
### Antiferromagnetic domain walls
Using X-ray photoemission electron microscopy (X-PEEM), AFM domain boundary contrast can be visualized [79] within a single ferroelectric domain. To better understand the capabilities of our modeling effort, we attempt to stabilize an AFM DW (i.e., one with switched \(\mathbf{L}\)) corresponding to the above experimental observations. We
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \((\mathbf{P}\uparrow\uparrow\mathbf{A})\):[111] & [\(\bar{1}\bar{1}1\)] & [\(\bar{1}\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] \\ \hline \multirow{3}{*}{\(\mathbf{L}\simeq\)} & [\(\bar{1}\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}10\)] & [\(\bar{1}10\)] \\ & [\(\bar{1}0\)] & [\(\bar{1}10\)] & [\(\bar{1}10\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}01\)] \\ & [\(\bar{1}\bar{1}\)] & [\(\bar{1}0\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}0\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] & [\(\bar{1}0\)] \\ & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}0\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{1}\)] \\ & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] & [\(\bar{0}\bar{1}\)] \\ \hline \multirow{3}{*}{\(\mathbf{m}\simeq\)} & [\(\bar{1}\bar{1}2\)] & [\(\bar{1}2\)] & [\(\bar{1}\bar{2}1\)] & [\(\bar{1}\bar{2}1\)] & [\(\bar{1}\bar{1}2\)] & [\(\bar{1}\bar{2}1\)] & [\(\bar{1}\bar{1}2\)] & [\(\bar{1}\bar{1}2\)] \\ & [\(\bar{1}2\)] & [\(\bar{1}\bar{1}2\)] & [\(\bar{1}\bar{1}2\)] & [\(\bar{1}\bar{2}1\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] \\ & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] \\ & [\(\bar{1}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] & [\(\bar{2}\bar{1}\)] \\ & [\(\bar{1}\bar{1}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] \\ & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] \\ & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] \\ & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] \\ & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] & [\(\bar{1}\bar{2}\)] \\ \hline \hline \end{tabular}
\end{table}
Table 4: Six-fold symmetric magnetic ground states for each \((\mathbf{P}\uparrow\uparrow\mathbf{A})\) domain orientation. Note that these listed directions are not corrected for the DMI interaction and therefore \(\mathbf{m}_{1}\neq-\mathbf{m}_{2}\) (hence \(\simeq\)). All dot products yield an orthogonal system for \(\{\mathbf{P},\mathbf{m},\mathbf{L}\}\). Full reversal of \(\mathbf{A}\) changes the sign on \(\mathbf{m}\) but not \(\mathbf{L}\). The small corrections, due to DMI, are on the order of the canting angle \(\phi^{\mathrm{WFM}}\) (\(\approx 1.22^{\circ}\)).
Figure 4: Dependence of canting angle \(\phi^{\mathrm{WFM}}\) on the ground state DMI free energy density (\(D_{0}A_{s}\)) for different choices of the AFM superexchange parameter \(D_{e}\).
set \(\mathbf{P}\uparrow\uparrow\mathbf{A}\) along \([11\bar{1}]\) to be homogeneous (and fixed in time) within the computational box. Then, a sin(x) profile is chosen for the sublattices \(\mathbf{m}_{\eta}\) corresponding to two possible Neel orientations of Table 4 for a (100)-oriented domain boundary with homogeneous \(\mathbf{P}\). After relaxation Eq. (26) with large Gilbert damping \(\alpha=0.8\), we find that the AFM wall is not stable and the system evolves to a homogeneous state with \(\mathbf{L}\) corresponding to one of the six possible orientations allowed in the domain. If the non-local exchange interaction governed by \(A_{e}\)[15] is reduced by a factor of ten, then we find that the solution corresponds to AFM domain walls with a \(120^{\circ}\) rotation of \(\mathbf{L}\), i.e \(\mathbf{L}^{\mathrm{I}}=[011]\) and \(\mathbf{L}^{\mathrm{II}}=[1\bar{1}0]\). We estimate that the corresponding DW in \(\mathbf{L}\) has a characteristic width of 20 nm and a corresponding DW energy of 7.55 mJ/m\({}^{2}\) using Eq. (20).
Let us now consider how the structural DWs affect the net magnetization. The modulation of \(\mathbf{P}\) and \(\mathbf{A}\) across the domain boundary drastically alters the magnetostructural coupling energy surface due to Eq. (22), causing the AFM order to choose preferential orientations associated with those calculated in Table 4. Careful inspection of Table 4 suggests that only certain low energy magnetic DWs (i.e., those minimizing the gradient of \(\mathbf{L}\)) should be observed for the different FE domain walls listed in Table 1. Using our previously established notation for adjacent DW states, the lowest energy FE DW (2/1) corresponding to a \(\mathbf{P}^{\mathrm{I}}/\mathbf{A}^{\mathrm{I}}=[\bar{1}11]/[\bar{1}11]\) to \(\mathbf{P}^{\mathrm{II}}/\mathbf{A}^{\mathrm{II}}=[\bar{1}\bar{1}\bar{1}]/[111]\) change will only allow \(\mathbf{m}^{\mathrm{I}}=[211]\) or \(\mathbf{m}^{\mathrm{I}}=[\bar{2}\bar{1}\bar{1}]\) and \(\mathbf{m}^{\mathrm{II}}=[2\bar{1}\bar{1}]\) or \(\mathbf{m}^{\mathrm{II}}=[\bar{2}11]\) respectively, with no changes to the Neel vector \(\mathbf{L}\). This coincides with a \(71^{\circ}\) rotation of \(\mathbf{m}\) consistent with a \(71^{\circ}\) change of the oxygen octahedral tilt field \(\mathbf{A}\), albeit having a \(109^{\circ}\)\(\mathbf{P}\) switch.
To calculate the magnetic textures numerically, we fix in time the FE order parameters \(\mathbf{P}\)-\(\mathbf{A}\) corresponding to a specific DW in Sec. II.4. We choose the 1/1 (100) and 2/1 (100) structural walls as they are most commonly observed in experiment. Again, we use a large Gilbert damping \(\alpha=0.8\) and look for the ground states utilizing Eq. (26). In Fig. 5, we display the weak \(\mathbf{m}\) moment as a function of the distance to the DW plane for the 1/1 (a) and 2/1 (b) walls after relaxation. In both cases, the \(\mathbf{m}\) rotates by \(71^{\circ}\) - \([1\bar{1}\bar{2}]\) to \([\bar{1}\bar{1}\bar{2}]\) in (a) and \([2\bar{1}1]\) to \([2\bar{1}\bar{1}]\) in (b) - with a sharp interface region. This is expected as the DMI term is driven by the \(\mathbf{A}\) vector forcing \(\mathbf{m}\) to also change by \(71^{\circ}\). The large value of \(A_{e}\) causes the Neel vector to be nearly constant across the DW corresponding to \([1\bar{1}0]\) in (a) and \([0\bar{1}1]\) in (b) as it satisfies both conditions of the ground state in adjacent domains. Fitting the switched components of \(\mathbf{m}\) to the aforementioned tanh(x) profile from Sec. II.4 yields \(2t_{m}=0.5\) nm. We can calculate a thickness of \(2t_{m}=0.06\) nm in the 2/1 (100) case demonstrating a nearly atomistically thin DW in the magnetic texture. A comparison to Table 1 shows that we have an equality of \(t_{m}\approx t_{A}\) in both 1/1 (100) and 2/1 (100) walls.
The component of \(\mathbf{m}\) that does not switch, black in (a) and blue in (b), changes by about \(\approx+6\%\) and \(-20\%\) respectively across the DW region indicating rotational
Figure 5: Net magnetization \(\mathbf{m}\) textures presented in normalized units across the (a) 1/1, and (b) 2/1 DWs of (100)-orientation. Both of these sequences of DWs produce \(71^{\circ}\) rotations of \(\mathbf{m}\). Angular deviations from the ground state values of \(\phi^{\mathrm{WFM}}\), \(\theta_{1}\), and \(\theta_{2}\) for 1/1 (c) indicate a much longer range coupling of the spin across the ME boundary than in the 2/1 case in (d).
components of \(\mathbf{m}\). This leads to deviations of the angular quantities \(\{\phi^{\mathrm{WFM}},\theta_{1},\theta_{2}\}\) from their ground state values. We plot these quantities in panels (c) and (d) of Fig. 5. We see that, in the 1/1 (100) case in (c), the sublattices cant slightly (\(\approx\pm 1^{\circ}\)) out of the easy plane to facilitate this magnetic reversal. The weak magnetization canting angle \(\phi^{\mathrm{WFM}}\) (shown in blue) also reduces its magnitude by about \(0.25^{\circ}\). This is different from the behavior of the angular quantities of the 2/1 (100) DW shown in panel (d), which decrease their values by about \(0.4^{\circ}\) in the same fashion, indicating canting out of the easy-plane in the same direction for both sublattices, resulting in a slight reduction of \(\mathbf{m}\). We stress that these quantities should be meaningful since they are on the order of \(\phi^{\mathrm{WFM}}\) in the ground state and that in the 1/1 (100) case, the modulations extend more than a few unit cells from the DW (\(\pm 2\) nm).
By using Eq. (20), we can estimate the energy of the magnetic DW of the 1/1 (100) and 2/1 (100) cases. For the 1/1 and 2/1 walls, we calculate \(F_{\mathrm{DW}}^{\mathrm{mag}}=0.71\) and \(0.70\) mJ/m\({}^{2}\) respectively. The energy difference between these two \(71^{\circ}\)\(\mathbf{m}\) DWs is quite small despite having a very different profile of \(\theta_{\eta}\) and \(\phi^{\mathrm{WFM}}\). The variation of \(\theta_{\eta}\) in panel (c) for the 1/1 (100) case causes a large relative increase in the easy-plane anisotropy for both sublattice contributions as compared to (d) for the 2/1 (100) DW. However, as seen in panel (d), there is a more sharply defined structure (i.e., modulations of \(\phi^{\mathrm{WFM}}\) and \(\theta_{\eta}\) occur within \(\pm 0.2\) nm of the DW) as \(\mathbf{m}\) switches by \(71^{\circ}\). This leads to an increase in the DMI energy relative to the 1/1 case. We have only presented data on these two types of magnetic boundaries in the presence of the \(\mathbf{P}\)-\(\mathbf{A}\) DWs. Higher energy DWs can also be investigated with our approach, but we leave this for future work.
## III Applications: Spin waves and magnetoelectric switching
### Spin waves through multiferroic domain boundaries
The field of spintronics relies on the generation, control, and read-out of traveling packets of spin [80]. In AFMs, the spin precessional processes can occur at low energy and ultrafast frequencies (THz and above), thus leading to competitive advantages in information processing design as compared to standard CMOS technology [81; 82]. The basic concept of wave transmission and reflection phenomena is key to understanding how to optimize spin wave transport in these systems. Recently, researchers established non-volatile control of thermal magnon transport in BFO using electric fields [19]. Their work demonstrates that the \(109^{\circ}\) FE DWs act as a barrier to spin transport across a length-scale comprising many 100s of nm and dampen the detected magnon signal useful for the device. We will illustrate the usefulness of our approach by showing how it can enable a mesoscopic simulation of this situation.
We consider two of the commonly observed DWs in BFO experiments, the \(109^{\circ}\) 2/1 and \(71^{\circ}\) 1/1 (100)-oriented boundaries [83; 5; 19]. The reader is referred to Table 1 and the previous section for the initial conditions of the order parameters. There is a large relative difference between the lattice and spin DW energies. This suggests that any application of an external magnetic field \(\mathbf{H}_{\mathrm{appl}}\) should not appreciably influence the \(\mathbf{P}\) and \(\mathbf{A}\) subsystem. Therefore, we fix in time the structural order parameters in this section. We couple \(\mathbf{H}_{\mathrm{appl}}\) to act on the net magnetization through the Zeeman free energy density,
\[f_{\mathrm{Zeeman}}=-\mathbf{m}\cdot\mathbf{H}_{\mathrm{appl}} \tag{27}\]
and add it to the total free energy of the spin configuration.
In order to perturb the system, we consider Gaussian spin wave beams generated by a field of the form [84],
\[\mathbf{H}_{\mathrm{appl}}=H_{0}\,\mathrm{sinc}[k_{0}(x-x_{0})]\,e^{-p_{0}(x-x_{0})^{2}}\,\mathrm{sinc}[\omega_{0}(t-t_{0})]\,\hat{\mathbf{h}}\]
where the field amplitude \(H_{0}=184\) kOe, the excitation location \(x_{0}\), the Gaussian intensity profile parameter \(p_{0}=0.16\) nm\({}^{-2}\), and \(k_{0}=10\) nm\({}^{-1}\) control the perturbation distribution in spacetime. The director \(\hat{\mathbf{h}}\) orients the magnetic field with respect to \(\mathbf{m}\). Finally, we cut off the pulse at \(t_{0}=1\) ps and excite the spin waves at a frequency \(\omega_{0}\).
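A minimal sketch of evaluating this excitation field is shown below. It assumes the unnormalized sinc convention \(\mathrm{sinc}(u)=\sin(u)/u\) and works in nm/ps units; the handling of the cut-off at \(t_{0}\) is omitted, and \(\hat{\mathbf{h}}\) is supplied by the caller.

```python
import numpy as np

def sinc(u):
    # sin(u)/u convention (np.sinc implements the normalized sin(pi*u)/(pi*u))
    return np.sinc(u / np.pi)

def H_appl(x, t, h_hat, H0=184.0, x0=0.0, t0=1.0, k0=10.0, p0=0.16,
           omega0=2.0 * np.pi * 0.5):
    """Gaussian sinc beam: x in nm, t in ps, H0 in kOe.
    omega0 = 2*pi*0.5 rad/ps corresponds to a 0.5 THz excitation (an assumed convention)."""
    amp = (H0 * sinc(k0 * (x - x0)) * np.exp(-p0 * (x - x0) ** 2)
           * sinc(omega0 * (t - t0)))
    return amp * np.asarray(h_hat, dtype=float)

print(H_appl(x=0.5, t=0.9, h_hat=[0.0, 1.0, 0.0]))
```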
Eq. (26) is evolved with \(\alpha=0\) under the applied field \(\mathbf{H}_{\mathrm{appl}}\) defined above. We enforce periodicity in our computational volume along the \(x\), \(y\), and \(z\) directions for the \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) variables. The time-integration of Eq. (26) is set for \(dt<2\) fs time steps to ensure numerical convergence for the fast AFM dynamics in the system. We verify that our calculations are in the linear limit by adjusting \(H_{0}\) and determine that the perturbed amplitudes of \(\mathbf{m}_{\eta}\) scale linearly. Finally, we monitor the system total free energy \(F_{\mathrm{sp}}+F_{\mathrm{MP}}\) and \(|\mathbf{m}_{\eta}|\) (via the LLB term) and verify that they are constant to within floating point accuracy for all time in our \(\alpha=0\) simulation.
In Fig. 6(a), we track the _excess_ free energy density \(f_{\mathrm{exc}}(t,x)=f_{\mathrm{mag}}(t,x)-f_{\mathrm{mag}}(t=0,x)\). Therefore, \(f_{\mathrm{exc}}\) corresponds to a small energy that is injected into our computational volume by the spin excitation at time \(t\). A few snapshots of the \(f_{\mathrm{exc}}(t,x)\) due to the propagating wavefront (at two different \(\omega_{0}\)) are presented in Fig. 6(a) in sequential panels from top to bottom for \(t=4.5,17.1,24.6,27.1,\) and \(34.5\) ps. Here in panel (a), the DW is marked at \(x_{\mathrm{DW}}=22\) nm and is impacted by the spin wave at around \(t=24.6\) ps. The excess energy density loss after the wavefront travels through the DW can be calculated by numerically time integrating \(f_{\mathrm{exc}}(t,x)\) at distances of \(\Delta x=7\) nm left and right from the DW plane located at \(x_{\mathrm{DW}}\).
We then compute their ratio \(R\),
\[R=\frac{\int f_{\mathrm{exc}}(t,x_{\mathrm{DW}}+\Delta x)dt-\int f_{\mathrm{ exc}}(t,x_{\mathrm{DW}}-\Delta x)dt}{\int f_{\mathrm{exc}}(t,x_{\mathrm{DW}}+ \Delta x)dt} \tag{28}\]
to determine which percentage of the excess energy due to the incoming wave is reflected or absorbed by the DW, i.e., the degree of rectification. We see in Fig. 6(b) that \(R\) varies substantially across several decades in frequency with an asymptote for low frequencies corresponding to about 35 % and 50 % rectification for the 2/1 and 1/1 walls respectively. The relative difference in rectification arises from the excitation of the DW region by the spin wave (seen in Fig. 6(a) for \(t>24.6\) ps). To verify this, we track the time-integrated \(f_{\mathrm{exc}}\) at the DW, revealing that almost all of the excess energy is absorbed by the DW. In this analysis, we find that only a small portion of \(f_{\mathrm{exc}}\) due to this spin wave is reflected (not shown). When the frequency is increased, a maximum in DW excess energy absorption in \(R\) is reached around 1-2 THz before \(R\) abruptly decreases, indicating that the DW becomes more transparent to the spin wave. Similar frequency-dependent transmission ratios have been reported in the literature for noncollinear AFMs [85]. We should mention that we did not find any meaningful influence of \(\hat{\mathbf{h}}\) or \(k_{0}\) on our results except for changes to the relative rectification between the two types of walls, but a more detailed study of the parameters is warranted.
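A minimal sketch of the time integration entering the rectification ratio is given below; the two placeholder traces stand in for \(f_{\mathrm{exc}}\) recorded at \(x_{\mathrm{DW}}\pm\Delta x\), and the sign convention simply follows the ratio as written above.

```python
import numpy as np

def rectification(t, f_exc_minus, f_exc_plus):
    """Time-integrate f_exc at x_DW - dx and x_DW + dx and form their ratio."""
    I_minus = np.trapz(f_exc_minus, t)
    I_plus = np.trapz(f_exc_plus, t)
    return (I_plus - I_minus) / I_plus

# Placeholder time series (ps); amplitudes are arbitrary illustrative numbers.
t = np.linspace(0.0, 50.0, 2001)
f_minus = np.exp(-0.5 * ((t - 20.0) / 2.0) ** 2)
f_plus = 0.65 * np.exp(-0.5 * ((t - 30.0) / 2.0) ** 2)
print(rectification(t, f_minus, f_plus))
```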
Finally, we should comment on how this agrees with experimental observations. In the work of Parsonnet _et al._[19], the propagating thermal magnon signal inferred from the inverse spin Hall effect was seen to decay exponentially as a function of distance from the source. This was postulated as due to the 2/1 (100)-oriented DWs in the system acting as a barrier to spin currents with \(\mathbf{k}||[100]\), whose number increased upon electrode separation; our results qualitatively support this picture. It remains to be seen if domain engineering techniques guided by similar calculations could possibly help control the rectification that impedes efficient control of magnon signals in ME spintronics. Also, we should mention that as opposed to our approach detailed in this section, in principle \(\mathbf{P}\) and \(\mathbf{A}\) could vary in time. This would lead to electromagnonic effects localized to the DWs upon excitation of inhomogeneous magnetic and/or electric fields (as highlighted in recent Refs. [86; 87]). It is possible that the coupled vibrations of the structural order could further influence the spin transport. However, this is outside the scope of this work and warrants future studies.
### Magnetoelectric switching of the AFM order
A considerable demand in AFM spintronics is to find an adequate approach to manipulate the magnetic order with external stimuli. In the case of BFO, since this material displays an intrinsic electric dipole moment, it has been proposed to use an electric field to manipulate and control the magnetic texture. The technological benefits of electric field control of magnetism have been considered for some time [19; 88; 89; 5; 90]. While low-frequency deterministic switching of \(\mathbf{m}\) with an electric
Figure 6: (a) Excess energy density \(f_{\mathrm{exc}}\) due to a spin wave traveling in the \(\mathbf{k}||[100]\) direction. The excitation frequency is \(\omega_{0}=0.5\) and 5 THz for the solid and dashed lines respectively. In this simulation, the 2/1 (109\({}^{\circ}\)) DW is located at approximately 22 nm, as indicated by the arrow. The wavefront reaches the DW at around 27 ps. (b) Calculated spin wave rectification \(R\) as a function of \(\omega_{0}\) of the 1/1 (blue) and 2/1 (red) DWs using Eq. (28) after time integrating \(f_{\mathrm{exc}}\) at a distance of \(\Delta x=7\) nm left and right from the DW.
field has been experimentally demonstrated [5], the dynamical processes of the coupled polar-magnetic order are still a topic of research [37; 38]. We aim to highlight one such use of this modeling effort for the case of ME switching (i.e., using an electric field to switch \(\mathbf{m}\)).
We now consider a fully-dynamical simulation where all system variables \(\{\mathbf{P},\mathbf{A},\mathbf{u},\mathbf{m}_{1},\mathbf{m}_{2}\}\) depend on time. As we are now interested in real dynamics, the time relaxation constants \(\Gamma_{P}=200\) Fm\({}^{-1}\)s\({}^{-1}\) and \(\Gamma_{A}=83188\) deg\({}^{2}\)m\({}^{3}\)J\({}^{-1}\)s\({}^{-1}\) in Eq. (10) and (11) are taken from Ref. [91]. For our switching simulations, our initial condition of the \(\mathbf{P}\uparrow\uparrow\mathbf{A}\) system is along the \([111]\) direction and _homogeneous_. Since this is a homogeneous calculation, it can be considered the macrospin limit of Eq. (26). Since the dynamics of the AFM order are in general very fast (characteristic frequencies of 100s of GHz to the THz regime) [81], we introduce a time stepping constraint on the evolution of Eq. (26) for dt \(<0.1\) ps to ensure numerical convergence. There is no spin dissipation from conduction electrons in BFO due to its insulating nature. Therefore, we choose \(\alpha\) of order \(10^{-3}\), which is a reasonable assumption for BFO [92; 93] and other magnetic insulators [94; 95; 96; 97].
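To illustrate how the choice of Gilbert damping shapes the ringdown, the following sketch integrates a single-macrospin LLG equation for two values of \(\alpha\); it is not the coupled two-sublattice model of Eq. (26), and the effective field and initial tilt are arbitrary illustrative choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

GAMMA = 1.76e11  # rad s^-1 T^-1

def llg_rhs(t, m, H, alpha):
    # Landau-Lifshitz-Gilbert equation for a single unit-length macrospin.
    precession = np.cross(m, H)
    damping = alpha * np.cross(m, np.cross(m, H))
    return -GAMMA / (1.0 + alpha ** 2) * (precession + damping)

H = np.array([0.0, 0.0, 1.0])                   # effective field (T), along z
m0 = np.array([np.sin(0.3), 0.0, np.cos(0.3)])  # slightly tilted initial state

for alpha in (0.003, 0.01):
    sol = solve_ivp(llg_rhs, (0.0, 2.0e-9), m0, args=(H, alpha),
                    max_step=1.0e-12, rtol=1e-8)
    print(f"alpha = {alpha}: m(2 ns) = {sol.y[:, -1].round(3)}")
```

The larger damping relaxes the transverse components faster, mirroring the role the maximum ringdown amplitude plays in selecting the final state discussed below.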
As one example to switch the \(z\) component of \(\mathbf{P}\), we choose our electric field \(\mathbf{E}\) to be \(\mathbf{E}(\omega)=\langle 0,0,E_{0}\sin\left(\omega t\right)\rangle\) with \(E_{0}=-1800\) MV/m. This is a large value compared to coercive fields of \(E_{c}=20-40\) MV/m observed in switching experiments of thin film BFO heterostructures [5; 98]. However, it is well-known that the coercive field needed to fully switch components of \(\mathbf{P}\) in perovskite FEs is intrinsically linked to the occurrence of various phenomena [99; 100; 101; 102; 103] that are not present in our homogeneous switching simulations. We select an \(\mathbf{E}\) frequency of \(\omega=600\) MHz. The field is abruptly turned off after \(\mathbf{P}\) has switched in order to facilitate only one switching event for analysis. The initial state is homogeneous \(\mathbf{P}\uparrow\uparrow\mathbf{A}\) along \([111]\) with \(\mathbf{L}||[\bar{1}01]\) and \(\mathbf{m}||[1\bar{2}1]\) as one of the possibilities listed in Table 4. In order to investigate if the ME switching depends on \(\alpha\), we pick two different values, \(\alpha=0.003\) and \(\alpha=0.01\), and evolve Eq. (10) and (11) in the presence of the field.
As seen in Fig. 7(a) and (b), the application of \(\mathbf{E}\) along the \(z\) direction switches the \(\mathbf{P}\) (and also \(\mathbf{A}\), not shown) orientation to \([11\bar{1}]\) within \(1000\) ps (dashed black line). We use the notation \(i\to f\) to denote initial states \(i\) and final states \(f\) for the \(\{\mathbf{m},\mathbf{L}\}\) system. The change of the energy surface through the magnetostructural coupling causes \(\mathbf{L}\) to switch orientation from \([\bar{1}01]\) to \([0\bar{1}1]\)
Figure 7: A switching event corresponding to a \(71^{\circ}\) rotation of \(\mathbf{P}\) (shown in the black dashed line) by application of an \(\mathbf{E}\) with \(\omega=600\) MHz and \(E_{0}=1800\) MV/m. The Néel \(\mathbf{L}\) components (normalized) are shown corresponding to \(\alpha=0.003\) and \(\alpha=0.01\) for (a) and (b). The switch (in \(\mathbf{L}\)) occurs from \([\bar{1}01]\rightarrow[0\bar{1}1]\) (a) and \([\bar{1}01]\rightarrow[101]\) (b). The value of \(\mathbf{m}\) settles into the minima corresponding to \([1\bar{2}1]\rightarrow[\bar{2}1\bar{1}]\) and \([1\bar{2}1]\rightarrow[1\bar{2}\bar{1}]\) transitions respectively in (c) and (d). The insets in (c) and (d) show similar ringdown time dependence near the \(P_{z}\) switch (occurring at \(t\approx 925\) ps).
in (a) and \([\bar{1}01]\rightarrow[101]\) in (b). At the same time, the direction of \(\mathbf{m}\) undergoes \([1\bar{2}1]\rightarrow[\bar{2}1\bar{1}]\) and \([1\bar{2}1]\rightarrow[1\bar{2}\bar{1}]\) transitions in (c) and (d). When one compares the dynamics between the left and right panels of Fig. 7, it is evident that the choice of \(\alpha\) influences the final \(\{\mathbf{m},\mathbf{L}\}\) state despite having nearly identical ringdown patterns in the temporal vicinity of the \(\mathbf{P}\) switch shown in the insets of (c) and (d). Shortly after the switch (\(t>1000\) ps), the magnetization evolves differently, due to different maximum amplitudes, leading to transition pathways which overcome different energy barriers.
We also consider an _instantaneous_ limit of the switching process where the \(P_{z}\) is switched immediately. In Fig. 8(a) and (b), which correspond to the same \(\alpha\) values as in Fig. 7(a) and (b), the switch is set to occur at \(t=200\) ps (shown in the insets). The relaxation of Eq. (26) with the damping set to \(\alpha=0.003\) and \(0.01\) creates many oscillations with a characteristic ringdown frequency of approximately 127 GHz. We find that indeed the same situation presented in Fig. 7(a) and (b) occurs in Fig. 8(a) and (b), with the final states of \(\mathbf{L}\) determined by its initial orientation and the final configuration of \(\mathbf{P}\). The vector \(\mathbf{m}\) (not shown) has trajectories \([1\bar{2}1]\rightarrow[\bar{2}\bar{1}\bar{1}]\) and \([1\bar{2}1]\rightarrow[1\bar{2}\bar{1}]\) in Fig. 8(a) and (b) respectively. In the simulations corresponding to Fig. 7, the switching of \(\mathbf{m}\) occurs in about a 200 ps time window, whereas with the instantaneous calculation, the switching pathway requires at least 1 ns to ring down \(\{\mathbf{m},\mathbf{L}\}\) with realistic material values of \(\alpha=0.003\). This is far above the theoretical switching limit of 30 ps proposed by Liao and co-workers [37; 38] who also utilized an LLG model for the AFM order coupled to a Landau-type parameterization.
We stress that both of these numerical simulations are exercises for illustrative purposes and are simplified versions of the dynamic processes that would happen in an experiment. Our calculations already suggest two things: (1) the Gilbert damping \(\alpha\) controls the maximum amplitude of the oscillations and thus the final state, hence it needs to be understood in BFO to have a repeatable effect, and (2) the dynamics of the structural switching does not seem to be essential in controlling the switching pathway (i.e., comparing the explicit time-dependent \(\mathbf{E}\) calculations vs. the instantaneous \(\mathbf{P}\)-\(\mathbf{A}\) switches). A more detailed investigation remains for the future.
## IV Conclusions and outlook
We have presented a continuum model for BFO able to treat the polar, octahedral tilt, spontaneous strain, and the AFM order in a single calculation. This model is built upon micromagnetic and FE phase field approximations to the system order parameters. Our model is benchmarked against the known behavior in this material - specifically, we have parameterized the FE DW profiles along with their spontaneous strain fields, obtaining an energy hierarchy of possible states in agreement with DFT calculations [40]. We also provide simulations of \(\{\mathbf{L},\mathbf{m}\}\) in the presence of low energy FE DWs, revealing delicate features in the angular quantities characterizing the canted magnetism. Next, we illustrated the usefulness of the model with two simple applications: i) AFM spin waves traversing the multiferroic domain boundary, highlighting a rectifying nature in qualitative agreement with recent experiments [19] and ii) fully-dynamical ME switching in real time, which, interestingly, reveals a sensitivity of the switching pathways to the Gilbert damping.
There are many other phenomena in BFO that could be studied by building upon our model. Often discussed in the literature regarding BFO is the appearance of a long-period spin cycloid [15; 16], whose proposed origin is underpinned by an asymmetric spin-current mechanism [17; 104; 71] that is necessary to stabilize these patterns. While the results of this paper are for the weakly non-collinear AFM order, one can appreciate that the presence of the spin cycloid might affect the outcome of our illustrative examples. We also emphasize that the DMI expression in Eq. (23) is _isotropic_ and that the application of strain should break the symmetry, which can lead to different AFM sublattice ordering as detailed in Ref. [13]. In principle, both the spin-flexoelectric (spin cycloid) and magnetoelastic (epitaxial strain) contributions could influence the antisymmetric exchange, leading to drastically altered magnetization textures in the simulations. In general also, this type of multiferroic modeling
Figure 8: Instantaneous switching of the Néel vector \(\mathbf{L}\) for (a) \(\alpha=0.003\) and (b) \(\alpha=0.01\). The switch occurs abruptly at \(t=200\) ps causing the AFM order to rapidly oscillate. The initial state is \(\mathbf{L}||[\bar{1}01]\) leading to final states of \(\mathbf{L}||[101]\) and \(\mathbf{L}||[011]\) in (a) and (b) respectively.
could be extended to other noncollinear antiferromagnets, such as those where electric fields have been shown to manipulate the magnetic state despite the lack of spontaneous FE order [105, 106].
The model is built within the Ferret[56] module atop the open-source Multiphysics Object Oriented Simulation Environment (MOOSE) framework [57]. As a nod to open-science, we provide representative examples for all of the results in this paper, hosted on a GitHub website [107]. Ferret is part of a forward-integrating toolset called the Continuous Integration, Verification, Enhancement, and Testing (CIVET) [108] utility, which preserves reproducibility of our results by ensuring underlying code changes to the MOOSE software stack do not break the module. The sets of governing equations and energy terms in this paper, which are applicable in 3D and for any geometry, are available and documented as C++ objects within the open-source software repository. While our modeling effort is certainly not exhaustive, we believe it will be a useful platform for the development of continuum simulations of BFO and other multiferroics at length and time scales that are not accessible by atomistic methodologies.
###### Acknowledgements.
The authors thank Natalya Fedorova for valuable input. J. M. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement SCALES - 897614. Work by O.H. was supported by the Basic Energy Sciences Division of the Department of Energy. D.R.R. acknowledges funding from the Ministero dell'Università e della Ricerca, Decreto Ministeriale n. 1062 del 10/08/2021 (PON Ricerca e Innovazione).
## Appendix
In micromagnetic simulations in which thermal fluctuations are not included (so-called athermal simulations), the magnetization density must remain constant in magnitude, thus preserving the unit norm of the magnetization director \(\hat{\mathbf{m}}=\mathbf{M}/|\mathbf{M}|\). In finite-difference codes on regular meshes, such as MuMax3 [109], enforcing this constraint is simple: each simulation cell \(k\) contains one unit vector \(\hat{\mathbf{m}}_{k}\) that can simply be renormalized after, _e.g._, each time step. In contrast, in weak-formulation FEM codes the continuum magnetic degrees of freedom are approximated by shape functions on irregular mesh cells and integrated over, and normalization of \(\mathbf{m}(\mathbf{r})\) is not as easily interpreted or made meaningful as in finite-difference codes.
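For reference, the per-cell renormalization used in finite-difference codes amounts to a one-line operation; a sketch with a random placeholder array is:

```python
import numpy as np

# m has shape (nx, ny, nz, 3): one magnetization director per simulation cell.
m = np.random.default_rng(1).normal(size=(8, 8, 8, 3))

# Renormalize every cell after a time step so that |m_k| = 1 exactly.
m /= np.linalg.norm(m, axis=-1, keepdims=True)
print(np.allclose(np.linalg.norm(m, axis=-1), 1.0))
```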
One can overcome this problem, for example, by using a representation of \(\mathbf{m}(\mathbf{r})\) in spherical coordinates [110], but numerical solutions of the equations of motion can become unstable, leading to serious convergence issues. Another possibility is to introduce the constraint through a Lagrange multiplier or using special shape functions on the tangent plane of the magnetization director vector field [111, 112]. We chose a different path, which is physically grounded in the Landau-Lifshitz-Bloch (LLB) formulation [113, 114, 76]. The key point in the LLB formulation is that longitudinal fluctuations in the magnetization director are allowed, but countered by a penalty for deviations away from the thermodynamic average of the magnitude \(m(T)\) at a temperature \(T\). The longitudinal fluctuations add a term to the equation of motion that is given by
\[\frac{\gamma\alpha_{\parallel}}{(1+\alpha^{2})m(\mathbf{r})^{2}}\left[\hat{ \mathbf{m}}(\mathbf{r})\cdot\left(\mathbf{H}_{eff}+\zeta_{\parallel}\right) \right]\hat{\mathbf{m}}(\mathbf{r}). \tag{29}\]
where \(\hat{\mathbf{m}}(\mathbf{r})\) is the local magnetization director with an equilibrium value \(m_{e}(T)\) that depends on temperature, \(\zeta_{\parallel}\) is a thermal field, and \(\alpha\) is the usual dimensionless Gilbert damping. The longitudinal damping \(\alpha_{\parallel}\) depends on \(T\) through
\[\alpha_{\parallel}=\alpha\frac{2T}{3T_{c}^{\text{MFA}}} \tag{30}\]
with \(T_{c}^{\text{MFA}}\) the mean-field Curie temperature, and the effective field \(\mathbf{H}_{\text{eff}}\) includes the longitudinal susceptibility \(\chi_{\parallel}\),
\[\mathbf{H}_{\text{eff}} =\mathbf{H}_{\text{ext}}+\mathbf{H}_{\text{ani}}+\mathbf{H}_{ \text{ex}}+\frac{1}{2\chi_{\parallel}}\left(1-\frac{m_{i}^{2}}{m_{e}^{2}} \right)\hat{m}_{i}\] \[=\mathbf{H}_{0}+\frac{1}{2\chi_{\parallel}}\left(1-\frac{m_{i}^{ 2}}{m_{e}^{2}}\right)\hat{m}_{i}. \tag{31}\]
Here \(\mathbf{H}_{\text{ext}}\), \(\mathbf{H}_{\text{ani}}\), and \(\mathbf{H}_{\text{ex}}\) are the usual external, anisotropy, and exchange fields. Ignoring the thermal field, the contribution to \(d\hat{\mathbf{m}}/dt\) is then
\[\frac{\gamma\alpha_{\parallel}}{(1+\alpha^{2})m_{i}^{2}}\left[\hat{m}_{i} \cdot\left(\mathbf{H}_{0}+\frac{1}{2\chi_{\parallel}}(1-\frac{m_{i}^{2}}{m_{e }^{2}})\hat{m}_{i}\right)\right]\hat{m}_{i}. \tag{32}\]
At \(T=0\) with \(m_{e}^{2}=1\) we can simplify the last term in the above equation to get
\[\frac{\gamma\tilde{\alpha}_{\parallel}}{(1+\alpha^{2})}\left(1-m^{2}\right)m ^{2}\hat{\mathbf{m}} \tag{33}\]
where \(\tilde{\alpha}_{\parallel}=\alpha_{\parallel}\mu_{0}/(2\chi_{\parallel})\) now has the unit of a magnetic field that drives the longitudinal relaxation. The contribution to the time evolution of \(\hat{\mathbf{m}}\) due to longitudinal relaxation is then
\[-\frac{\gamma\tilde{\alpha}_{\parallel}}{(1+\alpha^{2})}(m^{2}-1)m^{2}\hat{ \mathbf{m}}+\frac{2\chi_{\parallel}\gamma\tilde{\alpha}_{\parallel}}{(1+ \alpha^{2})m^{2}}(\hat{\mathbf{m}}\cdot\mathbf{H}_{0})\hat{\mathbf{m}}. \tag{34}\]
One can show that in the limit of low \(T\), much lower than relevant Curie temperatures, the second term in Eq. (34) can be ignored. In this case, the LLB-like addition to the equations of motion is simply
\[\frac{\gamma\tilde{\alpha}_{\parallel}}{(1+\alpha^{2})}\left[m^{2}-1\right]m^ {2}\hat{\mathbf{m}}, \tag{35}\]
where \(\tilde{\alpha}_{\parallel}\) has the dimension of a field. |
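A minimal numerical check of this longitudinal relaxation is sketched below; it uses the restoring sign of Eqs. (33)-(34) and an arbitrary prefactor, and shows \(|\mathbf{m}|\) relaxing back to its \(T=0\) value of unity from either side.

```python
import numpy as np
from scipy.integrate import solve_ivp

C = 50.0  # stands in for gamma * alpha_par_tilde / (1 + alpha^2), arbitrary units

def longitudinal_rhs(t, m):
    # Restoring form of Eqs. (33)-(34): drives |m| toward 1 at T = 0.
    return C * (1.0 - m ** 2) * m ** 2

for m_init in (0.8, 1.2):
    sol = solve_ivp(longitudinal_rhs, (0.0, 1.0), [m_init], rtol=1e-8)
    print(f"|m|(0) = {m_init} -> |m|(1) = {sol.y[0, -1]:.4f}")
```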
2307.06351 | The Kiloparsec Scale Influence of the AGN in NGC 1068 with SALT RSS
Fabry-Pérot Spectroscopy | We present Fabry-P\'erot (FP) imaging and longslit spectroscopy of the nearby
Seyfert II galaxy NGC 1068 using the Robert Stobie Spectrograph (RSS) on the
Southern African Large Telescope (SALT) to observe the impact of the central
Active Galactic Nucleus (AGN) on the ionized gas in the galaxy on kiloparsec
scales. With SALT RSS FP we are able to observe the H$\alpha$+[N II] emission
line complex over a $\sim$2.6 arcmin$^2$ field of view. Combined with the
longslit observation, we demonstrate the efficacy of FP spectroscopy for
studying nearby Type II Seyfert galaxies and investigate the kiloparsec-scale
ionized gas in NGC 1068. We confirm the results of previous work from the
TYPHOON/Progressive Integral Step Method (PrISM) survey that the
kiloparsec-scale ionized features in NGC 1068 are driven by AGN
photoionization. We analyze the spatial variation of the AGN intensity to put
forward an explanation for the shape and structure of the kiloparsec-scale
ionization features. Using a toy model, we suggest the ionization features may
be understood as a light-echo from a burst of enhanced AGN activity $\sim$2000
years in the past. | Raphael E. Hviding, Ryan Hickox, P. Väisänen, Rajin Ramphul, Kevin Hainline | 2023-07-12T18:00:00Z | http://arxiv.org/abs/2307.06351v1 | # The Kiloparsec Scale Influence of the AGN in NGC 1068 with SALT RSS Fabry-Perot Spectroscopy1
###### Abstract
We present Fabry-Perot (FP) imaging and longslit spectroscopy of the nearby Seyfert II galaxy NGC 1068 using the Robert Stobie Spectrograph (RSS) on the Southern African Large Telescope (SALT) to observe the impact of the central Active Galactic Nucleus (AGN) on the ionized gas in the galaxy on kiloparsec scales. With SALT RSS FP we are able to observe the H\(\alpha\)+[N ii] emission line complex over a \(\sim\)2.6 arcmin\({}^{2}\) field of view. Combined with the longslit observation, we demonstrate the efficacy of FP spectroscopy for studying nearby Type II Seyfert galaxies and investigate the kiloparsec-scale ionized gas in NGC 1068. We confirm the results of previous work from the TYPHOON/Progressive Integral Step Method (PrISM) survey that the kiloparsec-scale ionized features in NGC 1068 are driven by AGN photoionization. We analyze the spatial variation of the AGN intensity to put forward an explanation for the shape and structure of the kiloparsec-scale ionization features. Using a toy model, we suggest the ionization features may be understood as a light-echo from a burst of enhanced AGN activity \(\sim\)2000 years in the past.
Seyfert Galaxies (1447), AGN Host Galaxies (2017)
## 1 Introduction
Since the discovery of quasars, the most luminous Active Galactic Nuclei (AGNs), in the early 1960s (Schmidt, 1963), a multitude of AGN classifications have arisen over the last few decades to form the "AGN Zoo" (e.g. Padovani et al., 2017). AGN unification attempts to explain the different classifications through the geometry of the obscuring material surrounding the central supermassive black hole (SMBH) and accretion disk (e.g. Antonucci, 1993; Urry & Padovani, 1995; Netzer, 2015). AGNs can be divided into two populations: obscured sources with narrow high-ionization nebular emission lines (Type II), and unobscured sources with additional broad (FWHM \(>\)1000 km s\({}^{-1}\)) emission lines (Type I). In the standard unification model, these classifications correspond to different amounts of dust along our line of sight, due to the orientation of a parsec-scale obscuring torus, where sightlines that avoid the obscuring material can observe the fast moving gas closer to the central nucleus (e.g. Netzer, 2015; Ramos Almeida & Ricci, 2017).
However, recent work has found that Type II AGN, classically understood as differing only in the line of sight to the central engine, live in different environments and generate more powerful outflows (e.g., DiPompeo et al., 2014, 2016, 2018; Zakamska et al., 2016, 2019; Mitra et al., 2018), suggesting they differ from their unobscured counterparts by more than just viewing angle. An evolutionary paradigm has emerged, proposing that dynamical processes in gas-rich galaxies drive obscuring material into the nuclear region, producing the observed optical and ultraviolet (UV) attenuation (e.g. Kauffmann & Haehnelt, 2000; Hopkins et al., 2006, 2008). In this case the merger also drives gas into the nucleus to fuel the AGN, producing the feedback that
eventually removes the dust and gas, potentially affecting the galaxy on larger scales (Fabian, 2012; Alexander and Hickox, 2012). The study of obscured AGN can shed light on the fueling processes and the galactic scale effects of AGN feedback (for a review, see Hickox and Alexander, 2018, and references therein).
Moderate luminosity Type II AGNs (i.e. Seyfert II galaxies) are ideal for examining the large-scale effects of an obscured AGN on its host. The lack of broad optical and UV emission lines implies obscuration of high velocity gas in the nuclear region. In many previous studies, spectroscopic analyses of Type II AGNs have used line diagnostic techniques to understand the nature of AGN emission integrated over galaxy scales. Recently, integral field units (IFUs) have enabled spatially-resolved spectral diagnostics for a large number of galaxies, but are generally limited to small fields of view (e.g. Bacon et al., 2010; Croom et al., 2012; Bundy et al., 2015). Therefore it has been challenging to obtain spatially-resolved spectral diagnostics of the best-studied nearby AGN that extend over large angular scales.
A powerful tool for probing the spectra of galaxies over a large field of view (FoV) is Fabry-Perot (FP) spectroscopy, which samples the source with extremely narrow bandpasses generated by precisely spacing an etalon for each exposure. FP spectroscopy can therefore provide moderate resolution spectroscopy across large FoVs without sacrificing spatial resolution. The technique is especially powerful when paired with large aperture telescopes which can obtain the necessary sensitivity for detailed studies of nearby galaxies with large angular extents.
By measuring the spectroscopic properties of galaxies over a large physical extent, it is possible to reveal the presence of extended emission-line regions on \(\sim\)kpc scales. These emission line regions can be indicative of the current accretion luminosity from the black hole (Kauffmann et al., 2003; LaMassa et al., 2019) but may be due to _past_ enhanced AGN activity that can manifest as light or ionization "echoes", which have been studied extensively in Keel et al. (2012, 2015, 2017). These echoes can be used to constrain the accretion history of the AGN and potentially shed light on the coevolution of the central SMBH and its host galaxy.
In this work, we obtain FP spectroscopy from the Robert Stobie Spectrograph (RSS; Burgh et al., 2003; Nordsieck et al., 2003; Kobulnicky et al., 2003; Smith et al., 2006) with the Southern African Large Telescope (SALT) of the nearby Seyfert II galaxy NGC 1068 in order to observe the impact of AGN on ionized gas in the galaxy on kiloparsec scales. SALT RSS FP spectroscopy affords a large FoV along with the necessary spatial and spectral resolution to obtain high-quality velocity and ionization maps for the line-emitting gas across the extent of this galaxy.
We describe NGC 1068 and our observations in Section 2, and in Section 3 we detail the reduction of our data. We outline the procedure for producing line emission diagnostic maps of NGC 1068 from our reduced data in Section 4. We present the final diagnostic maps and derived results in Section 5 along with our interpretation of the results. Finally, we discuss our results in Section 6 and outline potential future research.
## 2 NGC 1068 and Observations
NGC 1068, also known as Messier 77, is a barred spiral galaxy and the optically brightest southern Seyfert galaxy known (de Vaucouleurs, 1973). As a Type II AGN, NGC 1068 presents possibly the best opportunity to conduct resolved studies of the interaction between a galaxy and obscured SMBH growth. Indeed, the obscuring material around the galaxy's central engine has been the subject of intense study for the past few decades. The nuclear region has been examined in the radio with Very Long Baseline Interferometry from the Very Large Array and in the infrared using interferometry from the Very Large Telescope to glean insight into the distribution, composition, and thermal properties of the obscuring material (van der Hulst et al., 1982; Greenhill et al., 1996; Rottgering et al., 2004; Gamez Rosas et al., 2022).
NGC 1068 has also been studied with the Atacama Large Millimeter Array (ALMA), which has been able to resolve the dusty torus around the SMBH on relatively small (parsec) scales (Garcia-Burillo et al., 2016). The Hubble Space Telescope (HST) obtained high-resolution spectral information using the Space Telescope Imaging Spectrograph (STIS) in order to constrain the dynamics of the inner 400 pc of the system (Das et al., 2006, 2007). Longslit studies in the infrared have covered similar regions to measure the photoionization properties of the galaxy nucleus (e.g. Tamura et al., 1991; Martins et al., 2010).
Wider-field observations, narrow-band imaging and spectroscopy have demonstrated the existence of extended emission on kiloparsec scales from the central nucleus, with heightened [O iii], [N ii], and Balmer line emission seen in Baldwin et al. (1987) and Pogge (1988). The innermost portion of these regions is also covered by Multi Unit Spectroscopic Explorer (MUSE) optical IFU observations from the Measuring Active Galactic Nuclei Under MUSE Microscope (MAGNUM) survey, where the ionizing source behind the emission is consistent with AGN activity (Venturi et al., 2021). The TYPHOON/Progressive Integral Step Method (PrISM)
survey has extended the range of observations with a stepped longslit program using the Wide Field reimaging CCD imaging spectrograph on the du Pont telescope at the Las Campanas Observatory, confirming that the regions are likely driven by AGN ionization several kiloparsecs from the galaxy nucleus (D'Agostino et al., 2018).
We obtained RSS FP spectroscopy with SALT on 2015-11-09 under proposal ID 2015-2-SCI-024. FP spectroscopy is ideal for studying the kiloparsec scale ionized regions in NGC 1068. We make use of the FP spectroscopy mode with RSS on SALT described in Rangwala et al. (2008). We utilized the Low Resolution etalon, which affords an \(\mathcal{R}\) of 779, a free spectral range of 182 A, and a finesse of 21.9, all quoted at 6500A with an 8' diameter FoV. Our wavelength range is selected to target the H\(\alpha\)+[N ii]\(\lambda\lambda\)6583,6548A complex. Adopting a systemic redshift of \(z=0.003793\) (1137 km s\({}^{-1}\); Huchra et al., 1999) for NGC 1068, we therefore target the observed wavelength range of 6570A to 6620A, corresponding to a rest wavelength range of 6545A to 6595A.
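For reference, the observed-frame window follows directly from the systemic redshift; a minimal sketch of the conversion is:

```python
# Rest-frame to observed-frame conversion for the targeted Halpha + [N II] window
z_sys = 0.003793  # systemic redshift of NGC 1068
for lam_rest in (6545.0, 6595.0):  # Angstrom
    print(f"{lam_rest:.0f} A (rest) -> {lam_rest * (1.0 + z_sys):.0f} A (observed)")
```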
We perform 2 integrations at 13 etalon positions with a PA of 159\({}^{\circ}\) and an exposure time of 20 seconds. Each etalon position is set such that the transmitted wavelength is 4A greater than the previous setting, so that we are appropriately sampling our \(\sim\)8A spectral resolution. Since the exact transmitted bandpass is a function of the position in the FoV, the spectral coverage of each region of the image varies on the order of a few A. In order to account for the chip gaps, a single dither is performed and all etalon spacings are repeated for a total of 1040 sec of science exposure time comprised of 52 frames.
In addition, we make use of a previously observed RSS longslit spectrum of NGC 1068 to validate our reduction and analysis of our FP data. The longslit observation was taken on 2011-07-30 under proposal ID 2011-2-RSA_OTH-002 using the PG0900 grating with a grating angle of 13.62\({}^{\circ}\), affording an \(\mathcal{R}\) of \(\sim 1400\) and \(\sim 1000\) at 6600A and 5000A respectively, covering the H\(\alpha\)+[N ii] and H\(\beta\)+[O iii]\(\lambda\lambda\)5007,4959A complexes. A slit 8' long and 2'' wide at a PA of 212\({}^{\circ}\) was used for three 300 sec exposures for a total integration time of 900 sec.
As the distance to NGC 1068 has remained a subject of debate for decades, in this work we consider the following three distance estimates: (1) 10.1 Mpc (2.9 kpc arcmin\({}^{-1}\)) using the Tully-Fisher (TF) relation from Tully et al. (2009), (2) 11.1 Mpc (3.2 kpc arcmin\({}^{-1}\)) measured from the tip of the red giant branch (TRGB) in Tikhonov and Galazutdinova (2021), and (3) 13.97 Mpc (4.1 kpc arcmin\({}^{-1}\)) derived from numerical action methods (NAM) in Anand et al. (2021). We adopt the TF distance method due to its prevalence in the literature, but our analysis will also consider the NAM distance as it is the most recent estimate of the galaxy's distance and the furthest distance we consider.
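The quoted angular scales follow from the small-angle approximation; a short sketch of the conversion is given below.

```python
import numpy as np

ARCMIN_IN_RAD = np.pi / (180.0 * 60.0)
for label, d_mpc in (("TF", 10.1), ("TRGB", 11.1), ("NAM", 13.97)):
    scale = d_mpc * 1.0e3 * ARCMIN_IN_RAD  # kpc per arcmin
    print(f"{label}: {d_mpc} Mpc -> {scale:.1f} kpc/arcmin")
```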
## 3 Data Reduction
In this section we detail the reduction of our SALT RSS data in both the FP and longslit modes. As SALT was built using a fixed primary mirror design, for which the effective pupil size changes during the observation, it is difficult to determine absolute flux incident onto the CCD. A relative flux is therefore produced at the end of RSS data reduction for both our longslit and FP data. While it would be possible to perform absolute flux calibration by comparing to existing data, we do not do so in this work, as our analysis primarily involves measuring the ionization state of the gas, which relies on relative flux ratios.
### RSS Longslit Spectroscopy
The longslit spectrum is reduced using the PySALT1 user package, which serves as the primary reduction software for data from SALT (Crawford et al., 2010). The frames are bias subtracted, flat-fielded, and a wavelength solution is generated from arc lamp spectra and applied to the data. The final flux is taken as the median of the frames, while the error is computed as the standard deviation between the frames. Finally, we apply an interstellar extinction correction to the 2D spectrum along with a flux calibration generated from spectroscopic standards. The reduced 2D spectrum and total summed intensity spectrum across the FoV are presented in Figure 1(a).
Footnote 1: [http://pysalt.salt.ac.za/](http://pysalt.salt.ac.za/)
### RSS Fabry-Perot Spectroscopy
The FP images are reduced following the saltfppipe2 pipeline. The images are bias corrected, flat-fielded, and are mosaiced using the PySALT user package. Corresponding variance images are created based on the pixel variance and the readnoise. Due to the additional complexity of reducing FP data, we describe the process in depth in this section.
Footnote 2: [http://saltfppipe.readthedocs.io/en/latest/index.html](http://saltfppipe.readthedocs.io/en/latest/index.html)
Since each step along the etalon comprises a separate observation, the observing conditions are subject to change over the course of the total acquisition. In order to combine the observations, the average seeing must be measured. Each image can then be convolved with the appropriate function, ensuring that the effective beam size is constant across the observations. The final beam
size will be equivalent to the worst seeing across all the images. The effective seeing for each observation is measured by manually selecting several stars across the field, fitting a Gaussian profile to each of them across all images, and comparing their observed FWHM. The final FWHM for our FP observations of NGC 1068 was \(2^{\prime\prime}\), which is shown on all figures presenting FP data.
Again, due to the nature of the fixed SALT primary mirror, the illumination fraction onto the primary will change between observations. For this reason, a number of stars are algorithmically detected throughout the image and their relative intensities and positions are recorded. By averaging over the measured intensities, each image can be normalized accordingly. The mea
Figure 1: The reduced data of NGC 1068. In panel (a) we present the reduced 2D spectrum of NGC 1068 taken with RSS longslit (top) along with the sum along the spatial dimension (bottom). In panel (b) we show the total measured intensity of the FP observations (right) along with the associated SNR (left). In addition we provide the spatial scale of the image assuming \(10.1\,\mathrm{Mpc}\) and \(13.97\,\mathrm{Mpc}\) distances to NGC 1068 (top right), the orientation of the image (bottom right), and a circle depicting the final FWHM of our observation. Due to the nature of the stationary primary mirror of SALT, no absolute flux calibration can be performed from the data.
sured positions also allows for any systematic drift between the images to be corrected through realignment. In order to properly generate the wavelength calibration of the observations, it is imperative to discern the optical center of the image. As the SALT RSS FP system induces a reflection, the optical center can be deduced by finding associated reflection pairs. This has the added benefit of being able to mitigate the effects of these "ghost" reflections by attempting to account for the effects of the reflection or, failing that, masking the brightest ghost features.
The wavelength calibration changes as a function of the distance from the optical center, the etalon spacing, and time. Given the wavelength stability of RSS FP spectroscopy, which incurs a wavelength drift of \(\sim 1\)A hour\({}^{-1}\) and was validated not to exceed FWHM/3 hour\({}^{-1}\) during commissioning (Rangwala et al., 2008; Buckley et al., 2008), we do not introduce any time dependence in our wavelength solution.
Traditional arc lamp spectra allow us to find a precise wavelength at a given radius from the optical center. For SALT observations, two arc lamp spectra are taken at the start and the end of the observation, at the maximum and minimum of the etalon spacing. In addition, the individual images are used in solving the wavelength solution. By subtracting the median image from each observation, the night sky emission rings become prominent and can be matched to known emission line values in the range of the interference filter.
Following the wavelength calibration, the azimuthally-averaged radial profile for each image is examined and fitted in order to remove the contribution from night sky emission lines. This must be done for each image individually. The data cube can then be assembled from the constituent images; here each observation is convolved with the appropriate function in order to produce the determined beam size. Finally, the pipeline corrects for the telescope's heliocentric velocity after the creation of the cube. We note that for one frame we were not able to achieve successful subtraction of the sky emission feature and therefore exclude it from the subsequent analysis.
In order to combine different dithers, a standard approach cannot be taken. Simply averaging pairs of corresponding pixels from the two dithers with the same etalon spacings is not advisable with RSS FP data, as two images from differing data cubes will have the object of interest at a different position relative to the optical center, changing the wavelength solution at each position. After discerning the offset, a new data cube is created by combining all of the dither information. Each spectral pixel (spaxel) will simply have twice the number of wavelength-intensity pairs as before, while those spaxels that lie in one of the chip gaps from one dither will have data from the other dither. Finally, stars in the FoV are identified in order to generate a World Coordinate System (WCS) transformation between pixel coordinates and the J2000 reference frame.
The total summed intensity of the entire data cube is presented in Figure 1 (b) along with the associated SNR in Figure 1 (c). Emission line regions are highlighted due to the relatively narrow bandpass across our FP observations. In addition, there do not appear to be any spatial variations in noise apart from those induced by fluctuations in the galaxy's brightness.
## 4 Data Analysis
In this section we discuss the analysis techniques performed on the reduced RSS FP spectroscopy data cubes and RSS longslit 2D spectrum in order to extract the emission line fluxes and velocities over the FoV.
### Adaptive Binning
We pursue an adaptive binning technique in order to achieve a minimum intensity in each bin and to reach a sufficient signal-to-noise ratio (SNR) far from the center of the galaxy. Adaptive binning creates nonuniform bins across the FoV, placing smaller bins in bright regions, and large bins in dim areas. For the RSS FP data we use a Voronoi tessellation algorithm, which produces a set of polygonal bins (a Voronoi diagram) such that the spaxels in each bin meet a minimum intensity or signal-to-noise threshold. We use the Voronoi binning algorithm, VorBin, by Cappellari & Copin (2003) developed for IFU spectroscopy.
For both datasets we compute the SNR at each spatial position as the median SNR for all pixels across all wavelengths. For the longslit data we set a target SNR of 20 in order to ensure we can accurately retrieve emission-line fluxes. Due to the complex nature of the FP reduction, and as our FP uncertainties are derived from the detector properties and not from combining multiple frames, as with our longslit data, we set a more stringent target SNR of 100 for the FP data. We note that this does not guarantee a similar recovered SNR, especially due to the small spectral range and relatively fewer data points across the observations.
We show our Voronoi tessellation for NGC 1068 in Figure 2. For the RSS Longslit data, we use VorBin to bin only along the spatial direction. Our final binning for the longslit data is presented in Figure 3.
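A minimal sketch of the binning call, assuming the public call signature of the VorBin package, is shown below; the coordinate, signal, and noise arrays are placeholders for the per-spaxel maps described above.

```python
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

# Placeholder per-spaxel coordinates, signal, and noise standing in for the measured maps.
yy, xx = np.indices((64, 64))
x, y = xx.ravel().astype(float), yy.ravel().astype(float)
signal = 100.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0) + 1.0
noise = np.sqrt(signal)

target_sn = 100.0  # FP target; 20 was used for the longslit data
bin_num, x_node, y_node, x_bar, y_bar, sn, n_pixels, scale = voronoi_2d_binning(
    x, y, signal, noise, target_sn, plot=False, quiet=True)
print(f"{bin_num.max() + 1} Voronoi bins")
```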
In order to measure the strength of nebular emission lines and the source of the ionization, we fit the emission lines required for the established Baldwin et al. (1981, hereafter BPT) diagnostic.
#### 4.2.1 Fabry-Perot
Following the Voronoi tessellation, we fit the [N ii]+H\(\alpha\) complex in each voronoi bin. First, we take the set of spaxels from each dither in a given bin. The spaxels from a given dither are then normalized to the mean intensity of the set of spaxels from both dithers, accounting for the moving pupil of SALT.
The [N ii]+H\(\alpha\) complex is fitted with the sum of three Gaussians and a constant continuum offset. The redshifts of the two [N ii] lines are fixed to be equal, and the [N ii]\(\lambda\)6548A to [N ii]\(\lambda\)6583A flux ratio is fixed at 0.34 (Oh et al., 2011). In addition, we require the redshifts of each of the lines to be bounded within 300 km s\({}^{-1}\) of the systemic redshift of NGC 1068 and that the flux in each line be positive. The FWHM of the lines is restricted to lie between 400 km s\({}^{-1}\), consistent with our spectral resolution, and 1000 km s\({}^{-1}\), the upper limit of narrow-line velocity widths. Finally, the widths of all of the lines are set to be the same, as the limited spectral resolution does not allow us to distinguish line broadening due to physical effects.
We note that the underlying line-spread function (LSF) induced by the FP system is a Voigt profile (Rangwala et al., 2008). However this is primarily important for the wings of the emission lines and has a minimal effect on our ability to accurately measure the line height and center. In addition, as the widths of our
Figure 2: The final binning of the FP data of NGC 1068 generated from the VorBin routine. Panel (a) depicts the Voronoi tessellation map of NGC 1068. The size of the Voronoi bins are chosen to achieve bins of a minimum intensity. Note how the bin size increases with distance from the center, as the intensity from the galaxy falls off. Panels (b) and (c) present the number of pixels and SNR in a given bin respectively. Finally, panel (d) shows how the total number of bins changes with distance from the center (red), along with the total enclosed intensity at a given radius (blue).
emission lines are fixed to the same value for the FP fitting, our retrieved fluxes are not affected by the small difference between a Gaussian and the LSF. To validate this, we re-fit our data using Lorentzian profiles, a worst-case scenario for the differences between the assumed Gaussian profile and the true Voigt LSF, and find no noticeable differences from our fitting with Gaussian profiles. While an investigation of the kinematics of the emitting gas would require detailed knowledge of the underlying LSF, especially if paired with a higher-resolution etalon, in this work our Gaussian treatment of the lines paired with our \(\Delta\lambda/2\) sampling is appropriate for retrieving emission line fluxes over the FoV.
The fitting is conducted using the astropy model fitting framework with a Levenberg-Marquardt least-squares fitter. In order to generate errors on our recovered parameters, we bootstrap over the pixels in a given bin and refit 1000 times. The median and standard deviation of each parameter is taken as the value and error on the parameter. Finally, we determine in which bins the median redshift was within \(20\,\mathrm{km}\,\mathrm{s}^{-1}\) of the limits mentioned above, while for the dispersion we calculate which bins are within \(1\,\mathrm{km}\,\mathrm{s}^{-1}\) of the limits. As the fitting in these bins was consistently hitting these limits, the fits and their derived uncertainties cannot be trusted and are considered as failed fits. The subsequent analysis will only consider spaxels in which the fitting was successful, which span a total angular size of \(2.6\,\mathrm{arcmin}^{2}\).
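A simplified sketch of such a constrained fit with astropy modeling is given below; the rest wavelengths, starting values, and synthetic data are illustrative, and the additional velocity bounds described above are omitted for brevity.

```python
import numpy as np
from astropy.modeling import models, fitting

NII_A, HA, NII_B = 6548.05, 6562.80, 6583.45  # rest wavelengths (Angstrom)
z0 = 0.003793

# Three Gaussians plus a constant continuum offset.
model = (models.Gaussian1D(1.0, NII_A * (1 + z0), 6.0)
         + models.Gaussian1D(3.0, HA * (1 + z0), 6.0)
         + models.Gaussian1D(3.0, NII_B * (1 + z0), 6.0)
         + models.Const1D(0.1))

# Tie the line centers to a single redshift (carried by the Halpha component),
# fix the [N II] doublet ratio, and force a common line width.
model.mean_0.tied = lambda m: m.mean_1 * (NII_A / HA)
model.mean_2.tied = lambda m: m.mean_1 * (NII_B / HA)
model.amplitude_0.tied = lambda m: 0.34 * m.amplitude_2
model.stddev_0.tied = lambda m: m.stddev_1
model.stddev_2.tied = lambda m: m.stddev_1

fitter = fitting.LevMarLSQFitter()
wave = np.linspace(6570.0, 6620.0, 60)  # observed-frame grid (Angstrom)
flux = model(wave) + np.random.default_rng(3).normal(0.0, 0.02, wave.size)
best = fitter(model, wave, flux)
print(best.mean_1.value, best.stddev_1.value)
```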
#### 4.2.2 Longslit
Following the adaptive binning, we combine the spectra in each bin by summing their flux and adding their errors in quadrature. We then fit the combined spec
Figure 3: The final binning of the longslit data of NGC 1068 generated from the VorBin routine. Panel (a) depicts the bin map of the longslit data. The size of the Voronoi bins are chosen to achieve bins of a minimum intensity. Panels (b) and (c) present the number of pixels and SNR in a given bin respectively. Finally, panel (d) depicts the total intensity summed along the spectral dimension (red), along with the cumulative number of bins (blue).
trum in each bin using the Galaxy/AGN Emission Line Analysis Tool (GELATO, Hviding, 2022) following the procedure outlined in Hviding et al. (2022). GELATO models the continuum as a linear combination of Simple Stellar Populations (SSPs) from the Extended MILES stellar library (E-MILES; Vazdekis et al., 2016) and models emission lines as Gaussians.
GELATO is a flexible Python framework that enables the fast and robust analysis of optical galaxy spectroscopy while specifying the relationship between various emission line parameters to suit the user's needs. For the [N ii] and [O iii] emission doublets we tie the respective velocity dispersions and redshifts together and set the emission line ratios, [N ii]\(\lambda\)6548A/[N ii]\(\lambda\)6583A and [O iii]\(\lambda\)4959A/[O iii]\(\lambda\)5007A, to 0.34 and 0.33 respectively (Oh et al., 2011). In addition, we do not attempt to fit a broad-line component to the Balmer emission lines.
## 5 Results
In this section we present our maps of NGC 1068 generated from fitting the emission lines from the reduced FP and longslit data.
### Velocity Maps
We present maps of the recessional velocity for both the H\(\alpha\) and [N ii] emission lines in NGC 1068 and their associated errors in Figure 4. The rotational motion of the galaxy is apparent in the figure, with the westward side receding from the observer relative to the eastward side. Nearer to the center of the galaxy, a change in the inclination of the disk becomes apparent, as observed in Schinnerer et al. (2000) by tracing the molecular gas in the inner 20\({}^{\prime\prime}\) of the galaxy and attributed to a warped disk in NGC 1068. We are able to observe the same warp in the ionized gas and trace the inclination angle of the disk out to several arcminutes.
### Emission Line Maps
We present maps of the emission line fluxes for both [N ii] and H\(\alpha\) in NGC 1068 along with their associated errors in Figure 5. Both emission line maps clearly reveal the ring of star formation around the galaxy center, as documented in Thronson et al. (1989), which is more prominent in H\(\alpha\). Areas of star formation throughout the disk become apparent as well.
### Ionization Map
In order to assess the ionization state across the FoV, we produce maps of the [N ii]\(\lambda\)6583A/H\(\alpha\) emission line ratio. This diagnostic is sensitive to the level of ionization in the gas and can be used to discern the source of ionizing radiation when combined with other emission line ratios (Baldwin et al., 1981). Since the relative fluxes are preserved in the FP maps, the line ratio is measured accurately even without absolute flux calibration. While this emission line ratio alone does not totally discern the ionization source of the gas, it gives a measurement of the level of ionization. Our ionization map, presented in Figure 6 along with its associated SNR, reveals the kiloparsec scale high-ionization features previously observed by D'Agostino et al. (2018).
In order to verify our FP ionization map, we overlay the longslit line ratios over the FP map again in Figure 6 along with its associated SNR. The previous longslit data obtained of NGC 1068 covers the extended high-ionization regions and is offset from the center of the kiloparsec-scale ionized features by \(\sim\)20\({}^{\circ}\). The longslit results verify the existence of the highly ionized features in our FP data. We note that at the edges of the longslit data we observe some disagreement with our FP measurements, especially in the southwestern region of our data, though this is likely driven by the low SNR measurements in both datasets at those distances.
We note, however, that the [N ii]/H\(\alpha\) ratio is also sensitive to the gas-phase metallicity and is therefore an imperfect tracer of the ionization state. Without the availability of [O iii] and H\(\beta\) FP maps we are unable to break this degeneracy over the entire field of view. However, as the longslit spectrum has wavelength coverage of the [O iii]+H\(\beta\) complex, we show the BPT position of the longslit spectrum bins overplotted on the FP ionization map in Figure 7. In addition, we plot the Kauffmann et al. (2003b) and Kewley et al. (2001a) demarcations on the BPT diagram. The longslit spectrum observations confirm that the ionized features have line ratios consistent with an AGN ionizing source. The transition along the longslit FoV becomes apparent, with the kiloparsec scale ionized features having AGN line ratios as strong as the central engine, while the intermediate regions show diminished, composite and star-formation-like emission line ratios.
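For completeness, the two demarcation curves are simple analytic functions of the line ratios; a short sketch of the classification used on such a diagram is given below (the numerical coefficients are the standard published ones).

```python
import numpy as np

def kewley_2001(log_nii_ha):
    # Maximum-starburst line of Kewley et al. (2001) on the [N II] BPT diagram.
    return 0.61 / (log_nii_ha - 0.47) + 1.19

def kauffmann_2003(log_nii_ha):
    # Empirical star-forming boundary of Kauffmann et al. (2003b).
    return 0.61 / (log_nii_ha - 0.05) + 1.30

def bpt_class(log_nii_ha, log_oiii_hb):
    """Classify a bin as star-forming, composite, or AGN-dominated."""
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann_2003(log_nii_ha):
        return "star-forming"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley_2001(log_nii_ha):
        return "composite"
    return "AGN"

print(bpt_class(-0.5, -0.3), bpt_class(0.1, 0.8))
```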
To emphasize this transition further, we plot the BPT position across the longslit FoV alongside stacked spectra from the central AGN-dominated region, the kpc-scale ionized regions, and the transition regions between the two in Figure 8. The regions over which the spectra are stacked are chosen by eye to emphasize the differences across the FoV. The stacked spectra show a clear transition from AGN dominated, with prominent [O iii] and [N ii] in the central region, to composite and star forming, back to AGN dominated in the kiloparsec scale features. We note that the indication of AGN regions beyond the northern and southern extended features is due to a lack of signal in the longslit image, and is likely not indicative of AGN activity out to the edge of the figure.
Figure 8 also presents the GELATO fits to the stacked spectra along with the retrieved continuum to highlight the strength of absorption features, in particular below the H\(\alpha\) emission line. It becomes clear that across the galaxy the emission line strength dominates and the absorption features contribute negligibly to the measurement of the H\(\alpha\) flux, except perhaps at the center of the galaxy. This validates our fitting approach to the FP data, where we are not able to model the underlying continuum, and reinforces that our FP ionization measurements are accurate over the FoV.
### Potential Light Echoes
The kiloparsec-scale features visible in Figure 6 are attributed to AGN ionization by D'Agostino et al. (2018).
Figure 4: Velocity maps (left) and their associated standard deviations (right) for the H\(\alpha\) (top) and [N ii] (bottom) emission lines. The colorbars for the velocity are centered on the systemic redshift of NGC 1068 corresponding to 1137 km s\({}^{-1}\).
This picture is reinforced by Figure 29 of D'Agostino et al. (2018), which shows no enhancement of the velocity dispersion in the location of the shocks. In addition, if shocks driven by outflows were present, we would expect to see distortions in the velocity field of the galaxy or enhanced line widths in the kiloparsec scale ionized regions (Ho et al., 2014). As the ionization features are nearly perpendicular to the axis about which the galaxy is inclined, these velocity shifts would be visible in the velocity field of the galaxy.
We do not observe evidence of an outflow in the velocity fields of the H\(\alpha\) or [N ii] as shown in Figure 4. In addition, the kiloparsec scale ionized regions have velocity dispersions consistent with the intervening composite
Figure 5: Emission line flux maps (left) and their associated standard deviations (right) for the H\(\alpha\) (top) and [N ii]\(\lambda\)6583Å (bottom) lines. While there is no absolute flux calibration due to the nature of the stationary primary mirror of SALT, the relative flux is preserved.
and star forming regions, with average velocity dispersions of \(\sim\)175 km s\({}^{-1}\) with a maximum measured dispersion of \(\sim\)225 km s\({}^{-1}\), while the central AGN region does show enhanced velocity dispersions, with average velocity dispersions of \(\sim\)325 km s\({}^{-1}\) with a maximum measured dispersion of \(\sim\)450 km s\({}^{-1}\).
In this section we explore the possibility of modeling the kiloparsec-scale AGN-photoionized emission across the galaxy to investigate if the spatial variation can provide insight into the AGN luminosity as a function of time. We present the possibility that the ionized features are due to past AGN activity and are therefore a form of light echoes, and create a toy model to explore the time variability of NGC 1068.
#### 5.4.1 AGN Contribution to H\(\alpha\)
Similar to previous spatially-resolved spectroscopic studies of AGN, we aim to disentangle the contribution of AGN and SF processes to the H\(\alpha\) flux across
Figure 6: Maps of H\(\alpha\)/[N ii] (left) and their associated SNRs (right) for the FP (top) and longslit (bottom) data. The longslit observations are plotted over grayscale FP H\(\alpha\)/[N ii] ratio maps. Their corresponding standard deviations are shown in panels (c) and (d). The longslit observations confirm the accuracy of our FP data reduction and analysis and highlight the kiloparsec scale ionizing features.
the FoV (Kewley et al., 2001b; Davies et al., 2016, 2017; D'Agostino et al., 2018, 2019). We first derive the relative H\(\alpha\) flux contributed by the AGN in the FP image. By assigning typical star forming and AGN [N ii]/H\(\alpha\) ratios, we can solve for the AGN contribution given the H\(\alpha\) and [N ii] fluxes. By examining the range of [N ii]/H\(\alpha\) ratios present in the longslit data we can take the extreme values to be representative of pure SF or AGN activity. By investigating Figure 7, we set the pure-AGN value as \((\rm[N~{}\textsc{II}]/H\alpha)_{\rm AGN}=2\) and the pure star-forming value as \((\rm[N~{}\textsc{II}]/H\alpha)_{\rm SF}=0.3\). We can therefore express the relationships between the line ratios and the typical emission ratio values in the following equations:
\[\rm H\alpha=H\alpha_{AGN}+H\alpha_{SF} \tag{1}\]
\[\rm[N~{}\textsc{II}]=\rm[N~{}\textsc{II}]_{AGN}+\rm[N~{}\textsc{II}]_{SF} \tag{2}\]
\[\rm([N~{}\textsc{II}]/H\alpha)_{AGN}\times H\alpha_{AGN}=\rm[N~{}\textsc{II}]_{AGN} \tag{3}\]
\[\rm([N~{}\textsc{II}]/H\alpha)_{SF}\times H\alpha_{SF}=\rm[N~{}\textsc{II}]_{SF} \tag{4}\]
By solving the above system of equations, we can derive the H\(\alpha\) flux contributed to the AGN using the H\(\alpha\) and [N ii] emission maps and the typical line ratios chosen above.
\[\rm H\alpha_{AGN}=\frac{\rm(H\alpha/[N~{}\textsc{II}])_{SF}\times[N~{}\textsc{ II}]-H\alpha}{\rm([N~{}\textsc{II}]/H\alpha)_{AGN}\times(H\alpha/[N~{}\textsc{II}])_{ SF}-1} \tag{5}\]
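As a concrete illustration, the decomposition of Eq. (5) amounts to a per-pixel linear unmixing of the two components; a minimal sketch in Python, assuming the H\(\alpha\) and [N ii] maps are available as NumPy arrays (the array names and toy inputs are placeholders, while the ratio values follow those adopted above):

```python
import numpy as np

# Pure-component line ratios adopted above (from the longslit BPT analysis)
R_AGN = 2.0   # ([N II]/Halpha)_AGN
R_SF = 0.3    # ([N II]/Halpha)_SF

def halpha_agn(halpha, nii, r_agn=R_AGN, r_sf=R_SF):
    """Halpha flux attributed to the AGN, per pixel, following Eq. (5)."""
    return (nii / r_sf - halpha) / (r_agn / r_sf - 1.0)

# Toy example with placeholder maps (real inputs would be the FP flux maps)
halpha = np.random.uniform(1.0, 2.0, size=(100, 100))
nii = np.random.uniform(0.5, 3.0, size=(100, 100))
f_agn = np.clip(halpha_agn(halpha, nii), 0.0, None)   # clip unphysical negative values
```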
The derived map of the H\(\alpha\) flux contributed by the AGN is shown in the left middle panel of Figure 9. The derived map demonstrates that there is significant variation over the FoV, especially in the direction of the kiloparsec-scale ionized features that may be attributable to varying AGN intensity in the past.
#### 5.4.2 Toy Model
In order to recreate the structures visible in the H\(\alpha_{\rm AGN}\) flux, we construct a toy model of the NGC 1068 galaxy and its AGN. The southern structure motivated our attempt to model the NGC 1068 AGN system as a biconical outflow intersecting with a galactic disk. We model the disk of the galaxy as an opaque plane, consistent with measurements of the optical depth of galaxy disks (James & Puxley, 1993). If we align the \(x\) and \(y\) axes with our observation (with the \(z\) axis therefore towards the observer), we can start by defining the plane of the disk in the \(x\) and \(z\) axes, defined by the normal
Figure 7: In panel (a), colored longslit positions overplotted on the FP log\({}_{10}\)(H\(\alpha\)/[N ii]) diagnostic map. In panel (b) we plot BPT positions of the colored longslit positions to highlight the transition between the star-forming and AGN regions of NGC 1068 along the longslit spectrum. With the additional [O iii] and H\(\beta\) information afforded by the longslit spectrum, the source of the kiloparsec scale ionization features becomes clear as the line ratios are consistent with that of an AGN. In addition, we plot the Kewley et al. (2001a) and Kauffmann et al. (2003b) demarcations as solid and dashed lines, respectively.
vector, \(\hat{n}=\mathbf{\hat{j}}=(0,1,0)\). Given that a face-on galaxy has an inclination angle of \(0^{\circ}\), \(\hat{n}\) therefore defines a disk with an inclination of \(90^{\circ}\). To rotate the model disk to an arbitrary angle, \(i_{\mathrm{gal}}\), we apply the standard rotation matrix about the \(x\)-axis, \(\mathbf{R}_{x}\):
\[\hat{n}_{\mathrm{inclined}}=\mathbf{R}_{x}\left(\frac{\pi}{2}-i_{\mathrm{gal }}\right)\times\mathbf{\hat{j}}=\begin{pmatrix}0\\ \sin(i_{\mathrm{gal}})\\ \cos(i_{\mathrm{gal}})\end{pmatrix} \tag{6}\]
We then apply a second rotation about the \(z\)-axis, \(\mathbf{R}_{z}\), to align the model with an arbitrary position angle, \(\mathrm{PA}_{\mathrm{gal}}\). The position angle of the observation, determined from the WCS information, is added to the model galaxy PA in order to match with the data. Therefore, the normal vector that describes the modeled disk of NGC 1068 is given by:
\[\hat{n}_{\mathrm{gal}}=\mathbf{R}_{z}(\mathrm{PA}_{\mathrm{gal}}+\mathrm{PA}_ {\mathrm{WCS}})\times\mathbf{R}_{x}(\pi/2-i_{\mathrm{gal}})\times\mathbf{\hat {j}} \tag{7}\]
We set the position angle of the disk to be \(8^{\circ}\) based on Garcia-Burillo et al. (2016)3 and the inclination to be \(40^{\circ}\) based on Schinnerer et al. (2000). As we model the disk as an opaque surface, any points behind the plane of the disk do not contribute any flux to the toy model:
Footnote 3: As our PA is defined as the angle to the normal vector of the galaxy rather than the axis about which the galaxy rotates, our PA is greater by \(90^{\circ}\).
\[F\left(\hat{n}_{\mathrm{gal}}\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix}<0\right)=0 \tag{8}\]
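A minimal numerical sketch of the disk geometry in Eqs. (6)-(8), assuming a simple Cartesian grid in light-years; the grid extent and the WCS position angle are placeholders, while the disk inclination and position angle follow the values quoted above:

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    """Rotation matrix about the z-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

i_gal = np.radians(40.0)      # disk inclination
pa_gal = np.radians(8.0)      # disk position angle (defined to the disk normal)
pa_wcs = np.radians(0.0)      # observation position angle from the WCS (placeholder)

# Eq. (7): normal vector of the inclined, rotated disk
n_gal = rot_z(pa_gal + pa_wcs) @ rot_x(np.pi / 2 - i_gal) @ np.array([0.0, 1.0, 0.0])

# Model grid in light-years (extent and sampling are placeholders)
axis = np.linspace(-3000.0, 3000.0, 75)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")

# Eq. (8): cells behind the opaque disk plane contribute no flux
behind_disk = (n_gal[0] * x + n_gal[1] * y + n_gal[2] * z) < 0
```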
In order to match the ionization features, we model bursts of past AGN activity as a biconical structure. In general, the equation for the region inside the bicone along the \(y\)-axis with opening angle, \(\theta_{\mathrm{o}}\), is given by:
\[x^{2}+z^{2}\leq\tan^{2}\left(\frac{\theta_{\mathrm{o}}}{2}\right)y^{2} \tag{9}\]
We can repeat the same process as above to generate the generic rotation matrix for the disk of the galaxy to find the normal vector that defines the direction of the bicone:
\[\mathbf{R}_{\mathrm{c}}=\mathbf{R}_{z}(\mathrm{PA}_{\mathrm{cone}}+\mathrm{ PA}_{\mathrm{WCS}})\times\mathbf{R}_{x}(i_{\mathrm{cone}}) \tag{10}\]
Therefore, we can apply the following constraint:
\[F(\vec{x^{\prime}})=0 \tag{11}\]
where \(\vec{x^{\prime}}=(x^{\prime},y^{\prime},z^{\prime})\) and
\[x^{\prime 2}+z^{\prime 2}>\tan^{2}\left(\frac{\theta_{\mathrm{o}}}{2}\right)y^{ \prime 2}\text{ where }\begin{pmatrix}x^{\prime}\\ y^{\prime}\\ z^{\prime}\end{pmatrix}=\mathbf{R}_{\mathrm{c}}\begin{pmatrix}x\\ y\\ z\end{pmatrix} \tag{12}\]
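The bicone selection of Eqs. (10)-(12) can be sketched in the same way; the rotation matrices are redefined so the snippet stands alone, and the opening angle, inclination and position angle used below are the values adopted in the following paragraph (treat them as placeholders at this point):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

theta_o = np.radians(80.0)    # full opening angle of the bicone
i_cone = np.radians(2.5)      # bicone inclination into the plane of the sky
pa_cone = np.radians(30.0)    # bicone position angle
pa_wcs = np.radians(0.0)      # placeholder observation position angle

R_c = rot_z(pa_cone + pa_wcs) @ rot_x(i_cone)     # Eq. (10)

def outside_bicone(x, y, z):
    """Boolean mask of grid cells outside the rotated bicone, Eq. (12)."""
    pts = np.stack([x, y, z], axis=-1) @ R_c.T    # rotate coordinates into the cone frame
    xp, yp, zp = pts[..., 0], pts[..., 1], pts[..., 2]
    return xp**2 + zp**2 > np.tan(theta_o / 2.0) ** 2 * yp**2
```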
Figure 8: In panel (a), the BPT positions of the longslit spectrum bins overplotted on the FP \(\log_{10}(\mathrm{H}\alpha/[\mathrm{N}\ \mathrm{\SIUnitSymbolMicro m}])\) diagnostic map. In panel (b), we present the stacked spectra from the highlighted regions in panel (a). The selected regions highlight the transition of the longslit spectrum from star-forming to AGN activity across the FoV. In addition, the spectra reveal the rotational motion of the galaxy with the slight shift of the emission lines relative to the average redshift of the galaxy.
We set the opening angle of the bicone to 80\({}^{\circ}\) based on the work done in Das et al. (2006) and Garcia-Burillo et al. (2016). For the inclination angle, Crenshaw & Kraemer (2000) and Das et al. (2007) find a small inclination of \(\sim\)5\({}^{\circ}\) for the innermost 6" of the galaxy. We find that to best fit our data we require a slightly smaller inclination of \(\sim\)2.5\({}^{\circ}\) into the plane of the sky, but note that our observations are at a larger scale than the previous work. Finally, the position angle of the bicone is set to 30\({}^{\circ}\) based on Das et al. (2006).
We model the bursts of AGN activity as two Gaussians in time: \(\mathcal{G}(\mu,\sigma)=\exp(-\frac{1}{2}((t_{\rm lookback}-\mu)/\sigma)^{2})\). Assuming a distance of 10.1 Mpc, our best fit models are comprised of a past burst occurring \(\sim\)2250 years ago and a current burst of activity, beginning within the past \(\sim\)500 years, that is 8 times stronger. The variable output of the AGN is shown in the top panel of Figure 9. In addition, we present a few other potential models of AGN activity in Figure 9, on which the subsequent analysis is also performed, in order to provide some context on the dependence on our chosen Gaussian parameters. Finally, we add a constant background AGN component that emits along the plane of the disk and is not restricted to the biconical structure; it is intended to represent a constant level of photoionization induced by the AGN in the galaxy.
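A sketch of the model lightcurve described above; the burst centres and relative amplitude follow the numbers quoted in the text, while the Gaussian widths and the exact centre of the recent burst are illustrative assumptions:

```python
import numpy as np

def l_agn(t_lookback, bursts=((2250.0, 400.0, 1.0), (250.0, 250.0, 8.0))):
    """Relative AGN luminosity versus lookback time in years.

    Each burst is (centre, width, amplitude) of a Gaussian in time; the centres
    and the factor-of-8 amplitude follow the text, the widths are assumptions.
    """
    t = np.asarray(t_lookback, dtype=float)
    lum = np.zeros_like(t)
    for mu, sigma, amp in bursts:
        lum += amp * np.exp(-0.5 * ((t - mu) / sigma) ** 2)
    return lum
```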
In order to compare our results to our FP maps, we simulate observations of our model. We create a \(\sim\)6 kpc wide cube grid with a resolution that matches our pixel scale (80 ly px\({}^{-1}\)) and is aligned with the FP image.
Figure 9: Our model of the kiloparsec scale ionized region as light echoes from the AGN in NGC 1068. In the top panel we present the model lightcurves with the main model depicted in solid purple. The center left panel shows the derived (H\(\alpha\))\({}_{\rm AGN}\) flux in log scale. In the bottom left panel we show the modeled AGN flux following the procedure described in Section 5.4 in log scale. We plot the radial profiles of the derived (H\(\alpha\))\({}_{\rm AGN}\) flux (solid) and the model AGN lightcurves (dashed) for the four cardinal directions in the center middle (South), right middle (West), center bottom (North), and right bottom (East) panels.
First, we calculate the distance to each point: \(d(\vec{x})=\sqrt{x^{2}+y^{2}+z^{2}}\), where \(\vec{x}\) is the position vector to an arbitrary grid cell. In order to realistically model the travel time from the central engine to the model position and finally to the observer, the observed flux is given by:
\[F_{\rm AGN}(\vec{x})\propto\frac{1}{d(\vec{x})^{2}}L_{\rm AGN}\left(\frac{d( \vec{x})-x}{c}\right) \tag{13}\]
where \(c\) is the speed of light. In order to account for the gas in the galaxy, above the opaque plane, we model the disk as an exponential fall-off with a scale height of \(10^{3.5}\,\)ly (\(\sim 1\,\)kpc):
\[F_{\rm AGN}(\vec{x})\propto\frac{\exp(\vec{x}\cdot\hat{n}_{\rm gal}/10^{3.5})} {d(\vec{x})^{2}}L_{\rm AGN}\left(\frac{d(\vec{x})-x}{c}\right) \tag{14}\]
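A sketch of the echo flux of Eqs. (13)-(14) on the model grid, reusing the grid, disk normal and masks from the previous snippets and the `l_agn` lightcurve above. The observer is taken to lie along \(+z\), as stated at the start of this subsection, and the gas layer is implemented as a decaying exponential in height above the disk plane, following the prose description rather than the literal symbols of Eq. (14); the speed of light is expressed in ly yr\({}^{-1}\) so that lookback times come out in years:

```python
import numpy as np

C_LY_PER_YR = 1.0            # speed of light in light-years per year
SCALE_HEIGHT = 10**3.5       # gas scale height in light-years (~1 kpc)

def echo_flux(x, y, z, n_gal, lightcurve):
    """Per-cell AGN flux, cf. Eqs. (13)-(14), before applying the disk/bicone masks."""
    d = np.sqrt(x**2 + y**2 + z**2)
    d = np.where(d == 0, 1e-6, d)                   # avoid the singularity at the nucleus
    t_lookback = (d - z) / C_LY_PER_YR              # extra light-travel time towards the observer (+z)
    height = np.abs(x * n_gal[0] + y * n_gal[1] + z * n_gal[2])
    return np.exp(-height / SCALE_HEIGHT) / d**2 * lightcurve(t_lookback)

# Usage with the grid, masks and lightcurve from the previous snippets:
# flux = echo_flux(x, y, z, n_gal, l_agn)
# flux[behind_disk | outside_bicone(x, y, z)] = 0.0
# image = flux.sum(axis=2)     # sum along the line of sight (the z grid axis)
```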
After applying the constraints outlined in Equations 8 and 11, we create a simulated image by summing the model flux grid along the line of sight to the observer: \(F_{\rm im}(x,y)=\sum_{z}F(\vec{x})_{\rm AGN}\). Finally, a constant AGN background component in the disk is modeled by calculating the distance to a point in the disk at each position in the simulated image:
\[d_{\rm proj}(x,y)=x^{2}+y^{2}+\left(\frac{x(\hat{n}_{\rm gal})_{x}+y(\hat{n}_ {\rm gal})_{y}}{(\hat{n}_{\rm gal})_{z}}\right)^{2} \tag{15}\]
The background flux is then given by:
\[F_{b}(x,y)\propto\frac{1}{(d_{\rm proj}(x,y))^{2}} \tag{16}\]
The final simulated image, calculated by summing the AGN and background components, is presented in the left bottom panel of Figure 9 along with the radial profiles of the model and data in the four cardinal directions. For the North and South profiles we sum along the shape of the bicone, while for the East and West profiles we sum excluding the bicone. Our simple toy model is able to capture the general shape of the H\(\alpha\) flux, along with matching the radial profiles quite well out to \(\sim\)2.5 kiloparsecs. Our toy model is able to capture the shape of the kiloparsec scale feature visible in the southern part of the FP image, which comes from the intersection of the biconical structure with the disk of the galaxy at different position and inclination angles. Given the relative simplicity of our model and its ability to reproduce most of the AGN flux distribution in the data, we conclude it may be a viable solution for the kpc-scale AGN ionization seen in NGC 1068.
## 6 Summary & Discussion
Using SALT RSS Fabry-Perot spectroscopy, we are able to map the [N ii] and H\(\alpha\) complex across \(\sim\)2.6 arcmin\({}^{2}\) of NGC 1068. We measure the ionization state of the gas and, combined with SALT RSS longslit spectroscopy, find that the kiloparsec-scale ionization features are powered by AGN photoionization, in complete agreement with D'Agostino et al. (2018). Our observations confirm the efficacy of using FP spectroscopy to study the extended effect of the AGN in nearby Seyfert II galaxies.
We offer an alternative explanation for the source of the ionization features as photoionization due to past AGN activity. Our analysis suggests that the extended ionization features seen in NGC 1068 are due to enhanced AGN activity in the past and that the AGN history can therefore be understood through spatially resolved studies of the ionized gas. Our toy model, while a relatively good fit to the data, is not definitive evidence that the ionization across the FoV directly traces the AGN's history. However, we believe our model presents a viable possibility which can be explored further in future resolved studies with detailed models that can appropriately back out the AGN intensity as a function of time from the distribution of AGN-ionized nebular gas.
We note that the variation in ionization and H\(\alpha\) intensity across the FoV may be induced by other physical effects. Nebular gas density variations across the disk could potentially replicate the observed pattern. In addition, while optical spectroscopic analysis has placed limits on what fraction of the ionization is induced by shocks in the kiloparsec-scale ionized features, this can be more directly measured from coronal emission line strengths (e.g. [Fe ii]\(\lambda\)1.257\(\mu\)m, [P ii]\(\lambda\)1.188\(\mu\)m, etc.), such as in Terao et al. (2016), to place stricter limits on the contribution of shocks.
We would like to thank the anonymous reviewer for their constructive comments which improved the final paper.
REH acknowledges support from the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-1746060. RCH acknowledges support from the National Science Foundation CAREER Award number 1554584. PV and RR acknowledge support from the National Research Foundation of South Africa.
All of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT) under proposal IDs 2011-2-RSA_OTH-002 and 2015-2-SCI-02. This research made use of saltfppipe, a data reduction package for the SALT Fabry-Perot.
We respectfully acknowledge the University of Arizona is on the land and territories of Indigenous peoples. Today, Arizona is home to 22 federally recognized tribes, with Tucson being home to the O'odham and the Yaqui. Committed to diversity and inclusion, the University strives to build sustainable relationships with sovereign Native Nations and Indigenous communities through education offerings, partnerships, and community service.
SALT(RSS)
Astropy(Collaboration et al., 2013), Matplotlib(Hunter, 2007), MPFIT(Markwardt, 2009), NumPy(Oliphant, 2006; van der Walt et al., 2011; Harris et al., 2020), PyRAF/IRAF(Tody, 1993; Science Software Branch at STScI, 2012), PySALT(Crawford et al., 2010), saltfppipe, SciPy(Virtanen et al., 2020), VorBin(Cappellari & Copin, 2003)
|
2310.12933 | Assisted metrology and preparation of macroscopic superpositions with
split spin-squeezed states | We analyse the conditional states in which one part of a split spin-squeezed
state is left, upon performing a collective spin measurement on the other part.
For appropriate measurement directions and outcomes, we see the possibility of
obtaining states with high quantum Fisher information, even reaching the
Heisenberg limit. This allows us to propose a metrological protocol that can
outperform standard approaches, for example in a situation where the number of
particles in the probe is bounded. The robustness of this protocol is
investigated by considering realistic forms of noise present in cold-atom
experiments, such as particle number fluctuations and imperfect detection.
Ultimately, we show how this measurement-based state preparation approach can
allow for the conditional (\ie heralded) preparation of spin Schr\"{o}dinger's
cat states even when the initial state before splitting is only mildly
squeezed. | Jiajie Guo, Fengxiao Sun, Qiongyi He, Matteo Fadel | 2023-10-19T17:31:37Z | http://arxiv.org/abs/2310.12933v1 | # Assisted metrology and preparation of macroscopic superpositions
###### Abstract
We analyse the conditional states in which one part of a split spin-squeezed state is left, upon performing a collective spin measurement on the other part. For appropriate measurement directions and outcomes, we see the possibility of obtaining states with high quantum Fisher information, even reaching the Heisenberg limit. This allows us to propose a metrological protocol that can outperform standard approaches, for example in a situation where the number of particles in the probe is bounded. The robustness of this protocol is investigated by considering realistic forms of noise present in cold-atom experiments, such as particle number fluctuations and imperfect detection. Ultimately, we show how this measurement-based state preparation approach can allow for the conditional (_i.e._ heralded) preparation of spin Schrodinger's cat states even when the initial state before splitting is only mildly squeezed.
## I Introduction
Spin-squeezed states are of paramount importance for investigating multipartite quantum correlation, as well as for quantum-enhanced metrology applications. Experimentally, these states are nowadays routinely prepared in atomic ensembles, either by controlling atomic collisions, or by light-matter interaction. In these platforms, a number of studies revealed the rich entanglement structure of spin-squeezed states [1], and demonstrated their usefulness for performing measurements with a precision surpassing the standard quantum limit [2].
Recently, the concept of split spin-squeezed states was introduced, where an ensemble of spin-squeezed particles is spatially distributed into individually addressable modes [3]. Through this process, the particle entanglement present in the initial state gives rise to mode entanglement between its partitions [4], highlighting also a strong duality between these two concepts [5]. After their first experimental realisation with Bose-Einstein condensates [6], split spin-squeezed states raised a lot of interest for their possible applications in quantum technologies and fundamental studies. Examples include theoretical investigations of their potential for quantum metrology [7; 8], recently demonstrated experimentally in [9], and for investigating multipartite quantum correlations [10; 11; 12; 13; 14]. Taking this successful example into consideration, it would be crucial to understand whether other quantum information tasks could be accessible by such states.
In this context, we provide here a new metrological protocol enabled by split spin-squeezed states. The idea is based on the fact that, due to the shared quantum correlations between the two parties of the system, performing a local measurement on one of them leaves the other in a conditional state that can have an extremely high sensitivity. This protocol can outperform the standard approach of using spin-squeezed states when the number of particles in the probe, as well as the state preparation time, are limited.
Moreover, our measurement-based state preparation protocol can result in the generation of macroscopic superposition states, also known as spin Schrodinger cat states [15; 16]. Besides their interest for metrology, such states are appealing for fundamental studies of quantum correlations in many-body systems. Their non-classicality is notoriously related to interference fringes and negative regions in the Wigner function, which are typically difficult to prepare experimentally.
In summary, our work analyses a regime of system parameters and resources in which an assisted metrological protocol using split spin-squeezed states can offer an advantage. Moreover, we investigate the use of such states for the heralded preparations of macroscopic superposition states.
Figure 1: **State preparation and metrology with split atomic ensembles.** (a) To prepare non-classical states of many-body systems, traditional approaches rely on implementing a nonlinear dynamic in a trapped ensemble. (b) Instead, our assisted protocol is based on first spatially splitting a mildly entangled ensemble, to then measure one of the two parts. Because of the shared correlations, this results in projecting the other half into a multipartite state that can have strong quantum correlations.
These ideas could be implemented experimentally with Bose-Einstein condensates, where the preparation of cat-like spin states turned out extremely challenging using conventional approaches.
## II Single probe metrology with OAT states
In a typical quantum metrology scheme, the phase shift \(\theta\) to be determined is encoded in a \(N\)-partite probe state \(\rho_{0}\) by a generator \(H\) as \(\rho=e^{-i\theta H}\rho_{0}e^{i\theta H}\). A fundamental limit to the maximum phase sensitivity is provided by the so-called quantum Cramer-Rao bound \(\Delta\theta\geq\Delta\theta_{QCR}\equiv 1/\sqrt{vF_{Q}[\rho,H]}\), where \(F_{Q}[\rho,H]\) is the quantum Fisher information (QFI) and \(v\) is the number of independent measurements [2]. For a pure state, the QFI can be expressed in terms of the variance of \(H\) as \(F_{Q}[\rho,H]=4\text{Var}[\rho,H]\). The standard quantum limit tells us that for all classical states \(F_{Q}[\rho,H]\leq N\), while according to the Heisenberg limit quantum states satisfy \(F_{Q}[\rho,H]\leq N^{2}\). Therefore, observing \(F_{Q}[\rho,H]>N\) implies the presence of metrologically useful entanglement [2]. Moreover, a high QFI can be related to correlations that are even stronger than entanglement, namely Bell correlations [17].
Of paramount importance for preparing atomic ensembles in quantum states with large QFI is the one-axis twisting (OAT) dynamics [18]. Starting from a \(N\)-partite spin coherent state pointing along the \(+x\) direction, the OAT Hamiltonian \(H=\hbar\chi S_{z}^{2}\) gives after an evolution time \(t\) the state
\[|\psi(\mu)\rangle=\frac{1}{\sqrt{2^{N}}}\sum_{k=0}^{N}\sqrt{\binom{N}{k}}e^{-i\frac{\mu}{2}(N/2-k)^{2}}|k\rangle, \tag{1}\]
where \(\mu=2\chi t\) is an adimensional parameter, and \(|k\rangle\) is the Dicke state with \(k\) excitations.
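For concreteness, the state of Eq. (1) is easily constructed numerically in the Dicke basis; a minimal sketch (the particle number and squeezing strength below are arbitrary illustrative values):

```python
import numpy as np
from scipy.special import comb

def oat_state(N, mu):
    """Amplitudes of the one-axis-twisted state of Eq. (1) in the Dicke basis |k>."""
    k = np.arange(N + 1)
    amps = np.sqrt(comb(N, k)) / np.sqrt(2.0**N)
    return amps * np.exp(-1j * (mu / 2.0) * (N / 2.0 - k) ** 2)

psi = oat_state(50, 0.1)                       # N and mu are illustrative values
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```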
The properties of state Eq. (1) have been extensively investigated theoretically [2; 19]. Notably, expectation values of the collective spin operator can be computed analytically, also for high moments [20]. This allows us to obtain analytical expressions also for the eigenvalues of the \(3\times 3\) covariance matrix \(\Gamma_{ij}=\text{Cov}[S_{i},S_{j}]\), with \(S_{i}\in\{S_{x},S_{y},S_{z}\}\). The basis change that diagonalizes \(\Gamma\) is of clear physical intuition, and often convenient to use. Together with the polarization direction \(x=x^{\prime}\), we introduce the squeezing direction \(z^{\prime}=-\sin\theta^{\prime}y+\cos\theta^{\prime}z\) and anti-squeezing directions \(y^{\prime}=\cos\theta^{\prime}y+\sin\theta^{\prime}z\), with
\[\theta^{\prime}=\frac{1}{2}\arctan\left(\frac{4\sin\left(\frac{\mu}{2}\right)\cos^{N-2}\left(\frac{\mu}{2}\right)}{1-\cos^{N-2}(\mu)}\right), \tag{2}\]
as the directions that respectively minimize (\(z^{\prime}\)) and maximize (\(y^{\prime}\)) the second moment of the collective spin.
The maximum eigenvalue of the covariance matrix \(\Gamma\) is also proportional to the QFI of the state Eq. (1). One obtains a QFI larger than \(N\) for \(0<\mu<2\pi\), even reaching \(N^{2}\) for \(\mu=\pi\), when a "Schrodinger cat" state is obtained [15].
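The covariance matrix \(\Gamma\) and the direction-optimised QFI can be checked numerically with the standard spin-\(j\) matrix representation (\(j=N/2\)); a sketch, intended only as a cross-check of the statements above:

```python
import numpy as np

def spin_ops(N):
    """Collective spin matrices S_x, S_y, S_z in the Dicke basis |k>, k = 0..N (j = N/2)."""
    j = N / 2.0
    m = np.arange(N + 1) - j                  # S_z eigenvalue of |k> is m = k - j
    c = np.sqrt(j * (j + 1) - m[:-1] * (m[:-1] + 1))
    sp = np.diag(c, -1)                       # raising operator S_+ maps |k> to |k+1>
    sm = sp.conj().T
    return (sp + sm) / 2.0, (sp - sm) / 2.0j, np.diag(m)

def qfi_max(psi, N):
    """4 x largest eigenvalue of the spin covariance matrix, i.e. the QFI optimised over directions."""
    ops = spin_ops(N)
    mean = [np.vdot(psi, S @ psi).real for S in ops]
    gamma = np.empty((3, 3))
    for a, Sa in enumerate(ops):
        for b, Sb in enumerate(ops):
            cov = 0.5 * np.vdot(psi, (Sa @ Sb + Sb @ Sa) @ psi).real
            gamma[a, b] = cov - mean[a] * mean[b]
    return 4.0 * np.linalg.eigvalsh(gamma).max()

# Using oat_state from the previous snippet:
# qfi_max(oat_state(20, np.pi), 20) is ~400 = N**2 at the cat-state point.
```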
Experimentally, the OAT dynamics is implemented in e.g. ion traps through light-mediated interactions [21] or BECs through atomic elastic collisions [22], and it is routinely used for the preparation of spin-squeezed states. These enabled numerous demonstrations of quantum-enhanced metrology, such as measuring magnetic fields [23], improving frequency resolution in atomic clocks [24; 25], and realizing squeezed matter-wave interferometry [26].
If we consider a metrological application where the number of particles in the probe is limited to some maximum number, we also set a limit to the achievable QFI (_i.e._ the Heisenberg limit), and thus to the sensitivity. However, one might argue that the state preparation could involve more particles than the one used in the probe itself, and ask whether this could be used to provide some advantage. While it is clear that if the ancillary particles are just discarded no advantage can be obtained, it is not trivial to see whether the probe sensitivity can be improved by a partial characterization of the ancillary particles' state. Here, by partial characterization we mean the information that can be obtained from some measurement of experimentally practical implementation, such as the result of a collective measurement performed on the ensemble of ancillary particles.
This question can be refined even further, by considering a more realistic situation that includes the relevant noise sources. In fact, during the preparation of squeezed BECs there are inevitable decoherence mechanisms resulting from technical and intrinsic noise [27; 28]. The former can originate from imperfections in the implementation, while the latter is fundamental as it originates from particle losses. For BECs, these noise sources limit the OAT evolution to short times (\(\mu<N^{-2/3}\)).
## III Assisted metrology
In order to present our metrological protocol, we consider the case of an atomic ensemble in which the OAT dynamics is followed by a spatial separation of the particles into two distinct partitions [3], see Fig. 1. This last step can be realized by modifying the trapping potential to a double-well [29], or by exploiting additional internal states of the atoms [30], and it can formally be described by a beam-splitter transformation. The resulting split spin-squeezed state can thus be written as [3]
\[|\Phi(\mu)\rangle=\frac{1}{2^{N}}\sum_{N_{A}=0}^{N}\sum_{k_{A}=0}^{N_{A}}\sum_{k_{B}=0}^{N_{B}}\sqrt{\binom{N}{N_{A}}\binom{N_{A}}{k_{A}}\binom{N_{B}}{k_{B}}}\\ \times e^{-i\frac{\mu}{2}(N/2-k_{A}-k_{B})^{2}}\ket{k_{A}}_{N_{A}}\ket{k_{B}}_{N_{B}}, \tag{3}\]
where \(N_{\alpha}\) is the number of particles in partition \(\alpha\in\{A,B\}\) with \(N_{A}+N_{B}=N\), and \(\ket{k_{\alpha}}_{N_{\alpha}}\) is the \(N_{\alpha}-\)particle Dicke state with \(k_{\alpha}\) excitations. Crucial to this state is that the multipartite entanglement generated by the OAT dynamics is partially "converted" by the spatial splitting into mode entanglement between the \(A\) and \(B\) partitions.
Split spin-squeezed states have already been realized experimentally [6; 31], and are thus becoming relevant for practical metrological applications [9]. In the protocol we consider, \(N_{B}\) particles constitute the probe, whose sensitivity might depend on the operations performed on the \(N_{A}\) ancillary particles. In the following we investigate the probe's conditional states obtained upon performing a collective spin measurement on \(A\), and discuss in which scenarios this assisted protocol can provide a better metrological performance than the standard OAT dynamics.
### Ideal scenario
Let us consider the situation in which one performs a measurement of the number of ancilla particles, and of their collective spin \(S_{\mathbf{n}}^{A}\) along direction \(\mathbf{n}\). Note that these two physical quantities can be measured simultaneously, as the associated operators commute. Obtaining as result \((N_{A},l_{A})\), the probe particles are left in the (unnormalized) state
\[|\Phi(\mu)^{B}\rangle={}^{\mathbf{n}}_{N_{A}}\langle l_{A}|\Phi(\mu)\rangle\,, \tag{4}\]
where \({}^{\mathbf{n}}_{N_{A}}\langle l_{A}|\) is the \(N_{A}\)-particle Dicke state with \(l_{A}\) excitations for \(S_{\mathbf{n}}^{A}\). The probability for this state to occur is given by
\[p(l_{A},N_{A}|\mathbf{n})=\frac{1}{2^{2N}}\sum_{k_{A}=0}^{N_{A}}\sum_{k_{A}^{\prime}=0}^{N_{A}}\sum_{k_{B}=0}^{N-N_{A}}\binom{N}{N_{A}}\binom{N-N_{A}}{k_{B}}\sqrt{\binom{N_{A}}{k_{A}}\binom{N_{A}}{k_{A}^{\prime}}}\\ \times e^{-i\frac{\mu}{2}(N/2-k_{A}-k_{B})^{2}}e^{i\frac{\mu}{2}(N/2-k_{A}^{\prime}-k_{B})^{2}}\,{}^{\mathbf{n}}_{N_{A}}\langle l_{A}|k_{A}\rangle_{N_{A}}\,{}_{N_{A}}\langle k_{A}^{\prime}|l_{A}\rangle_{N_{A}}^{\mathbf{n}}. \tag{5}\]
This expression allows us to introduce the probability of obtaining result \(l_{A}\) from a measurement of \(\hat{S}_{\mathbf{n}}^{A}\) on \(N_{A}\) particles, namely \(p_{N_{A},\mathbf{n}}(l_{A})=p(l_{A},N_{A}|\mathbf{n})/p(N_{A})\), where \(p(N_{\alpha})=2^{-N}\binom{N}{N_{\alpha}}\) is the probability of having \(N_{\alpha}\) particles in mode \(\alpha\in\{A,B\}\).
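A numerical sketch of the post-selected state and of the conditioning step is straightforward for a measurement along \(z\) (a general direction \(\mathbf{n}\) additionally requires a collective spin rotation on \(A\), not shown); the coefficients below are those of Eq. (3) restricted to a fixed \(N_{A}\):

```python
import numpy as np
from scipy.special import comb

def split_state(N, mu, N_A):
    """Joint coefficients c[k_A, k_B] of Eq. (3), post-selected on a given N_A and renormalised."""
    N_B = N - N_A
    kA = np.arange(N_A + 1)[:, None]
    kB = np.arange(N_B + 1)[None, :]
    c = np.sqrt(comb(N_A, kA) * comb(N_B, kB)) \
        * np.exp(-1j * (mu / 2.0) * (N / 2.0 - kA - kB) ** 2)
    return c / np.linalg.norm(c)

def condition_on_zA(c, l_A):
    """State of B and outcome probability after measuring S_z on A with l_A excitations."""
    psi_B = c[l_A, :].copy()
    prob = np.vdot(psi_B, psi_B).real
    return psi_B / np.sqrt(prob), prob

c = split_state(N=100, mu=0.1, N_A=50)
psi_B, p = condition_on_zA(c, l_A=25)       # l_A = N_A / 2
# The sensitivity of psi_B can then be evaluated, e.g. with qfi_max(psi_B, 50)
# from the earlier snippet.
```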
For a given \(N_{A}\) it is worth investigating the conditional states Eq. (4), their QFI, and their probability to occur Eq. (5). From our analytical expression it is possible to consider arbitrary measurement directions \(\mathbf{n}\) and results \(I_{A}\), but in the following we will focus on discussing the parameters we found most interesting.
We start considering a collective spin measurement on the \(yz-\)plane performed locally on \(A\), so that \(S_{\mathbf{a}}^{A}=\sin\theta_{A}S_{y^{\prime}}^{A}+\cos\theta_{A}S_{z^{\prime}} ^{A}\), where \(\theta_{A}\) is the angle between the measurement and the squeezing direction \(z^{\prime}\). For \(I_{A}=N_{A}/2\), we show in Fig. 2a the QFI of the conditional probe states as a function of \(\theta_{A}\). Interestingly, for small values of \(\mu\) conditional states with large QFI are obtained for \(\theta_{A}=0\) (_i.e._ the squeezing direction \(z^{\prime}\)), while for larger values of \(\mu\) a large QFI is obtained for \(\theta_{A}\approx\pi/2\) (_i.e._ the antisqueezing direction \(y^{\prime}\)). In order to understand better this behaviour, we look at the Wigner functions of the conditional probe states resulting from different measurement angles and levels of squeezing. Interestingly, we observe that a measurement along \(\theta_{A}\approx 0\) results in conditional states that resemble spin squeezed and oversqueezed states, while a measurement along \(\theta_{A}\approx\pi/2\) results in conditional states that resemble a superposition of coherent spin states, i.e. a spin cat state. To analyse the probability \(p(I_{A})\equiv p_{N_{A},\mathbf{n}}(I_{A})\) of these states to occur, we plot in Figs. 2b,c the value of Eq. (5) for different levels of squeezing. As \(\mu\) increases, if the measurement is performed along the anti-squeezing direction \(y^{\prime}\), \(p(I_{A})\) tends to spread uniformly over all range of \(I_{A}\) (see Fig. 2b), while for a measurement along the squeezing direction \(z^{\prime}\), then \(p(I_{A})\) gets peaked around \(I_{A}=N_{A}/2\) (see Fig. 2c). For both measurement directions, and for different results \(I_{A}\), we can then compute the QFI of the conditional states, see Figs. 2d,e.
With these in hand, we want to compare the metrological advantage given by the conditional probe states just investigated, and an OAT state. The resources we keep constrained are the number of atoms in the probe state, and the adimensional squeezing parameter \(\mu\). For a nonlinearity \(\chi\) independent of the particle number, the latter constraint corresponds to keeping fixed the state preparation time \(t=\mu/2\chi\). In Fig. 2f we compare the value of \(F_{Q}/N_{B}\) for the different conditional states just discussed, with the one for an OAT state with \(N_{B}=50\) particles. This comparison is meaningful for a scenario where the number of particle in the probe is limited, but additional ancillary particles not interacting with the field to be estimated can be included in the state preparation and measurement. Interestingly, we see that there are situations where the conditional states reach much higher \(F_{Q}/N_{B}\) than the OAT state, and that one can even saturate the Heisenberg limit, i.e. \(F_{Q}/N_{B}\approx N_{B}\), for \(\mu\ll\pi\).
In particular, when the measurement direction is aligned with the anti-squeezing direction \(y^{\prime}\), we obtain for relatively large values of \(\mu\) (\(\mu>0.4\) for \(N_{A}=N_{B}=50\)) conditional states with \(F_{Q}/N_{B}\) that is in general high compared to a simple OAT state. Interestingly, we also observe large fluctuations of \(F_{Q}/N_{B}\) for \(\mu>0.5\), and that it is possible to reach \(F_{Q}/N_{B}\approx N_{B}\) for \(\mu\ll\pi\) (see Supplementary Material [32] Sec. 2). On the other hand, when the measurement is aligned with the squeezing direction \(z^{\prime}\), the value of \(F_{Q}/N_{B}\) obtained for the conditional states roughly follows the one for an OAT state, apart from small \(\mu\). This regime is of particular interest, as i) these values of \(\mu\) are the ones typically explored in cold atom experiments, ii) in this regime one can exceed the \(F_{Q}/N_{B}\) of an OAT state, and iii) this occurs with high probability, since \(p(l_{A})\) is peaked around \(l_{A}=N_{A}/2\). Moreover, we will show in the following section that this configuration is also robust to noise, in the sense of particle number fluctuations and imperfect detection.
We then consider a collective spin measurement along \(x\) performed locally on \(A\). The probability to obtain a certain measurement result \(I_{A}\) strongly depends on the amount of squeezing \(\mu\), Fig. 3a. In fact, for \(\mu=0\) the state is fully polarized along \(x\), and one has \(I_{A}=N_{A}\) with unit probability, but when \(\mu\) increases the state starts to 'wrap around' the Bloch sphere, resulting in a non-zero probability for all possible \(I_{A}\). The QFI for the associated conditional states is illustrated in Fig. 3b, showing a large variation even reaching the Heisenberg limit. If the result \(I_{A}=N_{A}\) is obtained, the conditional probe state is a mildly squeezed spin state. However, as soon as one obtains \(I_{A}<N_{A}\), the resulting conditional state resembles a spin cat state, Fig. 3c. Note that for \(I_{A}>N_{A}/2\) the
angular separation of the coherent spin states participating in the superposition, and therefore also the number of interference fringes, scales with \(N_{A}-l_{A}\). Moreover, remember that even if conditional states with \(F_{Q}/N_{B}\approx N_{B}\) are possible, these occur with very small probabilities. To compare the metrological advantage given by these conditional states and an OAT state we show in Fig. 3d the corresponding QFI values. For relatively large values of \(\mu\) (\(\mu>0.3\) for \(N_{A}=N_{B}=50\)) we obtain conditional states with \(F_{Q}/N_{B}\) that strongly fluctuates, taking values both larger and lower than the one of an OAT state. The behaviour is much more regular for small values of \(\mu\), where we can see a regime in which conditional states with \(l_{A}<N_{A}\) give a \(F_{Q}/N_{B}\) growing in time much faster than the one of an OAT state (see \(\mu<0.1\) in Fig. 3d). This regime is the one resulting in conditional states that closely resemble spin cat states.
In Sec. 1 of the Supplementary Material [32] we give more details about the states considered so far, while in Sec. 2 of [32] we show Wigner functions of the conditional states resulting from several other measurement directions and outcomes, together with their properties.
### Noisy scenarios
So far we have considered a fixed \(N_{A}\), but the splitting process resulting in the state Eq. (3) is associated to partition noise which makes \(N_{A}\) and \(N_{B}=N-N_{A}\) fluctuate. For the equal (50:50) splitting we considered, the probability to observe \(N_{a}\) particles in mode \(\alpha=A,B\) is simply given by the Binomial distribution \(p(N_{\alpha})\). Concretely, this means that in an experiment the probe states will have a fluctuating number of particles and, therefore, a fluctuating sensitivity. In a practical scenario it would be extremely inefficient to post-select only experimental realisations with a given \(N_{B}\), therefore we might ask what is the average sensitivity if all realisations are considered. In each realisation, \(A\)'s measurement gives knowledge of \(N_{A}\) and \(l_{A}\), which would allow us to perform a local optimisation on \(B\) side to exploit the maximum sensitivity of the conditional state. We can thus define the average QFI density as
\[\left\langle\frac{F_{Q}}{N_{B}}\right\rangle_{l_{A}}=\sum_{N_{B}=0}^{N}p(N_{B} )\frac{F_{Q}[\rho_{l_{A},N_{A}|\mathbf{n}}^{B}]}{N_{B}}, \tag{6}\]
Figure 3: **Measurements along the x direction.** Properties of the conditional states obtained from a split spin-squeezed state with \(N=100,N_{A}=N_{B}=N/2\). (a) Probability of measuring \(l_{A}\) for different levels of squeezing, and (b) the \(F_{Q}/N_{B}\) of the associated conditional states. (c) Selected Wigner functions showing cat-like states of different size. (d) Comparison between \(F_{Q}/N_{B}\) of conditional states and of OAT states, as a function of the squeezing \(\mu\). Here, the comparison is for a fixed number of particles \(N_{B}\) in the probe state.
Figure 2: **Measurements on the xy-plane.** Properties of the conditional states obtained from a split spin-squeezed state with \(N=100,N_{A}=N_{B}=N/2\). (a) For \(l_{A}=N_{A}/2\), \(F_{Q}/N_{B}\) as a function of the measurement direction \(\theta_{A}\), and selected Wigner functions showing squeezed and cat-like states. Fixing \(\theta_{A}\) such that the measurement direction is either \(y^{\prime}\) or \(z^{\prime}\), we show the probability of measuring \(l_{A}\), panels (b,c) respectively, and the \(F_{Q}/N_{B}\) of the associated states, panels (d,e). For \(l_{A}=N_{A}/2\), we show in (f) a comparison between \(F_{Q}/N_{B}\) of conditional states and of OAT states, as a function of the squeezing \(\mu\). Here, the comparison is for a fixed number of particles \(N_{B}\) in the probe state.
where \(F_{B}[\rho^{B}_{l_{A},N_{B}|\mathbf{n}}]\) is the QFI of the conditional probe state \(\rho^{B}_{l_{A},N_{B}|\mathbf{n}}\). The latter is obtained from a measurement on \(A\) along the direction specified by \(\mathbf{n}\), and giving as result \(l_{A}\). However, note that \(l_{A}\) has now to be a function of \(N_{A}\), since the size of system \(A\) is fluctuating. For example, we could compute Eq. (6) for the case when \(l_{A}=\lceil N_{A}/2\rceil\), which we have seen to be the most likely result for measurements on the \(yz\) plane and small \(\mu\), see Figs. 2b,c. Remarkably, we observe that there is no appreciable difference between a numerical evaluation of \(\langle F_{Q}/N_{B}\rangle\) for measurements on the \(yz\) plane and \(l_{A}=\lceil N_{A}/2\rceil\), and the value of \(F_{Q}/N_{B}\) when \(N_{A}=N_{B}=N/2\) and \(l_{A}=N_{A}/2\). In other words, averaging \(F_{Q}/N_{B}\) over the distribution \(p(N_{B})\) seems to give a result compatible with the value of \(F_{Q}/N_{B}\) when \(N_{B}=N/2\). This could be explained by noting that: i) for large \(N\) the distribution \(p(N_{B})\) is sharply peaked and symmetric around \(N_{B}=N/2\), and ii) in the averaging, the \(F_{Q}/N_{B}\) of a state with \(N_{B}=N/2+k\) particles compensates the one of a state with \(N_{B}=N/2-k\) particles, resulting in a value very close to the \(F_{Q}/N_{B}\) of a state with \(N_{B}=N/2\) particles. Perhaps surprisingly, we find that this correspondence holds for any value of \(\mu\), and for different choices of the function for \(l_{A}\) (e.g. \(l_{A}=N_{A}-1\) for measurements of \(S^{A}_{x}\)). More details about this comparison can be found in Sec. 2.2 of Supplementary Material [32]. There, we also compare another possible definition of average QFI, in the case where no measurement optimisation is done on \(B\) side depending on the value of \(N_{B}\) (still, the same post-selection according to \(l_{A}(N_{A})\) is applied). Even in this scenario, we observe that the average QFI is compatible with the value of \(F_{Q}/N_{B}\) when \(N_{B}=N/2\), which can be attributed to the fact that conditional states with different \(N_{B}\approx N/2\) can appear very similar.
The second type of noise that we analyze is a measurement noise that results in errors on the observed value of \(l_{A}\). Experimentally, this can originate from imperfect atom number counting, which always happens as detectors have finite resolution. We model this noise as a Gaussian distribution centered around \(l_{A}\) and of standard deviation \(\sigma\), such that if the value \(l^{*}_{A}\) is observed there is a probability \(p_{l_{A},\sigma}(l^{*}_{A})=(2\pi\sigma^{2})^{-1/2}e^{-(l^{*}_{A}-l_{A})^{2}/ 2\sigma^{2}}\) for the true value to be \(l_{A}\).
Analogously to the previous case, we define the QFI averaged over different values of \(l_{A}\) as
\[\left\langle\frac{F_{Q}}{N_{B}}\right\rangle_{l^{*}_{A}}=\frac{F_{Q}[\mathcal{ N}\sum_{l_{A}=0}^{N_{A}}p_{N_{A},\mathbf{n}}(l_{A})p_{l_{A},\sigma}(l^{*}_{A}) \rho^{B}_{l_{A},N_{B}|\mathbf{n}}]}{N_{B}}, \tag{7}\]
where \(\mathcal{N}^{-1}=\sum_{l_{A}=0}^{N_{A}}p_{N_{A},\mathbf{n}}(l_{A})p_{l_{A}, \sigma}(l^{*}_{A})\) is a normalization parameter. We illustrate in Fig. 4 how this quantity varies as a function of \(\sigma\), and for different measurement settings on \(A\).
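The mixture entering Eq. (7) (and, later, Eq. (8)) and its QFI can be evaluated numerically; the sketch below reuses `split_state` from the previous snippet, restricts the measurement on \(A\) to the \(z\) direction for simplicity, and uses the standard eigen-decomposition formula for the QFI of a mixed state:

```python
import numpy as np

def qfi_mixed(rho, H, tol=1e-12):
    """QFI of a density matrix rho for generator H (eigen-decomposition formula)."""
    vals, vecs = np.linalg.eigh(rho)
    Hm = vecs.conj().T @ H @ vecs             # generator in the eigenbasis of rho
    fq = 0.0
    for i in range(len(vals)):
        for j in range(len(vals)):
            s = vals[i] + vals[j]
            if s > tol:
                fq += 2.0 * (vals[i] - vals[j]) ** 2 / s * abs(Hm[i, j]) ** 2
    return fq

def noisy_conditional_rho(N, mu, N_A, l_star, sigma):
    """Normalised conditional state of B for an imperfect z-measurement on A, cf. Eqs. (7)-(8)."""
    c = split_state(N, mu, N_A)               # from the previous snippet
    dim_B = N - N_A + 1
    rho = np.zeros((dim_B, dim_B), dtype=complex)
    for l_A in range(N_A + 1):
        w = np.exp(-0.5 * ((l_star - l_A) / sigma) ** 2)   # Gaussian detection noise
        psi = c[l_A, :]
        rho += w * np.outer(psi, psi.conj())   # |psi|^2 already carries p(l_A)
    return rho / np.trace(rho).real

# rho = noisy_conditional_rho(N=100, mu=0.1, N_A=50, l_star=25, sigma=1.0)
# qfi_mixed(rho, spin_ops(50)[1])             # e.g. generator S_y from the earlier snippet
```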
Figure 4a shows that the noise we are considering affects differently the conditional states obtained upon measurement of \(S^{A}_{\gamma}\) or \(S^{A}_{\gamma}\). In the first case, it appears that there exists a critical level of noise \(\sigma^{\star}\) after which the average QFI is the one of a mixture of coherent spin states. On the other hand, this is not true in the second case, where we see the average QFI decreasing only asymptotically. We can understand this behaviour by looking at the Wigner's in Fig. 2a, where it is reasonable to expect that the Gaussian-like conditional states resulting from \(S^{A}_{\gamma}\) measurements are more robust than the cat-like conditional states resulting from \(S^{A}_{\gamma^{\prime}}\) measurements. For a given amount of noise \(\sigma\), it is interesting to know how the average QFI changes as a function of the squeezing \(\mu\). This is illustrated in Fig. 4b, for different levels of noise \(\sigma\). Interestingly, while the fragility of the conditional states obtained from \(S^{A}_{\gamma^{\prime}}\) measurements results in an average QFI that can quickly fall below the value of the QFI for an OAT state, conditional states obtained from \(S^{A}_{\gamma^{\prime}}\) measurements seem able to achieve an average QFI larger than the one of an OTA state for small \(\mu\), even if \(\sigma\) is relatively large. This result further supports the statement made in the previous section, saying that the regime of small \(\mu\) and \(S^{A}_{\gamma}\) measurements is of great interest for assisted metrology tasks, since it results in conditional states with high sensitivity and noise robustness.
Figure 4c shows how the measurement noise we are considering affects the conditional states obtained upon measurement of \(S^{A}_{x}\), for different values of the result \(l^{*}_{A}\). As expected, the average QFI of conditional states with larger \(l^{*}_{A}\) decays faster as the noise \(\sigma\) increases, since such states are cat-like states with fine structures in the Wigner function that are rapidly washed-out by noise, see Fig. 3c. Also in this scenario, it is interesting to know how the average QFI changes as a function of the squeezing \(\mu\), for a fixed amount of noise \(\sigma\). This is illustrated in Fig. 4d, for conditional states with different \(l^{*}_{A}\). For small values of \(\mu\) it is possible to see that the QFI of an OAT state can be surpassed by the considered conditional states, given a \(l^{*}_{A}<N\) and a small enough \(\sigma\).
In addition, a further analysis on imperfection on atom counting on both \(N_{\alpha}\) and \(l_{A}\) is discussed in Supplementary Material [32] Sec. 3.
## IV Measurement-based preparation of spin cat states
Schrodinger cat states are regarded as powerful resources for quantum metrology, error-corrected quantum computing, and fundamental studies. While cat states have been successfully implemented with trapped ions [33; 34], Rydberg atoms [35], optical and microwave photons [36; 37; 38; 39; 40], and mechanical oscillators [41], their realization in atomic ensembles has remained elusive. Difficulties lie in engineering the correct nonlinear interactions, suppressing noise mechanisms (such as particle losses and phase noise), and performing measurements with high resolution.
It is known that the OAT dynamics Eq. (1) results in a spin cat state at \(\mu=\pi\)[15]. Nevertheless, following this simple strategy is unrealistic for BECs, due to the severe particle losses that would occur during the long dynamics. Approaches to mitigate these have been investigated [16], even if their experimental implementation remains challenging. Alternatively, ideas have been proposed to prepare macroscopic superpositions between two modes of a spin-1 BEC with dynamics governed by spin-exchanging collisions [42].
In the analysis of conditional states we presented, we have seen that spin cat states can be obtained as a result of measurement along suitable directions (e.g. \(S_{A}^{A}\) or \(S_{x}^{A}\)), if appropriate results are obtained, see Figs. 2,3. We can thus propose to use this approach for the heralded preparation of macroscopic superposition states in spin-\(1/2\) BECs. Crucially, even if this protocol demands a high-resolution in counting the number of particles, it has the advantage of being potentially fast, as the initial squeezed state that needs to be prepared requires an OAT evolution parameter \(\mu\) much smaller than \(\pi\) for \(S_{x}^{A}\) measurements, see Fig. 3.
To understand the robustness of the protocol we propose, we investigate how finite measurement resolution affects the prepared state. From Figs. 3,5 we can see that after a measurement of \(S_{x}^{A}\) different cat states are obtained depending on the result \(l_{A}\). In particular, these states have different size (i.e. separation between the two coherent spin state components), different parity (i.e. Wigner function value at the origin), and different orientation on the \(yz\)-plane. Therefore, if in an experiment the measured \(l_{A}^{*}\) differs from the actual \(l_{A}\) because of noise, the resulting conditional state will be a statistical mixture of different cat states. If this noise is too large, averaging over different cat states would result in a washing-out of the interference fringes, and thus of the quantum coherence of the superposition. Importantly, to estimate what is the amount of noise that can be tolerated it is not enough to take into account the variance \(\sigma^{2}\) of the Gaussian distribution modeling uncertainties in \(l_{A}\), but also the probability that a certain \(l_{A}\) occurs for the parameters considered (See Eq. (14) of Supplementary Material [32]). For this reason, if result \(l_{A}^{*}\) is obtained, the conditional mixed state takes the form
\[\rho(l_{A}^{*},\sigma)=\mathcal{N}\sum_{l_{A}=0}^{N_{A}}p_{N_{A},S_{x}^{A}}(l_ {A})p_{l_{A},\sigma}(l_{A}^{*})\rho^{B}_{l_{A},N_{A}|\mathbf{n}}. \tag{8}\]
In Fig. 5a we plot the Wigner function of such states for different values of \(l_{A}^{*}\) and \(\sigma\). Even if the precise value of the noise that can be tolerated depends on the \(l_{A}^{*}\) considered, we observe that around \(\sigma\approx 0.7\) the interference fringes characterising the coherent superposition vanish. This value corresponds to an approximate probability of \(p\approx 0.2\) for the real value of \(l_{A}\) to be \(l_{A}^{*}\pm 1\).
A more quantitative analysis of the effect of noise is obtained by looking at the negativity of the Wigner function. For continuous variable systems, Wigner negativity is related to non-Gaussianity and non-classicality of the state [43], and it is known to be a resource for quantum information tasks [44; 45; 46]. For spin systems, however, the Wigner function is defined on a (generalized Bloch) sphere, and the definition of non-Gaussianity and negativity is subtle. Here we follow Ref. [47], and compute the Wigner negativity as
\[WN(\rho)=\frac{1}{2}\left(\frac{2j+1}{4\pi}\int_{\theta=0}^{\pi}\int_{\phi=0} ^{2\pi}|W_{\rho}(\theta,\phi)|\sin\theta d\theta d\phi-1\right), \tag{9}\]
Figure 5: **Heralded generation of cat states.** Non-classical features of the conditional states obtained from a split spin-squeezed state with \(N=100,N_{A}=N_{B}=N/2\), after measuring \(S_{x}^{A}\). We show, as a function of the level of detection noise \(\sigma\), (a) Wigner functions of conditional states associated to different \(l_{A}^{*}\), and (b) Wigner function negativity as defined in Eq.(9).
where \(W_{\rho}(\theta,\phi)\) is the value of the Wigner function at point \((\theta,\phi)\) on the Bloch sphere, see Sec. 4 of the Supplementary Material [32]. In Fig. 5b we show how the negativity of the conditional mixed states' Wigner function changes as a function of the amount of noise \(\sigma\), for different values of \(I_{A}^{*}\). For small \(\sigma\), we have that \(\rho\) stays very close to a pure state with \(I_{A}^{*}\approx I_{A}\), so that its negativity stays constant until a critical value of \(\sigma\) (approximately \(0.4\) in Fig. 5b) where conditional states with \(I_{A}^{*}\pm 1\) start to contribute. After this point the negativity decreases until the point where it completely vanishes (approximately \(0.5-0.9\) in Fig. 5b).
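Given samples of the Wigner function on a \((\theta,\phi)\) grid (computing \(W_{\rho}\) itself for a spin-\(j\) state requires a multipole expansion and is not shown), the negativity of Eq. (9) reduces to a simple quadrature; a sketch assuming uniformly spaced grids:

```python
import numpy as np

def wigner_negativity(W, theta, phi, j):
    """Eq. (9): Wigner negativity from samples W[i, k] on uniform (theta, phi) grids."""
    dtheta = theta[1] - theta[0]
    dphi = phi[1] - phi[0]
    integral = (np.abs(W) * np.sin(theta)[:, None]).sum() * dtheta * dphi
    return 0.5 * ((2 * j + 1) / (4 * np.pi) * integral - 1.0)
```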
## V Conclusions
We analysed the conditional states resulting from a local measurement on one of the two parts of a split spin-squeezed state. The multipartite entanglement present in these states, combined with the local measurement, leads to a rich family of non-trivial conditional states exhibiting high Fisher information or large Wigner negativities. These have been investigated quantitatively, for different local measurement directions and outcomes, both without and with the presence of noise. The latter was chosen to take into account particle number fluctuations in the conditional states, which are intrinsic in the probabilistic (beam-splitter-like) splitting process, as well as measurement imperfections. We find that the observed non-classical properties are robust to noise, and therefore of interest for applications in quantum technologies.
In this context, we propose a protocol that can be used to enhance the sensitivity of a measurement probe in a scenario where its size, as well as the state preparation time, are limited. Our idea is based on the fact that, if the probe is entangled with an ancilla system, a local measurement in the latter can prepare the probe in conditional states with much higher sensitivity. Concretely, we analyse a scenario where a split spin-squeezed state is shared between the probe and the ancilla, and identified the range of system parameters and local measurements providing a metrological advantage.
Besides this practical application, we note that the measurement-based state preparation protocol we investigate can be used to generate spin cat states. These macroscopic superposition states are of interest not only for metrology, but also for fundamental research. We quantify the non-classicality of the conditional states that can be prepared through a measure of their Wigner function negativity, and investigate its robustness to noise.
A natural platform where our ideas could be realized are ultracold atomic ensembles, where spin-squeezed states are routinely prepared for a number of applications. More recently, the spatial splitting of such states was also demonstrated [6; 31], thus opening the path to the experimental study of split spin-squeezed states [3]. Apart from shedding light on multipartite quantum correlations [10; 11; 12], it is of interest to investigate the usefulness of such states for quantum technologies, such as for quantum teleportation [48], and metrology [7; 8]. Our study brings a contribution in these interesting directions.
**Acknowledgements.-** This work is supported by the National Natural Science Foundation of China (Grants No. 12125402, No.11975026 and No.12147148) and the Beijing Natural Science Foundation (Z190005). JG acknowledges financial support from the China Scholarship Council (Grant No. 202106010192). FS acknowledges the China Postdoctoral Science Foundation (Grant No. 2020M680186). MF was supported by The Branco Weiss Fellowship - Society in Science, administered by the ETH Zurich.
|
2307.05658 | Model-checking in the Foundations of Algorithmic Law and the Case of
Regulation 561 | We discuss model-checking problems as formal models of algorithmic law.
Specifically, we ask for an algorithmically tractable general purpose
model-checking problem that naturally models the European transport Regulation
561, and discuss the reaches and limits of a version of discrete time stopwatch
automata. | Moritz Müller, Joost J. Joosten | 2023-07-11T16:22:14Z | http://arxiv.org/abs/2307.05658v1 | # Model-checking in the Foundations of Algorithmic Law and the Case of Regulation 561
###### Abstract
We discuss model-checking problems as formal models of algorithmic law. Specifically, we ask for an algorithmically tractable general purpose model-checking problem that naturally models the European transport Regulation 561 ([49]), and discuss the reaches and limits of a version of discrete time stopwatch automata.
## 1 Model-checking and algorithmic law
The European transport Regulation 561 [49] concerns activities of truck drivers as recorded by tachographs. A tachograph recording determines for each time unit the activity of the driver, which can be _driving_, _resting_ or _doing other work_. Regulation 561 is a complex set of articles that limits driving and work time by prescribing various types of rest periods. The regulation prescribes that the time units are minutes, so a tachograph recording of 2 months determines a sequence of activities of length 87840. It is clear that the legality of such a recording can only be judged with the help of an algorithm.
By the application of a law to a case we mean the decision whether the case is legal according to that law or not. By an _algorithmic law_ we mean a law whose application to a case is executed by an algorithm. Instead of designing one algorithm per law we are interested in _general purpose_ algorithms: these take as input both a case from a set of cases of interest, and a law from a set of laws of interest, and decide whether the given case is legal according to the given law or not. In order to present cases and laws of interest as inputs to an algorithm, both have to be suitably _formalized_.
### Computational problems in algorithmic law
For Regulation 561, a case is a sequence of activities and hence straightforwardly formalized as a word over the alphabet \(\Sigma\coloneqq\{d,r,w\}\): e.g., the word \(dddwrr\in\Sigma^{6}\) is the activity
sequence consisting of 3 minutes driving, followed by 1 minute other work, followed by 2 minutes resting.1 Generally, we formalize a set of cases by a class of finite structures \(\mathcal{K}\).2
Footnote 1: Some tachograph readers will work with other formats like _activity-change lists_: lists of timepoints where the driver’s activity changes. We do not discuss other formats in this paper.
Footnote 2: Words are straightforwardly seen as structures, see e.g. [35, Example 4.11].
In this setting, a generic formalization of a law is given by translating the law to a sentence \(\varphi\) of a formal language, i.e., a logic \(L\). That a particular case \(K\in\mathcal{K}\) is legal according to the law \(\varphi\) then formally means that \(K\vDash\varphi\), i.e., \(K\) satisfies \(\varphi\). We arrive at what is the central computational problem of algorithmic law:
**Model-checking.** The _model-checking problem (for \(L\) over \(\mathcal{K}\))_ is a formal model for a family of algorithmic laws where laws are formalized by sentences of \(L\) and cases are formalized by structures in \(\mathcal{K}\).
\begin{tabular}{|l l|} \hline MC(\(\mathcal{K},L\)) & \\ _Input:_ & \(K\in\mathcal{K}\) and \(\varphi\in L\). \\ _Problem:_ & \(K\vDash\varphi\)? \\ \hline \end{tabular}
A _model-checker (for \(L\) over \(\mathcal{K}\))_ is an algorithm deciding MC(\(\mathcal{K},L\)). This is a general purpose algorithm as asked for above.
We consider two more computational problems associated to algorithmic law.
**Consistency-checking.** A minimal requirement for law design is that it should be possible to comply with the law (cf. [31] for a problematic case). For laws governing activity sequences, _consistency_ means that there should be at least one such sequence that is legal according to the law. A related question of interest is whether a certain type of behaviour can be legal. This is tantamount to asking whether the artificial law obtained by additionally demanding that type of behaviour is consistent.
This is formally modeled by the _consistency problem (for \(L\) over \(\mathcal{K}\))_:
\begin{tabular}{|l l|} \hline Con(\(\mathcal{K},L\)) & \\ _Input:_ & \(\varphi\in L\). \\ _Problem:_ & does there exist some \(K\in\mathcal{K}\) such that \(K\vDash\varphi\)? \\ \hline \end{tabular}
**Scheduling.** Assume a truck driver has to schedule next week's driving, working and resting and wants to drive as long as possible. A week has 10080 minutes, so the driver faces the computational optimization problem of computing a length 10080 extension of the word given by the current tachograph recording that is legal according to Regulation 561 and that maximizes driving time.
Consider laws governing activity sequences, that is, \(\mathcal{K}\) is the (set of structures corresponding to the) set of finite words \(\Sigma^{*}\) over some alphabet \(\Sigma\). For a word \(w=a_{0}\cdots a_{n-1}\in\Sigma^{n}\)
(the \(a_{i}\) are letters that represent the corresponding activities) and a letter \(a\in\Sigma\), let \(\#_{a}(w)\) denote the number of times the letter \(a\) appears in \(w\), i.e.,
\[\#_{a}(w):=|\{i<n\mid a_{i}=a\}|.\]
The _scheduling problem (for \(L\) over \(\mathcal{K}=\Sigma^{*}\))_ is:
\begin{tabular}{|l l|} \hline \hline Scheduling(\(\mathcal{K},L\)) & \\ _Input:_ & \(\varphi\in L\), \(w\in\Sigma^{*}\), \(a\in\Sigma\) and \(n\in\mathbb{N}\). \\ _Problem:_ & if there is no \(v\in\Sigma^{n}\) such that \(wv\models\varphi\), then output "illegal"; \\ & otherwise output some \(\overline{v}\in\Sigma^{n}\) such that \\ & \(\#_{a}(w\overline{v})=\max\big{\{}\#_{a}(wv)\mid v\in\Sigma^{n},wv\models\varphi \big{\}}\). \\ \hline \hline \end{tabular}
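For orientation, a brute-force solver for Scheduling is immediate but exponential in \(n\); the sketch below (ours, with a toy legality predicate standing in for a formalized law) only serves to pin down the problem statement and is hopeless for realistic horizons such as \(n=10080\).

```python
from itertools import product

def count(word, a):
    """#_a(w): number of occurrences of the letter a in the word."""
    return sum(1 for b in word if b == a)

def schedule_bruteforce(legal, w, alphabet, a, n):
    """Brute-force Scheduling: try all length-n extensions v of w."""
    best = None
    for v in ("".join(t) for t in product(alphabet, repeat=n)):
        if legal(w + v) and (best is None or count(w + v, a) > count(w + best, a)):
            best = v
    return best if best is not None else "illegal"

# toy legality predicate: never drive three minutes in a row
legal = lambda u: "ddd" not in u
print(schedule_bruteforce(legal, "dd", "drw", "d", 3))  # e.g. 'rdd'
```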
### Model-checking as a formal model
There is a vast amount of research concerning model-checking problems \(\operatorname{MC}(\mathcal{K},L)\). The two main interpretational perspectives stem from _database theory_ and from _system verification_. In database theory [46], \(\mathcal{K}\) is viewed as a set of databases, and \(L\) a set of Boolean queries. In system verification [7], \(\mathcal{K}\) is as a set of transition systems or certain automata that formalize concurrent systems or parallel programs, and \(L\) formalizes correctness specifications of the system, that is, properties all executions of the system should have. We add a third interpretational perspective on model-checking problems as formal models for families of algorithmic laws. We highlight three conflicting requirements on such a formal model.
**Tractability requirement.** The first and foremost constraint for a model \(\operatorname{MC}(\mathcal{K},L)\) of a family of algorithmic laws is its computational complexity. For a practically useful general purpose model-checker to exist, the problem \(\operatorname{MC}(\mathcal{K},L)\) should be _tractable_. We argue that the notion of tractability here cannot just mean \(\mathsf{PTIME}\); a more fine-grained complexity analysis of \(\operatorname{MC}(\mathcal{K},L)\) is required.
Classical computational complexity theory tells us that already extremely simple pairs \((\mathcal{K},L)\) have intractable model-checking problems. An important example from database theory is that \(\operatorname{MC}(\mathcal{K},L)\) is \(\mathsf{NP}\)-complete for \(L\) the set of conjunctive queries and \(\mathcal{K}\) the set of graphs (or the single binary word \(01\)) [18]. An important example [51] from system verification is that \(\operatorname{MC}(\mathcal{K},L)\) is \(\mathsf{PSPACE}\)-complete for \(L\) equal to linear time temporal logic \(\mathsf{LTL}\) and \(\mathcal{K}\) the class of finite automata [53].
However, this \(\mathsf{PSPACE}\)-completeness result is largely irrelevant because the model-checking problem is _fixed-parameter tractable (fpt)_, that is, it is decidable in time \(f(k)\cdot n^{O(1)}\) for some computable function \(f:\mathbb{N}\to\mathbb{N}\) where \(n\) is the total input size and \(k:=\|\varphi\|\) the size of (a reasonable binary encoding of) the input \(\mathsf{LTL}\) formula \(\varphi\). In fact, we have _parameter dependence_\(f(k)\leqslant 2^{O(k)}\). Informally speaking, we are mainly interested in inputs with \(k\ll n\), so this can be considered tractable. In other words, the computational hardness relies on uninteresting inputs with relatively large \(k\). In contrast, model-checking conjunctive queries
over graphs is likely not fixed-parameter tractable: this is equivalent to \(\mathsf{FPT}\neq\mathsf{W}[1]\), the central hardness hypothesis of parameterized complexity theory.3
Footnote 3: Grohe [38] (refined in [21, 22]) gives a quite complete understanding of which conjunctive queries are tractable.
The focus on inputs with \(k\ll n\) is common in model-checking and it is an often repeated point that a reasonable complexity analysis must take this asymmetry of the input into account; [48] is an early reference addressing both perspectives from database theory and system verification. The theoretical framework for such a fine-grained complexity analysis is parameterized complexity theory [35, 26, 27] whose central tractability notion is fixed-parameter tractability.4
Footnote 4: This paper does not require familiarity with parameterized complexity theory. Only Section 6.3 requires some results of this theory and will recall what is needed.
To sum up, judging the tractability of \(\operatorname{MC}(\mathcal{K},L)\) should be based on a fine-grained complexity analysis that measures the computational complexity with respect to various input _aspects_\(n,k,\ldots\).5 The quality of the model \(\operatorname{MC}(\mathcal{K},L)\) depends on the "right" identification of relevant aspects in its complexity analysis.
Footnote 5: Formally, an _aspect_ could be defined as a _parameterization_, possibly viewed as a _size measure_ as in [35, p.418f]. However, we don’t need a definition and use the term informally.
**Expressivity requirement.** Recall that we ask for a _general purpose_ model-checker that solves a model-checking problem \(\operatorname{MC}(\mathcal{K},L)\) modeling a family of algorithmic laws instead of single-purpose model-checkers deciding \(\operatorname{MC}(\mathcal{K},\{\varphi\})\), one per algorithmic law \(\varphi\). From a theoretical perspective we expect insight into which laws can possibly be algorithmic.
From a practical perspective, this avoids the costly production of many algorithms, their updates following law reforms and their validation for legal use. It is thus desirable to find tractable \(\operatorname{MC}(\mathcal{K},L)\) for as rich as possible classes \(\mathcal{K}\) and \(L\). In particular, for laws governing sequences of activities (i.e., \(\mathcal{K}=\Sigma^{*}\)) we ask for an as expressive as possible logic \(L\). Of course, this is in tension with the tractability requirement.
**Naturality requirement.** From an algorithmic perspective it is not only the expressivity of \(L\) that matters, but also its _succinctness_. Typically, model-checking complexity grows fast with the size of the sentence \(\varphi\) formalizing the law, so logics allowing for shorter formalizations are preferable. E.g., it is well-known that the expressive power of \(\mathsf{LTL}\) is not increased when adding past modalities but their use can lead to exponentially shorter sentences. Crucially, the complexity of model-checking (over finite automata) is not substantially increased. Moving to a more succinct logic is not necessarily an improvement. E.g., further adding a now-modality again increases succinctness exponentially but apparently also the model-checking complexity [43].
Furthermore, it is one thing to model a law application by a model-checking instance \((K,\varphi)\) any old how and another to do so by somehow typical members of \(\mathcal{K}\) and \(L\). E.g., in case the formalization of actual laws uses only special artificial members of \(\mathcal{K}\) (_semantic overkill_) or \(L\) (_syntactic overkill_), one would want to trade the richness of \(\mathcal{K}\) and \(L\) for a faster model-checker.
Very long or contrived formalizations of laws are also prohibitive for legal practice which requires the law to be readable and understandable by humans. This is vital also for the validation of their formalization, i.e., their translation from the typically ambiguous natural language into a formal language able to be algorithmically processed. Without attempting a definition of this vague term, we thus informally require that the (formalization given by the) model \(\operatorname{MC}(\mathcal{K},L)\) must be _natural_.
**Other requirements.** We focus on the above three requirements but, of course, there are more whose discussion is beyond the scope of this paper.
An important one is trust in the output of model-checkers. This is a threefold issue. First, the formalization process requires trust: laws are written in natural language and are thereby formally imprecise and ambiguous; formalization typically leads to choices to disambiguate or even repair the written law; this calls for a collaboration of different experts. Second, the implementation process requires trust: this could call for formally verified implementations; we refer to [1] for an example. Third, one needs trust that the data given to the algorithm are correct and in the right format (we refer to [31] for a discussion); for example, Regulation 561 prescribes working in UTC and it is known that no tachograph actually records in UTC; theoretically, the change from non-UTC to UTC data can have drastic effects [24].
Furthermore, algorithmic outputs should be transparent and explainable to be used in legal practice and it is unclear what this exactly means. Further requirements on the model might come from ethical or political considerations - e.g., the required transparency can be in conflict with intellectual property rights and there can be more general issues concerning the involvement of the private sector in law execution.
### Contribution and outline
We focus on laws governing temporal sequences of activities, that is, laws concerning cases that can readily be formalized by words over some finite alphabet \(\Sigma\), i.e., \(\mathcal{K}=\Sigma^{*}\). This paper is about the quest for a logic \(L\) such that \(\operatorname{MC}(\mathcal{K},L)\) is a good model for such laws. To judge expressivity and naturality we use European Regulation 561 [49] as a test case, that is, we want \(L\) to naturally formalize Regulation 561. Given the complexity of this regulation, this is an ambitious goal and we expect success to result in a model that encompasses a broad family of laws concerning sequences of activities.
The imperative constraint is the tractability of \(\operatorname{MC}(\mathcal{K},L)\). The next section surveys the relevant literature on model-checking and discusses shortcomings of known model-checkers. Thereby we build up some intuition about what the right input aspects are, i.e., those relevant to calibrate the computational complexity of \(\operatorname{MC}(\mathcal{K},L)\) and to judge its tractability.
We suggest (a version of) discrete time _stopwatch automata_\(\mathsf{SWA}\) as an answer to our central question, that is, we propose \(\operatorname{MC}(\Sigma^{*},\mathsf{SWA})\) as a model for algorithmic laws concerning sequences of activities.
Stopwatch automata are defined in Section 3. Our main technical contribution is the construction of a stopwatch automaton expressing Regulation 561 in Section 4. Sections 5
and 6.3 gauge the expressivity of stopwatch automata and the computational complexity of the problems mentioned in Section 1.2: model-checking problem, consistency-checking and scheduling. It turns out that while stopwatch automata have high expressive power, their model-checking complexity is relatively tame, and scales well with the aspects identified in Section 2: we summarize our technical results in Section 2.4.
## 2 Regulation 561 and various logics
Model-checking complexity has been investigated mainly from two interpretational perspectives: database theory and system verification. We give a brief survey guided by our central question to model Regulation 561.
### Regulation 561 and Büchi's theorem
We recall Büchi's theorem and, to fix some notation, the definitions of regular languages and finite automata.
An _alphabet_\(\Sigma\) is a non-empty finite set of _letters_, \(\Sigma^{*}=\bigcup_{n\in\mathbb{N}}\Sigma^{n}\) denotes the set of (finite) _words_. A word \(w=a_{0}\cdots a_{n-1}\in\Sigma^{n}\) (the \(a_{i}\) are letters) has _length_\(|w|:=n\). A _(non-deterministic) finite automaton_\(\mathbb{B}\) is given by a finite set of _states_\(Q\), an alphabet \(\Sigma\), sets of _initial_ and _final_ states \(I,F\subseteq Q\), and a set \(\Delta\subseteq Q\times\Sigma\times Q\) of _transitions_. A _computation of \(\mathbb{B}\) on \(w=a_{0}\cdots a_{n-1}\in\Sigma^{n}\)_ is a sequence \(q_{0}\cdots q_{n}\) of states such that \((q_{i},a_{i},q_{i+1})\in\Delta\) for every \(i<n\). The computation is _initial_ if \(q_{0}\in I\) and _accepting_ if \(q_{n}\in F\). The _language_\(L(\mathbb{B})\)_of_\(\mathbb{B}\) is the set of words \(w\in\Sigma^{*}\) such that \(\mathbb{B}\)_accepts_\(w\), i.e., there exists an initial accepting computation of \(\mathbb{B}\) on \(w\). A language (i.e., subset of words over \(\Sigma\)) is _regular_ if it equals \(L(\mathbb{B})\) for some finite automaton \(\mathbb{B}\).
We refer to [54] for a definition of MSO-definable languages and a proof of:
**Theorem 1** (Büchi).: _A language is MSO-definable if and only if it is regular._
This can be extended to infinite words and trees using various types of automata - we refer to [29] for a monograph on the subject.
The proof of Büchi's theorem is effective in that there is a computable function that computes for every MSO-sentence \(\varphi\) an automaton \(\mathbb{B}_{\varphi}\) that accepts a word \(w\) if and only if \(w\vDash\varphi\). It follows that \(\operatorname{MC}(\Sigma^{*},\mathsf{MSO})\) is fixed-parameter tractable: given an input \((w,\varphi)\), compute \(\mathbb{B}_{\varphi}\) and check whether \(\mathbb{B}_{\varphi}\) accepts \(w\). This takes time6\(f(\|\varphi\|)\cdot|w|\) for some computable function \(f:\mathbb{N}\to\mathbb{N}\). It also follows that \(\operatorname{Con}(\Sigma^{*},\mathsf{MSO})\) is decidable because finite automata have _decidable emptiness_: there is an (even linear time) algorithm that, given a finite automaton \(\mathbb{A}\), decides whether \(L(\mathbb{A})=\varnothing\).
Footnote 6: This is not true for the empty word \(w\). We trust the reader’s common sense to interpret this and similar statements reasonably.
MSO is a very expressive logic. In [24] it is argued that Regulation 561 can be formalized in MSO, and naturally so. Thus, in a sense \(\operatorname{MC}(\Sigma^{*},\mathsf{MSO})\) is
natural, so a good answer to our central question. The starting point of this work was the quest for a better model, namely one improving on its tractability. The problem with the runtime \(f(\|\varphi\|)\cdot|w|\) of Büchi's model-checker is that the parameter dependence \(f(k)\) grows extremely fast: it is non-elementary in the sense that it cannot be bounded by \(2^{2^{\cdots^{2^{k}}}}\) for any fixed height tower of \(2\)'s. This is due to the fact that in general the size \(\|\mathbb{B}_{\varphi}\|\) of (a reasonable binary encoding of) \(\mathbb{B}_{\varphi}\) is non-elementary in \(\|\varphi\|\). Under suitable hardness hypotheses this non-elementary parameter dependence cannot be avoided, not even when restricting to first-order logic FO [37].
This motivates the quest for fragments of MSO or less succinct variants thereof that allow a tamer parameter dependence. In system verification, LTL has been proposed: an LTL formula of size \(k\) can be translated to a size \(2^{O(k)}\) Büchi automaton [57] or a size \(O(k)\) alternating automaton [55]. The model-checking problem asks, given a system modeled by a finite automaton \(\mathbb{A}\), whether all (finite or infinite) words accepted by the automaton satisfy the given LTL-sentence \(\varphi\). The model-checker decides emptiness of a suitable product automaton accepting \(L(\mathbb{A})\cap L(\mathbb{B}_{\neg\varphi})\) and takes time \(2^{O(\|\varphi\|)}\cdot\|\mathbb{A}\|\). This is the dominant approach to model-checking in system verification.
[24] formalizes part of Regulation 561 in LTL. In part, these formalizations rely on Kamp's theorem (cf. [52]) stating that LTL and FO have the same expressive power over \(\Sigma^{*}\). But the translation of an FO-sentence to an LTL-sentence can involve a non-elementary blow-up in size. Indeed, [24] proves lower bounds on the length of LTL-sentences expressing parts of Regulation 561. Very large sentences are not natural and lead to prohibitive model-checking times.
**Example 2**.: To illustrate the point, consider the following law in Regulation 561:
Article 6.2: The weekly driving time shall not exceed 56 hours...
Restrict attention to words representing one week of activities, i.e., words of length \(7\cdot 24\cdot 60\) over the alphabet \(\Sigma=\{d,w,r\}\). A straightforward formalization of Article 6.2 in LTL is (using \(d\in\Sigma\) as a propositional variable) the huge disjunction of
\[\bigwedge_{j\in D}\Big{(}\bigwedge_{r_{j}\leqslant i<\ell_{j+1}}\circ^{i} \neg d\wedge\bigwedge_{\ell_{j}\leqslant i<r_{j}}\circ^{i}d\Big{)}\]
for all \(D\leqslant 7\cdot 24\cdot 60\) and all \(r_{0}:=0\leqslant\ell_{1}<r_{1}<\cdots<\ell_{D}<r_{D}<\ell_{D+1}:=7\cdot 24\cdot 60\) with \(\sum_{1\leqslant j\leqslant D}(r_{j}-\ell_{j})\leqslant 56\cdot 60\). These are \(>\binom{7\cdot 24\cdot 60}{56\cdot 60}>10^{2784}\) many disjuncts.
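The stated bound is easy to confirm numerically; a quick sanity check (ours, not part of the paper) using exact integer arithmetic:

```python
import math

# Already the choices of exactly 56h = 3360 driving minutes within a
# 7*24*60 = 10080 minute week give pairwise distinct disjuncts.
n, k = 7 * 24 * 60, 56 * 60
print(math.comb(n, k) > 10 ** 2784)  # True
```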
To conclude, MSO gives the wrong model because it does not allow sufficiently fast model-checkers, and LTL is the wrong model because it is not sufficiently (expressive nor) succinct, hence not natural. It can be expected that, like Regulation 561, many algorithmic laws concerning sequences of activities state lower and upper bounds on the duration of certain activities or types of activities. The constants used to state these bounds are not necessarily small, and an important aspect to take into account when analyzing the model-checking complexity.
### Regulation 561 and timed modal logics
The above motivates looking at models with built-in timing constraints: "In practice one would want to use 'sugared' versions of \(\mathsf{LTL}\), such as metric temporal logic (\(\mathsf{MTL}\); [47]) which allow for expressions such as \(\bigcirc^{n+1}\) to be represented succinctly"[24]. \(\mathsf{MTL}\) has modalities like \(\Diamond_{[5,8]}\varphi\) expressing that \(\varphi\) holds between 5 and 8 time units from now.
For Regulation 561, cases are tachograph recordings which, formally, are _timed words_\((a_{0},t_{0})\)\((a_{1},t_{1})\)\(\cdots\) where the \(a_{i}\) are letters and the \(t_{i}\) an increasing sequence of time-points; intuitively, activity \(a_{0}\) is observed until time point \(t_{0}\), then \(a_{1}\) until \(t_{1}\), and so on. Alur and Dill [4] extended finite automata to _timed automata_ that accept sets of timed words - see [12] for a survey. Roughly speaking, computations of such automata happen in time and are governed by finitely many clocks: transitions from one state to another are enabled or blocked depending on the clock values, and transitions can reset some clocks (to value 0). Alur and Dill [4] proved that timed automata have decidable emptiness, thus enabling the dominant model-checking paradigm.
Consequently, a wealth of timed temporal logics have been investigated - [40, 13] are surveys. The following are some of the most important choices when defining such a logic:
\begin{tabular}{l l|l l|l} \multicolumn{2}{l|}{_semantics_} & \multicolumn{2}{l|}{_time_} & _clocks_ \\ \hline finite words & signal-based & continuous \(\mathbb{R}_{\geqslant 0}\) & branching & internal \\ infinite words & event-based & discrete \(\mathbb{N}\) & linear & external \\ \end{tabular}
A subtle choice is between signal- or event-based semantics. It means, roughly and respectively, that the modalities quantify over all time-points or only over the \(t_{i}\) appearing in the timed word; \(\mathsf{MTL}\) is known to be less expressive in the latter semantics over finite timed words [28]. A crucial choice is between time \(\mathbb{N}\) or \(\mathbb{R}_{\geqslant 0}\). Internal clocks appear only on the side of the automata, external clocks appear in sentences which reason about their values. We briefly survey the most important results.
An early success [2] concerns the infinite word signal-based branching continuous time logic \(\mathsf{TCTL}\) (timed computation tree logic): over (systems modeled by) timed automata it admits a model-checker with runtime \(t^{O(c)}\cdot k\cdot n\) where \(n\) is the automaton size, \(k\) the size of the input sentence, \(c\) the number of clocks, and \(t\) is the largest time constant appearing in the input. [42] extends this allowing external clocks. However, continuous branching time is semantical and syntactical overkill for Regulation 561. For linear continuous time we find \(\mathsf{MTL}\) and \(\mathsf{TPTL}\) (timed propositional temporal logic), a more expressive [15] extension with external clocks. Since model-checking is undecidable for these logics [6, 5], fragments have been investigated. Surprisingly, [47] found an fpt model-checker for \(\mathsf{MTL}\) over event-based finite words via a translation to alternating automata with one clock, albeit with intolerable parameter dependence (non-primitive recursive). \(\mathsf{MITL}\) (metric interval temporal logic) [5] is the fragment of \(\mathsf{MTL}\) disallowing singular time constraints as, e.g., in \(\Diamond_{[1,1]}\varphi\). [33, 44] gives an elegant translation of \(\mathsf{MITL}\) to timed automata and thereby a model-checker with runtime7\(2^{O(t\cdot k)}\cdot n\). Over discrete time, [6] adapts the mentioned translation of \(\mathsf{LTL}\) to Büchi automata and gives a model-checker for \(\mathsf{TPTL}\) with runtime \(2^{O(t^{c}\cdot k)}\cdot n\).
Footnote 7: In fact, \(t\) can be replaced by a typically smaller number, called the _resolution_ of the formula – see [33].
As said, from the perspective of algorithmic law, \(t\) is not typically small and runtimes exponential in \(t=56h=3360\mathit{min}\) are prohibitive. Tamer runtimes with \(t\) moved out of the exponent have been found for a certain natural \(\mathsf{MITL}\)-fragment \(\mathsf{MITL}_{0,\infty}\) both over discrete and continuous time - see [40, 5].
However, "standard real-time temporal logics [...] do not allow us to constrain the accumulated satisfaction time of state predicates" [3, p.414]. It seems that this is just what is required to formalize the mentioned Article 6 (2), and we expect similar difficulties to be encountered with other laws concerning activity sequences.
There are various attempts to empower the logics with some reasoning about durations. _Stopwatch automata_[25] are timed automata that can not only reset clocks but also stop and activate them. However, emptiness is undecidable already for a single stopwatch [41]. Positive results are obtained in [3] for _observer stopwatches_, i.e., roughly, stopwatches not used to govern the automaton's transitions. On the logical side, [11] and [17] study fragments and restrictions for \(\mathsf{TCTL}\) with (observer) stopwatches. On another strand, [20] puts forward the _calculus of durations_, but already tiny fragments turn out undecidable [19]. For discrete time, [39] gives an fpt model-checker via a translation to finite automata. For continuous time, [36] obtains fpt results under certain reasonable restrictions of the semantics. A drawback is that these fpt results have non-elementary parameter dependence.
To conclude, the extensive research on "'sugared' versions" of \(\mathsf{LTL}\) in system verification does not reveal a good answer to our central question for a model-checking problem modeling algorithmic laws concerning activity sequences. In particular, many known model-checkers are too slow in that they do not scale well with the time constants mentioned in the law.
### The perspective from algorithmic law
The new perspective on model-checking from algorithmic law seems orthogonal to the dominant perspectives from database theory and system verification in the sense that it seems to guide incomparable research directions.
In database theory there is special interest in model-checking problems for a rich class \(\mathcal{K}\), formalizing a large class of databases, and possibly weak logics \(L\) formalizing simple basic queries. In algorithmic law (concerning activity sequences) it is the other way around, focussing on \(\mathcal{K}=\Sigma^{*}\).
System verification gives special interest to infinite words and continuous time (cf. e.g. [2]) while algorithmic law focusses on finite words and discrete time. Most importantly, system verification focusses on structures specifying _sets_ of words: its model-checking problem corresponds to (a generalization of) the consistency problem in algorithmic law. In algorithmic law the consistency problem is secondary, the main interest is in evaluating sentences over single words.
Finally, the canonical parameterization of a model-checking problem takes the size \(\|\varphi\|\) of the input sentence \(\varphi\) as the parameter. Intuitively, then parameterized complexity analysis focusses attention on inputs of the problem where \(\|\varphi\|\) is relatively small. Due to large constants on time constraints appearing in the law to be formalized this parameterization
does not seem to result in a faithful model of algorithmic law. We shall come back to this point in Section 6.2.
Compared to system verification this shift of attention in algorithmic law opens the possibility to use more expressive logics while retaining tractability of the resulting model. In particular, complexity can significantly drop via the shift from continuous time, infinite words and consistency-checking, to discrete time, finite words and model-checking. While discrete time is well investigated in system verification, it has been noted that both finite words and model-checking have been neglected - see [34] and [45], respectively. To make the point: over finite words consistency-checking \(\mathsf{LTL}\) is \(\mathsf{PSPACE}\)-complete but model-checking is \(\mathsf{PTIME}\), even for the more succinct extensions of \(\mathsf{LTL}\) with past- and now-modalities [45], or even finite variable \(\mathsf{FO}\)[56].8
Footnote 8: 3-variable (2-variable) \(\mathsf{FO}\) has the same expressive power as (unary) \(\mathsf{LTL}\) over finite words but is much more succinct [32]. [23] gives a fine calibration of the parameterized complexity of finite variable \(\mathsf{FO}\).
### Model-checking stopwatch automata: summary
We take advantage of this possibility to use more expressive logics and suggest (a version of) discrete time _stopwatch automata_\(\mathsf{SWA}\) as an answer to our central question, that is, we propose \(\operatorname{MC}(\Sigma^{*},\mathsf{SWA})\) as a model for algorithmic laws concerning sequences of activities.9 Our stopwatches are bounded, and their bounds correspond to time constants mentioned in laws. Mimicking notation from Section 2.2, we let \(c_{\mathbb{A}}\) denote the number of stopwatches and \(t_{\mathbb{A}}\) the largest stopwatch bound of a stopwatch automaton \(\mathbb{A}\). We give the following upper bound on the complexity of \(\operatorname{MC}(\Sigma^{*},\mathsf{SWA})\):
Footnote 9: In the notation of Section 1.1 we define \(w\vDash\mathbb{A}\) for a finite word \(w\) and a stopwatch automaton \(\mathbb{A}\) to mean that \(\mathbb{A}\) accepts \(w\).
**Theorem 3**.: _There is an algorithm that given a stopwatch automaton \(\mathbb{A}\) and a word \(w\) decides whether \(\mathbb{A}\) accepts \(w\) in time_
\[O\big{(}|\mathbb{A}|^{2}\cdot t_{\mathbb{A}}^{c_{\mathbb{A}}}\cdot|w|\big{)}.\]
We prove a slightly stronger result in Theorem 20. Notably, the aspect \(t_{\mathbb{A}}\) does not appear in the exponent, so this overcomes a bottle-neck of various model-checkers designed in system verification (see Section 2.2). We obtain similar algorithms for consistency-checking and scheduling (Corollary 19 and Theorem 21). This is despite the fact that stopwatch automata are highly expressive, namely have the same expressive power as \(\mathsf{MSO}\) over finite words (Theorems 15 and 1).
The final Section 6 discusses our model \(\operatorname{MC}(\Sigma^{*},\mathsf{SWA})\) following the criteria of Section 1.2, and gives a critical examination of the factor \(t_{\mathbb{A}}^{c_{\mathbb{A}}}\) in the runtime of our model-checker. Intuitively, typical inputs have small \(c_{\mathbb{A}}\) and large \(t_{\mathbb{A}}\), and it would be desirable to replace this factor by, e.g., \(2^{O(c_{\mathbb{A}})}\cdot t_{\mathbb{A}}^{O(1)}\). We show this is unlikely to be possible. Theorem 28 implies:
**Theorem 4**.: _Assume \(\mathsf{FPT}\) does not contain the W-hierarchy. Let \(f:\mathbb{N}\to\mathbb{N}\) be a computable function. Then there does not exist an algorithm that given a stopwatch automaton \(\mathbb{A}\) and a
_word \(w\) decides whether \(\mathbb{A}\) accepts \(w\) in time_
\[\big{(}\|\mathbb{A}\|\cdot f(c_{\mathbb{A}})\cdot t_{\mathbb{A}}\cdot|w|\big{)} ^{O(1)}.\]
The complexity-theoretic assumption here is weaker than \(\mathsf{FPT}\neq\mathsf{W}[1]\) considered earlier.
## 3 Stopwatch automata
Before giving our definition we informally describe the working of a _stopwatch automaton_. A stopwatch automaton is an extension of a finite automaton whose computations happen in discrete time: the automaton can stay for some amount of time in some state and then take an instantaneous transition to another state.
There are constraints on which transitions can be taken at a given point of time, as follows. Time is recorded by a set of _stopwatches_\(X\): every stopwatch \(x\in X\) has a _bound_\(\beta(x)\), the maximal time it can record. Every stopwatch is _active_ or not in a given state. During a run that stays in a given state for a certain amount of time, the values of the active stopwatches increase by this amount of time (up to their bounds) while the inactive stopwatches do not change their value. Transitions between states are labeled with a _guard_ and an _action_. The guard is a condition on the values of the stopwatches that has to be satisfied for the transition to be taken, usually requiring upper or lower bounds on certain stopwatch values. The action modifies stopwatch values, for example, it resets some of the stopwatches to value \(0\).
Instead of the transitions, it is the states that are labeled by letters of the alphabet. A stopwatch automaton _accepts_ a given word if there exists a computation that leads from a special state _start_ to a special state _accept_ and _reads_ the word: staying in a state for \(5\) time units means reading \(5\) copies of the letter labelling the state.
### Abstract stopwatch automata
We now give the definitions that have been anticipated by the informal description above.
**Definition 5**.: An _abstract stopwatch automaton_ is a tuple \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) where
* \(Q\) is a finite set of _states_ containing the states _start_ and _accept_;
* \(\Sigma\) is a finite _alphabet_;
* \(X\) is a finite set of _stopwatches_;
* \(\lambda:Q\to\Sigma\) labels every state with a letter;
* \(\beta:X\to\mathbb{N}\) maps every stopwatch \(x\in X\) to its _bound_\(\beta(x)\in\mathbb{N}\);
* \(\zeta\subseteq X\times Q\) contains pairs \((x,q)\) such that the stopwatch \(x\) is _active in_ state \(q\);
* \(\Delta\subseteq Q\times\mathcal{G}\times\mathcal{A}\times Q\) is a set of _transitions_.
Here, \(\mathcal{G}\) is the set of _abstract guards_ (for \(\mathbb{A}\)), namely sets of assignments, and \(\mathcal{A}\) is the set of _abstract actions_ (for \(\mathbb{A}\)), namely functions from assignments to assignments. An _assignment_ (for \(\mathbb{A}\)) is a function \(\xi:X\to\mathbb{N}\) such that \(\xi(x)\leqslant\beta(x)\) for all \(x\in X\). To be precise, we should speak of a \(\beta\)-assignment but the \(\beta\) will always be clear from the context. We define the _bound of \(\mathbb{A}\)_ to be
\[B_{\mathbb{A}}:=\prod_{x\in X}(\beta(x)+1)\]
understanding that the empty product is \(1\) so that \(\prod_{x\in\varnothing}(\beta(x)+1):=1\). This is the cardinality of the set of assignments (for \(\mathbb{A}\)). We say that a transition \((q,g,\alpha,q^{\prime})\in\Delta\) is _from_\(q\), and _to \(q^{\prime}\)_, and _has_ abstract guard \(g\) and abstract action \(\alpha\).
Computations of stopwatch automata are defined in terms of their corresponding _transition systems_.
**Definition 6**.: Let \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) be an abstract stopwatch automaton. The _transition system_\(\mathit{TS}(\mathbb{A})\) of \(\mathbb{A}\) is given by a set of nodes and labeled edges: a _node (of \(\mathit{TS}(\mathbb{A})\))_ is a pair \((q,\xi)\) of a state \(q\in Q\) and an assignment \(\xi\); a _labeled edge (of \(\mathit{TS}(\mathbb{A})\))_ is a triple \(((q,\xi),t,(q^{\prime},\xi^{\prime}))\) for nodes \((q,\xi),(q^{\prime},\xi^{\prime})\) and \(t\in\mathbb{N}\) such that **either**:
* \(t=0\) and \((q,g,\alpha,q^{\prime})\in\Delta\) for an abstract guard \(g\) and an abstract action \(\alpha\) such that \(\xi\in g\) and \(\alpha(\xi)=\xi^{\prime}\),
**or**,
* \(t>0\) and \(q=q^{\prime}\) and \(\xi^{\prime}\) is the assignment given by \[\xi^{\prime}(x)=\left\{\begin{array}{ll}\min\left\{\xi(x)+t,\beta(x)\right\} &\text{if }(x,q)\in\zeta,\\ \xi(x)&\text{else}.\end{array}\right.\]
For \(t\in\mathbb{N}\) we let \(\overset{t}{\to}\) be the binary relation that contains those pairs \(((q,\xi),(q^{\prime},\xi^{\prime}))\) of nodes such that \(((q,\xi),t,(q^{\prime},\xi^{\prime}))\) is a labeled edge.
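For illustration (our sketch, not part of the definition), the time-elapse edges amount to a saturating update of the active stopwatches; representing an assignment as a Python dict:

```python
def elapse(q, xi, t, beta, active):
    """Successor assignment of a time-elapse edge (q, xi) --t--> (q, xi'), t > 0.

    beta maps stopwatches to their bounds and active is the relation zeta of
    Definition 5; active stopwatches advance by t but saturate at beta[x],
    inactive ones keep their value.
    """
    return {x: min(v + t, beta[x]) if (x, q) in active else v
            for x, v in xi.items()}

# toy usage: 30 minutes spent in a state 'drive' where only x_cd is active
beta = {"x_cd": 271, "x_break": 540}
active = {("x_cd", "drive"), ("x_break", "break")}
print(elapse("drive", {"x_cd": 250, "x_break": 10}, 30, beta, active))
# {'x_cd': 271, 'x_break': 10}  -- x_cd saturates at its bound
```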
**Definition 7**.: A (finite) _computation of \(\mathbb{A}\)_ is a finite walk in \(\mathit{TS}(\mathbb{A})\), i.e., for some \(\ell\in\mathbb{N}\) a sequence
\[\left(\left((q_{i},\xi_{i}),t_{i},(q_{i+1},\xi_{i+1})\right)\right)_{i<\ell}\]
of directed edges of \(\mathit{TS}(\mathbb{A})\) such that \(q_{i}\neq\mathit{accept}\) for all \(i<\ell\); we write this as
\[(q_{0},\xi_{0})\overset{t_{0}}{\to}(q_{1},\xi_{1})\overset{t_{1}}{\to}(q_{2}, \xi_{2})\overset{t_{2}}{\to}\cdots\overset{t_{\ell-1}}{\to}(q_{\ell},\xi_{ \ell}).\]
In this case, we say that the computation is _from_\((q_{0},\xi_{0})\) and _to_\((q_{\ell},\xi_{\ell})\); it is _initial_ if \(\xi_{0}\) is constantly \(0\) and \(q_{0}=\mathit{start}\); it is _accepting_ if \(q_{\ell}=\mathit{accept}\). The computation _reads_ the word
\[\lambda(q_{0})^{t_{0}}\lambda(q_{1})^{t_{1}}\cdots\lambda(q_{\ell-1})^{t_{ \ell-1}}.\]
We understand that \(\sigma^{0}\) denotes the empty string for every letter \(\sigma\) in the alphabet \(\Sigma\) and juxtaposition of strings corresponds to concatenation. Through computations, we define the strings and languages accepted by a stopwatch automaton.
**Definition 8**.: The automaton \(\mathbb{A}\)_accepts_\(w\in\Sigma^{*}\) if there is an initial accepting computation of \(\mathbb{A}\) that reads \(w\). The set of these words is the _language \(L(\mathbb{A})\) of \(\mathbb{A}\)_.
**Remark 9**.: The requirement \(q_{i}\neq\mathit{accept}\) for all \(i<\ell\) in the definition of computations means that we interpret _accept_ as a halting state; it implies that the label \(\lambda(\mathit{accept})\) as well as transitions from _accept_ are irrelevant. Without this condition, \(w\in L(\mathbb{A})\) implies \(wa^{n}\in L(\mathbb{A})\) for \(a:=\lambda(\mathit{accept})\) and all \(w\in\Sigma^{*}\) and \(n\in\mathbb{N}\).
**Remark 10**.: Stopwatch automata are straightforwardly explained for continuous time \(\mathbb{R}_{\geqslant 0}\) where they read timed words, and bounds \(\beta(x)=\infty\). Stopwatch automata according to [25, 41] are such automata where guards are Boolean combinations of \(x\geqslant c\) (for \(x\in X\) and \(c\in\mathbb{N}\)), and actions are resets (to 0 of some stopwatches). The emptiness problem for those automata is undecidable [41]. So-called _timed automata_ additionally require stopwatches to be active in all states, and have decidable emptiness [4]. The model allowing \(x\geqslant c+y\) (for \(x,y\in X\) and \(c\in\mathbb{N}\)) in guards still has decidable emptiness and is exponentially more succinct than guards with just Boolean combinations of \(x\geqslant c\) ([14]). Allowing more actions is subtle, e.g., emptiness becomes undecidable when \(x\coloneqq x-1\) or when \(x\coloneqq 2x\) is allowed; see [16] for a detailed study.
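To make the acceptance condition concrete, the following is a small illustrative word-acceptance checker (our own sketch; it is not the model-checker behind Theorem 3, although it visits at most \(|Q|\cdot B_{\mathbb{A}}\) nodes per letter). It processes \(w\) letter by letter, maintaining the set of nodes of \(\mathit{TS}(\mathbb{A})\) reachable by initial computations reading the prefix so far: reading a letter is a unit time-elapse step in a state carrying that label, and in between the set is closed under instantaneous transitions. Guards and actions are taken in their abstract form, as Python predicates and maps on assignments, and the two special states are assumed to be named `start` and `accept`.

```python
def accepts(A, w):
    """Decide whether the stopwatch automaton A accepts the word w.

    A is a dict with keys:
      'labels'      : state -> letter (the map lambda)
      'beta'        : stopwatch -> bound
      'active'      : set of pairs (stopwatch, state), i.e. the relation zeta
      'transitions' : list of (q, guard, action, q2), guard a predicate on
                      assignments, action a map returning a fresh assignment
    Assignments are stored as frozensets of (stopwatch, value) pairs so that
    nodes (state, assignment) of TS(A) can be kept in a set.
    """
    labels, beta = A['labels'], A['beta']
    active, delta = A['active'], A['transitions']

    def close(nodes):
        # closure under instantaneous (t = 0) transitions; 'accept' is halting
        todo, seen = list(nodes), set(nodes)
        while todo:
            q, xi = todo.pop()
            if q == 'accept':
                continue
            d = dict(xi)
            for (p, guard, action, p2) in delta:
                if p == q and guard(d):
                    node = (p2, frozenset(action(d).items()))
                    if node not in seen:
                        seen.add(node)
                        todo.append(node)
        return seen

    def step(nodes, letter):
        # one unit of time elapses in a state labelled `letter`
        out = set()
        for q, xi in nodes:
            if q != 'accept' and labels[q] == letter:
                d = {x: min(v + 1, beta[x]) if (x, q) in active else v
                     for x, v in dict(xi).items()}
                out.add((q, frozenset(d.items())))
        return out

    frontier = close({('start', frozenset((x, 0) for x in beta))})
    for letter in w:
        frontier = close(step(frontier, letter))
    return any(q == 'accept' for q, _ in frontier)
```

Multi-unit stays decompose into unit steps because the saturating update composes, so processing \(w\) one letter at a time is faithful to Definitions 6-8.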
### Specific stopwatch automata
To consider an abstract stopwatch automaton as an input to an algorithm, we must agree on how to specify the guards and actions, i.e., properties and functions on assignments. This is a somewhat annoying issue because on the one hand our upper bounds on the model-checking complexity turn out to be robust with respect to the choice of this specification in the sense that they scale well with the complexity of computing guards and actions, so a very general definition is affordable. On the other hand, for natural stopwatch automata including the one we are going to present for the European Traffic Regulation 561, we expect guards and actions to be simple properties and functions.
As mentioned, typically guards mainly compare certain stopwatch values with constants or other values, and actions do simple re-assignments of values like setting some values to 0. Hence our choice on how to specify guards and actions is somewhat arbitrary. To stress the robustness part, we use a general model of computation: Boolean circuits. In natural automata, we expect these circuits to be small.
An assignment determines for each stopwatch \(x\in X\) its bounded value and as such can be specified by
\[b_{\mathbb{A}}\coloneqq\sum_{x\in X}\lceil\log(\beta(x)+1)\rceil\]
many bits. We think of the collection of \(b_{\mathbb{A}}\) bits as being composed of blocks, with a block of \(\lceil\log(\beta(x)+1)\rceil\) bits corresponding to the binary representation of the value of stopwatch
\(x\in X\) under the assignment. A _specific guard_ is a Boolean circuit with one output gate and \(b_{\mathbb{A}}\) many input gates. A specific guard determines an abstract one in the obvious way.
A _specific action_ is a Boolean circuit with \(b_{\mathbb{A}}\) many output gates and \(b_{\mathbb{A}}\) many input gates. On input an assignment, for each stopwatch \(x\in X\), it computes the binary representation of a value \(v_{x}\in\mathbb{N}\) in the block of \(\lceil\log(\beta(x)+1)\rceil\) output gates corresponding to \(x\). Furthermore, we agree that the assignment computed by the circuit maps \(x\) to \(\min\{v_{x},\beta(x)\}\), thereby mapping assignments to assignments. A specific action determines an abstract one in the obvious way.
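For concreteness, the block encoding of assignments might be realized as follows (our illustration; nothing beyond the block structure is prescribed by the definition, and stopwatch bounds are assumed to be at least \(1\)):

```python
import math

def block_widths(beta):
    # bits per stopwatch block; their sum is b_A
    return {x: math.ceil(math.log2(b + 1)) for x, b in beta.items()}

def encode(xi, beta, order):
    """Concatenate the binary representations of the values, one block per stopwatch."""
    w = block_widths(beta)
    return "".join(format(xi[x], f"0{w[x]}b") for x in order)

def decode(bits, beta, order):
    """Inverse of encode; clipping to beta[x] mirrors the convention for actions."""
    w, xi, i = block_widths(beta), {}, 0
    for x in order:
        xi[x] = min(int(bits[i:i + w[x]], 2), beta[x])
        i += w[x]
    return xi

# toy usage with two stopwatches of bounds 271 and 540 (9 + 10 = 19 = b_A bits)
beta, order = {"x_cd": 271, "x_break": 540}, ["x_cd", "x_break"]
s = encode({"x_cd": 270, "x_break": 45}, beta, order)
print(s, decode(s, beta, order))
```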
A _specific stopwatch automaton_ is defined like an abstract one but with specific guards and actions replacing abstract ones. A specific stopwatch automaton determines an abstract one taking the abstract guards and actions as those determined by the specific ones. Computations of specific stopwatch automata and the language they accept are defined as those of the corresponding abstract one. The _size_ \(|\mathbb{A}|\) of a specific stopwatch automaton \(\mathbb{A}\) is the length of a reasonable binary encoding of it.
We shall only be concerned with specific stopwatch automata and shall mostly omit the qualification 'specific'.
### A definitorial variation
To showcase the robustness of our definition and for later use, we mention a natural variation of our definition and show it is inessential.
Define a _\(P(\Sigma)\)-labeled (specific) stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\)_ like a (specific) stopwatch automaton but with \(\lambda:Q\to P(\Sigma)\setminus\{\varnothing\}\). A computation
\[(q_{0},\xi_{0})\stackrel{{ t_{0}}}{{\to}}(q_{1},\xi_{1}) \stackrel{{ t_{1}}}{{\to}}(q_{2},\xi_{2})\stackrel{{ t_{2}}}{{\to}}\cdots\stackrel{{ t_{\ell-1}}}{{\to}}(q_{\ell},\xi_{\ell}). \tag{1}\]
is said to _read_ any word \(a_{0}^{t_{0}}\cdots a_{\ell-1}^{t_{\ell-1}}\) with \(a_{i}\in\lambda(q_{i})\) for every \(i<\ell\). The language \(L(\mathbb{A})\) of \(\mathbb{A}\) is defined as before.
A stopwatch automaton can be seen as a \(P(\Sigma)\)-labeled stopwatch automaton whose state labels are singletons. Conversely, given a \(P(\Sigma)\)-labeled stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) we define a stopwatch automaton \(\mathbb{A}^{\prime}=(Q^{\prime},\Sigma,X,\lambda^{\prime},\beta,\zeta^{\prime},\Delta^{\prime})\) as follows: its states \(Q^{\prime}\) are pairs \((q,a)\in Q\times\Sigma\) such that \(a\in\lambda(q)\); for the start and accept states choose any \((q,a)\) for \(q\) the start, resp., accept state of \(\mathbb{A}\). The \(\lambda^{\prime}\)-label of \((q,a)\in Q^{\prime}\) is \(a\) and stopwatch \(x\in X\) is active (according \(\zeta^{\prime}\)) in \((q,a)\) if and only if it is active in \(q\) (according \(\zeta\)). We let \(\Delta^{\prime}\) contain a transition \(((q,a),g,\alpha,(q^{\prime},a^{\prime}))\) if \((q,a),(q^{\prime},a^{\prime})\in Q^{\prime}\) and \((q,g,\alpha,q^{\prime})\in\Delta\). Further, we add transitions with trivial guards and actions from \((q,a)\in Q^{\prime}\) to \((q,a^{\prime})\in Q^{\prime}\).
Given a computation of \(\mathbb{A}\) as above, choose any \(a_{i}\in\lambda(q_{i})\) for every \(i<\ell\). Then
\[((q_{0},a_{0}),\xi_{0})\stackrel{{ t_{0}}}{{\to}}((q_{1},a_{1}),\xi_{1})\stackrel{{ t_{1}}}{{\to}}((q_{2},a_{2}),\xi_{2}) \stackrel{{ t_{2}}}{{\to}}\cdots\stackrel{{ t_{\ell-1}}}{{\to}}((q_{ \ell},a_{\ell}),\xi_{\ell}). \tag{2}\]
is a computation of \(\mathbb{A}^{\prime}\). The choice of the \(a_{i}\) can be made so that this computation reads the same word as the computation (1). If (1) is initial (accepting), make (2) initial (accepting) by adding a \(\stackrel{{ 0}}{{\to}}\)-transition from (to) the start (accept) state of \(\mathbb{A}^{\prime}\).
Conversely, if (2) is a computation of \(\mathbb{A}^{\prime}\), then (1) is a computation of \(\mathbb{A}\) that reads the same word. To sum up:
**Proposition 11**.: _There is a polynomial time computable function that maps every \(P(\Sigma)\)-labeled stopwatch automaton \(\mathbb{A}\) to a stopwatch automaton \(\mathbb{A}^{\prime}\) with \(B_{\mathbb{A}^{\prime}}=B_{\mathbb{A}}\) and \(L(\mathbb{A})=L(\mathbb{A}^{\prime})\)._
## 4 A stopwatch automaton for Regulation 561
Aside from expressivity and tractability, we stressed naturality as a criterion of models for algorithmic law. In this section and the next section we make the point for stopwatch automata by implementing Regulation 561. As already mentioned, Regulation 561 is a complex set of articles concerning sequences of activities of truck drivers. Possible activities are _driving_, _resting_ or _other work_. The activities over time are recorded by tachographs and formally understood as words over the alphabet \(\Sigma:=\{d,r,w\}\). In the real world time units are minutes. Regulation 561 limits driving and work times by demanding breaks, daily rest periods and weekly rest periods, the latter two of which can be regular or reduced under various conditions.
We construct a stopwatch automaton that accepts precisely the words over \(\Sigma\) that represent activity sequences that are legal according to Regulation 561. The states \(Q\) of the automaton are:
_drive, break, other work,_
_reduced daily, regular daily, reduced weekly, regular weekly,_
_compensate1, compensate2, week, start, accept._
The states in the first row have the obvious meaning. The states in the second row represent different kinds of rest periods. The function \(\lambda\) labels _other work_ by \(w\), _drive_ by \(d\) and all other states by \(r\). The states _compensate1_ and _compensate2_ are used for the most complicated part of Regulation 561 that demands certain compensating rest periods whenever a weekly rest period is reduced. The state _week_ is auxiliary, and accepting computations spend \(0\) time in it. The same is true for _start_. So, the \(\lambda\)-labels of _start_ and _week_ do not matter.
We construct the automaton stepwise implementing one article after the next, introducing stopwatches along the way. For each stopwatch \(x\) we state its bound \(\beta(x)\) and the states \(q\) in which it is active, i.e., specifying the pairs \((x,q)\in\zeta\). We shall refer to stopwatches that are nowhere active as _counters_ or _registers_, depending on their informal usage; a _bit_ is a counter with bound \(1\).
We describe a transition \((q,g,\alpha,q^{\prime})\) saying that there is a transition from \(q\) to \(q^{\prime}\) with guard \(g\) and action \(\alpha\). We specify guards by a list of expressions of the form \(z\leqslant r\) or \(z+z^{\prime}>r\) or the like for \(r\in\mathbb{N}\); this is shorthand for a circuit that checks the conjunction of these conditions. We specify actions by lists of expressions of the form \(z:=r\) or \(z:=z^{\prime}+r\) or the like for \(z,z^{\prime}\in X\) and \(r\in\mathbb{N}\); this is shorthand for the action that carries out the stated re-assignments of values in the order given by the list. These lists are also described stepwise treating one article after the next. As a mode of speech, when treating a particular law,
we shall say that a given transition has this or that action or guard: what we mean is that the actions or guards of the transition of the final automaton are given by the lists of these statements in order of appearance (mostly the order won't matter).
We illustrate this mode of speech by describing the automaton around _start_: let \(x_{\mathit{start}}\) be a stopwatch with bound \(1\) and active at _start_; there are no transitions to _start_ and transitions from _start_ to all other states except _week_; these transitions have guard \(x_{\mathit{start}}=0\). Later these transitions shall get more guards and also some actions. These stipulations mean more precisely the following: the bound \(\beta\) satisfies \(\beta(x_{\mathit{start}})=1\); the set \(\Delta\) contains for any state \(q\notin\{\mathit{week},\mathit{start}\}\) the transition \((\mathit{start},g,\alpha,q)\) where the guard \(g\) checks the conjunction of \(x_{\mathit{start}}=0\) and the other guards introduced later, and the action \(\alpha\) carries out the assignments and re-assignments as specified later; further, \((x_{\mathit{start}},q)\in\zeta\) if and only if \(q=\mathit{start}\).
We loosely divide Regulation 561 into daily and weekly demands. We first describe how to implement the daily demands using the first 5 states and _daily driving_ and _accept_. The other states will be used to implement the weekly demands.
During the construction we shall explicitly collect the constants appearing in the articles and denote them by \(t_{0},\ldots,t_{16}\). Our construction is such that these constants determine all guards, actions and bounds in an obvious way. Knowing this will be useful for the discussion in later sections.
### Daily demands
We use the first 3 states to implement the law about _continuous driving_:
**Article 7** (1st part): After a driving period of four and a half hours a driver shall take an uninterrupted break of not less than 45 minutes, unless he takes a rest period.
We use a stopwatch \(x_{\mathit{cd}}\) with bound \(4.5h+1=271\) that is active in _drive_. Further, we use a stopwatch \(x_{\mathit{break}}\) with bound \(9h\) that is active in _break_. For the law under consideration we could use the bound of \(4.5h+1\); the reason we use \(9h\) will become clear later when implementing Article 8.7.
There are transitions back and forth between any two of the states _break_, _drive_ and _other work_. We give the transitions to _break_ action \(x_{\mathit{break}}:=0\), and the transitions from _drive_ the guard \(x_{\mathit{cd}}\leqslant 4.5h\). This ensures that a computation staying in _drive_ for more than 4.5h will not be able to leave this state, so cannot be accepting. We add two transitions from _break_ to both _drive_ and _other work_ with guard \(x_{\mathit{break}}\geqslant 45\) and action \(x_{\mathit{break}}:=0;\ x_{\mathit{cd}}:=0\).
Transitions to _regular daily_ and _reduced daily_ have action \(x_{\mathit{cd}}:=0\): this ensures the "unless..." statement in Article 7 (transitions to weekly rest periods described below will also have this action). The first part of Article 7 uses the constants \(t_{0}:=4.5h=270;\ t_{1}:=45\) (the constant \(9h\) is denoted later by \(t_{16}\)).
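Continuing the illustration, the Article 7 fragment of Figure 1 can be written down in the representation used by the `accepts` sketch of Section 3 (a simplified rendering of ours: the \(x_{\mathit{start}}\) bookkeeping and the daily and weekly guards on transitions to _accept_ are omitted, and guards and actions are plain Python predicates and maps rather than circuits):

```python
labels = {"start": "r", "drive": "d", "break": "r", "other work": "w", "accept": "r"}
beta   = {"x_cd": 271, "x_break": 540}
active = {("x_cd", "drive"), ("x_break", "break")}

true      = lambda xi: True
ident     = lambda xi: dict(xi)
reset_brk = lambda xi: {**xi, "x_break": 0}
reset_all = lambda xi: {**xi, "x_break": 0, "x_cd": 0}

work_states = ["drive", "break", "other work"]
transitions = []
for q in ["start"] + work_states:
    for q2 in work_states + ["accept"]:
        if q != q2:
            guard  = (lambda xi: xi["x_cd"] <= 270) if q == "drive" else true
            action = reset_brk if q2 == "break" else ident
            transitions.append((q, guard, action, q2))
# Article 7: an uninterrupted break of >= 45 min resets continuous driving
for q2 in ["drive", "other work"]:
    transitions.append(("break", lambda xi: xi["x_break"] >= 45, reset_all, q2))

A7 = {"labels": labels, "beta": beta, "active": active, "transitions": transitions}
print(accepts(A7, "d" * 270 + "r" * 45 + "d" * 60))  # True: break taken in time
print(accepts(A7, "d" * 271))                        # False: 271 min of driving
```

Extending this fragment with the guards and actions introduced in the rest of this section yields the full automaton.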
Article 7 allows the demanded break to be divided into two shorter ones:
Article 7 (2nd part): This break may be replaced by a break of at least 15 minutes followed by a break of at least 30 minutes each distributed over the period in such a way as to comply with the provisions of the first paragraph.
To implement this possibility, we use a bit \(b_{rb}\) that, intuitively, indicates a reduced break. We add transitions from _break_ to _other work_ and _drive_ with guard \(15\leqslant x_{\textit{break}}<45\) and action \(b_{rb}:=1;\ x_{\textit{break}}:=0\). We note that these transitions do not have action \(x_{cd}:=0\). We add transitions from _break_ to _other work_ and _drive_ with guards \(b_{rb}=1\) and \(30\leqslant x_{\textit{break}}\) and action \(b_{rb}:=0;\ x_{\textit{cd}}:=0;\ x_{\textit{break}}:=0\). Transitions to states representing daily or weekly rests introduced below all get action \(b_{rb}:=0\). The second part of Article 7 uses the constant \(t_{2}:=15\); we do not introduce a name for 30 but view this constant as equal to \(t_{1}-t_{2}=45-15\).
Article 4.(k) defines 'daily driving time' as the accumulated driving time between two daily rest periods. According to Article 4.(g) daily rest periods can be regular or reduced, the former meaning at least \(11h\) of rest, the latter less than \(11h\) but at least \(9h\) of rest. These are represented by the states _regular daily_ and _reduced daily_.
Article 8.1: A driver shall take daily and weekly rest periods.
Article 8.2: Within each period of 24 hours after the end of the previous daily rest period or weekly rest period a driver shall have taken a new daily rest period. If the portion of the daily rest period which falls within that 24 hour period is at least nine hours but less than 11 hours, then the daily rest period in question shall be regarded as a reduced daily rest period.
Weekly rest periods are treated in the next subsection. We use a stopwatch \(x_{\textit{day}}\) with bound \(24h+1\) which is active in all states except _accept_ and _start_, and a stopwatch \(x_{dr}\) with bound \(11h\) active in _reduced daily_ and _regular daily_. We have transitions back and forth between the states _break_, _drive_, _other work_ and the states _regular daily_, _reduced daily_. The transitions to _regular daily_ are guarded by \(x_{\textit{day}}\leqslant 24h-11h=780;\ b_{rb}=0\); transitions to _reduced daily_ are guarded by \(x_{day}\leqslant 24h-9h=900;\ b_{rb}=0\). The transitions from _regular daily_ are guarded by \(x_{dr}\geqslant 11h\), and the transitions from _reduced daily_ are guarded by \(11h>x_{dr}\geqslant 9h\); later we shall refer to these guards as _definitorial_ for the states _regular daily_ and _reduced daily_. Transitions from _regular daily_, _reduced daily_ have action \(x_{dr}\coloneqq 0,x_{day}\coloneqq 0\).

Figure 1: Illustration of Article 7 (first part); stopwatches \(x_{cd},x_{\textit{break}}\) are shown at the states where they are active.
All transitions to _accept_ get guard \(x_{day}\leqslant 24h\). Note that an accepting computation cannot involve an assignment satisfying \(x_{day}>24h\), so eventually has to visit and leave _regular daily_ or _reduced daily_ (or their weekly counterparts, see below). This ensures Article 8.1 for daily rest periods. These laws use constants \(t_{3}\coloneqq 24h=1440,t_{4}\coloneqq 11h=660,t_{5}\coloneqq 9h=540\).
Actually, the definition of regular daily rest periods in Article 4.(g) is more complicated:
'regular daily rest period' means any period of rest of at least 11 hours. Alternatively, this regular daily rest period may be taken in two periods, the first of which must be an uninterrupted period of at least 3 hours and the second an uninterrupted period of at least nine hours,
To implement this we use a bit \(b_{dr}\) indicating that a \(3h\) part of a regular daily rest period has been taken. We duplicate the transitions from _regular daily_ but replace the guard \(x_{dr}\geqslant 11h\) by \(x_{dr}\geqslant 9h,b_{dr}=1\). To add the possibility of taking a partial regular daily rest period of at least \(3h\) we add transitions from _regular daily_ to _drive_ and _other work_ with guards \(b_{dr}=0,3h\leqslant x_{dr}<11h\) and action \(b_{dr}\coloneqq 1\); note these transitions do not have action \(x_{day}\coloneqq 0\). All transitions with action \(x_{day}\coloneqq 0\) also get action \(b_{dr}\coloneqq 0\), including those modeling weekly rest periods described below. This uses the constants \(t_{6}=3h=180,t_{7}\coloneqq 9h=540\).
The final daily demand constrains daily driving times:
**Article 6.1**: The daily driving time shall not exceed nine hours. However, the daily driving time may be extended to at most 10 hours not more than twice during the week.
To implement Article 6.1 we use a stopwatch \(x_{dd}\) active at _drive_ with bound \(10h+1\) to measure the daily driving time. Additionally, we use a counter \(c_{dd}\) with bound 3. As described later, this counter will be reset to 0 when the week changes. Duplicate the transitions to _regular daily_ and _reduced daily_: one gets guard \(x_{dd}\leqslant 9h\), the other guard \(9h<x_{dd}\leqslant 10h\) and action \(c_{dd}\coloneqq c_{dd}+1\). Transitions from _regular daily_ and _reduced daily_ get guard \(c_{dd}\leqslant 2\). This uses constants \(t_{8}\coloneqq 10h=600\) and \(t_{9}\coloneqq 9h=540\).
### Weekly demands
Article 4(i) defines a week as a calendar week, i.e., as the time between Monday 00:00 and Sunday 24:00. Our formalization of real tachograph recordings by timed words replaces the time-points of tachograph recordings by numbers starting from 0. Hence time is shifted and the information of the beginning of weeks is lost. A possibility to remedy this is to use timed words where the beginnings of weeks are marked, or at least the first of them. For simplicity, we restrict attention to tachograph recordings starting at the beginning of a week, that is,
we pretend that time-point \(0\) starts a week. We then leave it to the automaton to determine the time-points when weeks change.
To this end, we use the auxiliary state _week_ and a stopwatch \(x_{\mathit{week}}\) with bound \(7\cdot 24h+1=168h+1\) that is active at all states except _accept_ and _start_. All transitions to _accept_ are guarded by \(x_{\mathit{week}}\leqslant 168h\). The state _week_ has incoming transitions from all states except _accept_ and transitions to all states except _start_. All these transitions are guarded by \(x_{\mathit{week}}=168h\) and the outgoing transitions have actions \(x_{\mathit{week}}:=0\) and \(c_{\mathit{dd}}:=0\) (see the implementation of Article 6.1 above). This ensures that every accepting computation of \(\mathbb{A}\) enters _week_ for \(0\) time units exactly every week, i.e., every \(168h\).
Additionally, we want the automaton to switch from _week_ back to the state it came from. To this end we introduce a bit \(b_{q}\) for each state \(q\neq\mathit{accept}\). We give the transition from \(q\) to _week_ the action \(b_{q}:=1\), and the transition from _week_ to \(q\) the guard \(b_{q}=1\) and the action \(b_{q}:=0\). The transition from _week_ to _accept_ has no guard involving the bits \(b_{q}\). This uses the constant \(t_{10}:=168h=10080\).
Much of the following implementation work is done by adding guards and actions to the transitions from and to _week_. For example, we can readily implement
Article 6.2: The weekly driving time shall not exceed 56 hours and shall not result in the maximum weekly working time laid down in Directive 2002/15/EC being exceeded.
Article 6.3: The total accumulated driving time during any two consecutive weeks shall not exceed 90 hours.
The time laid down by Directive 2002/15/EC is \(60h\). Use a stopwatch \(x_{\mathit{ww}}\) with bound \(60h+1\) that is active at _drive_ and _other work_. Use a stopwatch \(x_{\mathit{dw}}\) with bound \(56h+1\) active at _drive_. To implement Article 6.2, the transitions to _week_ and _accept_ have guard \(x_{\mathit{dw}}\leqslant 56h,x_{\mathit{ww}}\leqslant 60h\), and the transitions from _week_ have action \(x_{\mathit{dw}}:=0,x_{\mathit{ww}}:=0\). Note that accepting computations contain only nodes with assignments satisfying \(x_{\mathit{dw}}\leqslant 56h\) and \(x_{\mathit{ww}}\leqslant 60h\). This implements Article 6.2.
To implement Article 6.3 we have to remember the value \(x_{\mathit{dw}}\) of the previous week. We use a register \(x^{\prime}_{\mathit{dw}}\) with the same bound as \(x_{\mathit{dw}}\) and give the transitions from _week_ the action \(x^{\prime}_{\mathit{dw}}:=x_{\mathit{dw}}\). Note \(x^{\prime}_{\mathit{dw}}\) functions like a register in that it just stores a value. We then guard all transitions to _accept_ by \(x^{\prime}_{\mathit{dw}}+x_{\mathit{dw}}\leqslant 90h\). These articles use constants \(t_{11}:=56h=3360,t_{12}:=60h=3600\) and \(t_{13}:=90h=5400\).
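Stripped of the automaton machinery, the register \(x^{\prime}_{\mathit{dw}}\) just carries last week's driving total into the current week. The sketch below (our own illustration, not the automaton; it covers only the driving-time parts of Articles 6.2 and 6.3, since the \(60h\) working-time cap would also need the other-work totals) checks a list of per-week driving totals in minutes, with `prev` playing the role of \(x^{\prime}_{\mathit{dw}}\).

```python
def weekly_driving_ok(weekly_minutes):
    """Articles 6.2 / 6.3 read off per-week driving totals (in minutes):
    at most 56h in each week, at most 90h in any two consecutive weeks."""
    prev = 0                      # plays the role of the register x'_dw
    for cur in weekly_minutes:
        if cur > 56 * 60:
            return False
        if prev + cur > 90 * 60:
            return False
        prev = cur                # x'_dw := x_dw at the week change
    return True

assert weekly_driving_ok([50 * 60, 40 * 60, 45 * 60])
assert not weekly_driving_ok([50 * 60, 41 * 60])      # 91h over two consecutive weeks
```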
We now treat the articles concerning weekly rest periods. According to Article 4.(h), weekly rest periods can be regular or reduced, the former meaning at least \(45h\) of rest, the latter less than \(45h\) but at least \(24h\) of rest. These rest periods are represented by the states _regular weekly_ and _reduced weekly_.
To implement their definition we use a stopwatch \(x_{\mathit{wr}}\) with bound \(45h\) active in these two states. For the two states we add transitions from and to _drive_ and _other work_ and transitions to _accept_: those from _regular weekly_ have guard \(x_{\mathit{wr}}\geqslant 45h\) and action \(x_{\mathit{wr}}:=0\), and those from _reduced weekly_ have guards \(45h>x_{\mathit{wr}}\geqslant 24h\) and action \(x_{\mathit{wr}}:=0\). Later
we shall refer to these guards as _definitorial guards_ for _regular weekly_ and _reduced weekly_, respectively. This uses the constants \(t_{14}:=45h=2700,t_{15}:=24h=1440\).
We start with some easy implementations:
1. Article 8.6 (3rd part): A weekly rest period shall start no later than at the end of six 24-hour periods from the end of the previous weekly rest period. Article 8.3: A daily rest period may be extended to make a regular weekly rest period or a reduced weekly rest period. Article 8.4: A driver may have at most three reduced daily rest periods between any two weekly rest periods.
Article 8.6 (3rd part) is implemented with the help of a stopwatch \(x_{pw}\) that measures the time since the previous weekly rest period. It has bound \(6\cdot 24h+1\) and is active in all states except _start_ and _accept_. We give the transitions to _regular weekly_ and _reduced weekly_ the guard \(x_{pw}\leqslant 6\cdot 24h\), and the transitions from these two states the action \(x_{pw}:=0\). This law uses constant \(t_{16}:=6\cdot 24h=8640\).
For Article 8.3 we simply copy the guards and actions of the transitions from _drive_ and _other work_ to _regular daily_ to the corresponding transitions to both _regular weekly_ and _reduced weekly_. Below we shall add more guards and actions. For Article 8.4 we use a counter \(c_{rd}\) with bound 4. We add guard \(c_{rd}\leqslant 2\) and action \(c_{rd}:=c_{rd}+1\) to the transitions to _reduced daily_ and the action \(c_{rd}:=0\) to the transitions leaving _reduced weekly_ and _regular weekly_.
We still have to implement Article 8.1 for weekly rest periods, and additionally
1. Article 8.9: A weekly rest period that falls in two weeks may be counted in either week, but not in both.
We use two bits \(b_{wr},b_{used}\) meant to indicate whether a weekly rest period has been taken in the current week, and whether the current weekly rest period is used for this. The transitions from _drive_ or _other work_ to _reduced weekly_ or _regular weekly_ are duplicated: one gets guard \(b_{wr}=0\) and action \(b_{used}:=1;\ b_{wr}:=1\), the other gets no further guards and actions. Transitions from _reduced weekly_ or _regular weekly_ get action \(b_{used}:=0\). The transitions to _week_ get guard \(b_{wr}=1\).
Each transition from _week_ to _reduced weekly_ or _regular weekly_ is triplicated: the first gets additional guard \(b_{used}=1\) and action \(b_{used}:=0;\ b_{wr}:=0\), the second gets guard \(b_{used}=0\) and action \(b_{wr}:=0\), and the third gets guard \(b_{used}=0\) and action \(b_{used}:=1;\ b_{wr}:=1\). This means that when the week changes during a weekly rest period and this rest period is not used, it can be used for the next week.
The most complicated part of Regulation 561 are the rules governing reductions of weekly rest periods. The regulation starts as follows:
1. Article 8.6 (1st part): In any two consecutive weeks a driver shall take at least two regular weekly rest periods, or one regular weekly rest period and one reduced weekly rest period of at least 24 hours.
We use a bit \(b_{rw}\) indicating whether the previous weekly rest period was reduced: transitions to _reduced weekly_ have guard \(b_{rw}=0\) and action \(b_{rw}:=1\). Transitions to _regular weekly_ have action \(b_{rw}:=0\). The regulation continues as follows:
Article 8.6 (2nd part): However, the reduction shall be compensated by an equivalent period of rest taken en bloc before the end of the third week following the week in question.
Article 8.7: Any rest taken as compensation for a reduced weekly rest period shall be attached to another rest period of at least nine hours.
We introduce two registers \(x_{c1},x_{c2}\) with bounds \(45h-24h\). We shall use the following informal mode of speech for the discussion: a reduced weekly rest period creates a 'compensation obligation', namely an additional resting time \(x_{c1}>0\) or \(x_{c2}>0\). The obligations are 'fulfilled' by setting these registers back to \(0\). Note that compensation obligations are created by reduced weekly rest periods and, by Article 8.6 (1st part), this can happen at most every other week. As obligations have to be fulfilled within 3 weeks, at any given time a legal driver can have at most two obligations.
We now give the implementation. Obligations are produced by transitions from _reduced weekly_ (recall \(x_{wr}\) records the resting time in _reduced weekly_): duplicate each such transition, give one guard \(x_{c1}=0\) and action \(x_{c1}:=45h-x_{wr}\), and the other guard \(x_{c1}>0;\ x_{c2}=0\) and action \(x_{c2}:=45h-x_{wr}\). The 3 week deadline to fulfill the obligations is implemented by two counters \(c_{c1},c_{c2}\) with bound 4. These counters are increased by transitions from _week_ but only if some obligation is actually recorded: transitions from _week_ get action \(c_{c1}:=c_{c1}+\operatorname{sgn}(x_{c1});\ c_{c2}:=c_{c2}+\operatorname{sgn} (x_{c2})\). To ensure the deadline, transitions to _week_ get guard \(c_{c1}\leqslant 3;\ c_{c2}\leqslant 3\).
We now implement a way to fulfill obligations, i.e., to set \(x_{c1}\) and \(x_{c2}\) back to \(0\). This is done with the states _compensate1_ and _compensate2_ whose \(\lambda\)-label is \(r\). We use a stopwatch \(x_{cr}\) with bound \(45h-24h\) active at these states. We describe the transitions involving _compensate1_. It receives transitions from the states with \(\lambda\)-label \(r\), that is, _regular daily_, _reduced daily_, _regular weekly_, _reduced weekly_ and _break_. The transition from _break_ has guard \(x_{\mathit{break}}\geqslant 9h\), the others have their respective definitorial guards (e.g., the one from _regular weekly_ has guard \(x_{wr}\geqslant 45h\)). Transitions from _compensate1_ go to _drive_, _other work_ and _accept_. These have guard \(x_{cr}\geqslant x_{c1}\) and action \(x_{c1}:=0;\ c_{c1}:=0\). Additionally, we already introduced transitions from and to _week_: the transition to _week_ is duplicated, one gets guard \(x_{cr}<x_{c1}\), the other gets guard \(x_{cr}\geqslant x_{c1}\) and action \(x_{c1}:=0\). Thus, when the week changes during compensation and at a time-point when the obligation is fulfilled, the counter is not increased.
The transitions from and to _compensate2_ are analogous with \(x_{c2},\ c_{c2}\) replacing \(x_{c1},\ c_{c1}\). These laws use the constant \(9h=540\), already available as \(t_{7}\); the bound \(45h-24h=1560\) equals \(t_{14}-t_{15}\).
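The compensation bookkeeping can be summarized independently of the automaton. The sketch below is a deliberately simplified illustration of our own: it mirrors the registers \(x_{c1},x_{c2}\) and the counters \(c_{c1},c_{c2}\) (obligations are created by reduced weekly rests, age at week changes, and must be cleared before their counter exceeds 3), but it ignores Article 8.7's requirement that the compensating rest be attached to another rest of at least nine hours, which the automaton enforces through the definitorial guards.

```python
H = 60  # minutes per hour

class Compensation:
    """Obligation bookkeeping: at most two open obligations (x_c1, x_c2), each aged
    by week changes (c_c1, c_c2) and to be cleared before its counter exceeds 3."""
    def __init__(self):
        self.obligations = []                  # entries [missing_rest_minutes, weeks_open]

    def end_reduced_weekly_rest(self, rest_minutes):
        assert 24 * H <= rest_minutes < 45 * H
        assert len(self.obligations) < 2       # a legal driver never has more than two
        self.obligations.append([45 * H - rest_minutes, 0])

    def week_change(self):
        for ob in self.obligations:            # c := c + sgn(x): only open obligations age
            ob[1] += 1
        assert all(weeks <= 3 for _, weeks in self.obligations)

    def compensate(self, rest_minutes):
        """An en-bloc rest clears the oldest obligation it is long enough for."""
        for ob in sorted(self.obligations, key=lambda o: -o[1]):
            if rest_minutes >= ob[0]:
                self.obligations.remove(ob)
                return True
        return False

c = Compensation()
c.end_reduced_weekly_rest(30 * H)              # 15h of rest are still owed
c.week_change(); c.week_change()
assert c.compensate(16 * H)                    # cleared before the deadline
```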
This finishes the definition of our automaton. We close this section with some remarks on the formalization:
**Remark 12**.:
1. Regulation 561 contains a few laws concerning multi-manning that give rise to an additional activity _available_ and a distinction between breaks and rests. This is omitted in our treatment.
2. Article 7.2 is formally unclean: the second paragraph allows an exception to the first that obviously cannot "comply with the provisions of the first paragraph". A reasonable formalization requires an interpretational change to the law as written. The following two points, as well as [24, 31], give more such examples.
3. The definition in Article 4.(k) forgets the boundary case of a new driver: without any (daily) rest period there cannot be any daily driving time. A similar problem appears with Article 8.6 (3rd part) when there is no previous weekly rest period.
4. Concerning Article 6.1, recall that daily driving times are periods delimited by daily rest periods and a week is defined as a calendar week starting at Monday 00:00. Consider a \(10h\) extended daily driving time starting on a Sunday and ending on a Monday. To which one of the two weeks should it be counted? The law seems underspecified here. Our formalization assigns it to the week that starts on Monday. Various tachograph readers make different choices. For example, the software _Police Controller_ has an option to fix the choices or to choose the distribution so as to minimize the fine [30].
5. The nomenclature in Regulation 561 is confusing. A _day_ is determined by daily rest periods, a _week_ by the calendar, while _weekly_ (e.g., in Article 8.9) does not refer to calendar weeks. Additionally, the regulation does not state what should be done when a leap second is added on a Sunday so that the time 24:00:01 exists.
6. For example, \((dr)^{270}\) is legal according to Article 7 but likely not in line with the spirit of the law. Another regulation ((EU) 2016/799) stipulates that any minute of rest between two minutes of driving will be considered as driving, ruling out the above example. Then \((ddrr)^{135}\) is still legal. We expect that it is generally easy to construct artificial counterintuitive cases.
## 5 Theory of stopwatch automata
In this section we observe that stopwatch automata have the same expressive power as MSO over finite words but a relatively tame model-checking complexity. We also give efficient algorithms for consistency-checking and scheduling (see Section 1.2). Finally, we mention a version of stopwatch automata going beyond MSO.
### Expressivity
**Lemma 13**.: _Every regular language is the language of some stopwatch automaton._
Proof.: Given a non-deterministic finite automaton \(\mathbb{B}=(S,\Sigma,I,F,\Gamma)\) as described in Subsection 2.1, we define a stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) such that \(L(\mathbb{B})=L(\mathbb{A})\).
The states \(Q\) of \(\mathbb{A}\) are _start_ and _accept_ together with the states \(S\times\Sigma\) labeled \(\lambda\big{(}(s,a)\big{)}:=a\) (the labels of _start_ and _accept_ are irrelevant). We use a stopwatch \(x\) intended to force the automaton to spend \(1\) time unit in every state \((s,a)\): it has bound \(\beta(x):=2\) and is active everywhere, i.e., \(\zeta:=\{x\}\times Q\).
Each transition from some \((s,a)\) to \((s^{\prime},a^{\prime})\) has guard \(x=1\) and action \(x:=0\). We only allow those transitions from \((s,a)\) to \((s^{\prime},a^{\prime})\) when \((s,a,s^{\prime})\in\Gamma\). This defines \(\Delta\) when the states are not _start_ or _accept_. Transitions from _start_ lead to \(I\times\Sigma\) and have guard \(x=0\). Transitions to _accept_ come from \(F\times\Sigma\) and have guard \(x=0\).
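The construction in this proof is mechanical enough to spell out. The sketch below is our own illustration of it: guards and actions are kept as readable strings rather than circuits, and the tuple representation of the automaton is ours, not the paper's.

```python
from itertools import product

def nfa_to_swa(S, Sigma, I, F, Gamma):
    """Build the stopwatch automaton of Lemma 13 from an NFA (S, Sigma, I, F, Gamma),
    where Gamma is a set of triples (s, a, s'). Guards/actions are readable strings."""
    states = ["start", "accept"] + [(s, a) for s, a in product(S, Sigma)]
    label = {(s, a): a for s, a in product(S, Sigma)}   # labels of start/accept irrelevant
    bound = {"x": 2}                                     # beta(x) = 2
    active = {("x", q) for q in states}                  # x is active in every state

    transitions = []
    # One time unit is spent in each state (s, a); the NFA step (s, a, s') is taken.
    for (s, a), (s2, a2) in product(product(S, Sigma), repeat=2):
        if (s, a, s2) in Gamma:
            transitions.append(((s, a), "x = 1", "x := 0", (s2, a2)))
    # Zero time is spent in start before moving to some (s, a) with s initial.
    for s, a in product(I, Sigma):
        transitions.append(("start", "x = 0", "skip", (s, a)))
    # After the last letter the run has just reset x, so it enters F x Sigma with x = 0.
    for s, a in product(F, Sigma):
        transitions.append(((s, a), "x = 0", "skip", "accept"))
    return states, label, bound, active, transitions

# Tiny example: an NFA over {a, b} accepting the words that end with the letter b.
S, Sigma, I, F = {0, 1}, {"a", "b"}, {0}, {1}
Gamma = {(0, "a", 0), (0, "b", 1), (1, "a", 0), (1, "b", 1)}
swa = nfa_to_swa(S, Sigma, I, F, Gamma)
```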
The converse of this lemma is based on the following definition.
**Definition 14**.: Given a stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\), we define the following finite automaton \(\mathbb{B}(\mathbb{A})=(S,\Sigma,I,F,\Gamma)\). For \(S\) we take the set of nodes of \(\mathit{TS}(\mathbb{A})\); we let \(I:=\{(\mathit{start},\xi_{0})\}\) where \(\xi_{0}\) is constantly \(0\), and \(F\) contain the nodes \((q,\xi)\) of \(\mathit{TS}(\mathbb{A})\) such that \((q,\xi)\stackrel{{ 0^{*}}}{{\rightarrow}}(\mathit{accept},\xi^{ \prime})\) for some assignment \(\xi^{\prime}\). Here, \(\stackrel{{ 0^{*}}}{{\rightarrow}}\) denotes the transitive and reflexive closure of \(\stackrel{{ 0}}{{\rightarrow}}\). We let \(\Gamma\) contain \(\big{(}(q,\xi),a,(q^{\prime},\xi^{\prime})\big{)}\) if \(\lambda(q^{\prime})=a\) and there is \(\xi^{\prime\prime}\) such that \((q,\xi)\stackrel{{ 0^{*}}}{{\rightarrow}}(q^{\prime},\xi^{ \prime\prime})\stackrel{{ 1}}{{\rightarrow}}(q^{\prime},\xi^{ \prime})\).
**Theorem 15**.: _A language is regular if and only if it is the language of some stopwatch automaton._
Proof.: One direction follows from Lemma 13. Conversely, given a language that is recognised by some SWA \(\mathbb{A}\) we easily see that \(L(\mathbb{A})=L(\mathbb{B}(\mathbb{A}))\) where \(\mathbb{B}(\mathbb{A})\) is as in Definition 14. That \(L(\mathbb{B}(\mathbb{A}))\subseteq L(\mathbb{A})\) is immediate and for \(L(\mathbb{A})\subseteq L(\mathbb{B}(\mathbb{A}))\) we observe that a step in \(\mathit{TS}(\mathbb{A})\) of duration \(n\) can be obtained by \(n\) consecutive steps of duration \(1\).
The proof of Lemma 13 gives a polynomial time computable function mapping every finite automaton to an equivalent stopwatch automaton. There is no such function for the converse translation, in fact, stopwatch automata are exponentially more succinct than finite automata.
**Proposition 16**.: _For every \(k\) there is a stopwatch automaton \(\mathbb{A}_{k}\) of size \(O(\log k)\) such that every finite automaton accepting \(L(\mathbb{A}_{k})\) has size at least \(k\)._
We defer the proof to the end of Section 5.5.
### Consistency-checking
By Theorem 15 we know that the languages accepted by stopwatch automata are exactly the regular languages. In particular, these languages are closed under intersections. We give an explicit construction of such an automaton computing an intersection because we shall need explicit bounds.
**Lemma 17**.: _Given stopwatch automata \(\mathbb{A},\mathbb{A}^{\prime}\) with bounds \(B_{\mathbb{A}},B_{\mathbb{A}^{\prime}}\) one can compute in time \(O(\|\mathbb{A}\|\cdot\|\mathbb{A}^{\prime}\|)\) a stopwatch automaton \(\mathbb{A}\otimes\mathbb{A}^{\prime}\) with bound \(B_{\mathbb{A}\otimes\mathbb{A}^{\prime}}=B_{\mathbb{A}}\cdot B_{\mathbb{A}^{\prime}}\) such that \(L(\mathbb{A}\otimes\mathbb{A}^{\prime})=L(\mathbb{A})\cap L(\mathbb{A}^{\prime})\)._
Proof.: Let \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) and \(\mathbb{A}^{\prime}=(Q^{\prime},\Sigma,X^{\prime},\lambda^{\prime},\beta^{ \prime},\zeta^{\prime},\Delta^{\prime})\) be stopwatch automata. Without loss of generality, we can assume that
(a) \(X\) and \(X^{\prime}\) are disjoint;
(b) neither \(\mathbb{A}\) nor \(\mathbb{A}^{\prime}\) contains transitions from its accept state;
(c) both \(\Delta\) and \(\Delta^{\prime}\) contain, for every state except the accept state, a transition from the state to itself with trivial guard and action.
We first define an automaton \(\mathbb{A}\times\mathbb{A}^{\prime}\) with alphabet \(\Sigma\times\Sigma\). Its states are \(Q\times Q^{\prime}\), with start and accept states the pairs of the corresponding states of \(\mathbb{A}\) and \(\mathbb{A}^{\prime}\). The stopwatches are \(X\cup X^{\prime}\) with the same bounds as in \(\mathbb{A},\mathbb{A}^{\prime}\). A stopwatch \(x\in X\cup X^{\prime}\) is active in \((q,q^{\prime})\) if either \((x,q)\in\zeta\) or \((x,q^{\prime})\in\zeta^{\prime}\). A state \((q,q^{\prime})\) is labeled \((\lambda(q),\lambda^{\prime}(q^{\prime}))\). The transitions are \(((q_{0},q^{\prime}_{0}),g^{*},\alpha^{*},(q_{1},q^{\prime}_{1}))\) for which there are \((q_{0},g,\alpha,q_{1})\in\Delta\) and \((q^{\prime}_{0},g^{\prime},\alpha^{\prime},q^{\prime}_{1})\in\Delta^{\prime}\) such that \(g^{*}\) computes the conjunction of \(g\) and \(g^{\prime}\) and \(\alpha^{*}\) executes \(\alpha\) and \(\alpha^{\prime}\) in parallel.
This is well-defined by (a). Also by (a) we can write assignments for \(\mathbb{A}\times\mathbb{A}^{\prime}\) as \(\xi\cup\xi^{\prime}\) where \(\xi,\xi^{\prime}\) are assignments for \(\mathbb{A},\mathbb{A}^{\prime}\). We claim that \(\mathbb{A}\times\mathbb{A}^{\prime}\) accepts a word \((a_{0},a^{\prime}_{0})\cdots(a_{n-1},a^{\prime}_{n-1})\in(\Sigma\times\Sigma)^ {n}\) if and only if \(a_{0}\cdots a_{n-1}\in L(\mathbb{A})\) and \(a^{\prime}_{0}\cdots a^{\prime}_{n-1}\in L(\mathbb{A}^{\prime})\).
Indeed, if \(((q_{0},q^{\prime}_{0}),\xi_{0}\cup\xi^{\prime}_{0})\stackrel{{ t_{0}}}{{\to}}\cdots((q_{\ell-1},q^{\prime}_{\ell-1}),\xi_{\ell-1} \cup\xi^{\prime}_{\ell-1})\) is an initial accepting run of \(\mathbb{A}\times\mathbb{A}^{\prime}\), then, by (b), \(q_{i}\) is the accept state of \(\mathbb{A}\) exactly for \(i=\ell-1\). Then \((q_{0},\xi_{0})\stackrel{{ t_{0}}}{{\to}}\cdots(q_{\ell-1},\xi_{ \ell-1})\) is an initial accepting run of \(\mathbb{A}\) that reads \(a_{0}\cdots a_{n-1}\). Analogously, \(a^{\prime}_{0}\cdots a^{\prime}_{n-1}\in L(\mathbb{A}^{\prime})\).
Conversely, given \(a_{0}\cdots a_{n-1}\in L(\mathbb{A})\) and \(a^{\prime}_{0}\cdots a^{\prime}_{n-1}\in L(\mathbb{A}^{\prime})\) we can choose initial accepting runs of \(\mathbb{A}\) and \(\mathbb{A}^{\prime}\) reading these words, respectively, which have the form:
\[(q_{0},\xi_{0})\stackrel{{ 0^{*}}}{{\to}}(r_{0},\eta_{0}) \stackrel{{ 1}}{{\to}}(q_{1},\xi_{1})\stackrel{{ 0^{*}}}{{\to}}(r_{1},\eta_{1}) \stackrel{{ 1}}{{\to}}\cdots(q_{\ell-1},\xi_{\ell-1}) \stackrel{{ 0^{*}}}{{\to}}(r_{\ell},\eta_{\ell}),\] \[(q^{\prime}_{0},\xi^{\prime}_{0})\stackrel{{ 0^{*}}}{{\to}}(r^{\prime}_{0},\eta^{\prime}_{0}) \stackrel{{ 1}}{{\to}}(q^{\prime}_{1},\xi^{\prime}_{1}) \stackrel{{ 0^{*}}}{{\to}}(r^{\prime}_{1},\eta^{\prime}_{1}) \stackrel{{ 1}}{{\to}}\cdots(q^{\prime}_{\ell^{\prime}-1},\xi^{\prime}_{\ell-1} )\stackrel{{ 0^{*}}}{{\to}}(r^{\prime}_{\ell^{\prime}},\eta^{\prime}_{\ell^{\prime}}).\]
Here, \(\stackrel{{ 0^{*}}}{{\to}}\) denotes the reflexive and transitive closure of \(\stackrel{{ 0}}{{\to}}\) in \(\mathit{TS}(\mathbb{A})\) and \(\mathit{TS}(\mathbb{A}^{\prime})\), respectively. Then \(\ell=\ell^{\prime}=n\). By (c), we can assume that the \(\stackrel{{ 0^{*}}}{{\to}}\)-paths have the same length. Then the two runs have the same length and can be combined in the obvious way into an initial accepting run of \(\mathbb{A}\times\mathbb{A}^{\prime}\) reading \((a_{0},a^{\prime}_{0})\cdots(a_{n-1},a^{\prime}_{n-1})\). This proves the claim.
The automaton \(\mathbb{A}\otimes\mathbb{A}^{\prime}\) is easily obtained from a modification of \(\mathbb{A}\times\mathbb{A}^{\prime}\) whose initial accepting runs are precisely those initial accepting runs of \(\mathbb{A}\times\mathbb{A}^{\prime}\) that read words over \(\{(a,a)\;|\;a\in\Sigma\}\). Such a modification is easy to obtain: add a new stopwatch \(y\) with bound \(1\) to \(\mathbb{A}\times\mathbb{A}^{\prime}\) that is active in all states; every transition gets action \(y:=0\) and every transition from a state \((q,q^{\prime})\) with \(\lambda(q)\neq\lambda(q^{\prime})\) gets guard \(y=0\).
The claims about the bound of \(\mathbb{A}\otimes\mathbb{A}^{\prime}\) and the time needed to compute it are clear.
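The core of the product construction is the pairing of transitions, with guards conjoined and actions run in parallel; disjointness of the stopwatch sets (assumption (a)) is what makes the parallel execution well defined. A minimal sketch of this pairing, with guards and actions as Python callables over a shared value dictionary (our representation, not the circuit encoding of the paper):

```python
def conjoin(g, g2):
    """Guard of a product transition: both component guards must hold."""
    return lambda values: g(values) and g2(values)

def in_parallel(alpha, alpha2):
    """Action of a product transition: apply both component actions.
    Sound here because the two automata use disjoint stopwatch sets."""
    def action(values):
        alpha(values)
        alpha2(values)
    return action

def product_transitions(Delta, Delta2):
    """All transitions ((q, p), g*, a*, (q1, p1)) of the product, where each element
    of Delta / Delta2 is a tuple (source, guard, action, target)."""
    return [((q0, p0), conjoin(g, g2), in_parallel(alpha, alpha2), (q1, p1))
            for (q0, g, alpha, q1) in Delta
            for (p0, g2, alpha2, p1) in Delta2]
```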
The following algorithm can be used to check if the intersection of two languages is empty or not. Informally, we can perceive this as an algorithm that checks whether a certain type of behaviour is illegal according to a law when both the type of behaviour and the law are specified by stopwatch automata.
**Theorem 18**.: _There is an algorithm that given stopwatch automata \(\mathbb{A},\mathbb{A}^{\prime}\) with bounds \(B_{\mathbb{A}},B_{\mathbb{A}^{\prime}}\), respectively, decides whether \(L(\mathbb{A})\cap L(\mathbb{A}^{\prime})\neq\varnothing\) in time_
\[O\big{(}(\|\mathbb{A}\|\cdot\|\mathbb{A}^{\prime}\|\cdot B_{\mathbb{A}}\cdot B _{\mathbb{A}^{\prime}})^{3}\big{)}.\]
Proof.: The algorithm first computes the product automaton \(\mathbb{A}\otimes\mathbb{A}^{\prime}\) from the previous lemma. Next, the algorithm computes the finite automaton \(\mathbb{B}(\mathbb{A}\otimes\mathbb{A}^{\prime})=(S,\Sigma,I,F,\Gamma)\) as given in Definition 14. Note \(|S|\leqslant O(\|\mathbb{A}\|\cdot\|\mathbb{A}^{\prime}\|\cdot B_{\mathbb{A}\otimes\mathbb{A}^{\prime}})\).
To compute \(\Gamma\) we first compute the graph on \(S\) with edges \(\overset{0}{\rightarrow}\): cycle through all \((q,\xi)\in S\) and transitions \(\Delta\) of \(\mathbb{A}\otimes\mathbb{A}^{\prime}\) and evaluate its guard and action on \(\xi\). Each evaluation can be done in time linear in the size of the circuits, so in time \(O(\|\mathbb{A}\|\cdot\|\mathbb{A}^{\prime}\|)\). Thus, the graph can be computed in time \(O(|S|\cdot|\Delta|\cdot\|\mathbb{A}\|\cdot\|\mathbb{A}^{\prime}\|)\). Its transitive closure can be computed in cubic time \(O(|S|^{3})\). Each of the at most \(|S|^{2}\) edges in \(\overset{0^{*}}{\rightarrow}\) determines a transition in \(\Gamma\). Thus \(\mathbb{B}(\mathbb{A}\otimes\mathbb{A}^{\prime})\) can be computed in time cubic in \(\|\mathbb{A}\|\cdot\|\mathbb{A}^{\prime}\|\cdot B_{\mathbb{A}}\cdot B_{ \mathbb{A}^{\prime}}\).
Observe \(L(\mathbb{B}(\mathbb{A}\otimes\mathbb{A}^{\prime}))\neq\varnothing\) if and only if some final state is reachable from the initial state. Checking this takes linear time in the size of the automaton.
The algorithm solves the consistency problem for stopwatch automata by fixing input \(\mathbb{A}^{\prime}\) to some stopwatch automaton with \(L(\mathbb{A}^{\prime})=\Sigma^{*}\).
**Corollary 19**.: _There is an algorithm that given a stopwatch automaton \(\mathbb{A}\) with bound \(B_{\mathbb{A}}\), decides whether \(L(\mathbb{A})\neq\varnothing\) in time_
\[O\big{(}\|\mathbb{A}\|^{3}\cdot B_{\mathbb{A}}^{3}\big{)}.\]
### Model-checking
The algorithm of Theorem 18 can be used to solve the model-checking problem: note \(w\in L(\mathbb{A})\) if and only if \(L(\mathbb{B}_{w})\cap L(\mathbb{A})\neq\varnothing\) for a suitable size \(O(|w|)\) automaton \(\mathbb{B}_{w}\) with \(L(\mathbb{B}_{w})=\{w\}\). A more direct model-checking algorithm achieves a somewhat better time complexity, in particular, linear in \(B_{\mathbb{A}}\):
**Theorem 20**.: _There is an algorithm that given a word \(w\) and a stopwatch automaton \(\mathbb{A}\) with bound \(B_{\mathbb{A}}\) decides whether \(w\in L(\mathbb{A})\) in time_
\[O\big{(}\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}}\cdot|w|\big{)}.\]
Proof.: Let \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) have bound \(B_{\mathbb{A}}\). Let \(G=(V,E)\) be the directed graph whose vertices \(V\) are the nodes of \(\mathit{TS}(\mathbb{A})\) and whose directed edges \(E\) are given by \(\overset{0}{\rightarrow}\). Note \(|V|=|Q|\cdot B_{\mathbb{A}}\) and \(|E|\leqslant B_{\mathbb{A}}\cdot|\Delta|\). Let \(w=w_{0}\cdots w_{t-1}\in\Sigma^{t}\) for some \(t\in\mathbb{N}\).
We define a directed graph with vertices \(\{0,\ldots,t\}\times V\) and the following edges. Edges within each copy \(\{i\}\times V\) are copies of \(E\). So these account for at most \((t+1)\cdot B_{\mathbb{A}}\cdot|\Delta|\) many edges, each determined by evaluating guards and actions in time \(O(\|\mathbb{A}\|)\). Further edges lead from vertices in the \(i\)-th copy \(\{i\}\times V\) to vertices in the \((i+1)\)th copy \(\{i+1\}\times V\), namely from \((i,(q,\xi))\) to \((i+1,(q,\xi^{\prime}))\) if
\[(q,\xi)\overset{1}{\rightarrow}(q,\xi^{\prime})\text{ and }q\neq accept\text{ and } \lambda(q)=w_{i}. \tag{3}\]
There are at most \(t\cdot|Q|\cdot B_{\mathbb{A}}\) such edges between copies. This graph has size \(O(t\cdot\|\mathbb{A}\|\cdot B_{\mathbb{A}})\) and can be computed in time \(O(t\cdot\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}})\).
It is clear that \(w\in L(\mathbb{A})\) if and only if \((t,(\mathit{accept},\xi^{\prime}))\) for some assignment \(\xi^{\prime}\) is _reachable_ in the sense that there is a path from \((0,(\mathit{start},\xi_{0}))\) with \(\xi_{0}\) constantly \(0\) to it. Checking this takes time linear in the size of the graph.
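The layered-graph reachability can also be run layer by layer: layer \(i\) holds the nodes of \(\mathit{TS}(\mathbb{A})\) reachable after reading the first \(i\) letters, each layer is closed under 0-duration steps, and a letter is consumed by a 1-time-unit step in a state carrying that \(\lambda\)-label. The sketch below is our own illustration under simplifying assumptions about the semantics (guards and actions are Python callables, and both actions and time steps saturate at the stopwatch bounds); the authoritative definition of \(\mathit{TS}(\mathbb{A})\) is the one given earlier in the paper and may differ in details.

```python
from collections import deque

def zero_closure(nodes, transitions, bounds):
    """All nodes reachable via 0-duration steps: a transition whose guard holds
    is taken instantly and its action is applied (results capped at the bounds)."""
    seen = set(nodes)
    queue = deque(nodes)
    while queue:
        q, xi = queue.popleft()
        for (src, guard, action, dst) in transitions:
            if src != q or not guard(dict(xi)):
                continue
            values = dict(xi)
            action(values)
            new_xi = tuple(sorted((x, min(v, bounds[x])) for x, v in values.items()))
            node = (dst, new_xi)
            if node not in seen:
                seen.add(node)
                queue.append(node)
    return seen

def one_step(node, active, bounds):
    """Let one time unit pass: stopwatches active at the current state advance
    by 1, saturating at their bound; inactive ones keep their value."""
    q, xi = node
    new_xi = tuple(sorted(
        (x, min(v + 1, bounds[x])) if (x, q) in active else (x, v)
        for x, v in dict(xi).items()))
    return (q, new_xi)

def accepts(label, bounds, active, transitions, word, start="start", accept="accept"):
    """Layer-by-layer reading of the layered-graph reachability of Theorem 20:
    layer i holds the nodes reachable after reading the first i letters."""
    xi0 = tuple(sorted((x, 0) for x in bounds))
    layer = zero_closure({(start, xi0)}, transitions, bounds)
    for letter in word:
        advanced = {one_step(n, active, bounds)
                    for n in layer
                    if n[0] != accept and label.get(n[0]) == letter}
        layer = zero_closure(advanced, transitions, bounds)
    return any(q == accept for q, _ in layer)

# Tiny demo: an automaton accepting exactly the word "aa" over {a}.
label = {"start": "a", "p": "a"}
bounds = {"x": 3}
active = {("x", "start"), ("x", "p")}
transitions = [
    ("start", lambda v: v["x"] == 0, lambda v: None, "p"),
    ("p",     lambda v: v["x"] == 2, lambda v: None, "accept"),
]
assert accepts(label, bounds, active, transitions, "aa")
assert not accepts(label, bounds, active, transitions, "a")
assert not accepts(label, bounds, active, transitions, "aaa")
```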
### Scheduling
We strengthen the model-checker of Theorem 20 to solve the scheduling problem: the model-checker treats the special case for inputs with \(n=0\).
**Theorem 21**.: _There is an algorithm that given a stopwatch automaton \(\mathbb{A}\) with bound \(B_{\mathbb{A}}\) and alphabet \(\Sigma\), a word \(w\in\Sigma^{*}\), a letter \(a\in\Sigma\) and \(n\in\mathbb{N}\), rejects if there does not exist a word \(v\) over \(\Sigma\) of length \(n\) such that \(wv\in L(\mathbb{A})\) and otherwise computes such a word \(v\) with maximal \(\#_{a}(v)\). It runs in time_
\[O\big{(}\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}}\cdot(|w|+n)\big{)}.\]
Proof.: Consider the graph constructed in the proof of Theorem 20 but with \(t+n\) instead of \(t\) and the following modification: in (3) for \(t\leqslant i<t+n\) drop the condition \(\lambda(q)=w_{i}\) for edges between the \(i\)-th and the \((i+1)\)-th copy. In the resulting graph there is a reachable vertex \((t+n,(\mathit{accept},\xi^{\prime}))\) for some assignment \(\xi^{\prime}\) if and only if there exists a length \(n\) word \(v\) such that \(wv\in L(\mathbb{A})\). We now show how to compute the maximum value \(\#_{a}(v)\) for such \(v\).
Successively for \(i=0,\ldots,n\) compute a label \(V_{i}(q,\xi)\) for each vertex \((t+i,(q,\xi))\) in the \((t+i)\)-th copy. For \(i=0\) all these labels are \(\#_{a}(w)\). For \(i>0\) label \((t+i,(q,\xi))\) with the maximum value
\[\left\{\begin{array}{ll}V_{i-1}(q,\xi^{\prime})+1&\text{if }\lambda(q)=a,\\ V_{i-1}(q,\xi^{\prime})&\text{else}\end{array}\right.\]
taken over \(\xi^{\prime}\) such that there is an edge from \((t+i-1,(q,\xi^{\prime}))\) to \((t+i,(q,\xi))\). Then the desired maximum value \(\#_{a}(v)\) is the maximum label \(V_{n}(q,\xi)\) such that \(q=\mathit{accept}\) and \((t+n,(q,\xi))\) is reachable.
Additionally we are asked to compute a word \(v\) witnessing this value. To do so the labeling algorithm computes a set of directed edges, namely, for each \((t+i,(q,\xi))\) with \(i>0\), an edge to a vertex \((t+i-1,(q,\xi^{\prime}))\) witnessing the maximum value above. This set of edges defines a partial function that, for each \(i>0\), maps vertices in the \((t+i)\)-th copy to vertices in the \((t+i-1)\)-th copy. To compute \(v\) as desired start at a vertex \((t+n,(q_{n},\xi_{n}))\) witnessing the maximal value \(\#_{a}(v)\) and iterate this partial function to get a sequence of vertices \((t+i,(q_{i},\xi_{i}))\). Then \(v:=\lambda(q_{1})\cdots\lambda(q_{n})\) is as desired.
It is clear that all this can be done in time linear in the size of the graph.
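The essential bookkeeping of this proof is a forward labeling pass with back-pointers over a layered graph. The sketch below (our illustration) works on an abstract layered graph in which each edge between consecutive layers carries a gain of 1 exactly when its step reads the favored letter \(a\); building those layers from the stopwatch automaton is as in the model-checking sketch above, and within-copy 0-steps are assumed to be folded into the edges.

```python
def best_schedule(layer_edges, start_vertices, is_final):
    """Maximize the number of gain-1 edges on a path through a layered graph.
    layer_edges[i] lists edges (u, v, gain) from layer i to layer i+1, gain in {0, 1}.
    Returns (best value, chosen edges) or None if no final vertex is reachable."""
    value = {u: 0 for u in start_vertices}            # labels V_0
    back = [{} for _ in layer_edges]                   # witnessing edge per labeled vertex
    for i, edges in enumerate(layer_edges):
        nxt = {}
        for (u, v, gain) in edges:
            if u in value and value[u] + gain > nxt.get(v, -1):
                nxt[v] = value[u] + gain
                back[i][v] = (u, v, gain)
        value = nxt                                    # labels V_{i+1}
    finals = [v for v in value if is_final(v)]
    if not finals:
        return None
    best_v = max(finals, key=lambda v: value[v])
    best, path, v = value[best_v], [], best_v
    for i in range(len(layer_edges) - 1, -1, -1):      # walk the witnessing edges back
        u, _, gain = back[i][v]
        path.append((u, v, gain))
        v = u
    return best, list(reversed(path))

# Tiny demo with two layers; the best path uses one gain-1 edge.
edges = [
    [("s", "x", 1), ("s", "y", 0)],
    [("x", "f", 0), ("y", "f", 1)],
]
assert best_schedule(edges, {"s"}, lambda v: v == "f")[0] == 1
```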
### Beyond regularity
A straightforward generalization of stopwatch automata allows \(\beta\) to take value \(\infty\). An _unbounded stopwatch automaton_ is a stopwatch automaton where \(\beta\) is the function constantly
\(\infty\). We note that model-checking is undecidable already for _simple_ such automata (see [16, Proposition 1] for a similar proof). These simple automata use two stopwatches \(x,y\) that are nowhere active (i.e., \(\zeta=\emptyset\)), all guards check \(z=0\) or \(z\neq 0\), and all actions are either \(z:=z+1\) or \(z:=z\dot{-}1=\max\{z-1,0\}\) for some \(z\in\{x,y\}\).
**Proposition 22**.: _There is no algorithm that given a simple unbounded stopwatch automaton decides whether it accepts the empty word._
Proof.: Recall, a _two counter machine_ operates two variables \(x,y\) called _counters_ and is given by a finite non-empty sequence \((\pi_{0},\ldots,\pi_{\ell})\) of _instructions_ \(\pi_{i}\), namely, either \(z:=z+1\), \(z:=z\dot{-}1\), "Halt" or "if \(z=0\), then goto \(j\), else goto \(k\)" where \(z\in\{x,y\}\) and \(j,k\leqslant\ell\); exactly \(\pi_{\ell}\) is "Halt". The computation (without input) of the machine is defined in the obvious way. It has long been known that it is undecidable whether a given two counter machine halts or not.
Given such a machine \((\pi_{0},\ldots,\pi_{\ell})\) it is easy to construct a simple unbounded stopwatch automaton that accepts the empty word if and only if the two counter machine halts. It has states \(Q=\{0,1,\ldots,\ell\}\) understanding \(\mathit{start}=0\) and \(\ell=\mathit{accept}\); \(\Sigma\) and \(\lambda\) are unimportant, and \(\Delta\) is defined as follows. If \(\pi_{i}\) is the instruction \(z:=z+1\), then add the edge \((i,g,\alpha,i+1)\) where \(g\) is trivial and \(\alpha\) changes \(z\) to \(z+1\). If \(\pi_{i}\) is the instruction \(z:=z\dot{-}1\), proceed similarly. If \(\pi_{i}\) is "if \(z=0\), then goto \(j\), else goto \(k\)" add edges \((i,g,\alpha,j),(i,g^{\prime},\alpha,k)\) where \(g\) checks \(z=0\), \(g^{\prime}\) checks \(z\neq 0\), and \(\alpha\) computes the identity.
What seems to be a middle ground between unbounded stopwatches and stopwatches with a constant bound is to let the bound grow with the length of the input word.
The definition of a stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) can be generalized letting \(\beta:X\times\mathbb{N}\to\mathbb{N}\) be _monotone_ in the sense that \(\beta(x,n)\leqslant\beta(x,n^{\prime})\) for all \(x\in X,\ n,n^{\prime}\in\mathbb{N}\) with \(n\leqslant n^{\prime}\). We call this a _\(\beta\)-bounded stopwatch automaton_ and call \(B_{\mathbb{A}}:\mathbb{N}\to\mathbb{N}\) defined by
\[B_{\mathbb{A}}(n):=\prod_{x\in X}(\beta(x,n)+1)\]
the _bound of_ \(\mathbb{A}\). For each \(n\in\mathbb{N}\) we have a stopwatch automaton \(\mathbb{A}(n):=(Q,\Sigma,X,\lambda,\beta_{n},\zeta,\Delta)\) where \(\beta_{n}:X\to\mathbb{N}\) maps \(x\in X\) to \(\beta(x,n)\); note \(B_{\mathbb{A}(n)}=B_{\mathbb{A}}(n)\).
The _language \(L(\mathbb{A})\) accepted_ by a \(\beta\)-bounded stopwatch automaton \(\mathbb{A}\) contains a word \(w\) over \(\Sigma\) if and only if \(w\in L(\mathbb{A}(|w|))\).
**Proposition 23**.: _A language is accepted by some stopwatch automaton if and only if it is accepted by some \(\beta\)-bounded stopwatch automaton with bounded \(\beta\)._
Proof.: Let \(\mathbb{A}\) be a \(\beta\)-bounded stopwatch automaton for bounded \(\beta\). There is \(n_{0}\in\mathbb{N}\) such that \(\beta(x,n)=\beta(x,n_{0})\) for all \(x\in X\) and \(n\geqslant n_{0}\). Hence \(L(\mathbb{A}(n_{0}))\) and \(L(\mathbb{A})\) contain the same words of length at least \(n_{0}\). Since there are only finitely many shorter words, and \(L(\mathbb{A}(n_{0}))\) is regular by Theorem 15, also \(L(\mathbb{A})\) is regular.
Theorem 20 on feasible model checking generalizes:
**Corollary 24**.: _Let \(X\) be a finite set and assume \(\beta:X\times\mathbb{N}\to\mathbb{N}\) is such that \(\beta(x,n)\) is computable from \((x,n)\in X\times\mathbb{N}\) in time \(O(n)\). Then there is an algorithm that given a word \(w\) and a \(\beta\)-bounded stopwatch automaton \(\mathbb{A}\) with bound \(B_{\mathbb{A}}:\mathbb{N}\to\mathbb{N}\) decides whether \(w\in L(\mathbb{A})\) in time_
\[O\big{(}\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}}(|w|)\cdot|w|\big{)}.\]
If \(\beta(x,n)\) grows slowly in \(n\) this can be considered tractable. Any growth, no matter how slow, leads to non-regularity:
**Proposition 25**.: _Let \(f:\mathbb{N}\to\mathbb{N}\) be unbounded and non-decreasing. Then there is a \(\beta\)-bounded stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) with \(\beta(x,n)=f(n)\) for all \(x\in X\) and all \(n\in\mathbb{N}\) such that \(L(\mathbb{A})\) is not regular._
Proof.: Let \(\Sigma\) be the three letter alphabet \(\{a,b,c\}\), and let \(L\) contain a length \(t\) word over \(\Sigma\) if it has the form \(a^{s}b^{s}c^{*}\) for some \(s<f(t)\). Since \(f\) is unbounded, \(L\) contains such words for arbitrarily large \(s\). It thus follows from the Pumping Lemma that \(L\) is not regular.
It suffices to define a \(\beta\)-bounded stopwatch automaton \(\mathbb{A}\) that accepts a word of sufficiently large length \(t\) if and only if it belongs to \(L\). The states are \(\mathit{start},\mathit{accept},q_{a},q_{b},q_{c}\) with \(\lambda\)-labels \(a,a,a,b,c\), respectively. We use stopwatches \(x_{a},y_{a},x_{b}\), all with bound \(f(t)\), and declare \(x_{a},y_{a}\) active in \(q_{a}\) and \(\mathit{start}\), and \(x_{b}\) active in \(q_{b}\). There are transitions from \(\mathit{start}\) to \(q_{a}\), from \(q_{a}\) to \(q_{b}\), from \(q_{b}\) to \(q_{c}\), and from \(q_{c}\) to \(\mathit{accept}\) - described next.
The transition from \(\mathit{start}\) to \(q_{a}\) has guard \(x_{a}=0\) and action \(y_{a}\coloneqq 1\). For sufficiently large \(t\), the bound \(f(t)\) of \(x_{a}\) is positive. Then any initial accepting computation (of \(\mathbb{A}\) on a word of length \(t\)) spends \(0\) time in \(\mathit{start}\), and thus starts \((\mathit{start},[0,0,0])\stackrel{{ 0}}{{\to}}(q_{a},[0,1,0])\); we use a notation like \([1,2,3]\) to denote the assignment that maps \(x_{a}\) to \(1\), \(y_{a}\) to \(2\), and \(x_{b}\) to \(3\).
The transition from \(q_{a}\) to \(q_{b}\) has guard \(x_{a}<y_{a}\) and trivial action. An initial accepting computation on a word of length \(t\) can stay in \(q_{a}\) for some time \(r\), reaching \((q_{a},[r,r+1,0])\) for \(r<f(t)\), or reaching \([f(t),f(t),0]\) for \(r\geqslant f(t)\) due to the bound of \(x_{a},y_{a}\). In the latter case the transition to \(q_{b}\) is disabled and _accept_ cannot be reached. Staying in \(q_{a}\) for any time \(r<f(t)\) allows the transition to \(q_{b}\).
The transition from \(q_{b}\) to \(q_{c}\) has guard \(x_{a}=x_{b}\) and trivial action. The transition from \(q_{c}\) to _accept_ has trivial guard and action.
We can now prove that stopwatch automata are exponentially more succinct than finite automata as was expressed in Proposition 16.
Proof of Proposition 16.: Consider the previous proof for the function \(f\) constantly \(k\). Clearly, then \(L\) is regular. By the Pumping Lemma, a finite automaton accepting \(L\) has at least \(k\) states. The stopwatch automaton \(\mathbb{A}\) accepts \(L\) and has size \(O(\log k)\). Indeed, the size of a binary encoding of \(\mathbb{A}\) is dominated by the bits required to write down the bound \(k\) of the stopwatches.
## 6 Discussion and a lower bound
We suggest the model-checking problem for stopwatch automata and finite words (over some finite alphabet) as an answer to our central question in Section 1.2, the quest for a model for algorithmic laws concerning activity sequences. This section discusses to what extent this model meets the three desiderata listed in Section 1.2, and mentions some open ends for future work.
### Summary
**Expressivity.** Stopwatch automata are highly expressive, namely, by Theorems 15 and 1, equally expressive as \(\mathsf{MSO}\) (over finite words). In particular, [24] argued that Regulation 561 is expressible in \(\mathsf{MSO}\), so it is also expressible by stopwatch automata. In Section 5.5 we showed that a straightforward generalization of stopwatch automata can go even beyond \(\mathsf{MSO}\). Future research might show whether this is useful for modeling actual laws.
**Example 26**.: Imagine an employee who can freely schedule his work and choose among various activities \(\Sigma\) to execute at any given time point. The employer favors an activity \(a\in\Sigma\) and checks at random time-points that the employee used at least a third of his work-time on activity \(a\) since the previous check. The set of \(w\in\Sigma^{*}\) with \(\#_{a}(w)\geqslant|w|/3\) is not regular but is accepted by a simple \(\beta\)-bounded stopwatch automaton with one stopwatch \(x\) and bound \(\beta(x,t)=\lceil t/3\rceil\).
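The reason the growing bound \(\beta(x,t)=\lceil t/3\rceil\) suffices is that counts above \(\lceil t/3\rceil\) can never change the verdict: for an integer count \(n\) we have \(n\geqslant t/3\) if and only if \(n\geqslant\lceil t/3\rceil\), so a counter saturated at the bound already determines membership. A quick sanity check of this observation (our own illustration, not the automaton itself):

```python
from math import ceil
from itertools import product

def member(w, a="a"):
    """The language of Example 26: at least a third of the letters are a."""
    return 3 * w.count(a) >= len(w)

def member_saturated(w, a="a"):
    """Same test using only a counter saturated at ceil(t/3), like the stopwatch x."""
    bound = ceil(len(w) / 3)
    x = 0
    for letter in w:
        if letter == a:
            x = min(x + 1, bound)
    return x >= bound

# The two tests agree on all words over {a, b} of length at most 8.
assert all(member("".join(w)) == member_saturated("".join(w))
           for n in range(9) for w in product("ab", repeat=n))
```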
**Naturality.** We stressed that expressivity alone is not sufficient, _natural_ expressivity is required. This is an informal requirement, roughly, it means that the specification of a law should be readable, and in particular, not too large. In particular, as emphasized in Section 2.1, constants appearing in laws bounding durations of certain activities should not blow up the size of the formalization (like it is the case for \(\mathsf{LTL}\)). We suggest that our expression of Regulation 561 by a stopwatch automaton is natural.
There is a possibility to use stopwatch automata as a _law maker_: an interface that allows to specify laws in a formally rigorous way without assuming much mathematical education. It is envisionable to use graphical interfaces akin to the one provided by UPPAAL10 to draw stopwatch automata. A discussion of this possibility as well as the concept of "readability" is outside the scope of this paper.
Footnote 10: [https://uppaal.org/](https://uppaal.org/)
**Tractability.** The main constraint on a model-checking problem as a formal model for algorithmic law is its computational tractability. In particular, the complexity of this problem should scale well with the constants appearing in the law. This asks for a fine-grained complexity analysis taking into account various aspects of a typical input, and, technically, calls for a complexity analysis in the framework of parameterized complexity theory. Theorem 20 gives a model-checker for stopwatch automata. Its worst case time complexity upper
bound scales transparently with the involved constants, and, most importantly, the runtime is not exponential in these constants. This overcomes a bottleneck of many model-checkers designed in the context of system verification (see Section 2.2). Theorems 18 and 21 give similar algorithms for consistency-checking and scheduling.
### Parameterized model-checking
We have an upper bound \(O(\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}}\cdot|w|)\) to the worst case runtime of our model-checker. The troubling factor is \(B_{\mathbb{A}}\): the runtime grows fast with the stopwatch bounds of the automaton. Intuitively, these bounds stem from the constants mentioned by the law as duration constraints on activities. At least, this is the case for our formalization of Regulation 561: we explicitly mentioned 17 constants \(\bar{t}=(t_{0},\ldots,t_{16})\) which determine our automaton, specifically its bounds, guards and actions. To wit, \(\bar{t}\) determines bounds on stopwatches as follows:
| stopwatch | \(x_{break}\) | \(x_{cd}\) | \(x_{day}\) | \(x_{dr}\) | \(x_{dd}\) | \(x_{week}\) | \(x_{ww}\) | \(x_{dw},x_{dw}^{\prime}\) | \(x_{wr}\) | \(x_{pw}\) | \(x_{c1},x_{c2}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bound | \(t_{16}\) | \(t_{0}+1\) | \(t_{3}+1\) | \(t_{4}\) | \(t_{8}+1\) | \(t_{10}+1\) | \(t_{12}+1\) | \(t_{11}+1\) | \(t_{14}\) | \(t_{16}+1\) | \(t_{14}-t_{15}\) |

The other stopwatches have bounds independent of \(\bar{t}\in\mathbb{N}^{17}\). For any choice of \(\bar{t}\) we get an automaton \(\mathbb{A}(\bar{t})\) that accepts exactly the words that represent activity sequences that are legal according to the variant of Regulation 561 obtained by changing these constants to \(\bar{t}\). It is a matter of no concern to us that not all choices for \(\bar{t}\) lead to meaningful laws. We are interested in how the runtime of our model-checker for Regulation 561 depends on these constants. By Theorem 20 we obtain:
**Corollary 27**.: _There is an algorithm that given \(\bar{t}\in\mathbb{N}^{17}\) and a word \(w\) decides whether \(w\in L(\mathbb{A}(\bar{t}))\) in time_
\[O\big{(}t_{16}^{2}\cdot t_{0}\cdot t_{3}\cdot t_{4}\cdot t_{8}\cdot t_{10} \cdot t_{12}\cdot t_{11}^{2}\cdot t_{14}\cdot(t_{14}-t_{15})^{2}\ \cdot\ |w|\big{)}.\]
For the actual values of \(\bar{t}\) in Regulation 561 the above product of the \(t_{i}\)'s evaluates to the number
\[6006978697267786744332288000000000000000000.\]
This casts doubts whether the factor \(B_{\mathbb{A}}\) in our worst-case runtime \(O(\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}}\cdot|w|)\) should be regarded tractable. Can we somehow improve the runtime dependence from the constants?
For the sake of discussion, note that \(B_{\mathbb{A}}\) is trivially bounded by \(t_{\mathbb{A}}^{c_{\mathbb{A}}}\) where \(c_{\mathbb{A}}\) is the number of stopwatches of \(\mathbb{A}\) and \(t_{\mathbb{A}}\) is the largest bound of some stopwatch of \(\mathbb{A}\) (as in Section 2.4). Intuitively, \(c_{\mathbb{A}}\) is "small" but \(t_{\mathbb{A}}\) is not. In the spirit of parameterized complexity theory it is natural to ask whether the factor \(B_{\mathbb{A}}\) (or \(t_{\mathbb{A}}^{c_{\mathbb{A}}}\)) can be replaced by \(f(c_{\mathbb{A}})\cdot t_{\mathbb{A}}^{O(1)}\) for some computable function \(f:\mathbb{N}\to\mathbb{N}\). We now formulate this question precisely in the framework of parameterized complexity theory.
The canonical parameterized version of our model-checking problem is
_Input:_ a stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) and \(w\in\Sigma^{*}\).
_Parameter:_ \(\|\mathbb{A}\|\).
_Problem:_ \(w\in L(\mathbb{A})\)?
Our model-checker of Theorem 20 witnesses that this problem is fixed-parameter tractable. Indeed, \(\|\mathbb{A}\|^{2}\cdot B_{\mathbb{A}}\leqslant f(\|\mathbb{A}\|)\) for some computable \(f:\mathbb{N}\to\mathbb{N}\) because the circuits in \(\mathbb{A}\) have size \(\geqslant\log B_{\mathbb{A}}\). Intuitively, that \(B_{\mathbb{A}}\) is bounded in terms of the parameter \(\|\mathbb{A}\|\) means that the parameterized problem above models instances where \(B_{\mathbb{A}}\) is "small", in particular \(\beta\) takes "small" values. But there are cases of interest where this is not true: the constant \(t_{10}:=10080\) in Regulation 561 is not "small". In the situation of such an algorithmic law, the above parameterized problem is the wrong model.
A better model parameterizes a model-checking instance \((\mathbb{A},w)\) by the size of \(\mathbb{A}\) but discounts the stopwatch bounds. More precisely, consider the following parameterized problem:
\(p\)-SWA
_Input:_ a stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) and \(w\in\Sigma^{*}\).
_Parameter:_ \(|Q|+|\Sigma|+|X|+|\Delta|\).
_Problem:_ \(w\in L(\mathbb{A})\)?
Note that the algorithm of Theorem 20 does not witness that this problem would be fixed-parameter tractable. We arrive at the precise question:
_Is \(p\)-SWA fixed-parameter tractable?_
### A lower bound
In this section we prove that the answer to the above question is likely negative:
**Theorem 28**.: \(p\)-SWA _is not fixed-parameter tractable unless every problem in the W-hierarchy is fixed-parameter tractable._
We refer to any of the monographs [26, 35, 27] for a definition of the _W-hierarchy_\(\mathsf{W}[1]\subseteq\mathsf{W}[2]\subseteq\cdots\). As mentioned in Section 1.2, the central hardness hypothesis of parameterized complexity theory is that already the first level \(\mathsf{W}[1]\) contains problems that are not fixed-parameter tractable. We thus consider Theorem 28 as strong evidence that the answer to our question is negative.
We prove Theorem 28 by a reduction from a parameterized version of the _Longest Common Subsequence Problem (LCS)_. This classical problem takes as inputs an alphabet \(\Sigma\), finitely many words \(w_{0},\ldots,w_{k-1}\) over \(\Sigma\) and a natural number \(m\). The problem is to decide whether the given words have a common subsequence of length \(m\): such a subsequence is a length \(m\) word \(a_{0}\cdots a_{m-1}\) over \(\Sigma\) (the \(a_{i}\) are letters from \(\Sigma\)) that can be obtained from every \(w_{i},i<k\), by deleting some letters. In other words, for every \(i<k\) there are \(j^{i}_{0}<\cdots<j^{i}_{m-1}<|w_{i}|\) such that for all \(\ell<m\) the word \(w_{i}\) has letter \(a_{\ell}\) at position \(j^{i}_{\ell}\). For example, \(bbaccb\) is a common subsequence of \(abbaaccb\) and \(bbacccacbb\).
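For concreteness, a common subsequence can be checked with a standard greedy left-to-right scan; the following snippet (our illustration) confirms the example above.

```python
def is_subsequence(u, w):
    """True if u can be obtained from w by deleting letters (greedy left-to-right scan)."""
    it = iter(w)
    return all(any(c == x for x in it) for c in u)

def is_common_subsequence(u, words):
    return all(is_subsequence(u, w) for w in words)

assert is_common_subsequence("bbaccb", ["abbaaccb", "bbacccacbb"])
assert not is_common_subsequence("bbaccba", ["abbaaccb", "bbacccacbb"])
```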
This problem received considerable attention in the literature and has several natural parameterized versions [9, 10, 8, 50]. We consider the following one:11
Footnote 11: In [35] the notation \(p\)-LCS refers to a different parameterization of LCS.
\(p\)-LCS
_Input:_ an alphabet \(\Sigma\), words \(w_{0},\ldots,w_{k-1}\in\Sigma^{*}\) for some \(k\in\mathbb{N}\), and \(m\in\mathbb{N}\).
_Parameter:_ \(k+|\Sigma|\).
_Problem:_ do \(w_{0},\ldots,w_{k-1}\) have a common subsequence of length \(m\)?
The statement that \(p\)-LCS is fixed-parameter tractable means that it can be decided by an algorithm that on an instance \((\Sigma,w_{0},\ldots,w_{k-1},m)\) runs in time
\[f(k+|\Sigma|)\cdot(|w_{0}|+\cdots+|w_{k-1}|)^{O(1)}\]
for some computable function \(f:\mathbb{N}\to\mathbb{N}\). The existence of such an algorithm is unlikely due to the following result:
**Theorem 29** ([8]).: \(p\)-LCS _is not fixed-parameter tractable unless every problem in the W-hierarchy is fixed-parameter tractable._
Proof of Theorem 28:.: Let \((\Sigma,w_{0},\ldots,w_{k-1},m)\) be an instance of \(p\)-LCS, so \(\Sigma\) is an alphabet, \(w_{0},\ldots,w_{k-1}\in\Sigma^{*}\) and \(m\in\mathbb{N}\). Let \(w:=w_{0}\cdots w_{k-1}\) be the concatenation of the given words, and consider \(w^{m}\), the concatenation of \(m\) copies of \(w\). We construct a \(P(\Sigma)\)-labeled stopwatch automaton \(\mathbb{A}=(Q,\Sigma,X,\lambda,\beta,\zeta,\Delta)\) that accepts \(w^{m}\) if and only if \(w_{0},\ldots,w_{k-1}\) have a common subsequence of length \(m\).
An initial accepting computation of \(\mathbb{A}\) on \(w^{m}\) proceeds in \(m\)_rounds_, each round reads a copy of \(w\). In round \(\ell<m\) the computation guesses a position within each of the words \(w_{0},\ldots,w_{k-1}\) copied within \(w\), and ensures they all carry the same letter. These positions are stored in registers (i.e., nowhere active stopwatches) \(x_{0},\ldots,x_{k-1}\) with bounds \(|w_{0}|+1,\ldots,|w_{k-1}|+1\), respectively. Our intention is that the value of \(x_{i}\) after round \(\ell<m\) equals the position \(j_{\ell}^{i}\) in the definition of a common subsequence.
Our intention is that an initial accepting computation in round \(\ell<m\) cycles through \(k\) many _guess parts_ of the automaton. Within guess part \(0\), the computation reads \(w_{0}\) (within copy \(\ell\) of \(w\) in \(w^{m}\)), within guess part \(1\) the computation reads \(w_{1}\) and so on. The states of \(\mathbb{A}\) are the states of the guess parts plus an additional state _accept_. Each guess part consists of a copy of the states _start_, _end_, and _guess\((a)\)_ for \(a\in\Sigma\). The \(\lambda\)-labels of _start_ and _end_ are \(\Sigma\), the \(\lambda\)-label of _guess\((a)\)_ is \(\{a\}\). The start state of \(\mathbb{A}\) is _start_ in guess part \(0\).
We intend that the computation in guess part \(i<k\) spends some time \(t<|w_{i}|\) in _start_, then spends exactly one time unit in some state _guess\((a)\)_, and then spends time \(|w_{i}|-t\) in _end_ before switching to the next guess part. The position guessed is \(t\) and stored as the value of \(x_{i}\). Writing momentarily \(w_{i}=a_{0}a_{1}\cdots a_{|w_{i}|-1}\) the computation reads the (possibly empty) word \(a_{0}\cdots a_{t-1}\) in state _start_, then reads \(a_{t}\) in state _guess\((a_{t})\)_, and then reads the (possibly empty) word \(a_{t+1}\cdots a_{|w_{i}|-1}\) in state _end_.
We enforce this behavior as follows. There are transitions from _start_ (in guess part \(i\)) to _guess\((a)\)_ for every \(a\in\Sigma\), and for every \(a\in\Sigma\) from _guess\((a)\)_ to _end_. We use a stopwatch \(y_{i}\) with bound \(|w_{i}|+1\) active in all states of guess part \(i\) and a stopwatch \(z\) with bound \(2\) active in the states _guess\((a)\)_, \(a\in\Sigma\), of any guess part. It will be clear that initial accepting computations enter guess part \(i\) with both \(y_{i}\) and \(z\) having value \(0\). The transitions from
_start_ to \(\text{\it guess}(a)\), \(a\in\Sigma\), have guard checking \(x_{i}<y_{i}<|w_{i}|\) and action setting \(x_{i}:=y_{i}\). The transitions from \(\text{\it guess}(a)\), \(a\in\Sigma\), to _end_ have guard checking \(z=1\) and action setting \(z:=0\). The state _end_ in guess part \(i<k-1\) has a transition to _start_ in guess part \(i+1\); for \(i=k-1\) this transition is to _start_ in guess part \(0\). These transitions have guard checking \(y_{i}=|w_{i}|\) and action setting \(y_{i}:=0\).
Observe that the computation spends time \(|w_{i}|\) in guess part \(i<k\) and increases the value of \(x_{i}\). Hence the values of \(x_{i}\) after each round form an increasing sequence of positions \(<|w_{i}|\). We have to ensure that the values of \(x_{0},\ldots,x_{k-1}\) after a round are positions in the words \(w_{0},\ldots,w_{k-1}\), respectively, that carry the same letter. Write \(\Sigma=\{a_{0},\ldots,a_{|\Sigma|-1}\}\). We use a register \(\tilde{x}\) with bound \(|\Sigma|-1\). In guess part \(0\), the action of the transition from \(\text{\it guess}(a_{j})\) to \(\text{\it end}\) also sets \(\tilde{x}:=j\). In the guess parts \(i<k\) for \(i\neq 0\), the guards of the transitions from _start_ to \(\text{\it guess}(a_{j})\) check that \(\tilde{x}=j\).
We count rounds using a register \(\tilde{y}\) with bound \(m\). We let the action of the transition from _end_ in guess part \(k-1\) to _start_ in guess part \(0\) set \(\tilde{y}:=\tilde{y}+1\). From copy \(0\) of _start_ there is a transition to _accept_ with guard \(\tilde{y}=m\). This completes the construction of \(\mathbb{A}\).
To prove the theorem, assume \(p\)-SWA is fixed-parameter tractable, i.e., there is an algorithm deciding \(p\)-SWA that on an instance \((\mathbb{A},w)\) runs in time \(f(k^{\prime})\cdot|w|^{O(1)}\) where \(k^{\prime}\) is the parameter of the instance, and \(f:\mathbb{N}\rightarrow\mathbb{N}\) is a nondecreasing computable function. By Theorem 29 it suffices to show that \(p\)-LCS is fixed-parameter tractable.
Given an instance \((\Sigma,w_{0},\ldots,w_{k-1},m)\) of \(p\)-LCS answer "no" if \(m>|w_{0}|\). Otherwise compute the automaton \(\mathbb{A}\) as above and then compute an equivalent stopwatch automaton \(\mathbb{A}^{\prime}\) as in the construction behind Proposition 11. It is clear that \((\mathbb{A}^{\prime},w^{m})\) is computable from \((\Sigma,w_{0},\ldots,w_{k-1},m)\) in polynomial time (since \(m\leqslant|w_{0}|\)). Then \((\mathbb{A}^{\prime},w^{m})\) is a "yes"-instance of \(p\)-SWA if and only if \((\Sigma,w_{0},\ldots,w_{k-1},m)\) is a "yes"-instance of \(p\)-LCS. Hence to decide \(p\)-LCS it suffices to run the algorithm for \(p\)-SWA on \((\mathbb{A}^{\prime},w^{m})\). This takes time \(f(k^{\prime})\cdot|w^{m}|^{O(1)}\) where \(k^{\prime}\) is the parameter of \((\mathbb{A}^{\prime},w^{m})\). By construction, it is clear that \(k^{\prime}\leqslant g(k+|\Sigma|)\) for some computable \(g:\mathbb{N}\rightarrow\mathbb{N}\) (in fact, \(k^{\prime}\leqslant(k+|\Sigma|)^{O(1)}\)). Since \(m\leqslant|w_{0}|\), the time \(f(k^{\prime})\cdot|w^{m}|^{O(1)}\) is bounded by \(f(g(k+|\Sigma|))\cdot(|w_{0}|+\cdots+|w_{k-1}|)^{O(1)}\). Thus, \(p\)-LCS is fixed-parameter tractable.
Recall, \(p\)-SWA is meant to formalize the computational problem to be solved by general purpose model-checkers in algorithmic law. Being general purpose, the set of activities \(\Sigma\) should be part of the input, it varies with the laws to be modeled. Nevertheless one might ask whether the hardness result in Theorem 28 might be side-stepped by restricting attention to some fixed alphabet \(\Sigma\).
This is unlikely to be the case. Let \(p\)-SWA\((\{0,1\})\) denote the restriction of \(p\)-SWA to instances with \(\Sigma=\{0,1\}\). We have the following variant of Theorem 28:
**Theorem 30**.: \(p\)_-SWA\((\{0,1\})\) is not fixed-parameter tractable unless \(\mathsf{FPT}=\mathsf{W}[1]\)._
Proof.: Note that the reduction \((\Sigma,w_{0},\ldots,w_{k-1},m)\mapsto(\mathbb{A}^{\prime},w^{m})\) (for \(m\leqslant|w_{0}|\)) in the proof above constructs an automaton \(\mathbb{A}^{\prime}\) over the same alphabet \(\Sigma\). It is thus a reduction from the restriction of \(p\)-LCS to instances with \(\Sigma=\{0,1\}\) to \(p\)-SWA\((\{0,1\})\). Now, [50] showed that this restriction is \(\mathsf{W}[1]\)-hard.
## Acknowledgements
We thank Raul Espejo Boix for a critical reading of Section 4. Part of this work has been done while the first author has been employed by Formal Vindications S.L. The second author received funding under the following schemes: ICREA Academia, projects PID2020-115774RB-I00 and PID2019-107667GB-I00 of the Spanish Ministry of Science and Innovation, 2022 DI 051, Generalitat de Catalunya, Departament d'Empresa i Coneixement and 2017 SGR 270 of the AGAUR. The second author leads the covenant between the University of Barcelona and Formal Vindications S.L.
|
2301.10865 | Persistent topological Laplacian analysis of SARS-CoV-2 variants | Topological data analysis (TDA) is an emerging field in mathematics and data
science. Its central technique, persistent homology, has had tremendous success
in many science and engineering disciplines. However, persistent homology has
limitations, including its incapability of describing the homotopic shape
evolution of data during filtration. Persistent topological Laplacians (PTLs),
such as persistent Laplacian and persistent sheaf Laplacian, were proposed to
overcome the drawback of persistent homology. In this work, we examine the
modeling and analysis power of PTLs in the study of the protein structures of
the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike receptor
binding domain (RBD) and its variants, i.e., Alpha, Beta, Gamma, BA.1, and
BA.2. First, we employ PTLs to study how the RBD mutation-induced structural
changes of RBD-angiotensin-converting enzyme 2 (ACE2) binding complexes are
captured in the changes of spectra of the PTLs among SARS-CoV-2 variants.
Additionally, we use PTLs to analyze the binding of RBD and ACE2-induced
structural changes of various SARS-CoV-2 variants. Finally, we explore the
impacts of computationally generated RBD structures on PTL-based machine
learning, including deep learning, and predictions of deep mutational scanning
datasets for the SARS-CoV-2 Omicron BA.2 variant. Our results indicate that
PTLs have advantages over persistent homology in analyzing protein structural
changes and provide a powerful new TDA tool for data science. | Xiaoqi Wei, Jiahui Chen, Guo-Wei Wei | 2023-01-25T23:15:43Z | http://arxiv.org/abs/2301.10865v2 | # Persistent topological Laplacian analysis of SARS-CoV-2 variants
###### Abstract
Topological data analysis (TDA) is an emerging field in mathematics and data science. Its central technique, persistent homology, has had tremendous success in many science and engineering disciplines. However, persistent homology has limitations, including its inability to handle heterogeneous information, such as multiple types of geometric objects; being qualitative rather than quantitative, e.g., counting a 5-member ring the same as a 6-member ring, and a failure to describe non-topological changes, such as homotopic changes in protein-protein binding. Persistent topological Laplacians (PTLs), such as persistent Laplacian and persistent sheaf Laplacian, were proposed to overcome the limitations of persistent homology. In this work, we examine the modeling and analysis power of PTLs in the study of the protein structures of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike receptor binding domain (RBD). First, we employ PTLs to study how the RBD mutation-induced structural changes of RBD-angiotensin-converting enzyme 2 (ACE2) binding complexes are captured in the changes of spectra of the PTLs among SARS-CoV-2 variants. Additionally, we use PTLs to analyze the binding of RBD and ACE2-induced structural changes of various SARS-CoV-2 variants. Finally, we explore the impacts of computationally generated RBD structures on a topological deep learning paradigm and predictions of deep mutational scanning datasets for the SARS-CoV-2 Omicron BA.2 variant. Our results indicate that PTLs have advantages over persistent homology in analyzing protein structural changes and provide a powerful new TDA tool for data science.
Key words: Mutation and binding induced protein structural changes, Persistent Laplacian, Persistent sheaf Laplacian, Topological data analysis, Topological deep learning, Spectral data analysis.
###### Contents
* 1 Introduction
* 2 Results
* 2.1 PTL analysis of RBD structural changes induced by mutations
* 2.2 PTL analysis of RBD structural changes induced by its binding to ACE2
* 2.3 Impacts of computationally generated mutant structures on PTL-based topological deep learning predictions
* 3 Theories and methods
* 3.1 Persistent topological Laplacians
* 3.2 TopLapGBT and TopLapNet
* 4 Concluding remarks
* 5 Appendix
## 1 Introduction
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the cause of the ongoing global coronavirus disease 2019 (COVID-19) pandemic. Its evolution and future direction are of major concern. It was well established that the emergence of SARS-CoV-2 new variants is dictated by mutation-induced infectivity strengthening [11] and antibody resistance (or vaccine breakthrough) [41], two molecular mechanisms that determined the natural selection at the population scale. More specifically, the binding of the viral spike protein, particularly the receptor-binding domain (RBD), to the human receptor angiotensin-converting enzyme 2 (ACE2) facilitates the entry of the virus into host cells [20, 38]. In early 2020, it was hypothesized that natural selection favors those SARS-CoV-2 RBD mutations that strengthen the RBD-ACE2 binding, which leads to higher viral infectivity [11]. The hypothesis was initially supported by the frequency analysis of 89 single RBD mutations found from the genotyping of 15,140 complete SARS-CoV-2 genome samples [11] and later confirmed beyond doubt by the evolution pattern of 651 RBD mutations found from the genotyping of 506,768 SARS-CoV-2 genomes extracted from COVID-19 patients up to early 2021 [40].
The vaccine breakthrough mechanism was not discovered until vaccines became widely available in industrialized countries in the summer of 2021. It was found that an RBD mutation that weakens the viral infectivity had an unusually high observed frequency in 2,298,349 complete SARS-CoV-2 genomes isolated from patients. This statistical anomaly was found to strongly correlate with the vaccination rates in a few industrialized countries, including Denmark, the United Kingdom, France, Bulgaria, the United States, etc. [41]. To understand this correlation, the mutational impact of a set of 130 antibodies extracted from COVID-19 patients that target the RBD was studied. It was found that the abnormal
mutation on the RBD has a very strong ability to disrupt the binding of most antibody-RBD complexes, which gives rise to antibody resistance (or vaccine breakthrough) at the population scale [41].
As discussed above, the uncovering of the natural selection mechanisms of SARS-CoV-2 evolution is a typical example of a data-driven discovery that cannot be achieved by individual experimental laboratories. In fact, the discovery utilized results from tens of thousands of experimental laboratories around the world [11, 41]. Machine learning, including deep learning, and other data-driven approaches played an essential role in the discovery. Deep learning methods can offer some of the most accurate predictions of biomolecular properties, including the binding affinity of protein-protein interactions (PPIs). This approach becomes particularly advantageous and outperforms other methods when good-quality experimental data are available. However, structure-based machine learning methods, including deep learning, encounter difficulties in PPI predictions due to the intricate structural complexity and high dimensionality of PPIs.
Advanced mathematics, such as topological data analysis (TDA), can provide an effective abstraction of PPIs [39]. TDA is an emerging mathematical field that utilizes algebraic topology approaches to analyze data. Its main tool is persistent homology [5, 14, 15, 22, 25, 37, 49, 52], which integrates classical homology and filtration to create a multiscale analysis of data, resulting in a family of topological invariants. Through analyzing the signature and change of topological invariants during filtration, one can infer the shape of data [5]. However, persistent homology has limitations. Firstly, it is insensitive to homotopic shape evolution that does not involve any topological change. Secondly, roughly speaking, it cannot distinguish between a five-member ring and a six-member ring. Thirdly, it is incapable of differentiating different types of atoms, unable to describe directed relations, and indifferent to structured data such as functional groups. To overcome the first two limitations, persistent spectral graph [42], also known as persistent Laplacian [30, 44], was proposed. This method not only returns the full set of topological invariants as persistent homology does but also captures additional homotopic shape evolution of data and is more quantitative in its non-harmonic spectra. In addition to mathematical analysis [30], computational algorithms, such as HERMES software package [44] and homotopy continuation [47], were developed to facilitate topological deep learning, an emerging paradigm first introduced in 2017 [2, 3] for biomolecular studies, i.e., persistent Laplacian-assisted protein-ligand binding [31] and protein-protein binding [9, 45]. Neither persistent homology nor persistent Laplacian is sensitive to heterogeneous information in data. The element-specific persistent homology was designed to alleviate this difficulty. This approach has had tremendous success in deciphering biomolecules [3, 2] and in worldwide computer-aided drug design competitions [32]. Inspired by this success, various new TDA methods have been proposed [1, 18, 26, 27]. Recently, a more elegant theory, persistent sheaf Laplacian, was proposed to embed heterogeneous information, such as geometry and partial charges, in topological analysis [48], utilizing the theory of cellular sheaves [19, 53]. Both persistent Laplacian and persistent sheaf Laplacian belong to a class of persistent topological Laplacians (PTLs) [46]. PTLs are a family of multiscale topological spectral methods, including continuous (evolutionary) Hodge Laplacians defined on manifolds [13] and all other discrete multiscale
topological Laplacians, namely, persistent sheaf Laplacians [48], persistent spectral graphs [42], persistent path Laplacians [43], persistent topological hypergraph Laplacians [6], persistent hyperdigraph Laplacians [6], etc. Among them, persistent path Laplacians were designed to describe directed graphs (digraphs) and directed networks, while persistent topological hypergraph Laplacians and persistent hyperdigraph Laplacians can further deal with structured data. These new TDA methods can generate efficient mathematical representations of macromolecules either being used to model molecular structures or being used jointly with machine learning models for predicting various properties of molecules [9]. In the past three years, TDA approaches have been applied to SARS-CoV-2 related databases to predict PPI binding free energy (BFE) changes of RBD-ACE2 and RBD-antibody complexes induced by RBD mutations [7, 8]. Particularly, the non-harmonic spectra of PTLs can further unveil the homotopic geometric deformation induced by RBD mutations.
Although sequence-based approaches offer good predictions of mutational impacts on proteins, structure-based methods outperform other approaches [34]. In machine-learning-assisted directed evolution and protein engineering and machine-learning-based PPI and protein folding stability predictions, mutant structures are typically not available and are conventionally created by computational means for the machine learning predictions [3, 2, 7, 8, 27], which is a source of errors. It is interesting and important to quantify such errors. Fortunately, since SARS-COV-2 variants are some of the most studied subjects, some of their three-dimensional (3D) structures are available in the literature, which offers an opportunity for in-depth analysis and comparison.
Our objectives for this work are three-fold. We are interested in both the structural changes of the wild type RBD induced by mutations and the structural changes of the wild type RBD or mutant RBDs induced by their binding to ACE2. To quantify structural changes we first perform alignment of structures and calculate the distances between corresponding atoms (e.g., C\({}_{\alpha}\)). Then, we compute PTLs of different structures to further characterize their structural changes. Finally, we study how the difference between experimentally determined mutant structures and computationally generated mutant structures affects PTL-based machine learning and topological deep learning predictions of PPIs. This is important because we want to understand machine learning models' stability with respect to structural perturbations and approximations. To this end, we utilize the 3D structures of SARS-CoV-2 RBD-ACE2 complexes of the wild type and mutants such as Alpha, Beta, Gamma, and Omicron BA1 and BA2. We also employed the 3D spike protein structures of the wild type and mutants such as Alpha, Beta, and Omicron BA1 and BA2. Persistent Laplacian and persistent sheaf Laplacian are tested in our studies. To quantitatively analyze the influence of computationally generated structures on machine learning models, we used two topological machine learning models, namely TopLapGBT and TopLapNet [9] and a deep mutational scanning (DMS) dataset based on Omicron BA2 [36]. We found that for this dataset, the effects introduced by computationally generated structure on TopLapGBT are not significant. However, they may slightly reduce the accuracy of TopLapNet.
## 2 Results
### PTL analysis of RBD structural changes induced by mutations
To understand the structural differences of the RBD between the wild type and mutants in the RBD-ACE2 complex, we align the RBDs of SARS-CoV-2 variants Alpha (PDB ID: 8DLK[28]), Beta (PDB ID: 8DLN[28]), Gamma (PDB ID: 8DLQ[28]), BA.1 (PDB ID: 7T9L[29]), and BA.2 (PDB ID: 7XB0[24]) along with the wild type RBD (PDB ID: 6M0J[23]) in Figures 2 and 3. For Alpha, Beta, Gamma, BA.1, and BA.2, the maximal distances between corresponding atoms of the mutant RBDs and the wild-type RBD are 9.14Å, 9.33Å, 9.87Å, 7.44Å, and 14.32Å, respectively. For each mutant, the residues are recorded if they have at least one atom whose distance to the corresponding atom in the wild-type RBD is more than 7.16Å, which is half of the maximal distance, 14.32Å. For variants Alpha, Beta, and Gamma, such a residue is R346, while in BA.1 such a residue is K386. BA.2 has the most such residues containing atoms deviating from the wild type, namely N370, A372, K378, and K386. However, these residues are not in the receptor-binding motif (RBM, residues 438-506) that interacts directly with ACE2.
Alternatively, for the Alpha, Beta, Gamma, and BA.1 variants, we also change the threshold from 7.16Å to half of the respective maximal distance (4.57Å, 4.67Å, 4.94Å, and 3.72Å, respectively). Then, in the Alpha variant, such residues are T333, R346, K378, K386, R408, and N450. In the Beta and Gamma variants, such residues are T333, R346, K378, K386, and R408. In BA.1, such residues are T333, N334, E340, R346, N360, D364, Y369, K378, K386, F392, R408, K424, N450, K462, and H519. Also, most large C\({}_{\alpha}\) structural changes occur at the
Figure 1: Sequence alignment of RBDs of the wild type, Alpha, Beta, Gamma, BA.1, and BA.2. Alpha has one RBD mutation N501Y. Beta has three RBD mutations K417N, E484K, and N501Y. Gamma has three RBD mutations K417T, E484K, and N501Y. BA.1 has 15 RBD mutations G339D, S371L, S373P, S375F, K417N, N440K, G446S, S477N, T478K, E484A, Q493R, G496S, Q498R, N501Y, and Y505H. BA.2 has 16 RBD mutations G339D, S371F, S373P, S375F, T376A, D405N, R408S, K417N, N440K, S477N, T478K, E484A, Q493R, Q498R, N501Y, and Y505H.
Figure 2: (a) Wild type RBD-ACE2 complex. The RBD is colored in light grey, and mutated residues in Alpha, Beta, Gamma, BA.1, and BA.2 are marked. (b, c, d, e, f) Atoms of the wild type RBD are colored by their distances to corresponding atoms in a mutant RBD. Subfigures (b), (c), (d), (e), and (f) correspond to the Alpha, Beta, Gamma, BA.1, and BA.2 variants, respectively. Pink and red correspond to 0Å and 14.32Å, respectively. For each mutant, we record the residues that have at least one atom whose distance to the corresponding atom in the wild type RBD is larger than 7.16Å. In Alpha, Beta, and Gamma, such a residue is R346. In BA.1, such a residue is K386. In BA.2, such residues are N370, A372, K378, and K386. These residues are marked in (g). (Plots generated by ChimeraX [33].)
Figure 3: Atoms of the wild type RBD are colored by their distances to corresponding atoms in a mutant RBD. Subfigures (a), (b), (c), and (d) correspond to the Alpha, Beta, Gamma, and BA.1 variants, respectively. Each alignment has its own color range. For each mutant, we record the residues that have at least one atom whose distance to the corresponding atom in the wild type RBD is more than half of the maximal distance (4.57Å, 4.67Å, 4.94Å, and 3.72Å) between corresponding atoms. In Alpha, such residues are T333, R346, K378, K386, R408, and N450. In Beta and Gamma, such residues are T333, R346, K378, K386, and R408. In BA.1, such residues are T333, N334, E340, R346, N360, D364, Y369, K378, K386, F392, R408, K424, N450, K462, and H519. These residues are marked in (e). (Plots generated by ChimeraX [33].)
coil regions of the RBD. For the BA.2 variant, half of the maximal distance is 7.16Å, and we have recorded above the residues that have at least one atom whose distance to the corresponding atom in the wild-type RBD is more than 7.16Å.
To quantify the total structural differences between the wild type and mutants, we calculate the sum of squares of distances between corresponding C\({}_{\alpha}\) atoms. The results for Alpha, Beta, Gamma, BA.1, and BA.2 are 69 Å\({}^{2}\), 70 Å\({}^{2}\), 67 Å\({}^{2}\), 93 Å\({}^{2}\), and 255 Å\({}^{2}\), respectively, as shown in Figure 4. The large values for BA.1 and BA.2 are consistent with the fact that BA.1 and BA.2 are strongly antibody disruptive [10, 12]. The large structural changes induced by BA.2 mutations create a significant mismatch between antibodies and antigens, making BA.2 one of the most antibody-resistant variants [12]. Arguably, the amount of mutation-induced structural changes in RBD-ACE2 complexes also strongly correlates with viral infectivity changes.
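As a minimal illustration of this metric, the following Python sketch computes the sum of squared distances between corresponding C\({}_{\alpha}\) coordinates of two pre-aligned structures; the array names and the alignment step are assumptions for illustration, not part of the original workflow.

```python
import numpy as np

def total_structural_change(ca_wt, ca_mut):
    """Sum of squared distances (in square angstroms) between corresponding
    C-alpha atoms of two pre-aligned structures.

    ca_wt, ca_mut: (N, 3) arrays of C-alpha coordinates for residues matched
    by the alignment (hypothetical inputs).
    """
    diff = ca_wt - ca_mut                       # per-atom displacement vectors
    return float(np.sum(diff ** 2))

# toy example: two residues displaced by 1 Angstrom each along x
wt = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0]])
mut = np.array([[1.0, 0.0, 0.0], [4.8, 0.0, 0.0]])
print(total_structural_change(wt, mut))         # -> 2.0
```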
Figure 4: The total structural changes of the RBD between the wild type and mutants in the RBD-ACE2 complex. Given an alignment of a mutant RBD to the wild type RBD, the total structural change is defined as the sum of squares of distances between corresponding C\({}_{\alpha}\) atoms in the RBD.
Figure 5: Illustration of persistent (sheaf) Betti numbers of element-nonspecific persistent Laplacian (PL) and persistent sheaf Laplacian (PSL) of the residue 501 mutation site at different filtration values, i.e., radii (unit: Å). The wild type (PDB ID: 6M0J) and Alpha (PDB ID: 8DLK) are given in the first row. The Beta (PDB ID: 8DLN) and Gamma (PDB ID: 8DLQ) are given in the second row. BA.1 (PDB ID: 7T9L) and BA.2 (PDB ID: 7XB0) are given in the third row.
Figure 6: Illustration of the first nonzero eigenvalues of element-nonspecific persistent Laplacian (PL) and persistent sheaf Laplacian (PSL) of the residue 501 mutation site at different filtration values, i.e., radii (unit: Å). The wild type (PDB ID: 6M0J) and Alpha (PDB ID: 8DLK) are given in the first row. The Beta (PDB ID: 8DLN) and Gamma (PDB ID: 8DLQ) are given in the second row. BA.1 (PDB ID: 7T9L) and BA.2 (PDB ID: 7XB0) are given in the third row.
Figure 7: Illustration of persistent Betti numbers (red line) and the first nonzero eigenvalues (blue line) of element nonspecific persistent Laplacians of the wild type N501 mutation site at different filtration values, i.e., radii (unit: Å). Alpha filtration is used. The graphs from top to bottom represent the results of dimension-0, dimension-1, and dimension-2 Laplacians.
We are also interested in the topological characterization of the mutation-induced conformational changes. To this end, we employ persistent Laplacian (PL) and persistent sheaf Laplacian (PSL) to examine the local RBD structural changes induced by the mutation N501Y (a common mutation that exists in Alpha, Beta, Gamma, BA.1, and BA.2). For the wild type and mutants, the residue 501 mutation site is defined as the set of neighboring heavy atoms (C, N, and O) in the RBD such that the distance of any atom in the set to the residue 501 C\({}_{\alpha}\) is smaller than 10Å. We calculate persistent Laplacians and persistent sheaf Laplacians for the mutation sites of the wild type and variants and compare the persistent (sheaf) Betti numbers and the smallest nonzero eigenvalues of the spectra at different filtration values. Persistent Laplacians and persistent sheaf Laplacians can be calculated either element-nonspecifically or element-specifically (i.e., considering carbon, nitrogen, and oxygen atoms separately). We first employ the element-nonspecific approach and compare the results of the wild type and variants. The results of persistent Laplacian and persistent sheaf Laplacian are shown in Figures 5 and 6. The \(x\) axis represents the filtration values of the Rips filtration, such that at a filtration value \(r\) the Rips complex is constructed by considering balls of radius \(r\). The sudden changes of the persistent (sheaf) Betti numbers and the first nonzero eigenvalues near \(r=0.65\)Å reflect the fact that most neighboring atoms are about 1.3Å away from each other. In Figure 5, the number of atoms is reflected in the initial 0-th Betti numbers. The 0-th Betti number dramatically decreases around 0.65Å because covalent bond distances are about 1.5Å. The 0-th Betti number decreases further from 1.2Å to 1.7Å due to many other non-covalent bonds.
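As a rough sketch of this kind of calculation (a simplification, not the authors' code), one can, at each filtration radius, build the Rips 1-skeleton of the atomic point cloud and examine the spectrum of its 0-th combinatorial Laplacian, which is the special case \(K=L\) of the 0-th persistent Laplacian; the random point cloud, tolerance, and radius grid below are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def zeroth_laplacian_spectrum(points, radius):
    """0-th combinatorial Laplacian of the Rips 1-skeleton at a given radius.

    Vertices are atoms; two atoms are joined by an edge when their distance is
    at most 2 * radius (balls of radius r overlap). Returns the 0-th Betti
    number (count of zero eigenvalues) and the smallest nonzero eigenvalue.
    """
    dists = squareform(pdist(points))
    adj = (dists <= 2.0 * radius) & ~np.eye(len(points), dtype=bool)
    lap = np.diag(adj.sum(axis=1)) - adj.astype(float)
    eigvals = np.linalg.eigvalsh(lap)
    betti0 = int(np.sum(eigvals < 1e-8))          # number of connected components
    nonzero = eigvals[eigvals > 1e-8]
    lambda_min = float(nonzero.min()) if nonzero.size else 0.0
    return betti0, lambda_min

# hypothetical mutation-site point cloud: heavy-atom coordinates in angstroms
atoms = np.random.default_rng(0).normal(scale=3.0, size=(50, 3))
for r in np.arange(0.5, 3.0, 0.5):
    print(r, zeroth_laplacian_spectrum(atoms, r))
```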
In Figure 6, the results of the wild type and mutants almost coincide, except that the first nonzero eigenvalues of persistent sheaf Laplacians of BA.1 and BA.2 near \(r=0.65\)Å have very different values. The results of persistent Laplacians are quite different from those of persistent sheaf Laplacians at large filtration values. The significant changes around \(r=0.65\)Å are due to the topological changes.
We are also interested in understanding whether higher-dimensional persistent Laplacians can offer an additional characterization of biomolecules. Figure 7 presents the higher-dimensional persistent Laplacian analysis of the wild type RBD near the N501 residue. Obviously, higher-dimensional persistent Laplacians offer significant structural information about the distributions of circles and cavities of the macromolecule. Most dimension-1 circles occur in the range of 1.5-2.4Å, whereas most 2-dimensional cavities are located around 1.8-2.8Å. The 2-dimensional cavities are short-lived in the filtration, indicating the lack of multiple large cavities in the structure (at most one large cavity in the structure). This distribution can be used to understand interaction forces. For example, the length of hydrogen bonds ranges from 2-3.6Å (corresponding to 1-1.8Å in the filtration radii). This information is valuable for the design of machine learning representations, including the selection of the set of filtration intervals. We also note that the peak of \(\lambda_{2}^{r,0}\) is to the left of that of \(\beta_{2}^{r,0}\). It is possible that when \(r\) is in the range of 1.2Å-1.5Å, many 2-simplices are born but no 2-cycles are formed yet.
The element-specific results of the residue 501 mutation site of the wild type, and variants Alpha, Beta, Gamma, BA.1, and BA.2 are shown in Figures 8 and 9, as well as in Figures 13 and 14 in the Appendix. We observe that the difference between the first nonzero
Figure 8: Illustration of the first nonzero eigenvalues of element-specific persistent Laplacian of the residue 501 mutation site at different filtration values, i.e., radii (unit: Å). The wild type (PDB ID: 6M0J) and Alpha (PDB ID: 8DLK) are given in the first row. The Beta (PDB ID: 8DLN) and Gamma (PDB ID: 8DLQ) are given in the second row. BA.1 (PDB ID: 7T9L) and BA.2 (PDB ID: 7XB0) are given in the third row.
Figure 9: Illustration of the first nonzero eigenvalues of element-specific persistent sheaf Laplacian of the residue 501 mutation site at different filtration values, i.e., radii (unit: Å). The wild type (PDB ID: 6M0J) and Alpha (PDB ID: 8DLK) are given in the first row. The Beta (PDB ID: 8DLN) and Gamma (PDB ID: 8DLQ) are given in the second row. BA.1 (PDB ID: 7T9L) and BA.2 (PDB ID: 7XB0) are given in the third row.
eigenvalues is much more obvious. For instance, in Figure 8 there is a higher spike near 0.7Å in the graph of the Alpha variant's carbon atoms, and two spikes near 1.3Å and 1.7Å disappear in the graph of the Alpha variant's oxygen atoms. In Figure 8, all results of carbon atoms have similar shapes, implying a relatively stable RBD carbon atom structure. In the results of nitrogen atoms, we notice that the results of the Alpha, Beta, and Gamma variants resemble each other, and the same can be said of the results of the BA.1 and BA.2 variants. In the results of oxygen atoms, the results of Alpha, Beta, and Gamma still resemble each other, but the results of BA.1 and BA.2 are quite different. The results of the wild type are unique in the sense that they have one or two spikes near 1.3Å or 1.7Å. These results indicate that element-specific persistent Laplacians and element-specific persistent sheaf Laplacians are better approaches for characterizing SARS-CoV-2 variants than element-nonspecific approaches. We know that nitrogen and oxygen atoms are sparser in a protein, so if we use the element-nonspecific approach, nitrogen atoms and oxygen atoms will first form edges with neighboring carbon atoms, and we are not able to infer distances between nitrogen atoms or oxygen atoms. This explains why the element-specific approach outperforms the element-nonspecific one.
### PTL analysis of RBD structural changes induced by its binding to ACE2
We investigate how binding to ACE2 changes the spike protein RBD structure from the closed state to the open state for the wild type, Alpha, Beta, BA.1, and BA.2 variants. The PDB IDs of the spike proteins of the wild type, Alpha, Beta, BA.1, and BA.2 used in this section are 7DF3 [51], 7LWS [17], 7LYM [17], 7TF8 [16], and 7XIX [4]. The Gamma variant is omitted due to the lack of an experimental structure. We first align each of the three RBDs in the closed-state spike protein to the RBD in the RBD-ACE2 complex. The maximal distances between corresponding atoms in the RBM of the three alignments of BA.1 are 8.76Å, 13.49Å, and 9.44Å, which are larger than those of the alignments of the wild type and the other mutants. For each alignment, we record the RBM residues that have at least one atom whose distance to the corresponding atom is larger than 5.28Å, i.e., half of the mean maximal distance between corresponding atoms in the RBM of the three alignments of BA.1. In the wild-type RBD, such residues are K444 and K458. In Alpha, there are no such residues. In Beta, chains A and B have K458, and chain C has T478 and P479. In BA.1, each chain has different such residues: chain A has K440, Y453, K458, K478, and F486; chain B has K440, Y453, R457, K458, R466, Y473, Q474, K478, F486, F490, and R493; and chain C has K440, Y453, Y473, K478, and F486. In BA.2, such residues are E465, K478, and G482.
We also calculate the total structural changes of the RBM between the closed state RBD and the open state RBD induced by its binding to the human ACE2. Here, the total structural changes are defined to be the sum of squares of distances between C\({}_{\alpha}\) atoms in the RBM. Since spike protein is a trimer, we calculate the total structural changes for each chain and report the average (see Figure 10). It turns out that the average total structural changes induced by binding to ACE2 do not increase too much with respect to the number of RBD mutations.
Figure 11: Illustration of persistent Betti numbers (red line) and the first nonzero eigenvalues (blue line) of persistent Laplacian of the RBD binding site of the wild type RBD-ACE2 complex (PDB ID: 6M0J) and closed state spike protein (PDB ID: 7DF3, Chain ID: A) at different filtration values, i.e., radii (unit: Å). The graphs from top to bottom represent the results of carbon atoms, nitrogen atoms, and oxygen atoms, respectively.
Figure 10: The total structural changes of the RBM between the closed state RBD and the open state RBD induced by ACE2 binding. Here the total structural changes are defined to be the sum of squares of distances between C\({}_{\alpha}\) atoms in the RBM.
Now, we calculate persistent Laplacians and persistent sheaf Laplacians for the RBD binding site in the closed-state spike protein and in the RBD-ACE2 complex. For the wild type and mutants, we define the RBD binding site as the set of RBD residues whose C\({}_{\alpha}\)s are within 10Å of the C\({}_{\alpha}\)s of ACE2 residues. We choose 10Å as the cutoff distance because, if we used 11Å, the RBD binding site would include non-RBM residues. The spike protein, as a trimer, has three chains. In the results of the alignments, the recorded residues of the wild type, Alpha, and BA.2 are the same for the three chains. Therefore, for the wild type, Alpha, and BA.2 we only use chain A, and for Beta and BA.1 we use all three chains. The study was carried out in an element-specific manner for carbon atoms, nitrogen atoms, and oxygen atoms. The results of the wild type are shown in Figure 11. We note that persistent Betti numbers cannot distinguish the two structures. However, the first nonzero eigenvalues of the persistent Laplacian capture the difference, demonstrating the advantage of the persistent Laplacian over persistent homology in protein structure analysis.
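A minimal sketch of this binding-site selection, assuming the C\({}_{\alpha}\) coordinates and residue numbers have already been parsed from the PDB files (array names are placeholders):

```python
import numpy as np

def rbd_binding_site(rbd_ca, ace2_ca, rbd_residue_ids, cutoff=10.0):
    """Select RBD residues whose C-alpha lies within `cutoff` angstroms of
    any ACE2 C-alpha.

    rbd_ca:  (N, 3) array of RBD C-alpha coordinates (hypothetical input)
    ace2_ca: (M, 3) array of ACE2 C-alpha coordinates (hypothetical input)
    rbd_residue_ids: length-N list of RBD residue numbers
    """
    # pairwise distances between every RBD C-alpha and every ACE2 C-alpha
    d = np.linalg.norm(rbd_ca[:, None, :] - ace2_ca[None, :, :], axis=-1)
    near = d.min(axis=1) <= cutoff
    return [rid for rid, keep in zip(rbd_residue_ids, near) if keep]
```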
Additional analysis is presented in Figures 15, 16, 17, 18, 19, 20, 21, 22, 23, and 24 in the Appendix. In Figure 15, the results of the wild type, Alpha, Beta, BA.1, and BA.2 RBD binding sites are quite similar, except that the wild type RBD binding site has relatively lower first nonzero eigenvalues near \(r=0.7\)Å. It is seen that a peak appears or disappears in the graph of the nitrogen atoms, whereas for BA.1 and BA.2, the results of the nitrogen atoms resemble each other and sometimes even coincide.
In general, the first nonzero eigenvalues of the persistent Laplacian are able to distinguish the structural difference before and after the complex formation in various variants. In contrast, the harmonic spectra, or equivalently, persistent homology, cannot always capture the structural changes.
The results of persistent Laplacians and persistent sheaf Laplacians are similar in this work. However, this similarity is due to the specific implementation of persistent sheaf Laplacians. In general, persistent sheaf Laplacians enable the embedding of non-geometric chemical and physical information of biomolecules in topological and spectral representations.
### Impacts of computationally generated mutant structures on PTL-based topological deep learning predictions
Figure 12: Atoms of BA.2 RBD (PDB ID: 7XB0) are colored by their distances to corresponding atoms in the computationally generated structure. Blue, white, and red correspond to 0Å, 7.57Å, and 15.14Å, respectively. We record the residues that have at least one atom whose distance to the corresponding atom in the computationally generated structure is more than 7.57Å. Such residues are 370, 375, 378, 386, 387, and 519.
The understanding of PPIs is a vital task in computational biology. With the availability of large amounts of good-quality data, machine learning approaches have demonstrated their unique capability [27]. This is especially true for the prediction of SARS-CoV-2 infectivity and antibody resistance using topological deep learning [10]. In this work, we explore the impact of computationally generated structures on the predictive accuracy of our topological deep learning framework. We use a BA.2 RBD deep mutational scanning dataset, which involves the systematic mutation of each residue on the BA.2 RBD to the 19 other amino acids and records the corresponding binding affinity changes [36].
The deep mutational scanning covers the RBD residues from 333 to 527. In order to apply machine learning models, such as TopLapGBT and TopLapNet [9], to this dataset, BA.2 RBD mutants need to be computationally generated based on a BA.2 RBD structure, and the choice of the BA.2 RBD structure can affect the performance of the machine learning models. We can employ either an experimentally determined BA.2 RBD-ACE2 complex structure or a BA.2 RBD-ACE2 complex structure computationally generated based on an experimentally determined BA.1 RBD-ACE2 complex structure. These two complexes are systematically mutated to all possible mutants in the deep mutational scanning dataset [36]. Two PTL-based machine learning models, namely TopLapGBT and TopLapNet, are used to predict the binding affinity changes induced by all BA.2 RBD mutations. The results from these complexes are compared to examine the performance of computationally generated mutants. Here, we computationally generate mutant structures for mutations L371F, T376A, D405N, R408S, S446G, and S496G.
When the given BA.2 RBD structure is experimentally determined (PDB ID: 7XB0), the resulting models are referred to as ExpTopLapGBT (experimental TopLapGBT) and ExpTopLapNet. When the BA.2 RBD structure is computationally generated from the BA.1 RBD (PDB ID: 7T9L) by Jackal [50], the resulting model is referred to as ComTopLapGBT (computational TopLapGBT) or ComTopLapNet. The distances of corresponding atoms between the experimentally determined RBD (PDB ID: 7XB0) and the RBD generated computationally from the BA.1 RBD (PDB ID: 7T9L) are shown in Figure 12.
To evaluate the validity of computationally generated BA.2 complex structure, we compare the results of two topological deep learning methods, ExpTopLapGBT and ComTopLapGBT, on the predictions of the RBD deep mutational scanning dataset. We split the dataset into 10 folds, and for each fold, we use the other 9 folds as the training set to build a machine learning model, which is used to predict ACE2-binding affinity changes for the fold.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & \(R_{p}(Exp,True)\) & \(R_{p}(Com,True)\) & \(R_{p}(Exp,Com)\) \\ \hline TopLapGBT & 0.901 & 0.898 & 0.990 \\ \hline TopLapNet & 0.879 & 0.849 & 0.925 \\ \hline \end{tabular}
\end{table}
Table 1: \(R_{p}(Exp,True)\) is the correlation coefficient between predictions of ExpTopLapGBT (or ExpTopLapNet) and true affinity changes. Here, \(R_{p}(Com,True)\) is the correlation coefficient between predictions of ComTopLapGBT (or ComTopLapNet) and true affinity changes. \(R_{p}(Exp,Com)\) is the Pearson correlation coefficient between the predictions of ExpTopLapGBT and ComTopLapGBT (or between ExpTopLapNet and ComTopLapNet). A random state affects the 10-fold splitting and the training of the GBT and neural networks.
Therefore, for a given 10-fold splitting, we get the ExpTopLapGBT and ComTopLapGBT predictions of RBD-ACE2 binding affinity changes for the deep mutational scanning dataset. We denote by \(R_{p}(Exp,True)\) the Pearson correlation coefficient between ExpTopLapGBT predicted binding affinity changes and experimental binding affinity changes. Similarly, \(R_{p}(Com,True)\) (or \(R_{p}(Exp,Com)\)) is the Pearson correlation coefficient between ComTopLapGBT predicted binding affinity changes and experimental binding affinity changes (or ExpTopLapGBT predicted binding affinity changes).
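A minimal sketch of this evaluation protocol, assuming the features and labels have already been assembled into arrays (the function and variable names here are placeholders, not the authors' pipeline):

```python
import numpy as np
from sklearn.model_selection import KFold
from scipy.stats import pearsonr

def out_of_fold_predictions(make_model, X, y, n_splits=10, seed=0):
    """Out-of-fold predictions from an n-fold split, as described in the text.
    `make_model` is any callable returning a fresh regressor with fit/predict."""
    preds = np.zeros(len(y), dtype=float)
    splitter = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in splitter.split(X):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])
    return preds

# R_p(Exp, True) would then be computed as, e.g.:
# r_exp_true, _ = pearsonr(out_of_fold_predictions(make_exp_model, X_exp, y), y)
```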
The results of TopLapGBT and TopLapNet are shown in Table 1. Generally, the performance of models using experimentally determined structures is better than that of models using the computationally generated structure. This is not surprising, since the computationally generated structure is an approximation of the experimental structure. The performances of ExpTopLapGBT and ComTopLapGBT are extremely close, whereas the performance of ComTopLapNet differs considerably from that of ExpTopLapNet. We also see that ExpTopLapGBT outperforms ExpTopLapNet.
## 3 Theories and methods
### Persistent topological Laplacians
Persistent topological Laplacians (PTLs) are a family of topological data analysis methods that are topological, multiscale, and spectral. Loosely speaking, their kernel space dimensions coincide with the topological invariants or Betti numbers in each topological dimension and their non-harmonic spectra describe homotopic shape evolution during filtration or multiscale analysis. Various discrete PTLs, e.g., persistent Laplacian [42], persistent sheaf Laplacian [48], persistent path Laplacian [43], and persistent directed hypergraph Laplacian [6] have been proposed for point cloud data. For volumetric data, evolutionary de Rham-Hodge method has been developed [13], which is defined on a family of evolving manifolds. Evolutionary de Rham-Hodge method is based on differential geometry, algebraic topology, multiscale analysis, and partial differential equations. In this work, we focus on persistent Laplacian and persistent sheaf Laplacian.
Suppose \(K\) and \(L\) are two simplicial complexes and \(K\) is a subcomplex of \(L\). We denote by \(C^{K}\) and \(C^{L}\) the simplicial chain complexes of \(K\) and \(L\) with real coefficients. As a chain group \(C_{q}\) in a simplicial chain complex is formally generated by simplices, it is naturally a finite-dimensional inner product space, and the adjoint of the boundary map \(\partial_{q}\) is well defined. Let \(C^{L,K}_{q+1}\) be the subspace \(\{c\in C^{L}_{q+1}\mid\partial^{L}_{q+1}(c)\in C^{K}_{q}\}\) of \(C^{L}_{q+1}\) and \(\partial^{L,K}_{q+1}\) the restriction of \(\partial^{L}_{q+1}\) to \(C^{L,K}_{q+1}\). The \(q\)-th persistent Laplacian \(\Delta^{K,L}_{q}\) is defined by
\[\Delta^{K,L}_{q}=\partial^{L,K}_{q+1}(\partial^{L,K}_{q+1})^{*}+(\partial^{K}_{q})^{*}\partial^{K}_{q}. \tag{1}\]
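For intuition, here is a small numerical sketch of Eq. (1) in the non-persistent special case \(K=L\), where it reduces to the usual combinatorial Hodge Laplacian \(\Delta_{q}=\partial_{q+1}\partial_{q+1}^{*}+\partial_{q}^{*}\partial_{q}\); the hollow-triangle complex and its boundary matrices below are a toy example, not part of the original work.

```python
import numpy as np

# Hollow triangle: vertices {0,1,2}, edges {01, 02, 12}, no 2-simplex.
# Boundary matrix d1 maps edges to vertices (rows: vertices, columns: edges).
d1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]], dtype=float)
d2 = np.zeros((3, 0))          # no triangles, so the 2-boundary map is trivial

# Eq. (1) with K = L: Delta_q = d_{q+1} d_{q+1}^T + d_q^T d_q
delta0 = d1 @ d1.T                       # 0-th Laplacian (graph Laplacian)
delta1 = d2 @ d2.T + d1.T @ d1           # 1-st Laplacian

betti = lambda lap: int(np.sum(np.abs(np.linalg.eigvalsh(lap)) < 1e-8))
print(betti(delta0), betti(delta1))      # -> 1 1: one component, one loop
```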
Before we define the persistent sheaf Laplacian, we need to explain what a cellular sheaf is first. A cellular sheaf \(\mathscr{S}\) is a simplicial complex \(X\) (viewed as a cell complex) with an assignment to each cell \(\sigma\) of \(X\) a finite-dimensional vector space \(\mathscr{S}(\sigma)\) (referred to as the stalk of \(\mathscr{S}\) over \(\sigma\)) and to each face relation \(\sigma\leqslant\tau\) (i.e., \(\sigma\subset\overline{\tau}\)) a linear morphism of vector
spaces denoted by \(\mathscr{S}_{\sigma\leqslant\tau}\) (referred to as the restriction map of the face relation \(\sigma\leqslant\tau\)), satisfying the rule
\[\rho\leqslant\sigma\leqslant\tau\Rightarrow\mathscr{S}_{\rho\leqslant\tau}= \mathscr{S}_{\sigma\leqslant\tau}\mathscr{S}_{\rho\leqslant\sigma}\]
and \(\mathscr{S}_{\sigma\leqslant\sigma}\) is the identity map of \(\mathscr{S}(\sigma)\). Like a simplicial complex, a cellular sheaf gives rise to a sheaf cochain complex. The \(q\)-th sheaf cochain group \(C^{q}_{\mathscr{S}}\) is the direct sum of stalks over \(q\)-dimensional cells. To define coboundary maps, we can globally orient the simplicial complex \(X\) and obtain a signed incidence relation, i.e. an assignment to each \(\sigma\leqslant\tau\) an integer \([\sigma:\tau]\). Then the coboundary map \(d^{q}:C^{q}_{\mathscr{S}}\to C^{q+1}_{\mathscr{S}}\) is defined by
\[d^{q}|_{\mathscr{S}(\sigma)}=\sum_{\sigma\leqslant\tau}[\sigma:\tau]\mathscr{ S}_{\sigma\leqslant\tau}.\]
Now suppose we have two cellular sheaves \(\mathscr{S}\) on \(K\) and \(\mathscr{T}\) on \(L\) such that \(K\subseteq L\) and the stalks and restriction maps of \(\mathscr{S}\) are identical to those of \(\mathscr{T}\) over \(K\).
Let \(C^{q+1}_{\mathscr{S},\mathscr{T}}=\{c\in C^{q+1}_{\mathscr{T}}\mid(d^{q}_{\mathscr{T}})^{*}(c)\in C^{q}_{\mathscr{S}}\}\). We denote the adjoint map of \((d^{q}_{\mathscr{T}})^{*}|_{C^{q+1}_{\mathscr{S},\mathscr{T}}}\) as \(d^{q}_{\mathscr{S},\mathscr{T}}\) and define the \(q\)-th persistent sheaf Laplacian \(\Delta^{\mathscr{S},\mathscr{T}}_{q}\) as
\[\Delta^{\mathscr{S},\mathscr{T}}_{q}=(d^{q}_{\mathscr{S},\mathscr{T}})^{*}d^{q}_{\mathscr{S},\mathscr{T}}+d^{q-1}_{\mathscr{S}}(d^{q-1}_{\mathscr{S}})^{*}.\]
Of course, to define adjoint maps, cochain groups need to be inner product spaces. In this work, cellular sheaves are constructed in the same way as in Section 2.4 of [48]. The non-geometrical information we consider is the set of atomic partial charges. We employ partial charges from the PDB2PQR package [21]. We build a Rips filtration of graphs. For each simplicial complex \(X\), we denote each vertex by \(v_{i}\), the edge connecting \(v_{i}\) and \(v_{j}\) by \(e_{ij}\), and the partial charge of \(v_{i}\) by \(q_{i}\). Then the cellular sheaf is such that each stalk is \(\mathbb{R}\), and for the face relation \(v_{i}\leqslant e_{ij}\) the morphism is multiplication by \(q_{j}/r_{ij}\), where \(r_{ij}\) is the length of \(e_{ij}\). Spectra of persistent sheaf Laplacians are not used in TopLapGBT and TopLapNet.
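The following sketch illustrates this charge-weighted sheaf construction on a distance-cutoff graph (the 0-th sheaf Laplacian only, with made-up coordinates and partial charges); it is an illustration of the stated restriction maps, not the authors' implementation.

```python
import numpy as np

def sheaf_laplacian_0(coords, charges, cutoff):
    """0-th cellular sheaf Laplacian L0 = d0^T d0 for a distance-cutoff graph.

    Stalks are R; for the face relation v_i <= e_ij the restriction map is
    multiplication by q_j / r_ij, as described in the text.
    """
    n = len(coords)
    rows = []
    for i in range(n):
        for j in range(i + 1, n):
            r_ij = np.linalg.norm(coords[i] - coords[j])
            if r_ij <= cutoff:
                row = np.zeros(n)
                # signed incidence [v_i:e_ij] = -1, [v_j:e_ij] = +1 (one choice
                # of global orientation), composed with the restriction maps
                row[i] = -charges[j] / r_ij
                row[j] = +charges[i] / r_ij
                rows.append(row)
    d0 = np.array(rows) if rows else np.zeros((0, n))
    return d0.T @ d0

coords = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [0.0, 1.5, 0.0]])  # toy atoms (angstroms)
charges = np.array([-0.5, 0.3, 0.2])                                    # hypothetical partial charges
print(np.linalg.eigvalsh(sheaf_laplacian_0(coords, charges, cutoff=2.0)))
```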
### TopLapGBT and TopLapNet
TopLapGBT and TopLapNet [9] have been employed to study mutational effects on protein-protein interactions. Gradient boosted trees are employed in TopLapGBT, whereas TopLapNet is based on artificial neural networks. Both models are constructed using persistent
Laplacians. These methods require the 3D structures of both wild-type PPI complexes and mutant complexes. For instance, the AB-Bind S645 dataset [35] includes 645 mutants with experimentally determined BFE changes across 29 antibody-antigen complexes. Mutant structures can be computationally generated based on experimentally determined structures of the wild-type antibody-antigen complexes and mutation information (chain id, residue id, mutant residue, etc.). Representations of mutant structures, including the persistent homology and persistent Laplacian representations, and other auxiliary representations, can be used as feature vectors to train machine learning models (such as gradient boosting trees and deep neural networks) that can predict mutation-induced BFE changes.
When we apply persistent homology and persistent Laplacian to the study of protein-protein interactions, we always extract the atoms within a certain cutoff distance \(r\) of the binding site and construct a distance matrix such that if two atoms are in the same protein then the distance between them is an extremely large constant number. If we want to further characterize the interaction between atoms of certain elements \(E_{1}\) and \(E_{2}\), we can consider the point cloud formed by the atoms of an element \(E_{1}\) of protein \(A\) within \(r\) of the binding site, and the atoms of element \(E_{2}\) of protein \(B\) within \(r\) of the binding site. After the calculation of persistent homology and persistent Laplacian, the next step is to transform the barcodes of persistent homology or spectra of persistent Laplacians into vector representations of fixed lengths. For barcodes, there are at least two ways: either we divide the interval \([0,r]\) into bins of even length and count the occurrence of bars, birth values, and death values in each bin, or we simply compute statistics such as sum, maximum, minimum, mean, and standard deviation for bar lengths, birth values, and death values. The former method is often applied to 0-dimensional barcodes and the latter to 1-dimensional and 2-dimensional barcodes. For the spectrum of a persistent Laplacian, we separate zero eigenvalues (harmonic spectra) and nonzero eigenvalues (non-harmonic spectra). We use the number of zero eigenvalues, the sum, the minimum, the maximum, the mean, the standard deviation, the variance, and the sum of squares of nonzero eigenvalues.
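A minimal sketch of the spectral featurization just described, turning one persistent Laplacian spectrum into a fixed-length vector; the zero/nonzero tolerance is an assumption.

```python
import numpy as np

def spectrum_features(eigvals, tol=1e-8):
    """Fixed-length feature vector from one persistent Laplacian spectrum:
    number of (near-)zero eigenvalues, plus sum, min, max, mean, standard
    deviation, variance, and sum of squares of the nonzero eigenvalues."""
    eigvals = np.asarray(eigvals, dtype=float)
    nonzero = eigvals[eigvals > tol]
    n_zero = float(np.sum(eigvals <= tol))
    if nonzero.size == 0:
        return np.array([n_zero, 0, 0, 0, 0, 0, 0, 0])
    return np.array([n_zero, nonzero.sum(), nonzero.min(), nonzero.max(),
                     nonzero.mean(), nonzero.std(), nonzero.var(),
                     np.sum(nonzero ** 2)])

print(spectrum_features([0.0, 0.0, 0.4, 1.3, 2.7]))
```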
In this study, we use scikit-learn to build a gradient boosting tree whose parameters are n_estimators=20000, learning_rate = 0.005, max_features ='sqrt', max_depth = 9, min_samples_split = 3, subsample = 0.4, and n_iter_no_change=500. Additionally, we use PyTorch to build a neural network with 7 hidden layers and each layer has 8000 neurons.
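For concreteness, the stated hyperparameters correspond roughly to the following scikit-learn and PyTorch constructions (a sketch only; the loss, optimizer, input dimension, and training loop are not specified in the text, and the ReLU activation is an assumption).

```python
from sklearn.ensemble import GradientBoostingRegressor
import torch.nn as nn

gbt = GradientBoostingRegressor(
    n_estimators=20000, learning_rate=0.005, max_features='sqrt',
    max_depth=9, min_samples_split=3, subsample=0.4, n_iter_no_change=500)

def make_toplapnet(in_dim, hidden=8000, n_hidden=7):
    """Fully connected network with 7 hidden layers of 8000 neurons each,
    as stated in the text; the activation choice (ReLU) is an assumption."""
    layers, width = [], in_dim
    for _ in range(n_hidden):
        layers += [nn.Linear(width, hidden), nn.ReLU()]
        width = hidden
    layers.append(nn.Linear(width, 1))   # regression output: BFE change
    return nn.Sequential(*layers)
```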
## 4 Concluding remarks
Persistent topological Laplacians (PTLs) are a class of newly proposed multiscale topological spectral approaches in data science. These methods can be used either in a discrete setting for point cloud data [42, 48, 43] or in a continuous setting for volumetric data [13]. Their mathematical underpinnings for the discrete formulations are algebraic topology, sheaf theory, and combinatorial graphs, while those for the continuous formulation are algebraic topology, differential geometry, and partial differential equations. Applying these tools to SARS-CoV-2 structures, we find that among the mutants Alpha, Beta, Gamma, Omicron BA.1, and Omicron BA.2, BA.2 has the largest total structural changes from the wild type, which agrees with the significant antibody escape of the Omicron BA.2 variant. As to the total structural changes of a closed state RBD induced by
its binding to ACE2, total structural changes of Alpha, Beta, Omicron BA.1, and Omicron BA.2 do not differ too much. It is noted that most large structural changes of C\({}_{\alpha}\) occur at flexible random coil regions at the epitope.
We also demonstrate how to use PTLs to characterize structural changes induced by SARS-CoV-2 variant spike protein receptor-binding domain (RBD) mutations and by its binding to human angiotensin-converting enzyme 2 (ACE2). Two PTLs, namely persistent Laplacian and persistent sheaf Laplacian, are utilized in our work. We also analyze two implementations, i.e., element-nonspecific and element-specific Laplacian models of persistent Laplacian and persistent sheaf Laplacian. We show that persistent Laplacian and persistent sheaf Laplacian provide similar results. These methods capture homotopic shape evolution information, which persistent homology cannot offer. We expect other persistent topological Laplacians, such as persistent path Laplacian [43], can uncover similar information. Additionally, element-specific approaches reveal more information than element-nonspecific ones as shown in literature [3].
More specifically, the results of persistent Laplacian and persistent sheaf Laplacian indicate that at the residue 501 mutation site, the structure of RBD carbon atoms is less affected by mutations than that of RBD nitrogen atoms and RBD oxygen atoms, partially due to the fact that the bonds among carbon atoms are mostly covalent, whereas the interactions among oxygen atoms are mostly non-covalent. The topological Betti numbers of oxygen atoms are associated with possible hydrogen bonds. Additionally, structural similarity of mutation sites is observed in the spectra of persistent Laplacian and persistent sheaf Laplacian.
As for the RBD structural changes induced by its binding to ACE2, for wild type, Alpha and Beta, a significant difference can be observed in the result of nitrogen atoms, whereas for BA.1 and BA.2, a significant difference can be observed in the result of oxygen atoms. We show that the non-harmonic spectra (the first non-zero eigenvalues) of persistent Laplacian are more sensitive to structural changes than the harmonic spectra. Therefore, persistent Laplacian has an advantage over persistent homology in protein analysis.
Finally we test how a computationally generated structure impacts the prediction of PTL-based machine learning models, i.e., TopLapGBT and TopLapNet [9]. The results indicate that a computationally generated structure harms the performance of TopLapGBT and TopLapNet. However, TopLapGBT is much less affected by a computationally generated structure than TopLapNet, implying a resistance to the structural approximation from computations. TopLapNet is more affected by a computationally generated structure probably because neural networks are more prone to overfitting than gradient-boosted trees.
This work reveals that PTLs are a class of powerful new methods for topological data analysis (TDA) or, more precisely, spectral data analysis (SDA). These methods can certainly be applied to data analysis in other fields and disciplines, including image science, physical science, medical science, social science, engineering, the financial industry, musical science [46], etc.
## Data availability
The related datasets studied in this work are available at: [https://github.com/WeilabMSU/PTLvirus](https://github.com/WeilabMSU/PTLvirus).
## Acknowledgment
This work was supported in part by NIH grants R01GM126189 and R01AI164266, NSF grants DMS-2052983, DMS-1761320, and IIS-1900473, NASA grant 80NSSC21M0023, MSU Foundation, Bristol-Myers Squibb 65109, and Pfizer.
|
2305.13643 | Hardware Trojans in Power Conversion Circuits | This report investigates the potential impact of a Trojan attack on power
conversion circuits, specifically a switching signal attack designed to trigger
a locking of the pulse width modulation (PWM) signal that goes to a power
field-effect transistor (FET). The first simulation shows that this type of
attack can cause severe overvoltage, potentially leading to functional failure.
The report proposes a solution using a large bypass capacitor to force signal
parity, effectively negating the Trojan circuit. The simulation results
demonstrate that the proposed solution can effectively thwart the Trojan
attack. However, several caveats must be considered, such as the size of the
capacitor, possible current leakage, and the possibility that the solution can
be circumvented by an adversary with knowledge of the protection strategy.
Overall, the findings suggest that proper protection mechanisms, such as the
proposed signal-parity solution, must be considered when designing power
conversion circuits to mitigate the risk of Trojan attacks. | Jacob Sillman, Ajay Suresh | 2023-05-23T03:37:36Z | http://arxiv.org/abs/2305.13643v1 | # Hardware Trojans in Power Conversion Circuits
###### Abstract
This report investigates the potential impact of a Trojan attack on power conversion circuits, specifically a switching signal attack designed to trigger a locking of the pulse width modulation (PWM) signal that goes to a power field-effect transistor (FET). The first simulation shows that this type of attack can cause severe overvoltage, potentially leading to functional failure. The report proposes a solution using a large bypass capacitor to force signal parity, effectively negating the Trojan circuit. The simulation results demonstrate that the proposed solution can effectively thwart the Trojan attack. However, several caveats must be considered, such as the size of the capacitor, possible current leakage, and the possibility that the solution can be circumvented by an adversary with knowledge of the protection strategy. Overall, the findings suggest that proper protection mechanisms, such as the proposed signal-parity solution, must be considered when designing power conversion circuits to mitigate the risk of Trojan attacks.
## I Introduction
Hardware Trojans are a serious threat to the security and reliability of electronic systems, and power conversion circuits are particularly vulnerable due to their closed-loop, analog nature. Among hardware Trojans, power block hardware Trojans are especially dangerous since they are designed to be small in size, have low-power operation, and can be triggered with a small number of gates. This makes them difficult to detect and almost impossible to remove without damaging the circuit.
This report aims to present a study of the effects of analog Hardware Trojans on power conversion circuits, with a specific focus on power block Hardware Trojans. As noted, these Trojans are designed to be small in size, low power, and hard to detect, making them a severe threat to power conversion circuits. Power block hardware Trojans are typically placed inside the power generation block, where they can live with minimal leakage of side-channel information due to the large electromagnetic fields and hot operation. Furthermore, since power generation circuits are closed-loop and analog, they are challenging to test with conventional methods, and the only well-monitored ports of a power block are the input and output power.
For the overall impact and motivation behind such a Trojan, this report considers the threat model of untrusted third-party intellectual property (3PIP), untrusted system-on-chip (SoC) developers, and untrusted foundries. Power intellectual property is easy to reverse engineer on the side of the foundry, making it challenging to obfuscate and protect against hardware Trojans.
## II Threat Models
### _Untrusted Third Party IP_
The first threat model pertains to an untrusted third-party intellectual property (3PIP) that designers might have acquired, with a built-in trojan. In this case, designers use a third-party IP to reduce the design complexity and development cost. However, they have no control over the design of the IP, and if the third-party intentionally inserts a Trojan into the IP, it will remain unnoticed by the designers. This poses a severe threat to the entire system and can result in significant damages.
### _Untrusted System on Chip Developer_
The second threat model deals with an attacker who maliciously adds a Trojan to the system during the design phase. This attacker exists within the design house and has the necessary knowledge and expertise to insert a Trojan while designing the circuit. Since the attacker has access to the entire design, they can inject the Trojan in any part of the circuitry, making it more challenging to detect.
### _Untrusted Foundry_
The third threat model involves an untrusted foundry that deviates from the netlist and adds a Trojan during the fabrication process. In this case, the attacker has access to the hardware, and they can modify the circuitry to insert a Trojan. Since the Trojan is inserted during the fabrication process, it becomes more challenging to detect as it is masked by the hardware's natural variations.
## III Experimental Setup
### _Trojan_
For the scope of this paper, we only modelled the trigger circuit of the trojan. The circuit that generates the trigger signal can take a variety of different forms. Some condition circuits are digital in nature, and can take the form of sequential or combinational logic [3]. These circuits still need to be small in area and gate-number in order to escape detection. There are other types of condition circuits that are analog in nature, such as large-delay trojans that combine the effects of gate-oxide leakage current and Miller capacitance to create delay signals up to 2 days in length [2].
The trojan trigger circuit being tested is a switching signal attack that is designed to trigger a locking of the pulse width
modulation (PWM) signal that goes to a power field-effect transistor (FET). The purpose of this attack is to cause a system to either overvolt or disable entirely. This attack is accomplished by using an OR/NOR gate with both the PWM signal and the trojan signal as inputs, with the output going to the gate of the power FET.
When the trojan is triggered, the power FET is locked with either a high or low voltage, preventing the proper voltage regulation and resulting in potential system damage. This type of attack can be particularly damaging in the context of an IC buck converter, where the voltage is regulated and converted for a variety of applications. The use of a power FET in the attack also indicates that the trojan is designed to target power conversion circuits specifically, further underscoring the potential impact of such an attack.
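A behavioral sketch of the trigger logic described above (pure Python, no circuit-level detail; the sample vectors are made up): when the trigger is inactive the gate drive follows the PWM signal, and once it fires the OR (or NOR) gate locks the drive high (or low).

```python
def gate_drive(pwm, trigger, use_nor=False):
    """Behavioral model of the Trojan'd gate-drive path.

    pwm:     list of 0/1 samples of the intended PWM waveform
    trigger: list of 0/1 samples of the Trojan trigger signal
    With an OR gate the drive locks high while the trigger is asserted;
    with a NOR gate it locks low (and the PWM is inverted otherwise).
    """
    out = []
    for p, t in zip(pwm, trigger):
        o = p | t                    # OR of PWM and trigger
        out.append(1 - o if use_nor else o)
    return out

pwm = [0, 1, 0, 1, 0, 1, 0, 1]
trigger = [0, 0, 0, 1, 1, 1, 1, 1]   # Trojan fires midway
print(gate_drive(pwm, trigger))            # OR:  drive stuck at 1 after the trigger asserts
print(gate_drive(pwm, trigger, True))      # NOR: drive stuck at 0 after the trigger asserts
```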
The simulated trigger circuit contains a total of 2 gates and 7 transistors (it could be implemented with an XOR gate as well). This makes it particularly difficult to detect through conventional testing methods, such as inserting test patterns to assess loop functionality. Additionally, power generation circuits are typically closed-loop and analog, which limits the ability to insert test patterns to evaluate system functionality.
Overall, a well-designed power-management Hardware Trojan can have a small footprint, a rare trigger condition, and be difficult to detect without destructive testing. This highlights the potential threat of such attacks and underscores the need for continued research and development of methods to detect and prevent Hardware Trojans in power conversion circuits.
### _DC-DC Circuit_
To test the effects of the Trojan, a DC-DC converter topology is simulated (as shown above). It has a load current of 10 mA and operates at 93.3% efficiency. The output ripple of the converter is 23.8 mVpp. The converter is designed using an inductor with a value of 55.5 uH and an ESR of 0.777 ohms, along with a capacitor with a value of 40 nF and an ESR of 0.358 ohms. The power FETs used in this circuit have a fanout ratio of 4.
## IV Key Findings
During the simulation, the Trojan was inserted into the PMOS (upper transistor) PWM signal, resulting in a lock-down of the PMOS signal to a low voltage state. As a result, the PMOS remained on at all times, causing overvoltage in the system. The simulation showed that at a load current of 10mA, the system voltage reached as high as 1.2V. This overvoltage could potentially push a processor to a voltage corner, leading to functional failure. The waveform obtained from the simulation clearly depicts the overvoltage caused by the Trojan attack (Figure 2). Table 1 shows an extrapolation of the simulation results to show the various other effects that a similar attack could have on this circuit.
Fig. 1: Logic Trigger for Trojan
Fig. 2: Shift of operating voltage from 1V to 1.2V
## V Impacts
As shown clearly through the above simulation, hardware Trojans can cause subtle changes in the supply voltage of electronic systems that can have significant impacts on a variety of applications whose systems use power converter topologies similar to the one simulated above. The increase in supply voltage, prolonged over a period of time, can lead to Negative Bias Temperature Instability (NBTI) phenomena, which can degrade the performance of electronic components over time [1]. NBTI can lead to a decrease in transistor speed and an increase in leakage current, which can cause a degradation in circuit performance and reliability [1].
In sensors that are very sensitive to subtle changes in supply voltage, such as those used in critical applications like medical devices, Hardware Trojans can have severe consequences. For example, pacemakers rely on highly precise sensors that can detect subtle changes in voltage levels to regulate the heart's rhythm. If a hardware Trojan were to compromise the pacemaker's sensors, it could cause the device to malfunction or provide inaccurate readings, leading to severe health consequences such as heart attacks or even death.
Furthermore, sensors used in medical devices often operate in harsh environments that can cause significant fluctuations in the supply voltage. These fluctuations can exacerbate the impact of Hardware Trojans on the reliability of the device.
## VI Potential Mitigation
A second simulation was conducted to implement and test a potential mitigation to this kind of Hardware Trojan. The proposed solution involved using a large capacitor to force signal parity. The capacitor was placed at the root of PWM signal generation, which could be at the output of a VCO or comparator network. The other end of the capacitor was tied to the same net as the power FET gate. This ensured that the parity between the true PWM signal and the voltage measured at the gate of the FET was continuously checked (Figure 3).
The simulation showed that if the voltage at the gate of the FET and the PWM were the same, no current would flow through the capacitor, and normal functionality would be maintained. However, if the PWM and the gate voltage were opposite in phase for any amount of time, current would flow through the capacitor and pull the gate voltage back toward the true PWM signal, counteracting the triggered Trojan. This effectively negated the Trojan circuit, turning the inserted OR/NOR gate into a phase-shift network or delay network (Figure 4).
It should also be noted that this kind of signal-parity solution could be applied to any number of switching topologies. However, there are some caveats to consider. Firstly, the capacitor must be large enough to pass the PWM signal. For the simulation, 500 pF was required at 1 MHz switching frequency, which was not possible to fabricate, so an external capacitor was suggested. Secondly, the travel time of the PWM signal may automatically incur some phase shift between the plates of the capacitor. This could impact efficiency metrics due to a slight change in duty cycle measured at the FET gate, even without Trojan interference. This could be preemptively considered during design. Finally, an adversary with knowledge about this kind of protection strategy could fairly easily circumvent it by placing their gate on either side of the capacitor.
## VII Conclusions and Evaluation
In conclusion, this report has demonstrated the potential impacts of a Trojan attack on power management circuits. The simulation results clearly show how a simple OR/NOR gate can be used to trigger a locking of the PWM signal and cause overvoltage or system failure. The results also indicate that such attacks can be difficult to detect through conventional testing methods, which highlights the need for more sophisticated protection strategies.
The proposed solution of using a large bypass capacitor to force signal parity appears to be an effective countermeasure against this type of Trojan attack. However, this approach
Fig. 4: Effect of parity 500 pF capacitor
Fig. 3: Trigger circuit thwarted with parity capacitor @ 500 pF
comes with its own set of limitations and caveats, such as the need for a large enough capacitor to pass the PWM signal and the possibility of current leakage impacting efficiency metrics.
Overall, this report highlights the importance of protecting power management circuits against Trojan attacks and the need for more advanced protection strategies. As technology continues to advance, it is crucial that researchers and designers remain vigilant in identifying potential vulnerabilities and developing effective countermeasures to ensure the reliability and security of critical systems.
## VIII Project Contributions
|
2306.15762 | Toward Mesh-Invariant 3D Generative Deep Learning with Geometric
Measures | 3D generative modeling is accelerating as the technology allowing the capture
of geometric data is developing. However, the acquired data is often
inconsistent, resulting in unregistered meshes or point clouds. Many generative
learning algorithms require correspondence between each point when comparing
the predicted shape and the target shape. We propose an architecture able to
cope with different parameterizations, even during the training phase. In
particular, our loss function is built upon a kernel-based metric over a
representation of meshes using geometric measures such as currents and
varifolds. The latter allows to implement an efficient dissimilarity measure
with many desirable properties such as robustness to resampling of the mesh or
point cloud. We demonstrate the efficiency and resilience of our model with a
generative learning task of human faces. | Thomas Besnier, Sylvain Arguillère, Emery Pierson, Mohamed Daoudi | 2023-06-27T19:27:15Z | http://arxiv.org/abs/2306.15762v1 | # Toward Mesh-Invariant 3D Generative Deep Learning with Geometric Measures
###### Abstract
3D generative modeling is accelerating as the technology allowing the capture of geometric data is developing. However, the acquired data is often inconsistent, resulting in unregistered meshes or point clouds. Many generative learning algorithms require correspondence between each point when comparing the predicted shape and the target shape. We propose an architecture able to cope with different parameterizations, even during the training phase. In particular, our loss function is built upon a kernel-based metric over a representation of meshes using geometric measures such as currents and varifolds. The latter allows to implement an efficient dissimilarity measure with many desirable properties such as robustness to resampling of the
mesh or point cloud. We demonstrate the efficiency and resilience of our model with a generative learning task of human faces.
## 1 Introduction
In this paper, we focus on the generation of believable deformations of 3D faces, which has practical applications in various graphics fields, including 3D face design, augmented and virtual reality, as well as computer games and animated films. Despite the rapid progress in 3D face generation thanks to deep learning, existing methods have not yet been able to learn from unregistered scans with varying parameterizations (see Figure 1). Indeed, one major restriction of the current methods such as graph convolutional networks [3, 44] for generating these facial deformations is their reliance on a unified graph structure (required by the network architecture), along with point correspondence (for the loss function) between the target and the predicted mesh. This is especially problematic when handling real-world data: surfaces can be acquired with different technologies (LIDAR, 3D scans, Neural radiance fields [34],...) which results in inconsistent and heterogeneous databases [56, 48, 24].
Therefore, costly registration algorithms are often needed to create consistent datasets, and can even require manual intervention. As the registration of large point clouds or meshes can require hours or even days of processing time, especially with the growing number of available databases, this has motivated ongoing research on efficient methods and hardware acceleration techniques to make them more practical for real-world applications.
To address these issues, we propose an auto-encoder architecture that can, by design, be trained on inconsistent datasets with no unifying graph structure, resolution or point correspondence. Moreover, our approach works directly with explicit surface meshes, avoiding complex computation with volumetric data or functional description of the shapes. The main tools employed in our approach consist of a PointNet [6] encoder with the ability to process point clouds with variable number of points and map them to a low dimensional latent space, alongside a novel loss function that can withstand variations in parameterization. In particular, the proposed loss is robust to resampling of the mesh thanks to a representation of shapes in terms of geometric measures such as varifolds. Recently, by using a kernel metric on the space of these varifolds, Kaltenmark _et al._[28] have proposed a general
framework for 2D and 3D shape similarity measures, invariant to parameterization and equivariant to rigid transformations. This framework showed state-of-the-art performance in different shape matching settings, which motivates our approach. To the best of our knowledge, this is the first use of a kernel metric on the space of geometric measures as a cost function in deep learning.
Our results show that the presented model is indeed able to learn on meshes with different parameterizations. Moreover, our learned auto-encoder demonstrates expressive capabilities to rapidly perform interpolations, extrapolations and expression transfer through the latent space. Our main contributions are as follows:
* We propose a generative learning method using a parameterization invariant metric based on geometric measure theory. More precisely, we use kernel metrics on varifolds, with a novel multi-resolution kernel. We use it as a dissimilarity measure between the generated mesh and the target mesh during training. We highlight in particular many desirable properties of this metric compared to other metrics used in unsupervised geometric deep learning.
Figure 1: Several reparameterized meshes from the COMA dataset [44]. Captured geometric data does not have identical graph structure and correspondence between points. Even the resolution driven by the number of points/vertices is subject to variations along a dataset of scans.
* We propose a robust training method for face registration. It is composed of an asymmetric auto-encoding architecture that allows efficient learning on human faces, with a loss function based on a varifold multi-resolution metric. This approach allows us to learn on inconsistent databases with no correspondence between vertices. Moreover, we are able to efficiently learn an expressive latent space.
* To validate the robustness of our approach, we conduct several experiments, using the trained model, including face generation, interpolation, extrapolation and expression transfer.
## 2 Related work
### Geometric deep learning on meshes
Within the field of geometric deep learning, a critical challenge is generalizing operators such as convolution and pooling to meshes. The first idea was a direct application of convolutions to suitable transformations of 3D meshes, such as multi-view images [41], or 3D convolutional neural networks (CNNs) on volumetric data [21]. Recent advances have shown that the latter, with other representations such as signed distance or radiance fields, can be used in 3D CNNs and allow the reconstruction of 3D shapes. It was successfully used in applications such as geometry-aware image generation [57], shape reconstruction from images [5], or partial shape completion [10]. However, these approaches remain expensive and time-consuming to obtain detailed shapes, making them impractical for generative tasks compared to mesh-based approaches [17].
By proposing a permutation-invariant approach to learning on point clouds, PointNet [6] has opened a new and practical way to easily encode information of 3D points. Several improvements have been applied over the years with multi-scale aggregation techniques, such as PointNet++ [42], KPConv [52], or the PointTransformer [58], and recently, PointNext [43] has shown that such an approach can scale to large databases. However, while PointNet showed provable robustness to different discretizations, PointNet++ and its derivatives are based on fixed-size neighborhood aggregation, and are thus sensitive to the mesh discretization.
In the meantime, surface-based filters have been proposed to improve results and take into account the geometry of the surface. The first filters propose to exploit the graph structure of surface meshes [33], and apply Graph Neural Networks (GNN) on the underlying graph. The advantage is that it becomes easy to generalize popular CNN architectures like autoencoders or U-Net. However, original GNNs lack expressivity because of their isotropy (they gather information in all directions equally), and several anisotropic filters have been proposed [16, 35, 50] and have shown state-of-the-art results on shape classification and segmentation. Whether or not they can be applied to generative models remains however an open question. Recently, Lemeunier et al. [30] proposed to work in the spectral domain to learn on human bodies, but this approach needs the meshes to be converted to the spectral domain during learning and it is unclear how to adapt it to unparameterized data. Neural3DMM [3], which is based on the spiral ordering of mesh neighborhoods, has shown its efficiency on 3D faces [1, 38].
In this work, we follow recent advances [1] and propose an asymmetric autoencoder, with a PointNet architecture for computing the latent vector from an unregistered mesh: the simplicity of PointNet combined with its proven robustness makes it the ideal candidate, as opposed to more expressive, but less parameterization-robust approaches. On the contrary, the decoder is made of a template-dependent architecture, namely SpiralNet, for two reasons: the proven results on human faces, and the fact that spiral convolutions incorporate a better prior on the deformation model. We illustrate this with a very simple experiment in which we train a Multi-Layer Perceptron (MLP) decoder and a SpiralNet decoder on the task of mapping a single vector to a target face. We observe in Figure 2 that with the SpiralNet decoder the target shape is reached faster, and the intermediate shapes resemble human faces, as opposed to the MLP decoder. This property will allow our model to more easily reach a suitable registration of shapes.
### Robust generative learning in 3D
In the literature, "robustness" is often a welcome side property of the model [25, 50, 6], but these models still rely on a consistent database for the training phase.
In 3D machine learning, while supervised learning relies on training data that has a precise and consistent order, unsupervised learning operates without any prior correspondence and instead utilizes self-organization to model the inherent geometry of the data. Here, we explore tools to allow **completely unsupervised** 3D generative deep learning. Little work has been done on
this subject, especially for generative tasks, as in [23] or [13]. Most of the time, unsupervised learning tasks are performed by considering a functional representation of shapes [19, 46, 4], but these methods have not been applied to face generation yet. We also mention [32, 1], who proposed hybrid supervised/unsupervised learning protocols. Most of the other unsupervised registration methods are computed with non-learning methods such as LDDMM (Large Deformation Diffeomorphic Metric Mapping) [2] or elastic shape matching [26], with a recent exception in [13] where the authors learn an unsupervised diffeomorphic registration with an auto-encoder architecture that uses optimal transport for the loss function.
### Geometric loss functions
In the following, we consider 3D datasets with elements being meshes. We describe such a mesh \(X=\{V(X),E(X),F(X)\}\) with \(V(X)=(x_{i})_{i}\in\mathbb{R}^{3\times n_{X}}\) the set of vertices of \(X\), \(E(X)\) its edges and \(F(X)\) its faces with \(n_{X}=|V(X)|\) being the resolution of the mesh. Following this, \(\hat{X}\) will refer to a reconstruction of \(X\).
As mentioned in the introduction, our goal is to learn explicit 3D data on inconsistent databases. In real-world situations, without prior registration,
Figure 2: Visualization of the learning process of a single mesh using our model with different decoders. The first row shows the results using an MLP as the decoder, and for the second row we used a spiral convolution. In addition to a slower learning process, the MLP struggles to map a linear space (the latent space) to a high-dimensional and non-linear one.
we usually lack the correspondence between points, making it difficult to compute the mean squared error (MSE) or any similar euclidean distance. As a result, we investigate dissimilarity metrics that are robust or invariant to the parameterization while having a relevant geometric meaning. We review some conventional distances, some of which will be used to evaluate our method:
* The mean-squared error metric requires a point-to-point correspondence \[\mathcal{L}^{MSE}(X,\hat{X})=\frac{1}{n_{X}}\sum_{x\in X}\|x-\hat{x}\|_{2}^{2}\] (1) It is the most commonly used dissimilarity measure for supervised learning tasks.
* The Hausdorff distance (H) is a strong metric with a powerful ability to generalize to heterogeneous spaces [45]. \[\mathcal{L}^{H}(X,\hat{X})=\max\left\{\sup_{x\in X}d(x,\hat{X}),\sup_{\hat{x}\in\hat{X}}d(X,\hat{x})\right\}\] (2) where \(d\) is a pre-defined distance, usually euclidean. Unfortunately, optimizing with respect to this metric corrects only one point at each gradient step, which makes it an inefficient loss function.
* The Chamfer distance (CD) [55] is strongly linked with the iterative closest point algorithm (ICP) as it is basically the objective function to minimize for this algorithm: \[\mathcal{L}^{CD}(X,\hat{X})=\frac{1}{n_{X}}\sum_{\hat{x}\in V(\hat {X})}\min_{x\in V(X)}\|\hat{x}-x\|_{2}^{2}\\ +\frac{1}{n_{\hat{X}}}\sum_{x\in V(X)}\min_{\hat{x}\in V(\hat{X} )}\|\hat{x}-x\|_{2}^{2}\] (3) In fact, as a loss function we only require the directed Chamfer distance (DCD) given by the first term in the previous expression \[\mathcal{L}^{DCD}(X,\hat{X})=\frac{1}{n_{X}}\sum_{\hat{x}\in V(\hat{X})}\min_{ x\in V(X)}\|\hat{x}-x\|_{2}^{2}\] (4)
This loss is notably used for existing unsupervised learning tasks in [23; 32; 1; 9] (a minimal sketch of this directed loss is given after this list). However, the Chamfer distance can suffer from poor performance for at least two reasons: first, the use of the min operator makes the loss unstable, because it is not fully differentiable with respect to the mesh positions. Second, it can be sensitive to outliers or collapse onto points of a mesh. Generally, this loss is regularized using an additional term constraining the mesh deformation during the training phase. Various techniques exist, such as adding an edge loss \(\mathcal{L}^{edges}\) with respect to a template mesh \(X^{t}\). It can also be combined with a Laplacian loss to ensure the smoothness of the reconstructed mesh.
* The Wasserstein distance (also called the _earth-mover distance_) from optimal transport theory is a very popular way to compute distances between shapes. However, solving the optimal transport problem exactly is computationally expensive [39], and the distance can be approximated with the debiased Sinkhorn divergence (SD) developed in [14] and recently used in [20]. To compute this distance, we represent a mesh \(X\) as an aggregation of Dirac measures \(A(X)=\sum_{i}a_{i}^{X}\delta_{c_{i}^{X}}\), i.e. a sum over the face centers \((c_{i}^{X})_{i}\), weighted by their corresponding areas \((a_{i}^{X})_{i}\). \[\mathcal{L}^{SD}(X,\hat{X})=OT_{\epsilon}(A(X),A(\hat{X}))-\frac{1}{2}OT_{\epsilon}(A(X),A(X))-\frac{1}{2}OT_{\epsilon}(A(\hat{X}),A(\hat{X}))\] (5) with \[OT_{\epsilon}(\alpha,\beta)=\min_{\pi\in\Pi}\sum_{i=1}^{N}\sum_{j=1}^{M}\pi_{i,j}\frac{1}{p}\|c_{i}^{X}-c_{j}^{\hat{X}}\|_{p}+\epsilon KL(\pi\|\alpha\otimes\beta)\] where \(\pi\) is the regularized transport plan. The Kullback-Leibler divergence (KL) is a regularization term, called the entropic penalty, and the blur \(\epsilon\) is a hyperparameter that indicates how strong the approximation of the Wasserstein distance is, with \(OT_{\epsilon}\xrightarrow[\epsilon\to 0]{}OT\) the true Wasserstein distance. This hyperparameter needs to be adapted to each task. Moreover, we observed in our experiments that the computation and differentiation of the distance can still be expensive, making it impractical for learning on large datasets.
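For concreteness, the directed Chamfer distance of Equation (4) can be written in a few lines of PyTorch. This is our own minimal sketch, not the authors' code, and it simply averages, over the predicted vertices, the squared distance to the nearest target vertex:

```python
import torch

def directed_chamfer(x_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """x_hat: (m, 3) predicted vertices, x: (n, 3) target vertices."""
    d2 = torch.cdist(x_hat, x) ** 2          # pairwise squared distances, (m, n)
    return d2.min(dim=1).values.mean()       # nearest-target distance, averaged
```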
In contrast to all these methods, the varifold approach is built using reproducing kernels, and thus has spatial support: outliers are not seen by the loss. Moreover, by taking into account every pair of points, it is fully differentiable with respect to the mesh positions. Finally, the use of normals in the loss allows it to account for the shape of the surface, instead of seeing a mesh as just a cloud of points of \(\mathbb{R}^{3}\). We denote our proposed loss \(\mathcal{L}^{GM}\) and summarize its advantages in Table 1.
As we will see in the experiments, we do not need any regularization to obtain plausible meshes as outputs of our method.
## 3 Our approach
To complete the face generation task, we propose a simple auto-encoder detailed in Figure 3. The main originality of our model comes from the loss function based on a representation of meshes with discrete geometric measures.
### Geometric measure theory applied to surfaces
Let \(S\) be a parameterized surface embedded in \(\mathbb{R}^{3}\). This surface and its triangulations can be understood as points in a high-dimensional space of shapes (infinite in the case of continuous surfaces). We look for a robust metric on this particular space, suitable for unsupervised learning tasks. If we take a diffeomorphism \(\phi\) acting on the parameter space, such a metric should not differentiate between a given shape \(S\) and a reparameterization \(\tilde{S}=S\circ\phi\) of this shape: \(S\) and \(\tilde{S}\) are at distance \(0\) for this metric.
\begin{table}
\begin{tabular}{l|c|c|c|c|c} Property / Loss & \(\mathcal{L}^{MSE}\) & \(\mathcal{L}^{H}\) & \(\mathcal{L}^{DCD}\) & \(\mathcal{L}^{SD}\) & \(\mathcal{L}^{GM}\) \\ \hline Unsupervised & ✗ & ✓ & ✓ & ✓ & ✓ \\ Smooth gradient & ✓ & ✗ & ✗ & ✓ & ✓ \\ Position & ✓ & ✓ & ✓ & ✓ & ✓ \\ Orientation & ✗ & ✗ & ✗ & ✗ & ✓ \\ Tunable & ✗ & ✗ & ✗ & ✓ & ✓ \\ \end{tabular}
\end{table}
Table 1: Summary of the properties of the aforementioned dissimilarity metrics
**Definition 3.1** (Varifold representation of surfaces): _The varifold \(\mu_{S}\) associated to a continuous surface shape \(S\) is the measure on \(\mathbb{R}^{3}\times\mathbb{S}^{2}\) such that, for any continuous test function \(u:\mathbb{R}^{3}\times\mathbb{S}^{2}\mapsto\mathbb{R}\):_
\[\mu_{S}(u)=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}ud\mu_{S}=\int_{S}u(x, \vec{n}_{x})\text{d}\sigma(x) \tag{6}\]
_where \(\vec{n}_{x}\) is the normal of \(S\) at \(x\) and \(d\sigma\) the area measure of the surface \(S\)._
The key property that motivates the use of varifolds in our context is that for any two parameterized shapes \(S\) and \(\tilde{S}\), \(\mu_{S}=\mu_{\tilde{S}}\) if and only if \(\tilde{S}\) is a reparameterization of \(S\)[8].
Moreover, there is a natural discrete version of varifolds for meshes as follows. If \(f\) is a triangle (e.g. a face in a triangular mesh) with center \(c(f)\), and normal \(\vec{n}_{f}\), the corresponding discrete varifold \(\mu_{f}\) is given by a Dirac mass \(\delta^{\vec{n}_{f}}_{c(f)}\) at \((c(f),\vec{n}_{f})\) weighted by \(a(f)\) the area of \(f\). In other words, for any continuous test function \(u\) on \(\mathbb{R}^{3}\times\mathbb{S}^{2}\),
\[\mu_{f}(u):=a(f)u(c(f),\vec{n}_{f}).\]
and we can write \(\mu_{f}=a(f)\delta^{\vec{n}_{f}}_{c(f)}.\) Therefore, we can extend this to a triangular mesh:
Figure 3: Architecture overview. The encoder takes a mesh \(X\) of any parameterization as input and outputs a deformation \(\Delta\) added to a chosen template to obtain a registered mesh \(\hat{X}\) with a similar topology as the template.
**Definition 3.2** (Discrete varifold representation of surfaces): _Let a mesh \(X=\{V(X),E(X),F(X)\}\), where \(V(X)\), \(E(X)\), and \(F(X)\) are respectively, the set of vertices, edges, and faces. The varifold representation associated to \(X\) is the measure on \(\mathbb{R}^{3}\times\mathbb{S}^{2}\), given by_
\[\mu_{X}:=\sum_{f\in F(X)}\mu_{f}=\sum_{f\in F(X)}a(f)\delta_{c(f)}^{\vec{n}_{f }},\]
_with \(\mu_{f}=a(f)\delta_{c(f)}^{\vec{n}_{f}}\) as described above._
This representation is well suited for triangular meshes as each triangle is represented by a measure on the position of its center \(c(f)\) and its orientation \(\vec{n}_{f}\) given by a point on the 2-sphere, all weighted by the area of the triangle \(a(f)\).
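The discrete varifold of Definition 3.2 is straightforward to extract from a triangle mesh. The following minimal PyTorch sketch is our own illustration (the function name and interface are hypothetical, not the authors' code); it computes the per-face centers \(c(f)\), unit normals \(\vec{n}_{f}\) and areas \(a(f)\):

```python
import torch

def mesh_to_varifold(verts: torch.Tensor, faces: torch.Tensor):
    """verts: (n, 3) float tensor, faces: (m, 3) long tensor of vertex indices."""
    v0, v1, v2 = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    centers = (v0 + v1 + v2) / 3.0                         # face centers c(f)
    cross = torch.cross(v1 - v0, v2 - v0, dim=1)           # unnormalized face normals
    areas = 0.5 * cross.norm(dim=1)                        # face areas a(f)
    normals = cross / (2.0 * areas + 1e-12).unsqueeze(1)   # unit normals on S^2
    return centers, normals, areas
```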
Moreover, we have the following result, which is an easy consequence of Proposition 1 from [28] combined with Corollary 1 from [37]:
**Theorem 3.1**: _Take \(u:\mathbb{R}^{3}\times\mathbb{S}^{2}\to\mathbb{R}\) a bounded \(k_{u}\)-Lipschitz function, with supremum \(\|u\|_{\infty}\). Let \(S\) be a surface, and \(X\) a triangular mesh drawn from \(S\) whose vertices belong to \(S\), with greatest edge length \(\eta_{X}\), and smallest angle \(\theta_{X}\) among its faces. Denote \(\kappa_{S}\) the greatest principal curvature over \(S\), and \(a(S)\) its surface area. Then, there is a universal constant \(C\) such that_
\[|\mu_{S}(u)-\mu_{X}(u)|\leq Ca(S)\frac{(\kappa_{S}+1)}{\sin\theta_{X}}(k_{u}+ \|u\|_{\infty})\eta_{X}\]
Consequently, for any two good triangulations with relatively small edges \(X\) and \(\hat{X}\) of \(S\), \(\mu_{X}\simeq\mu_{\hat{X}}\), making discrete varifolds natural tools for mesh-invariant purposes.
**Comparing shapes with kernel metrics.**
With this representation of shapes, we compute dissimilarities between shapes, both continuous and discrete, by using kernels on \(\mathbb{R}^{3}\times\mathbb{S}^{2}\). Following the works of [53, 8, 28, 40], we use a product \(k=k_{p}k_{n}\), with \(k_{p}\) a kernel on \(\mathbb{R}^{3}\) and \(k_{n}\) a kernel on \(\mathbb{S}^{2}\).
To ensure invariance under the action of rigid motions (rotations and translations), we choose a radial basis function \(\rho\) to drive the position kernel \(k_{p}\) and a zonal kernel \(\gamma\) for the orientation kernel \(k_{n}\). Details on admissible functions for \(\gamma\) and \(\rho\) can be found in [22, 49].
\[k_{p}:\begin{cases}\mathbb{R}^{3}\times\mathbb{R}^{3}&\to\mathbb{R}\\ (x,\hat{x})&\mapsto\rho(|x-\hat{x}|)\end{cases} \tag{7}\]
\[k_{n}:\begin{cases}\mathbb{S}^{2}\times\mathbb{S}^{2}&\rightarrow\mathbb{R}\\ (\vec{n}_{x},\vec{n}_{\hat{x}})&\mapsto\gamma(\langle\vec{n}_{x},\vec{n}_{\hat{x }}\rangle)\end{cases} \tag{8}\]
These kernels are extrinsic in the sense that they are defined on the ambient space \(\mathbb{R}^{3}\times\mathbb{S}^{2}\), and use the euclidean distances.
Then, we can derive a correlation between any two measures \(\mu,\hat{\mu}\) on \(\mathbb{R}^{3}\times\mathbb{S}^{2}\) as
\[\langle\mu,\hat{\mu}\rangle_{k}=\int_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\int _{\mathbb{R}^{3}\times\mathbb{S}^{2}}k_{p}(x,\hat{x})k_{n}(\vec{n}_{x},\vec{n} _{\hat{x}})d\mu(x,\vec{n})d\hat{\mu}(\hat{x},\vec{n}_{\hat{x}})\]
This gives a parameterization-independent correlation between two surfaces \(S\) and \(\hat{S}\) through the kernel \(k\) as follows:
\[\langle\mu_{S},\mu_{\hat{S}}\rangle_{k}=\iint_{S\times\hat{S}}k_{p}(x,\hat{x} )k_{n}(\vec{n}_{x},\vec{n}_{\hat{x}})\mathrm{d}\sigma(\hat{x})\mathrm{d}\sigma (x) \tag{9}\]
For the discrete setting, we write the correlation between two faces \(f\) and \(\hat{f}\)
\[\langle f,\hat{f}\rangle_{k}=a(f)a(\hat{f})k_{n}(\vec{n}_{f},\vec{n}_{\hat{f}} )k_{p}(c(f),c(\hat{f})). \tag{10}\]
This can be summed along the meshes to give a discretized version of Equation (9). For \(X\) and \(\hat{X}\) two meshes, the correlation is
\[\langle\mu_{X},\mu_{\hat{X}}\rangle_{k}=\sum_{f\in F(X)}\sum_{\hat{f}\in F(\hat{X})}a(f)a(\hat{f})k_{p}(c(f),c(\hat{f}))k_{n}(\vec{n}_{f},\vec{n}_{\hat{f}}) \tag{11}\]
Now for some kernels, these formulas actually give a positive definite dot-product on the space of measures, so that \(\mu=\hat{\mu}\) if and only if \(\|\mu-\hat{\mu}\|^{2}=\langle\mu-\hat{\mu},\mu-\hat{\mu}\rangle=0\). From there, we define the "geometric measure" (GM) loss associated to such a kernel by
\[\mathcal{L}_{k}^{GM}(X,\hat{X}) =\langle\mu_{X}-\mu_{\hat{X}},\mu_{X}-\mu_{\hat{X}}\rangle_{k}\] \[=\langle\mu_{X},\mu_{X}\rangle_{k}+\langle\mu_{\hat{X}},\mu_{\hat {X}}\rangle_{k}-2\langle\mu_{X},\mu_{\hat{X}}\rangle_{k}. \tag{12}\]
For well-chosen kernels, this function is fully differentiable and can be written in closed form, making this loss function suitable for GPU accelerated computations. Moreover, thanks to Theorem 3.1, this GM loss is robust to a mesh change in both \(X\) and \(\hat{X}\).
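As an illustration, the kernel correlation of Equation (11) and the loss of Equation (12) can be written compactly in PyTorch. The sketch below is ours, not the authors' implementation: it assumes a Gaussian position kernel \(k_{p}(x,\hat{x})=\exp(-\|x-\hat{x}\|^{2}/\sigma^{2})\) (the exact normalization of the exponent is our assumption) and the squared zonal kernel \(k_{n}(\vec{n},\vec{n}')=\langle\vec{n},\vec{n}'\rangle^{2}\) of unoriented varifolds. In practice such sums are better evaluated with KeOps so that the faces-by-faces matrices are never materialized.

```python
import torch

def varifold_dot(c1, n1, a1, c2, n2, a2, sigma):
    """Kernel correlation of Eq. (11) between two sets of (centers, normals, areas)."""
    k_pos = torch.exp(-torch.cdist(c1, c2) ** 2 / sigma ** 2)  # Gaussian kernel on face centers
    k_nor = (n1 @ n2.T) ** 2                                   # squared zonal kernel on normals
    return (a1[:, None] * a2[None, :] * k_pos * k_nor).sum()

def gm_loss(mesh1, mesh2, sigma):
    """GM loss of Eq. (12); mesh = (centers, normals, areas), e.g. from mesh_to_varifold."""
    return (varifold_dot(*mesh1, *mesh1, sigma)
            + varifold_dot(*mesh2, *mesh2, sigma)
            - 2.0 * varifold_dot(*mesh1, *mesh2, sigma))
```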
Figure 4: Candidate position kernels, including the exponential kernel \(x\mapsto\exp\left(-\frac{|x|}{\sigma}\right)\). On the right is displayed the 1D plot of the three kernels: in red the Gaussian kernel, in blue the Cauchy kernel and in green the exponential one. Next, 2D plots of the Gaussian, Cauchy and exponential kernels respectively from left to right.
**Choice of kernel and loss function.** Several kernels are suitable for \(k_{p}\) such as Gaussian, linear and Cauchy kernels. We display some of them in Figure 4.
By default, we use a Gaussian kernel for the position and, depending on the type of geometric measure, we either use a linear (current), squared (varifold) or exponential (oriented varifold) zonal kernel on \(\mathbb{S}^{2}\) for \(k_{n}\). In particular, \(k_{p}\) is defined by a scale parameter \(\sigma\). Typically, this parameter should be chosen in order to encompass the structure of local neighborhoods across the meshes.
To improve the versatility and efficiency of our loss, we propose to use a sum of kernels with a different scale parameter for each term, such that our final loss is defined as
\[\mathcal{L}^{GM}=\sum_{i}\lambda_{i}\mathcal{L}^{GM}_{k_{i}} \tag{13}\]
with \(k_{i}\) associated with the scale \(\sigma_{i}\) and a scalar weighting coefficient \(\lambda_{i}\). The number of kernels and the coefficients \((\lambda_{i})_{i}\) are hyperparameters of the model. For our task, we observed experimentally that setting \(\lambda_{i}=\left(\frac{\sigma_{i}}{\max_{i}\sigma_{i}}\right)^{2}\) could give good enough results. This way, we penalize small scales but still allow the metric to distinguish fine structures on the mesh made up of small triangles.
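Building on the `gm_loss` sketch given earlier, the multi-scale loss of Equation (13) with the weights suggested above is simply:

```python
def multi_scale_gm_loss(mesh1, mesh2, sigmas):
    """Eq. (13) with lambda_i = (sigma_i / max_i sigma_i)^2."""
    s_max = max(sigmas)
    return sum((s / s_max) ** 2 * gm_loss(mesh1, mesh2, s) for s in sigmas)
```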
### Application: face generation
Human face modeling involves deformable geometries with small but meaningful changes. In general, we model a face shape \(S\) as the combination of an identity (shape of the face) and an expression (reversible deformation from a neutral expression): this is the so-called **morphable model**[18], [15]. It has seen many improvements over the past few years thanks, in part, to computer vision research and deep learning. Indeed, using nonlinear, deep representations presents the potential to outperform traditional linear or multi-linear models in terms of generalization, compactness, and specificity. In addition, the implementation of deep networks for parameter estimation allows for quick and dependable performance even with uncontrolled data.
\[S=\mathrm{Id}+\mathrm{Expr}=\mathrm{Template}+\Delta\mathrm{Id}+\Delta \mathrm{Expr} \tag{14}\]
and we set \(\Delta=\Delta\mathrm{Id}+\Delta\mathrm{Expr}\) the total deformation of the template to match the target face.
The COMA dataset [44] encapsulates this representation as it is made of 12 identities executing 12 different expressions. Each expression is a sequence of meshes during which the subject starts from a neutral face, executes the expression and goes back to a neutral face. Each sequence is made of 25 to more than 200 meshes and the sequence length is not consistent across the identities. A distinction is made between the unregistered meshes obtained from scans and their registered counterparts, providing us with two distinct databases.
## 4 Experiments
The model corresponding to the architecture presented in Figure 3 is trained end-to-end: both the encoder and the decoder weights are optimized at the same time. The Python code is built over the one from [3] using the Pytorch framework. All measurements were conducted using the same machine (a laptop) with an NVIDIA Corporation / Mesa Intel(r) UHD Graphics (TGL GT1) GPU and an Intel(r) Core i5-9300HF 2.40GHz CPU with 8 GB of RAM.
### Implementation details
Our model takes as input a mesh of any parameterization and gives out a mesh in the COMA topology which has 5023 vertices and 9976 faces with
a fixed graph structure (the topology of the output solely depends on the topology of the chosen template which can be adapted).
The encoder is a combination of a simple PointNet architecture (PN), a spatial transformer (TF) as described in [27] to improve invariance to euclidean transformations of the input and a fully connected layer (FC). We use the parameters of [12], that were optimized on the COMA dataset. The filter sizes for the decoder are [128, 64, 64, 64, 3]. It starts with a fully connected layer (FC) and then alternately performs up-sampling (US) and spiral (de)-convolution (SC). The parameters are taken from the original Neural3DMM [3] paper.
* _Encoder:_ PN(64,1024) \(\rightarrow\) TF(64) \(\rightarrow\) FC(128,64,64)
Figure 5: Illustration of a morphable model: a template mesh (in red) is deformed into a given mesh (in blue).
* _Decoder:_ FC(128) \(\rightarrow\) US(4) \(\rightarrow\) SC(128) \(\rightarrow\) US(4) \(\rightarrow\) SC(64) \(\rightarrow\) US(4) \(\rightarrow\) SC(64)
The model learns to encode the deformation from a fixed template mesh \(X^{t}\) that does not belong to the training dataset. It is trained for 100 epochs with a latent space of fixed size 128. We used the Adam optimizer with a learning rate of \(10^{-3}\) and a batch size equal to 16.
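For readers who prefer code, the sketch below is a schematic, heavily simplified skeleton of this asymmetric auto-encoder (our illustration, not the released implementation): a PointNet-style encoder maps a point cloud of arbitrary size to a latent code, and the decoder, which in the paper is a SpiralNet with up-sampling layers and here is replaced by a plain MLP for brevity (the spatial transformer of the encoder is also omitted), predicts the displacement \(\Delta\) added to the fixed template.

```python
import torch
import torch.nn as nn

class FaceAutoEncoder(nn.Module):
    def __init__(self, template_verts: torch.Tensor, latent_dim: int = 128):
        super().__init__()
        self.register_buffer("template", template_verts)            # (n_t, 3) fixed template
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                       nn.Linear(64, 1024), nn.ReLU())
        self.to_latent = nn.Linear(1024, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 3 * template_verts.shape[0]))

    def forward(self, points: torch.Tensor):                        # (n, 3) with any n
        feat = self.point_mlp(points).max(dim=0).values             # global max-pooling
        z = self.to_latent(feat)                                    # latent code
        delta = self.decoder(z).view(-1, 3)                         # per-template-vertex offset
        return self.template + delta, z                             # registered mesh and code
```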
Unfortunately, this loss with a single kernel performs well only when the size of the triangles is relatively constant along the mesh. In order to cope with this limitation, we use what we call a **multi-kernel metric** with a collection of different scales \((\sigma_{i})_{i}\). Experimentally, we observe that the loss function leads to poor performance when we do not use enough kernels, but also when we use too many of them (see Figure 6).
Even with a carefully optimized loss function, we still observe some noise in the reconstruction. To correct this, we propose to use a post-processing step with Taubin smoothing [51]. We highlight the improvement in quality of the reconstructed mesh in Figure 7. More sophisticated techniques can be applied, such as using the pretrained model from Kim _et al._ [29] to remove the noise.
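As a rough illustration of this post-processing step (our sketch, not the authors' code), Taubin smoothing alternates a shrinking Laplacian step (\(\lambda>0\)) and an inflating step (\(\mu<-\lambda\)); the values \(\lambda=0.5\), \(\mu=-0.53\) follow the caption of Figure 7, while the number of iterations here is arbitrary.

```python
import torch

def umbrella_laplacian(verts, faces):
    """Uniform (umbrella) Laplacian: mean of the neighbors minus the vertex itself."""
    edges = torch.cat([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]], dim=0)
    edges = torch.unique(torch.cat([edges, edges.flip(1)], dim=0), dim=0)
    neighbor_sum = torch.zeros_like(verts).index_add_(0, edges[:, 0], verts[edges[:, 1]])
    degree = torch.zeros(verts.shape[0], device=verts.device)
    degree.index_add_(0, edges[:, 0], torch.ones(edges.shape[0], device=verts.device))
    return neighbor_sum / degree.clamp(min=1).unsqueeze(1) - verts

def taubin_smooth(verts, faces, lam=0.5, mu=-0.53, iterations=10):
    v = verts.clone()
    for _ in range(iterations):
        v = v + lam * umbrella_laplacian(v, faces)   # shrinking step
        v = v + mu * umbrella_laplacian(v, faces)    # inflating step
    return v
```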
### Learning performances
We train our model on 11 out of the 12 identities of the COMA dataset and we evaluate the performance on the remaining identity. Therefore, we
Figure 6: Qualitative comparison for the reconstruction of a target mesh (on the left) with a growing number of kernels (summed) from left to right.
assess the generalizability of the model and compare the performance to other unsupervised methods. FLAME [31] is a face morphable model. In the original paper, the authors use this model to register faces easily. They minimize a regularized Chamfer distance and use landmark information to register a face model. For a fair comparison with our method, we do not use landmark information and minimize the mesh-to-mesh distance. 3DCODED [23] is a deep learning model using a PointNet encoder and an MLP decoder that deforms a template mesh. Their unsupervised version uses the Chamfer distance as a loss function, and is regularized using an edge loss and a Laplacian loss. This model has demonstrated its efficiency for human body registration, and we modify it in order to apply it to human faces. We also tried a more recent model: Deep Diffeomorphic Registration [13] (DDFR). It is a deep learning model that learns a diffeomorphism of the ambient space in order to morph a source mesh (the template) to a target mesh. Unfortunately, the training process showed a prohibitive computational cost while showing a limited ability to produce expressive faces. This can be explained by the fact that diffeomorphisms of the ambient space can hardly separate the lips or produce "sliding motions", which are not smooth.
We compute the reconstruction error according to 3 metrics: the Hausdorff distance \(d_{H}\), the Chamfer distance \(d_{CD}\) and a Varifold distance \(d_{V}\) with \(k_{p}\) being a Gaussian kernel with \(\sigma=0.1\). This specific scale has been chosen as it is roughly ten times the average size of a triangle. The results are presented in Table 2.
We highlight the following observations:
1. The FLAME model struggles to reconstruct "extreme" facial expressions such as a wide-open mouth in the 4th row of Figure 7.
\begin{table}
\begin{tabular}{l|l|l|l} \hline Model & \(d_{H}\) & \(d_{CD}\) (\(\times 10^{-4}\)) & \(d_{V}\) (\(\times 10^{-4}\)) \\ \hline
3DCODED [23] & 0.018 & 0.312 & 0.267 \\ FLAME [31] & 0.012 & 0.109 & 0.013 \\ \hline Ours & 0.010 & 0.091 & 0.011 \\ Ours (+ filter) & **0.009** & **0.088** & **0.011** \\ \end{tabular}
\end{table}
Table 2: Reconstruction error when learning faces from 11 out of the 12 identities of COMA and tested on the remaining identity which constitutes around 1200 meshes for the test set. The error is averaged along this test set.
We also report a longer time for the registration.
2. 3DCODED, while being a lot faster than linear methods such as FLAME, shows poor performance for learning human faces. In fact, the model only outputs meshes that are barely distinguishable from the template mesh.
3. Our model surpasses FLAME in terms of expressivity and 3DCODED in terms of efficiency.
### Ablation study
Next, we show that the effectiveness of our model stems in particular from the loss function. We compare some unsupervised losses mentioned in section 3 with our geometric-measure-based loss \(\mathcal{L}^{GM}\) without changing any other parameters. We focus on the encoding and decoding of 4 sequences of expression (#1: _bare teeth_, #2: _cheeks_in_, #3: _high_smile_ and #4: _mouth_extreme_ in Table 3). Here, \(\mathcal{L}^{GM}\) is computed with a Gaussian kernel for \(k_{p}\) and a squared zonal kernel for \(k_{n}\), which corresponds to the framework of unoriented varifolds.
We conducted the test using \(\mathcal{L}^{SD}\) with different values of the approximation parameter \(\epsilon\), as it is the most serious challenger to \(\mathcal{L}^{GM}\). In spite of this, setting \(\epsilon\) to be less than \(10^{-4}\) showed declining performance in addition to a much higher computational cost.
\begin{table}
\begin{tabular}{l|c c c c|c c c c|c c c c} Loss function & \multicolumn{4}{c|}{Hausdorff} & \multicolumn{4}{c|}{Chamfer (\(\times 10^{-4}\))} & \multicolumn{4}{c}{Varifold (\(\times 10^{-4}\))} \\ \hline & \#1 & \#2 & \#3 & \#4 & \#1 & \#2 & \#3 & \#4 & \#1 & \#2 & \#3 & \#4 \\ \(\mathcal{L}^{DCD}\) & 0.031 & 0.031 & 0.030 & 0.031 & 0.45 & 0.44 & 0.42 & 0.44 & 0.38 & 0.38 & 0.38 & 0.38 \\ \(\mathcal{L}^{DCD}+\mathcal{L}^{edges}\) & 0.030 & 0.029 & 0.030 & 0.029 & 0.43 & 0.42 & 0.43 & 0.45 & 0.35 & 0.34 & 0.36 & 0.34 \\ \(\mathcal{L}^{SD}(\epsilon=0.0001)\) & 0.018 & 0.017 & 0.017 & 0.017 & 0.101 & 0.099 & 0.094 & 0.102 & 0.016 & 0.017 & 0.015 & 0.016 \\ \(\mathcal{L}^{SD}(\epsilon=0.001)\) & 0.024 & 0.027 & 0.023 & 0.023 & 0.135 & 0.133 & 0.120 & 0.134 & 0.023 & 0.023 & 0.021 & 0.022 \\ \(\mathcal{L}^{SD}(\epsilon=0.01)\) & 0.036 & 0.041 & 0.030 & 0.031 & 0.218 & 0.198 & 0.152 & 0.155 & 0.030 & 0.035 & 0.032 & 0.034 \\ \(\mathcal{L}^{GM}\) (ours) & 0.010 & 0.011 & 0.09 & 0.010 & 0.089 & 0.084 & 0.086 & 0.085 & 0.011 & 0.011 & 0.010 & 0.011 \\ \end{tabular}
\end{table}
Table 3: Ablation study: we trained our model using different unsupervised losses (only the loss is changed) and we report the mean reconstruction error when learning expressions in the COMA dataset. The reported evaluation is performed on meshes that do not belong to the training data. We account for the quality of the learning process in relation to the loss function employed for the task.
### Robustness
We evaluate our model on different reparameterized meshes of faces to demonstrate that our model learns the geometry of the shape instead of the graph structure. This experiment is conducted on one identity executing all 12 expressions. The proposed reparameterizations of the meshes are displayed in Figure 8.
In a similar fashion as in [50], we test the robustness of our model against three different reparameterizations. _UpDown_ is obtained by subdividing the
Figure 7: Qualitative results on the reconstruction of facial expressions from the COMA dataset with our AE model. On the left is the target mesh, the first pair of reconstructions shows the reconstructed mesh with a linear method (FLAME) and on the right is displayed the reconstruction with our model. The first proposed reconstruction is obtained without any post-processing and the other one with a Taubin smoothing (parameters: \(\lambda=0.5\), \(\mu=-0.53\)) with its corresponding MSE heatmap of error.
mesh and then performing a quadric edge simplification. _Iso_ is the result of one iteration of explicit isotropic remeshing. Finally, the _Variable_ parameterization is obtained by dividing the mesh into two halves: the top and the bottom part. On the top part, we perform a simplification to diminish the number of triangles and on the bottom part, we subdivide the mesh in order to get a much higher density of triangles. These new meshes are obtained via Meshlab [11] remeshing routines, using the _pymeshlab_1 Python library.
Footnote 1: [https://pymeshlab.readthedocs.io/](https://pymeshlab.readthedocs.io/)
We stress the robustness of our model by comparing the outputs of the model when given the original parameterization as input and when given a reparameterization as input. The results are summarized in Table 4, where we display the relative difference between the two outputs. While we have in
\begin{table}
\begin{tabular}{l|l|l} \hline & Hausdorff & Chamfer \\ \hline Original - Original & 0 & 0 \\ Original - UpDown & 0.0035 & 0.0096 \\ Original - Iso & 0.0012 & 0.0016 \\ Original - Variable & 0.0018 & 0.0030 \\ \end{tabular}
\end{table}
Table 4: Relative difference of reconstruction when the model is tested on reparameterized meshes. The evaluation is performed on meshes that do not belong to the training set in order to evaluate the robustness of the learning process.
Figure 8: Examples of reparameterized meshes of a single expressive face, from left to right: original parameterization, _UpDown_, _Iso_ and _Variable_
deed a slight difference between the outputs, the worst relative difference is around 2% in Chamfer distance. Qualitative results, displayed in Figure 9, highlight this robustness visually.
### Training on inconsistent batches of meshes
As stated above, our model is able to train on mesh data with variable resolution, hence it can be trained on sampled scans from COMA, which form an inconsistent database. We show that the model is still capable of learning identity and expressions. The only required pre-processing step is a rigid alignment (with scaling) to the chosen template.
As the scans are highly detailed (several gigabytes for each subject), we train our model on one identity with its 12 expressions. We compare the results of our model trained on the registered meshes against another model, with the same parameters, trained on the raw scans. We summarize the results in Table 5 and display a few examples of reconstruction in Figure 10.
Figure 9: An example of a mesh, its three reparameterizations and the corresponding registration with our model.
### Evaluating the latent space
The advantage of our solution is that we can directly operate in the latent space to deform any face. We use this property for three applications: interpolation between faces, extrapolation of a face motion, and expression transfer between faces.
**Interpolation.** We compute a linear interpolation \((z_{t})_{t\in[0,1]}\) between a source latent vector \(z_{0}\) and a target one \(z_{1}\) with
\[z_{t}=(1-t)z_{0}+tz_{1}.\]
We display the resulting interpolations on faces in Figure 11: between two identities, between two poses, and between two faces with both characteristics being different. The figure shows that the results are visually satisfying.
**Extrapolation.** Given an initial motion of a face (two close meshes starting a motion), we would like to extrapolate the full motion. This can be formulated easily in the latent space: from the two meshes' latent codes \(z_{1},z_{2}\), we shoot a time-dependent path \(z_{t}\) from the initial speed \((z_{2}-z_{1})\):
\[z_{t}=z_{1}+t(z_{2}-z_{1}).\]
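In code, these latent-space manipulations are one-liners. The sketch below assumes a trained `model` exposing hypothetical `encode` and `decode` helpers (thin wrappers around the encoder and decoder of Section 4.1); `mesh_a` and `mesh_b` are two input faces:

```python
import torch

z0, z1 = model.encode(mesh_a), model.encode(mesh_b)        # latent codes of the two faces

# interpolation: z_t = (1 - t) z0 + t z1
interpolation = [model.decode((1 - t) * z0 + t * z1) for t in torch.linspace(0, 1, 5)]

# extrapolation: keep shooting from the initial speed (z1 - z0) beyond t = 1
extrapolation = [model.decode(z0 + t * (z1 - z0)) for t in (1.5, 2.0, 3.0)]
```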
\begin{table}
\begin{tabular}{l c c c c} \hline Expression & \multicolumn{2}{c}{Registered} & \multicolumn{2}{c}{Raw scans} \\ \hline & Hausdorff & Chamfer & Hausdorff & Chamfer \\ \hline Neutral & 0.009 & 0.075 & 0.014 & 0.24 \\ bareteeth & 0.010 & 0.075 & 0.014 & 0.26 \\ cheeks\_in & 0.010 & 0.074 & 0.014 & 0.25 \\ eyebrow & 0.009 & 0.076 & 0.014 & 0.28 \\ high\_smile & 0.010 & 0.076 & 0.014 & 0.26 \\ lips\_back & 0.009 & 0.073 & 0.014 & 0.25 \\ lips\_up & 0.009 & 0.075 & 0.015 & 0.24 \\ mouth\_down & 0.009 & 0.077 & 0.014 & 0.25 \\ mouth\_extr & 0.009 & 0.075 & 0.012 & 0.24 \\ mouth\_mid & 0.009 & 0.074 & 0.013 & 0.27 \\ mouth\_open & 0.009 & 0.074 & 0.013 & 0.28 \\ mouth\_side & 0.009 & 0.076 & 0.013 & 0.27 \\ mouth\_up & 0.009 & 0.075 & 0.014 & 0.29 \\ \end{tabular}
\end{table}
Table 5: Reconstruction error for each expression
We display the resulting motions in Figure 12. We observe that the desired motion is reproduced. At large time steps, some unnatural deformations start to appear, but we are still able to recognize the expression of the face.
**Expression transfer in the latent space.** Thanks to the auto-encoding architecture, we also demonstrate its ability to perform complex mesh manipulations such as expression transfer with simple arithmetic operations in the latent space (additions and subtractions). We also demonstrate the robustness of such operations, as shown in Figure 13.
In a similar fashion, manipulating the latent space to cancel an expression and recover the neutral face is possible as shown in Figure 14.
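The latent arithmetic behind Figures 13 and 14, with the same hypothetical `encode`/`decode` helpers as above, reads:

```python
z1 = model.encode(expressive_face_A)    # identity A performing an expression
z2 = model.encode(neutral_face_A)       # identity A, neutral
z3 = model.encode(neutral_face_B)       # identity B, neutral

# expression transfer (Figure 13): put A's expression onto identity B
transferred = model.decode(z1 - z2 + z3)

# expression neutralization (Figure 14): subtract a similar expression borrowed
# from identity B to estimate the neutral face of identity A
z4 = model.encode(expressive_face_B)    # identity B performing a similar expression
neutralized = model.decode(z1 - (z4 - z3))
```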
Figure 10: (a): Registered case, fixed resolution, point-wise correspondence and no noise. (b): Raw scans, variable resolution, no correspondence and noise.
Figure 11: Linear interpolation in the latent space. The first row shows an interpolation between two neutral identities, the second row shows the result of an interpolation between a neutral face and one of its expressions. The last row shows an interpolation between a neutral identity and another, expressive identity.
Figure 12: Linear extrapolation (in orange) from two meshes (on the left) of a sequence describing an expression.
Figure 14: An example of neutralization of an expression to recover a neutral face. From an expressive face on the left, we subtract a similar expression from another identity, which we cancel with its corresponding neutral face to obtain an estimation of the neutral face of the first identity (on the right).
Figure 13: An example of a robust expression transfer: from an expressive face encoded as \(Z_{1}\), we subtract its neutral identity encoded as \(Z_{2}\) and replace it with another one encoded as \(Z_{3}\).
## 5 Discussion
In this last section, we discuss some areas for improvement and implications of the presented work.
### Complexity
Regarding the complexity of our model, it is made of simple elements and most of the computational complexity comes from the calculation of the loss at each epoch, during training.
The computation of the kernel metric is already optimized with the KeOps library [7], which uses symbolic matrices to avoid memory overflow. Consequently, the loss evaluation, and hence training, has quadratic complexity in the number of mesh faces.
Once the model is trained, the registration is performed with a simple function evaluation, which makes it a lot faster than non-learning methods such as FLAME, elastic matching or LDDMMs.
We report the training time for a single epoch, with the settings detailed in Section 4.1, in Table 6.
### Limitations
As we can observe, some regions with high curvature are poorly reconstructed, especially around the eyes, the lips and the nostrils. We believe this is due to the fact that the varifold struggles to take into account both large and finer structures on the mesh. Therefore, it is possible that the model can be enhanced using normal cycles [47], which take curvature information into account. But this comes at a high computational cost.
We also point out that our model is limited by the encoder, which certainly has the advantage of being robust. However, this robustness comes at a cost in performance, as we show in Figure 15. Indeed, the encoder of our model struggles to preserve small deformations (such as a slight eyebrow movement) during encoding.
\begin{table}
\begin{tabular}{c c c c} Model & 3DCODED & Ours (\(\mathcal{L}^{DCD}\)) & Ours (\(\mathcal{L}^{GM}\)) \\ \hline Time & 2min38 & 3min30 & 17min \\ \end{tabular}
\end{table}
Table 6: Mean time per epoch reported during our experiment
Overall, we believe that using our loss function based on geometric measures has the potential to yield better results than the Chamfer distance, which is widely used for similar tasks. Indeed, during our experiments, the Chamfer loss proved to be a poor objective function, especially when used to generate faces.
### Perspectives related to recent mesh-invariant models
In the recent literature, models based on differential operators acting on the mesh, such as DiffusionNet [50] and DeltaConv [54], show promising results with reassuring theoretical guarantees. However, these models have still not been applied to generative tasks and may require additional work; they will be investigated in the future. In particular, their reliance on the intrinsic properties
Figure 15: Examples of failed reconstruction of the expression. On the left is displayed the target and on the right its reconstruction with our model. The identity is preserved but the model fails to encode the expression.
of a surface, such as the Laplacian, makes them sensitive to noise and topology changes, as opposed to our PointNet-based auto-encoder. Our results for unsupervised generative tasks are thus competitive with the state of the art, as demonstrated by the experiments. In particular, while there is still a gap between fully supervised and unsupervised methods, our work opens up the possibility of extending the amount of available data for supervised generative tasks. Towards this objective, the incorporation of mesh-invariant deep learning models within our framework is a promising avenue of work to improve the expressiveness of such data.
## 6 Conclusion
In this paper, we propose a novel deep-learning-based approach for face registration. We use a varifold representation of shapes and extend kernel metrics on varifolds with a multi-resolution kernel. Our asymmetric auto-encoder learns a map from meshes with variable discretization to a low-dimensional latent space. We demonstrate that our method allows for an efficient registration of meshes, and the learned latent space allows for powerful and easy deformations on this dataset.
In the future, we plan to extend this approach to new data, such as human bodies or animals. But the most crucial work to do remains the development of a better encoder with a versatility similar to that of PointNet.
## Acknowledgments
This work is supported by the ANR project Human4D ANR-19-CE23-0020, and was further supported by Labex CEMPI (ANR-11-LABX-0007-01) and the Austrian Science Fund (grant no P 35813-N). The authors would also like to thank Alexandre Mouton (CNRS, UMR 8524 - LPP, Lille) and Deise Santana Maia (CNRS, UMR 9189 CRIStAL, Lille) for their advice and many fruitful conversations.
## Appendix A Proof of Theorem 3.1 on the independence of the varifold representation from the mesh structure
Let \(S\) be a smooth compact surface. Let \(x\mapsto\vec{n}_{S}(x)\) be its outer normal vector field. There is a radius \(r_{S}>0\) such that any point \(x\) at distance less than \(r_{S}\) from \(S\) has a unique projection (i.e. closest point) on \(S\), denoted \(\xi(x)\). Note that \(\xi(x)=x\pm d(x,S)\vec{n}_{S}(\xi(x))\). We have \(r_{S}\leq 1/\kappa_{S}\), with equality for most surfaces.
Let \(f\) be a full triangle (corresponding to a face in a mesh) with its vertices belonging to a surface \(S\), with center \(c\), normal \(\vec{n}_{f}\) and greatest edge length \(\eta\). As long as \(\eta\) is small enough, the projection \(\xi:f\to S\) is one-to-one. We denote \(\Delta=\xi(f).\)
For \(x\) in \(f\), we have \(d(\xi(x),x)\leq\eta\kappa_{\Delta}\), where \(\kappa_{\Delta}\) denotes the greatest eigenvalue of the second fundamental form of \(S\) among all points of \(\Delta\). See Figure A.16.
For any \(y=\xi(x)\) in \(\Delta\), and \(\eta\) small enough, we have ([37, 2.2.2]).
\[d(y,c)\leq d(\xi(x),x)+d(x,c)\leq\eta\kappa_{\Delta}+\eta\leq(\kappa_{\Delta}+ 1)\eta.\]
On the other hand, let \(\alpha_{f}=\max_{y\in\Delta}\angle(\vec{n}_{f},\vec{n}_{S}(y))\). Then we have, for any \(y=\xi(x)\) in \(\Delta\) [36, Corollary 1],
\[d(\vec{n}_{S}(y),\vec{n}_{f})\leq\sqrt{2}\sin(\alpha_{f})\leq 6\sqrt{2}\frac{\kappa_{\Delta}}{\sin\theta_{f}}\eta,\]
with \(\theta_{f}\) the smallest of the three angles of \(f\). Moreover, for \(\eta\) small enough [36, Corollary 2],
\[|a(f)-a(\Delta)|\leq 3a(\Delta)\kappa_{\Delta}\eta,\]
Figure 16: Face \(f\) and its projection onto the surface \(S\).
with \(a(\cdot)\) the area of a surface in \(\mathbb{R}^{3}\).
Now take some bounded Lipschitz function \(u:\mathbb{R}^{3}\times\mathbb{S}^{2}\to\mathbb{R}\) with Lipschitz constant \(Lip(u)=k_{u}\). Let \(\mu_{\Delta}\) be the varifold associated with the smooth surface \(\Delta\) and \(\mu_{f}=a(f)\delta_{c}^{\vec{n}_{f}}\) as in our paper.
The main argument follows these two remarks, which are just the results above:
1. for any point \(y=\xi(x)\) in \(\Delta\), the value of \(u\) at \((y,\vec{n}_{S}(y))\) is close to its value at \((c,\vec{n}_{f})\). Indeed: \[|u(y,\vec{n}_{S}(y))-u(c,\vec{n}_{f})|\leq k_{u}(d(y,c)+d(\vec{n}_{S}(y),\vec{n}_{f})).\] We have estimates for both of these distances that go linearly to \(0\) as \(\eta\) goes to \(0\), so that for any \(C\geq 6\sqrt{2}+1\), \[|u(y,\vec{n}_{S}(y))-u(c,\vec{n}_{f})|\leq C\frac{(\kappa_{\Delta}+1)}{\sin\theta_{f}}k_{u}\eta\]
2. the difference between the area of \(f\) and that of \(\Delta\) goes linearly to \(0\) as \(\eta\) goes to \(0\), so for any \(C\geq 3\) \[|(a(\Delta)-a(f))u(c,\vec{n}_{f})|\leq Ca(\Delta)\kappa_{\Delta}\|u\|_{\infty}\eta,\] with \(\|u\|_{\infty}=\sup_{(x,n)\in\mathbb{R}^{3}\times\mathbb{S}^{2}}|u(x,n)|\).
Letting ourselves be guided by these remarks, we have
\[\mu_{f}(u)=a(f)u(c,\vec{n}_{f})=\int_{\Delta}u(c,\vec{n}_{f})d\sigma(x)+(a(f)- a(\Delta))u(c,\vec{n}_{f}).\]
We therefore get, for \(C\) big enough (e.g., C=10), that \(|\mu_{\Delta}(u)-\mu_{f}(u)|\) is bounded by:
\[\int_{\Delta}|u(y,\vec{n}_{S}(y))-u(c,\vec{n}_{f})|d\sigma(y)+|a(f)-a(\Delta)|\,|u(c,\vec{n}_{f})|\] \[\leq C\frac{(\kappa_{\Delta}+1)\eta k_{u}a(\Delta)}{\sin\theta_{f}}+Ca(\Delta)\kappa_{\Delta}\eta|u(c,\vec{n}_{f})|\] \[\leq Ca(\Delta)\frac{(\kappa_{\Delta}+1)}{\sin\theta_{f}}(k_{u}+\|u\|_{\infty})\eta.\]
Now if \(X\) is a mesh inscribed in \(S\) whose vertices are in \(S\), such that \(\xi\) is bijective from the full triangles of \(X\) onto \(S\), we can sum this estimate over all faces and get
\[|\mu_{S}(u)-\mu_{X}(u)|\leq Ca(S)\frac{(\kappa_{S}+1)}{\sin\theta_{X}}(k_{u}+\|u \|_{\infty})\eta,\]
with \(\kappa_{S}\) the greatest eigenvalue of the second fundamental form of \(S\) (i.e. its greatest principal curvature), \(\eta\) its greatest edge length and \(\theta_{X}\) the smallest angle among all faces of \(X\).
Finally, for two meshes \(X,\hat{X}\) inscribed in \(S\), the triangle inequality immediately gives
\[|\mu_{X}(u)-\mu_{\hat{X}}(u)|\leq Ca(S)\frac{(\kappa_{S}+1)}{\sin\theta_{X}}(k _{u}+\|u\|_{\infty})\eta,\]
with \(C=20\). Similar estimates allow the computation of kernel norms, with the addition of explicit bounds on \((k_{u}+\|u\|_{\infty})\). Indeed, kernel norms are computed by integrating the kernel along the varifolds, and bounds on the Lipschitz constant of the kernel and its \(\|\cdot\|_{\infty}\) are easily computed.
Note that, to keep the proof readable, the estimates were purposely rough, just to give an idea of the order of the convergence. In practice, edges are shorter and areas are smaller near regions of high curvature, allowing a much better approximation than suggested by the formula.
|
2304.14205 | The Fuzzy Onion: A proposal | It is generally believed that the space has a nontrivial structure which is
apparent on the order of the Planck length. There is a class of models of
three-dimensional quantum spaces constructed using different mathematical
tools. Also, there is another class of models with matrix descriptions of
spaces of various dimensions and geometries with built-in momentum cut-off --
these are called fuzzy spaces; the fuzzy sphere is a prominent example. In this
paper, we describe how to connect various spheres together to foliate a
three-dimensional space dubbed the fuzzy onion. | S. Kováčik, J. Tekel | 2023-04-27T14:11:55Z | http://arxiv.org/abs/2304.14205v1 | # The Fuzzy Onion: A proposal
###### Abstract:
It is generally believed that the space has a nontrivial structure which is apparent on the order of the Planck length. There is a class of models of three-dimensional quantum spaces constructed using different mathematical tools. Also, there is another class of models with matrix descriptions of spaces of various dimensions and geometries with built-in momentum cut-off--these are called fuzzy spaces; the fuzzy sphere is a prominent example. In this paper, we describe how to connect various spheres together to foliate a three-dimensional space dubbed the fuzzy onion.
## 1 Introduction
The fuzzy sphere is perhaps the most studied example of a fuzzy space and the simplest case of a quantized space [1, 2, 3, 4]. In the common approach, one describes a field on the fuzzy sphere with a momentum cut-off using Hermitian matrices of finite size. As the size of the matrices is increased, the cut-off is lifted. The machinery of field theory on the sphere--and on some other spaces--can be recast in terms of matrix operations, such as taking a trace instead of performing an integration.
The crucial part of the model is the kinetic term of the action, which is defined using the finite-size representation of the corresponding symmetry generators, for example \(SU(2)\) in the case of the sphere. Physics on fuzzy spaces comes with some advantages and disadvantages. Because of the natural cut-off, the theories do not suffer from various UV ill-effects. However, a typical feature of them is the UV/IR mixing, which is a common aspect of nonlocal theories. As a result, field theories on fuzzy spaces are known to possess phases that have no counterpart in ordinary scalar field theories, even in the infinite matrix size limit, which is expected to recover the ordinary space. It is plausible that the standard formulation of the fuzzy theories' actions lacks higher-derivative terms that would hold those effects under control. Despite those difficulties, fuzzy spaces are of considerable interest. Firstly, they appear in various physical scenarios [6, 7, 8, 9, 10, 11] and, secondly, they are straightforward to study numerically [12, 13, 14, 15].
Unfortunately, the known matrix models do not cover the case of the space we are interested in--that is, the three-dimensional Euclidean space \({\bf R}^{3}\). This is purely a problem on the matrix side; other formulations of \({\bf R}^{3}\) are known and well-examined [3, 4, 5]. Our goal is to formulate a matrix analog of those theories, that is, to define a matrix formulation of the fuzzy \({\bf R}^{3}\). The first idea is straightforward: one needs to glue together fuzzy spheres of different sizes. However, fuzzy spheres of different radii are described using matrices of different sizes, and to take, for example, the radial derivative one needs to subtract matrices of incompatible sizes. We will describe here how to overcome this issue and formulate the fuzzy onion model using a set of concentric fuzzy spheres of increasing radius.
## 2 Scalar field on fuzzy spheres
There are many ways to think about the fuzzy sphere. Let us introduce it here in a way that allows us to extend the construction to a sequence of concentric fuzzy spheres. Scalar fields on an ordinary sphere can be expanded into spherical harmonics:
\[f(\theta,\varphi)=\sum_{lm}c_{lm}Y_{lm}(\theta,\varphi), \tag{1}\]
where \(l=0,1,2,...\) and \(m=-l,-l+1,...,l\). The spherical harmonics are defined as eigenfunctions of the angular momentum operators that follow the \(su(2)\) relations \([\hat{L}_{i},\hat{L}_{j}]=i\varepsilon_{ijk}\hat{L}_{k}\):
\[\hat{\cal L}\ Y_{lm} = \hat{L}_{i}\hat{L}_{i}\ Y_{lm}=l(l+1)Y_{lm}, \tag{2}\] \[\hat{L}_{3}\ Y_{lm} = m\ Y_{lm}. \tag{3}\]
There are of course also finite representations of these relations, where \(\hat{L}_{i}^{(N)}\) is expressed in terms of \(N\times N\) matrices. In this case \(Y_{lm}^{(N)}\) is also a matrix of the same size and the angular momentum \(l\) has an upper limit of \(l=N-1\).
Those two representations can be mapped onto each other if we impose the same limit also on the infinite-dimensional representation. Such a truncated spectrum can only approximate the \(\delta\)-distribution. In other words, the spatial resolution is limited and is fully recovered only in the infinite-\(N\) limit. The relation between the size of the matrix, the size of the sphere, and the scale of space quantumness captured by the constant \(\lambda\) is
\[r\sim N\lambda. \tag{4}\]
The exact relation is a matter of convention and can differ in various situations. It is now straightforward to construct a theory of a scalar field that exists on a series of disconnected fuzzy spheres of increasing radius. If we keep the constant of noncommutativity fixed across them, we need to increase the corresponding matrix size to obtain increasing radii. The matrix with \(N=1\) describes the innermost sphere, the \(N=2\) matrix the layer above it, and so on. We can put all of them into a single block-diagonal matrix
\[\Psi=\begin{pmatrix}\Phi^{(1)}&&&&\\ &\Phi^{(2)}&&\\ &&\ddots&\\ &&&\Phi^{(N_{m})}\end{pmatrix} \tag{5}\]
where \(\Phi^{(N)}\) is a Hermitian matrix of size \(N\). For a matrix to be interpretable as a fuzzy sphere we need to specify the Laplace operator. Its angular part is trivial
\[\mathcal{L}\Psi=\begin{pmatrix}\hat{\mathcal{L}}^{(1)}\Phi^{(1)}&&&&\\ &\hat{\mathcal{L}}^{(2)}\Phi^{(2)}&&\\ &&\ddots&\\ &&&\hat{\mathcal{L}}^{(N_{m})}\Phi^{(N_{m})}.\end{pmatrix}, \tag{6}\]
where \(\hat{\mathcal{L}}^{(i)}\) is the Laplace operator acting on the fuzzy sphere of size \(i\).
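For concreteness, the layer-wise angular Laplacian of Eq. (6) can be realized numerically through the adjoint action of the \(su(2)\) generators, \(\hat{\mathcal{L}}^{(N)}\Phi=\sum_{i}[L_{i}^{(N)},[L_{i}^{(N)},\Phi]]\); this explicit double-commutator form is the standard fuzzy-sphere convention and is assumed here rather than spelled out in the text. A minimal NumPy sketch:

```python
import numpy as np

def su2_generators(N):
    """Spin j = (N-1)/2 generators L_x, L_y, L_z as N x N matrices (hbar = 1)."""
    j = (N - 1) / 2.0
    m = j - np.arange(N)                        # basis ordered m = j, j-1, ..., -j
    Lz = np.diag(m).astype(complex)
    Lp = np.zeros((N, N), dtype=complex)         # raising operator L_+
    for k in range(1, N):
        Lp[k - 1, k] = np.sqrt(j * (j + 1) - m[k] * (m[k] + 1))
    Lx = (Lp + Lp.conj().T) / 2.0
    Ly = (Lp - Lp.conj().T) / 2.0j
    return Lx, Ly, Lz

def angular_laplacian(phi):
    """Adjoint-action Laplacian sum_i [L_i, [L_i, phi]] on a single fuzzy sphere."""
    out = np.zeros_like(phi, dtype=complex)
    for L in su2_generators(phi.shape[0]):
        out += L @ L @ phi - 2 * L @ phi @ L + phi @ L @ L
    return out

def onion_laplacian_angular(layers):
    """Eq. (6): apply the layer Laplacian block by block (layers given as a list of matrices)."""
    return [angular_laplacian(phi) for phi in layers]

# quick consistency check of the algebra: [L_x, L_y] = i L_z for N = 4
Lx, Ly, Lz = su2_generators(4)
assert np.allclose(Lx @ Ly - Ly @ Lx, 1j * Lz)
```

Storing the layers as a list of matrices is equivalent to the single block-diagonal matrix \(\Psi\) of Eq. (5) and is simply more convenient for a sketch.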
## 3 Scalar field on the fuzzy onion
So far we have done little to nothing--only gathered a set of fields on \(N_{m}\) fuzzy spheres and placed them into a single matrix \(\Psi\). To have a proper field theory we need to add the radial part of the Laplace operator. To do so we need to define the radial derivative, \(\partial_{r}\Phi^{(i)}\propto\Phi^{(i+1)}-\Phi^{(i-1)}\), which, as written here, is ill-defined due to the subtraction of matrices of different sizes.
To connect consecutive spheres to obtain a three-dimensional theory, we need to define the following map
\[{\cal U}\Phi^{(N)} = \Phi^{(N+1)}\in{\cal H}(N+1), \tag{7}\] \[{\cal D}\Phi^{(N)} = \Phi^{(N-1)}\in{\cal H}(N-1), \tag{8}\]
where \(\Phi^{(N)}\in{\cal H}(N)\) and \({\cal H}(N)\) is the set of Hermitian matrices of size \(N\). This map has to involve some loss of information, as the layers carry different numbers of degrees of freedom due to their different matrix sizes. The difference lies in the modes of the highest angular momentum, which cannot be matched, and it is therefore natural to remove or add those.
This can be done in a straightforward way. To move one layer up, first, expand the matrix into spherical harmonics, then take the coefficients of the expansion \(c_{lm}^{(N)}\) and define \(c_{lm}^{(N+1)}=c_{lm}^{(N)}\) for \(l\leq N-1\) and \(c_{lm}^{(N+1)}=0\) for \(l=N\). That means, mapping the coefficients to their corresponding counterparts and setting the rest to zero. To go one layer down, do the opposite: map the coefficients that can be mapped and remove the rest. As expressed together we have
\[\Phi^{(N)} = \sum_{l=0}^{N-1}\sum_{m=-l}^{l}c_{lm}^{(N)}Y_{lm}^{(N)}\] \[{\cal D}\uparrow {\cal U}\downarrow\] \[\Phi^{(N+1)} = \sum_{l=0}^{N-1}\sum_{m=-l}^{l}c_{lm}^{(N+1)}Y_{lm}^{(N+1)}\] where \[c_{l,m}^{(N)} = c_{l,m}^{(N+1)}\mbox{ for: }l\leq N-1\] \[c_{N,m}^{(N+1)} = 0.\]
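The maps \(\mathcal{U}\) and \(\mathcal{D}\) are simple to prototype once an orthonormal (in the Hilbert-Schmidt sense) basis of matrix harmonics \(Y_{lm}^{(N)}\) is available. The sketch below builds such a basis by normalizing \((L_{+})^{l}\) and repeatedly applying the adjoint lowering operator, which is one concrete phase convention chosen here for illustration and not prescribed by the text; it then transfers the expansion coefficients between layers, reusing `su2_generators` from the previous sketch.

```python
import numpy as np
# su2_generators(N) as defined in the previous sketch

def fuzzy_harmonics(N):
    """Hilbert-Schmidt-orthonormal matrix harmonics Y_{lm}^{(N)}, l = 0..N-1, m = -l..l.
    Phase convention: highest weight ~ (L_+)^l, then lower with ad_{L_-} and renormalize."""
    Lx, Ly, Lz = su2_generators(N)
    Lp, Lm = Lx + 1j * Ly, Lx - 1j * Ly
    basis = {}
    for l in range(N):
        Y = np.linalg.matrix_power(Lp, l)
        Y = Y / np.sqrt(np.trace(Y.conj().T @ Y).real)
        basis[(l, l)] = Y
        for m in range(l, -l, -1):               # produce Y_{l,m-1} from Y_{l,m}
            Y = Lm @ Y - Y @ Lm                   # adjoint lowering
            Y = Y / np.sqrt(np.trace(Y.conj().T @ Y).real)
            basis[(l, m - 1)] = Y
    return basis

def coefficients(phi, basis):
    """Expansion coefficients c_{lm} = Tr(Y_{lm}^dagger Phi)."""
    return {lm: np.trace(Y.conj().T @ phi) for lm, Y in basis.items()}

def up(phi):
    """U of Eq. (7): copy c_{lm} for l <= N-1 into the size-(N+1) layer; the new l = N modes vanish."""
    N = phi.shape[0]
    c = coefficients(phi, fuzzy_harmonics(N))
    target = fuzzy_harmonics(N + 1)
    return sum(c[lm] * target[lm] for lm in c)

def down(phi):
    """D of Eq. (8): keep only the modes with l <= N-2 and drop the rest."""
    N = phi.shape[0]
    c = coefficients(phi, fuzzy_harmonics(N))
    target = fuzzy_harmonics(N - 1)
    return sum(c[lm] * target[lm] for lm in target)
```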
We can now define the first and the second-order derivative as 1
Footnote 1: Note that here \((\partial_{r})^{2}\neq\partial_{r}^{2}\) as we find each of the definitions optimal—as an approximation—on its own.
\[\partial_{r}\Phi^{(N)}=\frac{{\cal D}\phi^{(N+1)}-{\cal U}\phi^{(N-1)}}{2\lambda}, \tag{9}\]
and
\[\partial_{r}^{2}\Phi^{(N)}=\frac{{\cal D}\phi^{(N+1)}-2\phi^{(N)}+{\cal U}\phi ^{(N-1)}}{\lambda^{2}}. \tag{10}\]
Using equations (6), (9) and (10), we can now define the full Laplace operator acting on the field \(\Psi\) that lives on the fuzzy onion, which is formed using a set of concentric fuzzy spheres of increasing radii
\[\Delta\Psi=\left(\frac{1}{r^{2}}\frac{\partial}{\partial r}\left(r^{2}\frac{\partial}{\partial r}\right)-\frac{{\cal L}}{r^{2}}\right)\Psi \tag{11}\]
where we have set \(\hbar=1\) and \(r\) acts in a trivial way on each of the layers as \(r\,\Phi^{(N)}=N\lambda\,\Phi^{(N)}\).
Now we have a complete Laplace operator that contains the information about the underlying space. We call the space the field \(\Psi\) exists on _the fuzzy onion_.
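Assuming the `up`, `down` and `angular_laplacian` helpers from the sketches above, Eqs. (9)-(11) translate almost literally into code, expanding the radial part as \(\partial_{r}^{2}+(2/r)\partial_{r}\) and using the two discretized derivatives independently, as the footnote indicates. The boundary layers \(N=1\) and \(N=N_{m}\) need a prescription that the text does not fix, so the sketch simply skips them.

```python
import numpy as np
# uses up, down and angular_laplacian from the sketches above

def radial_first(layers, i, lam):
    """Eq. (9): (D Phi^{(N+1)} - U Phi^{(N-1)}) / (2 lambda) at interior layer index i."""
    return (down(layers[i + 1]) - up(layers[i - 1])) / (2.0 * lam)

def radial_second(layers, i, lam):
    """Eq. (10): (D Phi^{(N+1)} - 2 Phi^{(N)} + U Phi^{(N-1)}) / lambda^2."""
    return (down(layers[i + 1]) - 2.0 * layers[i] + up(layers[i - 1])) / lam**2

def onion_laplacian(layers, lam):
    """Eq. (11) layer by layer, with r = N*lambda and layers[i] of size N = i + 1."""
    out = [None] * len(layers)
    for i in range(1, len(layers) - 1):          # boundary layers left undefined here
        r = (i + 1) * lam
        out[i] = (radial_second(layers, i, lam)
                  + (2.0 / r) * radial_first(layers, i, lam)
                  - angular_laplacian(layers[i]) / r**2)
    return out
```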
## 4 Potential term
We have defined the kinetic part of the action of a scalar field living in a space foliated with concentric fuzzy spheres, which we call the fuzzy onion for obvious reasons. It is a three-dimensional quantum space possessing rotational symmetry that resembles the construction in [3, 4, 5]. There is a subtlety involved: the quantumness takes a different form in the angular and radial directions. On each fuzzy sphere it is realized as a momentum cut-off with no distinct structure--meaning there are no preferred points on the spheres--while in the radial direction the quantumness is realized as a discrete, that is lattice, structure in which the half-line \(r\in\mathbf{R}_{0}^{+}\) is replaced by the set \(r/\lambda\in\mathbf{N}\). So far we took the largest matrix to be of size \(N=N_{m}\), so our space describes a (quantum) ball. By taking the limit \(N_{m}\to\infty\) we can cover the entire three-dimensional space.
Another way the difference between the radial and angular directions manifests itself is in the form of the potential. On a single fuzzy sphere the potential term can be defined using a polynomial
\[V(\phi)=\mathrm{Tr}\ P(\Phi), \tag{12}\]
where the case of \(P(\Phi)=b\Phi^{2}+c\Phi^{4}\) is a prominent example well-studied in the literature [12, 13, 14, 15, 16]. A straightforward generalisation of this would be to take
\[V(\Psi)=\mathrm{Tr}\ P(\Psi), \tag{13}\]
so, for example, we can have
\[V(\Psi)=\mathrm{Tr}\begin{pmatrix}b(\Phi^{(1)})^{2}+c(\Phi^{(1)})^{4}&&\\ &\ddots&\\ &&b(\Phi^{(N_{m})})^{2}+c(\Phi^{(N_{m})})^{4}\end{pmatrix}=\sum_{j=1}^{N_{m}}V (\phi^{(j)})\,. \tag{14}\]
To put it in words, the total potential energy is the sum of the potential energies of all of the individual fuzzy spheres. An important feature of quantum spaces is the nonlocality. What does it mean in this context? A field on a fuzzy sphere cannot be limited to an area smaller than some elementary patch proportional to \(\lambda^{2}\). That means that the value of the potential energy at some point also influences the value at other points in its vicinity. 2
Footnote 2: Of course, the notion of a point is ill-defined here. We can speak of the north pole on the sphere for example, but what is meant by that is a blurred point-like density centered around this point. This fuzziness is the cause of the nonlocality of this construction.
In (14), the nonlocality is felt across each of the individual fuzzy spheres but not between them. That means that if we produce a field excitation as localized as possible around the north pole of a given fuzzy sphere, it contributes to the potential energy close to that point on the same fuzzy sphere but not to the potential energy of the fuzzy spheres below and above it. The difference stems from the different forms of quantumness in the angular and radial directions.
There is a way to connect the various fuzzy spheres through the quartic potential term, and that is to define it as
\[V(\Psi)=b\,\mathrm{Tr}\ \Psi^{2}+c\,\left(\mathrm{Tr}\ \Psi^{2}\right)^{2}\,. \tag{15}\]
With this, the field on one layer feels the field on every other layer, as the second term contains multi-trace contributions of the form \(\mathrm{Tr}\ \left(\phi^{(i)}\right)^{2}\mathrm{Tr}\ (\phi^{(j)})^{2}\) with \(i\neq j\). This approach is the other extreme: in the first case all spheres were isolated, while here they are all equally connected. Perhaps the best way would be a compromise where each layer effectively interacts only with its neighbors, that is, to have contributions of the form \(\mathrm{Tr}\ (\phi^{(i)})^{2}\mathrm{Tr}\ (\phi^{(i+1)})^{2}\). Another option is to define a smeared value of the field in the radial direction of the form
\[\mathcal{S}\phi^{(n)}=\frac{\phi^{(n)}+\sum\limits_{i}\alpha_{i}\left(\mathcal{U}^{i}\phi^{(n-i)}+\mathcal{D}^{i}\phi^{(n+i)}\right)}{1+2\sum\limits_{i}\alpha_{i}}, \tag{16}\]
for example with \(\alpha_{1}=\frac{1}{2}\) and \(\alpha_{i}=0\) for \(i\geq 2\):
\[\mathcal{S}\phi^{(n)}=\frac{\phi^{(n)}+\frac{1}{2}\mathcal{D}\phi^{(n+1)}+ \frac{1}{2}\mathcal{U}\phi^{(n-1)}}{2}. \tag{17}\]
Then we can take
\[V(\Psi)=\sum\limits_{j=1}^{N_{m}}V\left(\mathcal{S}\phi^{(j)}\right), \tag{18}\]
where \(V\) is defined in (12) as an ordinary fuzzy-sphere potential but we are now using fields that have been smeared across various layers.
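A smeared potential in the spirit of Eqs. (17)-(18) can then be assembled from the same `up`/`down` helpers sketched earlier; the weight \(\alpha=1/2\) and the restriction to interior layers below are illustrative choices, not specifications from the text.

```python
import numpy as np
# uses up and down from the sketches above

def smear(layers, i, alpha=0.5):
    """Radial smearing as in Eq. (17): mix layer i with its two neighbours."""
    return (layers[i] + alpha * down(layers[i + 1]) + alpha * up(layers[i - 1])) / (1.0 + 2.0 * alpha)

def smeared_potential(layers, b, c):
    """Eq. (18) with the quartic single-layer potential of Eq. (12), on interior layers only."""
    total = 0.0
    for i in range(1, len(layers) - 1):
        s = smear(layers, i)
        s2 = s @ s
        total += np.trace(b * s2 + c * s2 @ s2).real
    return total
```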
## 5 Conclusion
In this paper, we have proposed how to connect a (potentially infinite) set of concentric fuzzy spheres of increasing radii and the same spatial resolution limit to produce a three-dimensional quantum space that resembles previous theoretical constructions. An important aspect of this model is that the fields are encoded in a single Hermitian matrix of size \(N_{m}\left(N_{m}+1\right)/2\) of a specific form. We call this model the fuzzy onion.
We have defined a procedure that maps between consecutive layers which allowed us to define the radial part of the Laplace operator and also a smearing procedure across various layers. There are various ways of defining the potential term and it would be interesting to study the differences in their behavior.
A great feature of fuzzy space models expressed in terms of matrices is the accessibility of numerical simulations which offered great insight. There were various formulations of three-dimensional quantum spaces that were well-suited for analytical treatment. Our goal was to extend the fuzzy sphere model to cover the entire three-dimensional space while keeping the construction as close to it as possible to be able to have a connection to the large volume of research work done on it.
The most straightforward application of the fuzzy onion model is to study the phase diagram of a quartic scalar field theory in three-dimensional space using the Hamiltonian Monte Carlo method. Another option is to study the behavior of three-dimensional objects, preferably having rotational symmetry; for example, a stellar core-collapse or normal modes of neutron stars but also heat dissipation in granular matter.
Monte Carlo simulation is not the only method to study the fuzzy onion model. In principle, many models expressed in terms of differential equations involving the Laplace operator can be defined. A prominent example is the Schrödinger equation expressed as a matrix equation. Its solutions in the fuzzy onion model could be compared with those obtained in [4] to test the validity of our approach. We will report on those efforts shortly.
## Acknowledgments
This research was supported by VEGA 1/0703/20 and VEGA 1/0025/23 grants and MUNI Award for Science and Humanities funded by the Grant Agency of Masaryk University.
|
2307.03229 | Adaptive projected variational quantum dynamics | We propose an adaptive quantum algorithm to prepare accurate variational time
evolved wave functions. The method is based on the projected Variational
Quantum Dynamics (pVQD) algorithm, that performs a global optimization with
linear scaling in the number of variational parameters. Instead of fixing a
variational ansatz at the beginning of the simulation, the circuit is grown
systematically during the time evolution. Moreover, the adaptive step does not
require auxiliary qubits and the gate search can be performed in parallel on
different quantum devices. We apply the new algorithm, named Adaptive pVQD, to
the simulation of driven spin models and fermionic systems, where it shows an
advantage when compared to both Trotterized circuits and non-adaptive
variational methods. Finally, we use the shallower circuits prepared using the
Adaptive pVQD algorithm to obtain more accurate measurements of physical
properties of quantum systems on hardware. | David Linteau, Stefano Barison, Netanel Lindner, Giuseppe Carleo | 2023-07-06T18:00:04Z | http://arxiv.org/abs/2307.03229v1 | # Adaptive projected variational quantum dynamics
###### Abstract
We propose an adaptive quantum algorithm to prepare accurate variational time evolved wave functions. The method is based on the projected Variational Quantum Dynamics (pVQD) algorithm, that performs a global optimization with linear scaling in the number of variational parameters. Instead of fixing a variational ansatz at the beginning of the simulation, the circuit is grown systematically during the time evolution. Moreover, the adaptive step does not require auxiliary qubits and the gate search can be performed in parallel on different quantum devices. We apply the new algorithm, named Adaptive pVQD, to the simulation of driven spin models and fermionic systems, where it shows an advantage when compared to both Trotterized circuits and non-adaptive variational methods. Finally, we use the shallower circuits prepared using the Adaptive pVQD algorithm to obtain more accurate measurements of physical properties of quantum systems on hardware.
## I Introduction
Simulation of static and dynamic properties of quantum systems is a notoriously hard task for classical computers. While analytical solutions are available only for specific cases, the amount of time and computing resources required in general by exact numerical methods is exponential in the system size, making the calculations quickly infeasible. While several approximate many-body numerical techniques have been proposed [1; 2; 3; 4], the accurate description of important physical and chemical phenomena is a very active research problem [5; 6; 7; 8].
In recent years, quantum computers have seen significant developments [9; 10; 11], opening potential opportunities for scientific discoveries. Hardware capabilities continue to advance steadily, and we can already create and manipulate complex many-body quantum systems [12; 13; 14; 15; 16; 17]. However, large-scale fault-tolerant quantum computers remain far in the future, and contemporary devices show limitations in connectivity, size, and coherence times.
Accounting for these constraints, Variational Quantum Algorithms (VQAs) have emerged as the leading strategy to take advantage of near-term quantum devices [18; 19; 20; 21]. In this class of algorithms, the solution of a given problem (e.g. finding the ground state of a physical system) is encoded in a quantum circuit that depends on some parameters optimized with the aid of a classical device. VQAs have not only been proposed for quantum simulations but also for a variety of different applications, such as machine learning [22; 23], combinatorial optimization [24; 25], quantum error correction [26; 27] and compilation [28; 29; 30]. Variational schemes have also been introduced in quantum dynamics [31; 32; 33; 34; 35; 36; 37; 38; 39], as a more efficient alternative to Trotterization [40; 41; 42; 43; 44]. The accuracy of a variational quantum simulation is then tied to the ability of a parameterized circuit to describe time-evolved wave functions. Even if the initial wave function is well-described by the chosen parameterized circuit, the complexity of the time-evolved wave functions varies with time and the chosen circuit may fail to describe them. The choice of the parameterized circuit is therefore crucial and it remains an open problem in variational simulations of quantum dynamics.
Adaptive schemes have been proposed in the context of variational ground state search [45; 46; 47; 48] especially to avoid committing to a particular parameterized circuit. The key idea is to construct the parameterized circuit during optimization. By systematically appending specific quantum gates to the parameterized circuit, adaptive methods have been shown to surpass standard approaches in the number of operations required and in the accuracy of the final results. Moreover, adaptive methods provide flexible circuits suited for dynamics simulations [49; 33]. However, including an adaptive step for dynamics usually requires measurements of additional quantities, that might be difficult to perform, or auxiliary qubits.
In this work, we introduce an adaptive variational algorithm for real-time evolution based on the projected Variational Quantum Dynamics (pVQD) algorithm [36], denoted Adaptive pVQD. The method inherits all the properties of the original pVQD algorithm and integrates the adaptive modification of the parameterized circuit without requiring auxiliary qubits. The structure of this paper is as follows: in Section II we present the algorithm and describe how the adaptive routine is performed; in Section III we apply the method to study a time-dependent and a fermionic system, benchmarking the method against Trotter evolution and the original pVQD algorithm; Section IV concludes the paper with some considerations and outlooks on the proposed method.
## II Method
Consider a physical system governed by a Hamiltonian \(H\). For clarity of exposition, we focus on time-independent Hamiltonians. However, this is not a requirement of the algorithm, as we explicitly show in Section III. To simulate the dynamics of quantum systems on a quantum computer, we have to prepare the time-evolved wave function \(|\Psi(t)\rangle=U(t)|\psi_{0}\rangle\), where \(|\psi_{0}\rangle=U_{0}|0\rangle^{\otimes N}\) is the initial state, \(N\) indicates the number of qubits representing the physical system and \(U(t)\) is the unitary time evolution operator. The Adaptive pVQD algorithm aims to approximate the state \(|\Psi(t)\rangle\) using parameterized states of the form
\[|\psi(\mathbf{\theta},\mathbf{A})\rangle=U(\mathbf{\theta},\mathbf{A})|\psi_{0}\rangle=\prod_{ i}e^{-i\theta_{i}A_{i}}|\psi_{0}\rangle, \tag{1}\]
where each real parameter \(\theta_{i}\in\mathbf{\theta}\) is associated to a Hermitian generator \(A_{i}\in\mathbf{A}\). The parameterized state is therefore specified by the set of parameters and operators \(\{\mathbf{\theta},\mathbf{A}\}\), and it can be implemented as a quantum circuit. From now on, we adopt the notation \(|\psi(\mathbf{\theta})\rangle\equiv|\psi(\mathbf{\theta},\mathbf{A})\rangle\) and \(U(\mathbf{\theta})\equiv U(\mathbf{\theta},\mathbf{A})\).
To simulate a physical model until a final time \(t_{f}\), we divide the evolution into small time intervals \(\Delta t\). We further assume that the parameterized state \(|\psi(\mathbf{\theta})\rangle\) is a good approximation of the time-evolved wave function at time \(t\). The wave function at time \(t+\Delta t\) can thus be represented by \(U_{\rm TS}(\Delta t)|\psi(\mathbf{\theta})\rangle\), where \(U_{\rm TS}(\Delta t)\) is a Trotter-Suzuki decomposition of the time evolution operator \(U(\Delta t)\)[40; 41]. In this manuscript we use a first order decomposition, but higher orders can be considered. The choice of the optimal \(\Delta t\) is problem dependent and will be discussed in Section III. We then approximate the evolution step \(t\to t+\Delta t\) using a new set of parameters \(\mathbf{\theta}\to\mathbf{\theta}+d\mathbf{\theta}\) that maximizes the overlap between \(U_{\rm TS}(\Delta t)|\psi(\mathbf{\theta})\rangle\) and \(|\psi(\mathbf{\theta}+d\mathbf{\theta})\rangle\). This can be achieved by minimizing, with respect to \(\mathbf{d\theta}\), the infidelity
\[\mathcal{I}(\mathbf{d\theta},\Delta t)=1-\mathcal{F}(\mathbf{d\theta},\Delta t), \tag{2}\]
where the fidelity
\[\mathcal{F}(\mathbf{d\theta},\Delta t)=|\langle\psi(\mathbf{\theta}+\mathbf{d\theta})|U_ {\rm TS}(\Delta t)|\psi(\mathbf{\theta})\rangle|^{2} \tag{3}\]
can be measured on a quantum device [36].
At each time step, the initial parameters and operators \(\{\mathbf{\theta},\mathbf{A}\}\) are those obtained at the previous time step. Assuming that the set of operators \(\mathbf{A}\) is sufficient to describe the state at time \(t+\Delta t\), we find the parameter shift \(\mathbf{d\theta}^{*}\) that minimizes \(\mathcal{I}(\mathbf{d\theta},\Delta t)\). Details about the minimization routine can be found in Appendix A. If the minimization routine is not successful, new gates built using generators \((A_{0}^{*},A_{1}^{*},\cdots,A_{k}^{*})\) from the operator pool are added to the parameterized circuit following the adaptive procedure described in Section II.1. This adaptive procedure is repeated up until the convergence criteria are met.
The algorithm starts with the initial state \(|\psi_{0}\rangle\) represented by an empty set of operators. As needed, new gates are added through the time evolution until the chosen final time \(t_{f}\). The complete procedure is illustrated in Fig. 1. We note that the original pVQD scheme [36] can be recovered by fixing the set of operators \(\mathbf{A}\) through the entire simulation.
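The control flow of Fig. 1 can be summarized in a few lines of Python. The callables `minimize_infidelity` and `adaptive_step` below stand for the quantum subroutines described in this section and in the adaptive-step subsection; they are placeholders for illustration, not the authors' Qiskit implementation.

```python
def adaptive_pvqd(theta, ops, t_final, dt, eps, minimize_infidelity, adaptive_step):
    """Schematic time-stepping loop of Fig. 1 (classical control flow only).
    minimize_infidelity(theta, ops, dt) -> (d_theta, infidelity) optimizes one Trotter step;
    adaptive_step(theta, ops) -> (theta, ops) grows the circuit with gates from the pool.
    Both callables are placeholders for the quantum subroutines described in the text."""
    t, history = 0.0, []
    while t < t_final:
        d_theta, infid = minimize_infidelity(theta, ops, dt)
        while infid > eps:                           # ansatz not expressive enough
            theta, ops = adaptive_step(theta, ops)   # append new parameterized gates
            d_theta, infid = minimize_infidelity(theta, ops, dt)
        theta = [th + dth for th, dth in zip(theta, d_theta)]
        t += dt
        history.append((t, list(theta), list(ops)))
    return history
```

Fixing `ops` once and for all (i.e., making `adaptive_step` a no-op) recovers the original pVQD loop, as noted above.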
### Adaptive step
When the parameterized circuit \(|\psi(\mathbf{\theta})\rangle\) is not expressive enough to accurately describe the time step evolution by only shifting the variational parameters, we add new gates to it. This is referred to as the adaptive step of the algorithm. Given an operator pool, we determine the best gate to grow the quantum circuit. As first proposed in [45], we look for the operator whose gate maximizes the derivative of the cost function with respect to its parameter. This is achieved by iterating over all the operators in the pool, a step that can be performed in parallel even on different quantum devices.
For ground state methods, the cost function is the energy of the system, while the gradient is obtained by measuring the expectation value of the commutator between the trial operator and the Hamiltonian [45; 50]. We must ensure that it is possible to apply a similar procedure when dynamics is considered. In the adaptive scheme proposed in [33], this step requires an additional measurement of the variance of the Hamiltonian with respect to the non-adaptive case. In our method, the gradient of the fidelity with respect to the shift \(d\theta_{a}\) of parameter \(\theta_{a}\) associated with a trial operator \(A_{a}\) has the form
\[\frac{\partial\mathcal{F}}{\partial d\theta_{a}}=\langle\phi(\mathbf{ \theta},\Delta t)|\,e^{-id\theta_{a}A_{a}}[P_{0},iA_{a}]e^{id\theta_{a}A_{a}} \,|\phi(\mathbf{\theta},\Delta t)\rangle\,, \tag{4}\]
where we define the projector \(P_{0}=|\psi_{0}\rangle\!\langle\psi_{0}|\) and the state \(|\phi(\mathbf{\theta},\Delta t)\rangle=U^{\dagger}(\mathbf{\theta})U_{\rm TS}(\Delta t)\,|\psi(\mathbf{\theta})\rangle\) (see Appendix B for the full derivation). To ensure continuity of time evolution, we initially set \(\theta_{a}=0\). We note that measuring the derivative of the fidelity corresponds to measuring the Hermitian operator \([P_{0},iA_{a}]\) with respect to the pVQD state \(U^{\dagger}(\mathbf{\theta})U_{\rm TS}(\Delta t)\,|\psi(\mathbf{\theta})\rangle\) modified by the addition of the gate \(e^{id\theta_{a}A_{a}}\). However, we evaluate the derivative using the parameter shift rule [51], as for the minimization routine (for more details, see Appendix A). This operator search is still parallelizable on multiple devices and does not require auxiliary qubits.
The adaptive step has been lately extended and optimized [46, 48, 50], with new protocols that greatly reduce the computational resources required with respect to the first proposal. In particular, we adopt the scheme presented in [48], which increases the depth of the parameterized circuit \(|\psi(\mathbf{\theta})\rangle\) by 1 at every adaptive step. While the infidelity defined in Eq. (2) remains above a fixed threshold \(\varepsilon\), additional adaptive steps are performed. For a detailed description, see Appendix C.
### Operator pool
The choice of the operator pool is a key ingredient in the success and efficiency of adaptive variational algorithms. Having a complete pool of operators is exponentially complex in the size of the physical system; therefore, one has to make some restrictions in its selection. Many different strategies have been proposed, such as the creation of a minimally complete pool [46, 52], the inclusion of symmetries directly in the operator pool [53], or the extension of a complete pool acting on a subsystem of the studied model [47].
In the study of the dynamics, we can refer to the Trotterization of the time evolution operator to select the pool. In particular, we consider local (L) and non-local (NL) operator pools, respectively, given by
\[\mathcal{A}_{\mathrm{L}} =\{X_{i},Y_{i},Z_{i}\}_{i=0}^{N-1} \tag{5}\] \[\quad\cup\{X_{i}X_{i+1},Y_{i}Y_{i+1},Z_{i}Z_{i+1}\}_{0\leq i\leq N -2},\] \[\mathcal{A}_{\mathrm{NL}} =\{X_{i},Y_{i},Z_{i},X_{i}X_{j},Y_{i}Y_{j},Z_{i}Z_{j}\}_{0\leq i<j \leq N-1}, \tag{6}\]
where \(X_{i},Y_{i}\) and \(Z_{i}\) are the Pauli gates acting on site \(i\). Given that \(\mathcal{A}_{\mathrm{L}}\subset\mathcal{A}_{\mathrm{NL}}\), we expect that \(\mathcal{A}_{\mathrm{NL}}\) will generate more flexible parameterized states. However, not only the choice of \(\mathcal{A}_{\mathrm{NL}}\) leads to a measurement overhead, but the non-local nature of this pool may add long-range controlled-NOT (CNOT) gates to the circuit, according to the device connectivity. In Section III, we report the comparison of the two pools in the study of a fermionic system.
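Both pools are trivial to enumerate. The sketch below represents each pool element as a (Pauli-string, qubit-indices) pair, which is merely one convenient bookkeeping choice and not a convention taken from the paper.

```python
def local_pool(n):
    """A_L of Eq. (5): single-qubit Paulis plus nearest-neighbour two-qubit strings."""
    pool = [(p, (i,)) for i in range(n) for p in ("X", "Y", "Z")]
    pool += [(p + p, (i, i + 1)) for i in range(n - 1) for p in ("X", "Y", "Z")]
    return pool

def nonlocal_pool(n):
    """A_NL of Eq. (6): the same single-qubit terms, two-qubit terms on every pair i < j."""
    pool = [(p, (i,)) for i in range(n) for p in ("X", "Y", "Z")]
    pool += [(p + p, (i, j)) for i in range(n) for j in range(i + 1, n)
             for p in ("X", "Y", "Z")]
    return pool

assert set(local_pool(4)) <= set(nonlocal_pool(4))   # A_L is contained in A_NL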
## III Results
We apply the Adaptive pVQD method to the study of the 1D Heisenberg XYZ model with an external driving field and the 2D Fermi-Hubbard model. Both have non-trivial dynamics and open the pVQD method to the study of time-dependent and fermionic systems. In both cases, open boundary conditions were imposed.
### Driven Heisenberg model
Given an open chain of \(L\) spins, the driven Heisenberg XYZ Hamiltonian can be written as:
\[H(t)=\sum_{i=0}^{L-2}(J_{x}X_{i}X_{i+1}+J_{y}Y_{i}Y_{i+1}+J_{z} Z_{i}Z_{i+1})+D(t) \tag{7}\]
where \(J_{x},J_{y}\) and \(J_{z}\) are coupling parameters and \(D(t)\) is the time-dependent driving term. Many different driving terms can be applied to the system. Among those we choose
Figure 1: Flowchart of the time evolution of the Adaptive pVQD algorithm. Starting with a parameter-free circuit, we discretize the time evolution into multiple time steps. At each time step we optimize the parameters to approximate the real time evolution of the quantum system. If the optimization does not converge to the required accuracy, or the ansatz does not contain any parameter, then rotations \(\{R_{A_{i}^{*}}\}\) based on the generators \(\{A_{i}^{*}\}\) are appended to the circuit according to the adaptive step procedure described in Section II.1. The algorithm stops once the final time \(t_{f}\) is reached.
\[D(t)=\sum_{i=0}^{L-1}(-1)^{i}\sin(\omega t)Z_{i}\,, \tag{8}\]
where \(\omega\) is the driving frequency.
First, we investigate the performance of the Adaptive pVQD algorithm with a local pool on a perfect simulator and compare to Trotterized circuits and the original implementation of pVQD. We consider \(J_{x}=1,J_{y}=0.8,J_{z}=0.6\), an antiferromagnetic initial state \(|\psi_{0}\rangle=|0101\rangle\) and a final evolution time \(t_{f}=2\). In the classic version of the pVQD algorithm, we have to choose an ansatz for the time evolved wave function. We consider a circuit equivalent to a Trotter step where all the rotations are defined by variational parameters. The Trotter step circuit implementation for this model is shown in Appendix E. Both the Trotter and the pVQD full circuits are then obtained repeating this structure \(n_{\mathrm{TS}}\) times. In particular, we fix \(n_{\mathrm{TS}}=10\) for the Trotter circuit and \(n_{\mathrm{TS}}=3\) for the pVQD ansatz.
After running the algorithms, we compare the different circuits obtained and use them to measure expectation values of single- and two-spin observables. The results are shown in Fig. 2. The Trotter circuit lags behind the variational methods both in terms of accuracy and resources required. The pVQD method instead achieves accurate results until \(t=1.0\), where the associated circuit becomes shallower than that of Adaptive pVQD. This phenomenon suggests that from that time step on, the fixed representation power is the main source of error in the variational calculations.
In order to show the flexibility of the Adaptive pVQD, we implement a naive modification of the pVQD algorithm, that we indicate as pVQD with block extensions. In this case, a new step of the Trotterized variational ansatz is added to the circuit once the optimization procedure does not reach the desired accuracy. While this approach does improve the performance of the pVQD algorithm, we remark that it is not general, as it depends on the ansatz structure we have chosen. Furthermore, we can see from the bottom panel of Fig. 2 that the Adaptive pVQD method always produces shallower circuits, with resources tailored to the needs of the specific time step.
Then, we extend the study to systems with different sizes. To this end, we define the integrated exact infidelity
\[\Delta_{\mathcal{I}}^{\mathrm{ex}}(t_{f})=\int_{0}^{t_{f}}\left(1-|\langle \Psi(t)|\psi(\mathbf{\theta})\rangle|^{2}\right)dt \tag{9}\]
with respect to the exact wave function \(|\Psi(t)\rangle\) computed on a classical device. We again fix a final evolution time \(t_{f}=2\) and evaluate \(\Delta_{\mathcal{I}}^{\mathrm{ex}}(t_{f})\) for each method for systems of \(L\in[3,11]\) spins. In particular, we consider a Trotter circuit with a fixed depth of \(n_{\mathrm{TS}}=10\) and one with fixed Trotter step size \(dt=J_{x}t/n_{\mathrm{TS}}=0.05\), the same we use in the Trotter step of the pVQD algorithm. The results are shown in Fig. 3, together with the circuit depth at the end of the time evolution.
We note that the depth of the Adaptive pVQD circuits increases with the system size and converges to the Trotter circuit with fixed depth, while having a lower integrated exact infidelity. We highlight that Fig. 3 only indicates the depth of the final circuit. In the case of Adaptive pVQD, this corresponds to the deepest circuit prepared.
Figure 2: Dynamics of the driven Heisenberg XYZ model studied with the Adaptive pVQD algorithm with local pool (L), compared to standard Trotter evolution, pVQD and pVQD with block extensions. The plot shows the results for an open chain of \(L=8\) spins with \(J_{x}=1,J_{y}=0.8\) and \(J_{z}=0.6\). The top and middle panels show the measurements of a single spin observable and a correlator, respectively. The bottom panel shows the number of CNOTs in the circuit describing the time-evolved wave function. The simulation started in the antiferromagnetic state \(|\psi_{0}\rangle=|01010101\rangle\), and the infidelity threshold was set to \(\varepsilon=10^{-4}\) for all the variational methods.
The Trotterized circuits with a fixed Trotter step size yield the lowest values for \(\Delta_{\mathcal{I}}^{\mathrm{ex}}\), but \(n_{\mathrm{TS}}=40\) Trotter steps are required to evolve the system to \(t_{f}=2\), resulting in circuits almost one order of magnitude deeper than any other. We performed multiple pVQD simulations with different variational ansätze equivalent to \(n_{\mathrm{TS}}=1,2,3,8\) Trotter steps. We note that the integrated exact infidelities of pVQD with \(n_{\mathrm{TS}}=1,2,3\) have a steep transition when the number of gates becomes smaller than that of the adaptive circuit. This phenomenon suggests that the ansatz limitation is the main source of error in the variational calculations, while the adaptive circuit is able to increase its representation power effectively. On the other hand, the standard pVQD calculation with \(n_{\mathrm{TS}}=8\) never undergoes this transition. While its integrated exact infidelity is always lower than that of the adaptive approach, we have to note that the entire time evolution is performed with a deeper circuit. Finally, we note a plateau in the depth of the circuit required by the adaptive algorithm when \(L>8\). This is similar to what was observed in [33], where the system size at which the number of gates required saturates depends on the evolution time.
The adaptive method is able to produce circuits that are orders of magnitude shallower than Trotterization while keeping the accuracy comparable to it. Those circuits can be used to improve the measurement of observables at long times on current quantum devices, which are otherwise limited by the depth of the Trotterization.
Figure 4: Observables measured with the IBM Manila device for the driven Heisenberg XYZ model on an open chain with 4 sites, \(J_{x}=1,J_{y}=0.8,J_{z}=0.6\) and an antiferromagnetic initial state \(|\psi_{0}\rangle=|0101\rangle\). The Trotter simulation is performed with a fixed Trotter step size of \(dt=0.2\). The Adaptive pVQD circuits \(|\psi(\boldsymbol{\theta})\rangle\) were obtained with a noiseless simulation that used a local operator pool. The shaded areas correspond to 50 noisy simulations using the noise model of IBM Manila. Each data point and error bar correspond to the mean and the standard deviation, respectively, of 50 experiments performed on hardware. Zero noise extrapolation was applied to both noisy simulations and hardware experiments. Idle qubits were also dynamically decoupled from the active ones.
Figure 3: Adaptive pVQD algorithm with local pool compared to standard Trotter evolution and pVQD for the driven Heisenberg XYZ model. We employ the same setup indicated in Fig. 2 for multiple systems of size \(L\in[3,11]\). The top panel shows the integrated exact infidelity of pVQD and Trotterization over an entire time evolution with final time \(t_{f}=2\) as a function of the system size. The bottom panel shows the circuit depth at the end of the time evolution.
For this reason, we first run the Adaptive pVQD algorithm on the simulator and use the resulting sets of variational parameters to prepare quantum circuits on the hardware for a system of \(L=4\) spins. In Fig. 4, we compare observables measured both on those variational wave functions and on Trotterized circuits with a fixed Trotter step size of \(dt=0.2\).
In this experiment, the final Trotter circuit has 180 CNOTs. This circuit is beyond what is currently accessible on quantum devices, which drives the measured expectation value of the correlator close to 0 for \(J_{x}t>0.8\). On the other hand, the Adaptive pVQD parameterized circuit \(|\psi(\mathbf{\theta})\rangle\) has 28 CNOTs at the end of the evolution. This improvement in the number of gates is crucial for the application of error mitigation techniques, especially at longer times. In particular, zero noise extrapolation (ZNE [31, 54]) was applied to both the noisy simulations and the hardware experiments. We choose a quadratic fit on values obtained with noise scaling factors 1, 2 and 3. Moreover, when running our algorithm on hardware, we dynamically decouple the idle qubits from the active ones using the standard procedure available in Qiskit [55]. We expect that more advanced noise mitigation techniques, such as the one presented in [56], will improve the results on the Trotter circuit. However, this is also true for the variational circuit prepared by the Adaptive pVQD.
### Fermi-Hubbard model
The Hamiltonian of the Fermi-Hubbard model on a \(L_{x}\times L_{y}\) rectangular lattice is given by
\[H=-J\sum_{\langle ij\rangle,\sigma}(c^{\dagger}_{i\sigma}c_{j\sigma}+c^{ \dagger}_{j\sigma}c_{i\sigma})+U\sum_{i=0}^{L_{x}L_{y}-1}n_{i\uparrow}n_{i \downarrow}, \tag{10}\]
where \(c^{\dagger}_{i\sigma}\) (\(c_{i\sigma}\)) is the creation (annihilation) fermionic operator of spin \(\sigma\in\{\uparrow,\downarrow\}\) at site \(i\), \(n_{i\sigma}=c^{\dagger}_{i\sigma}c_{i\sigma}\) counts the number of fermions with spin \(\sigma\) at site \(i\) and \(\langle ij\rangle\) denotes nearest neighbor sites on the lattice. The first term in the Hamiltonian accounts for the hopping between nearest neighbor lattice sites, while the second term describes the on-site interactions.
There are several ways to encode fermionic Hamiltonians into qubit operators [57, 58, 59, 60, 61, 62, 63]. In this work, we consider the Jordan-Wigner mapping [57] to encode each fermionic mode into a qubit. Since every lattice site can host two modes (\(\uparrow\), \(\downarrow\)), \(N=2L_{x}L_{y}\) qubits are required to simulate the Fermi-Hubbard model on a \(L_{x}\times L_{y}\) grid. Before performing a fermionic encoding, we eliminate the spin index via \(c_{i\uparrow}\to c_{i}\) and \(c_{i\downarrow}\to c_{i+N/2}\) (and analogously for the number operator \(n_{i\sigma}\)). We then map each fermionic operator into a spin operator:
\[c_{i} \to Z^{\otimes i}\otimes\sigma^{+}\otimes\mathbb{I}^{\otimes N-i-1}, \tag{11}\] \[c^{\dagger}_{i} \to Z^{\otimes i}\otimes\sigma^{-}\otimes\mathbb{I}^{\otimes N-i-1}, \tag{12}\]
where \(\sigma^{\pm}=(X\pm iY)/2\). The local occupation number can then be identified with the local spin number according to \(n_{i}\in\{0,1\}\mapsto Z_{i}\in\{\uparrow,\downarrow\}\). More details on the fermionic indexing convention and implementing a Trotter step can be found in Appendix E.
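The mapping (11)-(12) is easy to check numerically by building the operators as explicit Kronecker products and verifying the canonical anticommutation relations. The following NumPy sketch does this for a handful of modes; the matrices grow as \(2^{n}\), so it is only meant for small \(n\).

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_plus = (X + 1j * Y) / 2                    # = [[0, 1], [0, 0]]

def kron_chain(mats):
    out = np.array([[1.0 + 0.0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def annihilation(i, n):
    """Jordan-Wigner image of c_i on n modes: Z x ... x Z x sigma^+ x I x ... x I."""
    return kron_chain([Z] * i + [sigma_plus] + [I2] * (n - i - 1))

n = 4
c = [annihilation(i, n) for i in range(n)]
for i in range(n):
    for j in range(n):
        # canonical anticommutation relations: {c_i, c_j^dag} = delta_ij, {c_i, c_j} = 0
        assert np.allclose(c[i] @ c[j].conj().T + c[j].conj().T @ c[i],
                           np.eye(2**n) if i == j else 0)
        assert np.allclose(c[i] @ c[j] + c[j] @ c[i], 0)
```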
Given that the mapping requires an ordering of the fermionic modes, operators that are local in space might generate very long Pauli strings. For example, considering the snake-like pattern, vertical hopping terms generate strings of Pauli \(Z\) with sizes up to \(2L_{x}-2\). This represents a bottleneck in studying fermionic systems with dimensionality higher than 1 on current quantum devices. By restricting the operator pool, we investigate the possibility of describing time-evolved wave functions of the 2D Hubbard model using only local gates. We perform noiseless simulations of a \(2\times 2\) square lattice, comparing local and non-local operator pools. In particular, we measure the expectation values of a local density operator and a density correlator and count the number of CNOTs in the circuits. We use a fixed-depth Trotter simulation and a pVQD with block extension as a benchmark. The results are shown in Fig. 5.
We do not restrict ourselves to specific quantum hardware to keep the comparison as general as possible. Instead, we count the number of CNOTs in a circuit by transpiling it into an abstract device with all-to-all connectivity that is able to perform arbitrary single qubit rotations and CNOTs. The local and non-local pool variants show different behavior over time in the count of CNOTs. We note that the non-local variant always requires fewer CNOTs than its local counterpart. However, some CNOTs are long-range, and their implementation on an actual device can be challenging on hardware with fixed topology and limited connectivity. In contrast, the circuit structure produced by the local pool variant is already suited for current hardware implementation. More details about the Adaptive pVQD output circuits can be found in Appendix D. Moreover, the plot highlights another limitation of the naive pVQD with block extensions approach. Indeed, it always prepares more expensive circuits than the Adaptive pVQD with non-local pool, and in the end it has similar CNOT requirements to the local variant, while being restricted to using the long-range gates required by the Trotter step.
## IV Conclusions
We presented an adaptive version of pVQD, called Adaptive pVQD, to simulate the real-time evolution of quantum systems. This algorithm importantly circumvents the need to choose a fixed ansatz at the beginning of the time evolution. The parameterized quantum circuits are grown adaptively to be both problem- and hardware-tailored. This is obtained with a measurement overhead required to determine the best gate among those included in the operator pool.
However, the gate search can be operated in parallel and, in our scheme, does not involve circuits with auxiliary qubits. This makes the Adaptive pVQD algorithm more hardware-efficient than standard methods, as exemplified in this work with the driven Heisenberg model on the IBM quantum hardware. Finally, we have simulated the dynamics of the 2D Hubbard model with only local gates, using the adaptive procedure to mitigate one of the bottlenecks that current quantum devices face in studying fermionic systems. Given the ease of introduction to the standard pVQD algorithm and its benefits, we believe that the adaptive procedure described here can be of great use in the simulation of dynamics both for current and future quantum devices.
## Data Availability
The code used to run the simulations is open source and can be found at [64]. It was written in Python using Qiskit [55]. Exact classical simulations were performed using Qutip [65].
## Acknowledgments
We thank S. Economou for insightful discussions. This research was supported by the NCCR MARVEL, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 205602).
## Appendix A Minimization routine
Here we present additional details on the minimization routine that we applied throughout the simulations we presented in the main text. In particular, we follow a gradient-based approach, with gradient computed using the parameter-shift rule. Gradient-based and non-gradient-based optimization algorithms for dynamics were previously used for instance in [36] and [37], for both ideal and noisy quantum simulations. The parameter shift rule readily applies here since every Pauli string \(A_{i}\) is involutory, i.e. \(A_{i}^{2}=\mathbb{I}\)[51]. For a fixed set of operators \(\mathbf{A}\), the gradient of the infidelity was thus computed via the parameter shift rule:
\[\frac{\partial\mathcal{I}}{\partial d\theta_{i}}=\frac{\mathcal{I}(\mathbf{\theta }+\mathbf{d}\theta+s\mathbf{e}_{i})-\mathcal{I}(\mathbf{\theta}+\mathbf{d}\theta-s\mathbf{e}_{i}) }{2\sin s}, \tag{10}\]
where \(\mathbf{e}_{i}\) is the standard unit vector, and we fixed \(s=\pi/2\). The gradient was then fed to Adam [66], implemented with the default hyperparameters and a learning rate \(\alpha=0.005\). The shift parameters \(\mathbf{d}\mathbf{\theta}^{*}\) were consequently obtained using Adam.
Two stopping criteria for the optimizer were used: (1) the \(\ell_{\infty}\)-norm of the gradient of the infidelity is below a tolerance and (2) a maximum number of iterations is reached. Finally, as shown in [36], an optimization threshold independent of \(\Delta t\) can be used if \(\mathcal{I}\) is substituted with \(\mathcal{I}/\Delta t^{2}\) as the cost function.
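As a purely classical sanity check of the parameter-shift formula above, the following sketch evaluates it on a stand-in for the infidelity (a product of cosines, for which the rule is exact); in the actual algorithm the infidelity evaluations are of course circuit measurements, and the gradient is then fed to Adam as described.

```python
import numpy as np

def parameter_shift_grad(infidelity, params, s=np.pi / 2):
    """Gradient via the shift rule: [I(p + s e_i) - I(p - s e_i)] / (2 sin s) for each i."""
    grad = np.zeros_like(params, dtype=float)
    for i in range(len(params)):
        e = np.zeros_like(params, dtype=float)
        e[i] = s
        grad[i] = (infidelity(params + e) - infidelity(params - e)) / (2.0 * np.sin(s))
    return grad

# classical stand-in for the infidelity; the rule is exact for this trigonometric dependence
infid = lambda p: 1.0 - np.cos(p[0]) * np.cos(p[1])
p = np.array([0.3, -0.7])
assert np.allclose(parameter_shift_grad(infid, p),
                   [np.sin(p[0]) * np.cos(p[1]), np.cos(p[0]) * np.sin(p[1])])
```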
## Appendix B Gradient of the Fidelity
In this Appendix, we derive the expression for the gradient of the adaptive step presented in Eq. (4). Given the quantum circuit \(U(\mathbf{\theta})\) that prepares the state
Figure 5: Adaptive pVQD schemes for the Fermi-Hubbard model on a \(2\times 2\) open square lattice (8 qubits) with \(U/J=0.8\). Local (L) and non-local (NL) operator pools are used to perform noiseless simulations and the results are compared to a Trotter evolution with \(n_{\text{TS}}=5\) Trotter steps and pVQD with block extensions. The system starts in the half-filled antiferromagnetic state \(|\psi_{0}\rangle=|n_{0}n_{1}n_{2}\cdots\rangle=|10100101\rangle\). We fixed the infidelity threshold to \(\varepsilon=10^{-4}\). The top and middle panels show the expectation values of an on-site number density operator and a number density correlator over time. The bottom panel shows the number of CNOTs in the circuit describing the time-evolved wave function.
\(U(\mathbf{\theta})|\psi_{0}\rangle\), we want to add the gate \(e^{-i\theta a_{a}A_{a}}\) to it, defining the new state \(|\psi(\mathbf{\theta}+\mathbf{d}\mathbf{\theta})\rangle=U(\mathbf{\theta})\,e^{-i\mathrm{d} \theta a_{a}A_{a}}|\psi_{0}\rangle\). To obtain the gradient of the fidelity with respect to this added parameter \(d\theta_{a}\), it is convenient to first rewrite the fidelity given in Eq. (3) as follows
\[\mathcal{F}(\mathbf{d}\mathbf{\theta},\Delta t) =|\langle\psi(\mathbf{\theta}+\mathbf{d}\mathbf{\theta})|U_{\mathrm{TS}}( \Delta t)|\psi(\mathbf{\theta})\rangle|^{2}\] \[=|\langle\psi_{0}|e^{id\theta_{a}A_{a}}U^{\dagger}(\mathbf{\theta})U_ {\mathrm{TS}}(\Delta t)U(\mathbf{\theta})|\psi_{0}\rangle|^{2}\] \[=\langle\psi_{0}|e^{id\theta_{a}A_{a}}U^{\dagger}(\mathbf{\theta})U_ {\mathrm{TS}}(\Delta t)U(\mathbf{\theta})|\psi_{0}\rangle\] \[\quad\ast\langle\psi_{0}|U^{\dagger}(\mathbf{\theta})U_{\mathrm{TS}} ^{\dagger}(\Delta t)U(\mathbf{\theta})e^{-i\theta_{a}A_{a}}|\psi_{0}\rangle\] \[=\langle\psi_{0}|U^{\dagger}(\mathbf{\theta})U_{\mathrm{TS}}^{ \dagger}(\Delta t)U(\mathbf{\theta})e^{-i\mathrm{d}\theta_{a}A_{a}}|\psi_{0}\rangle\] \[\quad\ast\langle\psi_{0}|e^{id\theta_{a}A_{a}}U^{\dagger}(\mathbf{ \theta})U_{\mathrm{TS}}(\Delta t)U(\mathbf{\theta})|\psi_{0}\rangle\] \[=\langle\phi(\mathbf{\theta},\Delta t)|e^{-id\theta_{a}A_{a}}P_{0}e^{ i\mathrm{d}\theta_{a}A_{a}}|\phi(\mathbf{\theta},\Delta t)\rangle, \tag{22}\]
where we defined \(|\phi(\mathbf{\theta},\Delta t)\rangle=U^{\dagger}(\mathbf{\theta})U_{\mathrm{TS}}( \Delta t)U(\mathbf{\theta})|\psi_{0}\rangle\) and the projector \(P_{0}=|\psi_{0}\rangle\langle\psi_{0}|\). One can then readily differentiate with respect to \(d\theta_{a}\) to obtain
\[\frac{\partial\mathcal{F}}{\partial d\theta_{a}}=\langle\phi(\bm {\theta},\Delta t)|\,e^{-i\mathrm{d}\theta_{a}A_{a}}[P_{0},iA_{a}]e^{id\theta _{a}A_{a}}\,|\phi(\mathbf{\theta},\Delta t)\rangle \tag{23}\]
which precisely corresponds to Eq. (4).
## Appendix C Adaptive step implementation
In this Appendix we illustrate the adaptive procedure we have used in our simulations, based on what was initially proposed in [48]. The overall procedure can be divided in the following steps:
1. _Compute the gradient of the fidelity for each operator in the pool_. To process the pool, the gate \(e^{-i\theta_{a}A_{a}}\) associated to each trial operator \(A_{a}\in\mathcal{A}\) is appended one at a time to the current parameterized circuit \(\{\mathbf{\theta},\mathbf{A}\}\), resulting in the trial circuit \(\{(\mathbf{\theta},0),(\mathbf{A},A_{a})\}\). For the trajectory in parameter space to remain continuous, the new parameter \(\theta_{a}\) is set to \(0\). The gradient of the fidelity with respect to the new parameter is computed for each trial circuit using the parameter shift rule, given explicitly in Eq. (21).
2. _Pick the operator in the pool that maximizes the gradient_. Update the parameters and operators to \(\mathbf{\theta}\rightarrow(\mathbf{\theta},0)\) and \(\mathbf{A}\rightarrow(\mathbf{A},A^{*})\), where \(A^{*}\) is the operator \(A_{a}\) that maximizes the fidelity gradient.
3. _Remove the operators in the pool that act on qubit(s) already acted on_. Given that the operator \(A^{*}\) obtained in Item 2 acts on the qubits indices \(\mathbf{\alpha}\), the subset of the operator pool that also acts on at least one index in \(\mathbf{\alpha}\), namely \[\mathcal{A}_{\mathbf{\alpha}}=\{A_{a}|A_{a}\in\mathcal{A}\text{ acts on }\mathbf{\beta},\mathbf{\beta}\cup\mathbf{\alpha}\neq\emptyset\}\] (24) should be removed from the current operator pool. Hence the pool can be updated as follows: \(\mathcal{A}\rightarrow\mathcal{A}\setminus\mathcal{A}_{\mathbf{\alpha}}\).
4. _Go back to Item 2 until the operator pool is empty_.
5. _Return the new circuit_. The new parameterized circuit is characterized by \(\mathbf{\theta}\rightarrow(\mathbf{\theta},0,\cdots,0)\) and \(\mathbf{A}\rightarrow(\mathbf{A},A_{0}^{*},A_{1}^{*},\cdots,A_{k}^{*})\), assuming that \(k\) new operators were added.
As stated in the main text, this procedure guarantees that the depth of the parameterized circuit \(|\psi(\mathbf{\theta})\rangle\) is increased by \(1\) in each adaptive step [48].
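The pool-processing loop of Items 1-4 can be sketched as follows; `fidelity_gradient` is a placeholder for the measurement of Eq. (4), and pool elements are assumed to carry their qubit indices as (Pauli-string, qubit-indices) pairs, as in the pool sketch above.

```python
def one_adaptive_step(pool, fidelity_gradient):
    """Items 1-4 above: evaluate all pool gradients once, then greedily keep operators with the
    largest |gradient| whose supports are mutually disjoint, so the circuit depth grows by one.
    fidelity_gradient(op) is a placeholder for the measured derivative of Eq. (4)."""
    grads = {op: abs(fidelity_gradient(op)) for op in pool}   # item 1, parallelizable
    chosen, remaining = [], set(pool)
    while remaining:                                          # item 4
        best = max(remaining, key=grads.get)                  # item 2
        chosen.append(best)
        used = set(best[1])                                   # qubits acted on by the new gate
        remaining = {op for op in remaining if used.isdisjoint(op[1])}   # item 3
    return chosen                                             # item 5: generators A_0*, ..., A_k*
```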
## Appendix D Adaptive pVQD output circuits
We illustrate in Figs. 6 and 7 examples of parameterized circuits obtained with the Adaptive pVQD algorithm in simulations shown in the main text. Each column of operators in the circuits corresponds to an adaptive step.
Figure 6: Variational circuit obtained at \(J_{x}t=2\) in the simulation shown in Fig. 3, using the Adaptive pVQD algorithm and local operator pool.
Figure 7: Variational circuit obtained at \(Jt=4\) in the simulation shown in Fig. 5, using the Adaptive pVQD algorithm and local operator pool.
## Appendix E Trotter step circuit encodings
In this Appendix we provide the circuits we used to implement a single Trotter step of the driven Heisenberg and the Hubbard models. The Trotter step in the driven Heisenberg model is implemented with a checkerboard pattern of the two qubit gates \(R_{XX},R_{YY},R_{ZZ}\), with a layer of single qubit \(R_{Z}\) at the end. We show a sketch in Fig. 8.
To realize the Trotter circuit for the Hubbard model, we first have to establish an ordering of the lattice sites and the modes. We number the sites using a snake-like pattern and, as indicated in the main text, we eliminate the spin index via \(c_{i\uparrow}\to c_{i}\) and \(c_{i\downarrow}\to c_{i+N/2}\). Under this ordering, the Jordan-Wigner transformation of the Hamiltonian terms reads
\[c_{i\uparrow}^{\dagger}c_{j\uparrow}+c_{j\uparrow}^{\dagger}c_{i\uparrow} \mapsto\frac{1}{2}\left[X_{i}\prod_{k=i+1}^{j-1}Z_{k}X_{j}+Y_{i }\prod_{k=i+1}^{j-1}Z_{k}Y_{j}\right], \tag{5}\]
\[c_{i\downarrow}^{\dagger}c_{j\downarrow}+c_{j\downarrow}^{\dagger}c_{i\downarrow} \mapsto\frac{1}{2}\Bigg{[}X_{i+N/2}\prod_{k=i+1}^{j-1}Z_{k+N/2}X_ {j+N/2}+ \tag{6}\]
\[\qquad+Y_{i+N/2}\prod_{k=i+1}^{j-1}Z_{k+N/2}Y_{j+N/2}\Bigg{]},\]
\[n_{i\uparrow}n_{i\downarrow} \mapsto\frac{1}{4}(\mathbb{I}-Z_{i})(\mathbb{I}-Z_{i+N/2}), \tag{7}\]
where we assumed \(j>i\) without loss of generality. Given the mapped Hamiltonian, the Trotter step can not be implemented using only \(R_{XX},R_{YY},R_{ZZ}\) and \(R_{Z}\) gates. Indeed, the non locality of the mapping requires some multi-qubit rotation with size up to \(2L_{x}\). The two multi-qubit gates are the rotations generated by the Pauli strings \(XZZX\) and \(YZZY\), which can be decomposed as shown in [44].
Fig. 9 presents our implementation.
Figure 8: Implementation of an antiferromagnetic initial state and a Trotter step for the driven Heisenberg model given in Eq. (7).
Figure 9: (a) Gates used to define a Trotter step. (b) Quantum circuit encoding the first order Trotter step of the Hubbard model with an half-filled antiferromagnetic initial state. |
2303.09595 | Compilation of isomeric ratios of light particle induced nuclear
reactions | Experimental isomeric ratios of light (A$\le$4) particle-induced nuclear
reactions were compiled for the product nuclides having metastable states with
half-lives longer than 0.1 sec. The experimental isomeric ratio data were taken
from the EXFOR library and reviewed. When an experiment reports isomer
production cross sections instead of isomeric ratios, the cross sections taken
from the EXFOR library were converted to the isomeric ratios by us. During
compilation, questionable data (e.g.,preliminary data compiled in EXFOR in
parallel with their final data, sum of isomer production cross sections larger
than the total production cross sections) were excluded. As an application of
the new compilation, goodness-of-fit was studied for the isomeric ratios
predicted by the reaction model code TALYS-1.96. | A. Rodrigo, N. Otuka, S. Takács, A. J. Koning | 2023-03-16T18:45:58Z | http://arxiv.org/abs/2303.09595v2 | # Compilation of isomeric ratios of light particle induced nuclear reactions
###### Abstract
Experimental isomeric ratios of light (A\(\leq\)4) particle-induced nuclear reactions were compiled for the product nuclides having metastable states with half-lives longer than 0.1 sec. The experimental isomeric ratio data were taken from the EXFOR library and reviewed. When an experiment reports isomer production cross sections instead of isomeric ratios, the cross sections taken from the EXFOR library were converted to the isomeric ratios by us. During compilation, questionable data (_e.g._,preliminary data compiled in EXFOR in parallel with their final data, sum of isomer production cross sections larger than the total production cross sections) were excluded. As an application of the new compilation, goodness-of-fit was studied for the isomeric ratios predicted by the reaction model code TALYS-1.96.
+
Footnote †: journal: Atomic Data and Nuclear Data Tables
## 1 Introduction
A nucleus on an excited level formed as a reaction product is typically deexcited to the ground state promptly by a series of gamma-ray emissions. However, this deexcitation may be delayed due to presence of a long-lived excitation level. Such an excitation level is known as the metastable state, whose spin is usually not close to the spin of the ground state and it prevents immediate deexcitation to a lower level. It may further undergo deexcitation by gamma-ray emission to a lower level (isomeric transition) and/or by \(\alpha\)/\(\beta\)-ray emission or electron conversion to a neighboring nuclide. Detection of such radiation allows us to measure the production cross section of the metastable state. Similarly, we can define the production cross section of the ground state, which corresponds to deexcitation of the reaction product decayed into the ground state without going through any metastable state. If there is only one metastable state, the total production cross section \(\sigma_{\rm f}\) is related with the ground state production cross section \(\sigma_{\rm g}\) and metastable state production cross section \(\sigma_{\rm m}\) by \(\sigma_{\rm f}=\sigma_{\rm g}+\sigma_{\rm m}\).
The ratio of production cross sections such as \(\sigma_{\rm m}/\sigma_{\rm g}\) or \(\sigma_{\rm m}/\sigma_{\rm f}\) is known as the isomeric ratio. From the view of theoretical reaction modelling, the isomeric ratio is related with the spin (\(J\)) dependence of the level density of the intermediate and final product nuclei. This distribution has been theoretically modelled by \((2J+1)\exp[-(J+1/2)^{2}/(2\sigma^{2})]\) with the square of the distribution width \(\sigma^{2}\) known as the spin cut-off parameter [1, 2]. Huizenga and Vandenbosch formulated the relationship between \(\sigma\) and the isomeric ratio [3], and various attempts have been made to parameterize \(\sigma\) by using experimental isomeric ratios. Namely, compilation of experimental isomeric ratios contributes to better model description of the isomer production cross sections through adjustment of the spin cut-off parameters to reproduce the compiled isomeric ratios.
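To fix ideas, the relative spin population implied by this distribution is easy to evaluate; the snippet below (our own illustration, with an arbitrary spin cut-off value) simply tabulates the weight \((2J+1)\exp[-(J+1/2)^{2}/(2\sigma^{2})]\) for a few spins.

```python
import math

def spin_weight(J, sigma):
    """Relative population of spin J for spin cut-off parameter sigma**2."""
    return (2 * J + 1) * math.exp(-(J + 0.5) ** 2 / (2 * sigma ** 2))

# illustrative only: a larger sigma pushes the distribution to higher spins
for J in (0.5, 2.5, 4.5, 6.5):
    print(J, round(spin_weight(J, 3.0), 3))
```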
The knowledge of the isomeric ratio is also important for nuclear technology. For example, the \({}^{241}\)Am(n,\(\gamma\))\({}^{242}\)Am isomeric ratio is important from the view of nuclear waste management. This is because \({}^{241}\)Am may be produced in fission energy systems by successive neutron captures and \(\beta^{-}\) decay starting from \({}^{238}\)U, and the long-lived metastable state \({}^{242m}\)Am can be further
transmuted into the heavier americium isotopes and finally to \({}^{244}\)Cm [4] due to a large thermal neutron capture cross section of \({}^{242m}\)Am [5](1290\(\pm\)300 b [6]). The isomeric ratios of low-energy neutron-induced reaction products have been evaluated and compiled in File 9 of the ENDF-6 format [7], and are utilized in reactor burn-up calculation. The isomeric ratios for production of some metastable states such as \({}^{99m}\)Tc and \({}^{186m}\)Re in nuclear reactions are also important from the view of medical isotope production [8; 9]. Accessibility to the experimental isomeric ratios is, therefore, important for both research of nuclear reactions and its application.
When the ground and metastable states are unstable and their activities are measurable, the isomeric ratio is related with the counts of the ground and metastable state decays \(N_{g}\) and \(N_{m}\) by
\[\frac{\sigma_{g}}{\sigma_{m}}=\frac{f_{m}}{f_{g}}\left[\frac{N_{g}I_{\gamma m}\varepsilon_{m}}{N_{m}I_{\gamma g}\varepsilon_{g}}-p\frac{\lambda_{g}}{\lambda_{g}-\lambda_{m}}\right]+p\frac{\lambda_{m}}{\lambda_{g}-\lambda_{m}} \tag{1}\]
[10], where \(N\), \(I_{\gamma}\), \(\varepsilon\), \(p\) and \(\lambda\) are the number of gamma-rays counted, gamma emission probability, gamma-ray detection efficiency, isomeric transition probability and decay constant, respectively. The time factor \(f\) is defined by \(f=[1-\exp(-\lambda t_{i})]\exp(-\lambda t_{c})[1-\exp(-\lambda t_{m})]/\lambda\) with the irradiation time \(t_{i}\), cooling time \(t_{c}\) and measurement time \(t_{m}\). This equation does not require determination of the incident particle flux, which may be a major source of the uncertainty and error in determination of the production cross section. Similarly, prediction of the isomeric ratio by a reaction model is free from the absolute normalization (_e.g._,total reaction cross section constrained by the optical potential). These facts show an advantage to do comparison between measurements and model predictions for the isomeric ratio rather than for the isomer production cross sections.
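As a minimal numerical sketch of Eq. (1), the helper below evaluates \(\sigma_{g}/\sigma_{m}\) from the measured counts, decay constants and the time factors defined above; the function and argument names are our own choices and any input values would be purely illustrative. The compiled quantity \(\sigma_{m}/\sigma_{t}\) then follows as \(1/(1+\sigma_{g}/\sigma_{m})\).

```python
import math

def time_factor(lam, t_irr, t_cool, t_meas):
    """f = [1-exp(-lam*t_i)] * exp(-lam*t_c) * [1-exp(-lam*t_m)] / lam"""
    return (1 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool) \
           * (1 - math.exp(-lam * t_meas)) / lam

def sigma_g_over_sigma_m(Ng, Nm, I_gamma_g, I_gamma_m, eff_g, eff_m,
                         p, lam_g, lam_m, t_irr, t_cool, t_meas):
    """sigma_g/sigma_m according to Eq. (1)."""
    fg = time_factor(lam_g, t_irr, t_cool, t_meas)
    fm = time_factor(lam_m, t_irr, t_cool, t_meas)
    bracket = (Ng * I_gamma_m * eff_m) / (Nm * I_gamma_g * eff_g) \
              - p * lam_g / (lam_g - lam_m)
    return (fm / fg) * bracket + p * lam_m / (lam_g - lam_m)
```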
The experimental isomeric ratios of nuclear reaction products have been compiled in the EXFOR library by the International Network of Nuclear Reaction Data Centres (NRDC) [11]. The compiled data are included in database systems and disseminated to the end users by the data centres [12; 13; 14; 15]. However, the isomeric ratios compiled in EXFOR have not been fully utilized because they are published in various expressions (_e.g._,\(\sigma_{m}/\sigma_{t}\), \(\sigma_{m}/\sigma_{g}\)), and the EXFOR library compiles these ratios as they are published without unification of the expression. When an experimentalist reports \(\sigma_{g}\) and \(\sigma_{m}\) without their ratios, the ratios are not compiled in the EXFOR library, and this also makes the experimental information on the isomeric ratio less accessible.
In the past, assignment of the ground and metastable states has not been done in a consistent manner in EXFOR since assignments may depend on the decay scheme referred to by the experiment (_e.g._,the 69 min state of \({}^{110}\)In, which was known as the ground state in the past but now considered as a metastable state). However, this inconsistency was analysed and improved by the data centres in 2010s [16]. Considering these situations of EXFOR, we decided to compile experimental isomeric ratios which are derived from but are more accessible than those in the EXFOR library. In the following sections, we discuss procedure of compilation and its application to benchmark of the TALYS-1.96 reaction model code [17].
## 2 Procedure
### Criteria of data selection
We defined the scope of our compilation by the following criteria:
* Experimental isomeric ratios or production cross sections compiled in EXFOR as of 23 August 2022.
* Data not superseded. (_i.e._,preliminary data are excluded if their final data are also in EXFOR.)
* Data measured with a monoenergetic photon, neutron, proton, deuteron, triton, helion, or alpha particle beam.
* Data for production of nuclides having ground state and only one metastable state with its half-life longer than 0.1 sec.
The EXFOR data were extracted not directly from the original EXFOR files but from the X4Pro database [18].
In the EXFOR library, the quantity of each dataset is expressed by a REACTION code. For example, the REACTION code (79-AU-197(N,3N)79-AU-195-M,,SIG) expresses the \({}^{197}\)Au(n,3n)\({}^{195m}\)Au cross section. The two codes 79-AU-195-M and,SIG express the product nuclide and quantity, respectively. The combinations of the reaction product and quantity within our scope are summarized in Table 1. Note that the code ELEM/MASS indicates that the atomic and mass numbers of the reaction product are independent variables of the EXFOR dataset. See Chapter 6 of EXFOR Formats Manual [19] for more details about the EXFOR REACTION formalism. The EXFOR library also compiles the ground state production cross section including partial feeding via isomeric transition from a metastable production cross section (_e.g._,34-SE-73-G,M+,SIG) and the production cross section including feeding by decay of another nuclide (_e.g._,13-AL-27,CUM,SIG). Such datasets are not for direct use of isomeric ratio construction and were excluded in the present compilation.
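The feeding-related exclusions just described can be pictured as a simple filter over REACTION strings; the sketch below is purely illustrative (the full example codes are schematic and the helper does not reproduce the actual X4Pro selection used in this work).

```python
def keep_dataset(reaction_code):
    """Reject datasets whose product/quantity field carries the M+ flag
    (feeding by isomeric transition) or the CUM modifier (decay feeding)."""
    product_and_quantity = reaction_code.split(")")[-1]  # e.g. "79-AU-195-M,,SIG"
    return ",M+," not in product_and_quantity and "CUM" not in product_and_quantity

codes = ["79-AU-197(N,3N)79-AU-195-M,,SIG",     # in scope
         "34-SE-74(N,2N)34-SE-73-G,M+,SIG",     # ground state fed by IT: excluded
         "26-FE-56(P,X)13-AL-27,CUM,SIG"]       # cumulative production: excluded
print([c for c in codes if keep_dataset(c)])
```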
Isomeric ratios of fission products are also excluded for all spontaneous fission datasets, the majority of the neutron-induced fission datasets and some other fission datasets. This is because they are compiled in EXFOR as fission product yield ratios FY/RAT rather than the cross section ratios SIG/RAT, and their REACTION coding rule is slightly different (_e.g._,the code indicating partial feeding M+ is not combined with FY/RAT). The readers are reminded that a compilation of experimental isomeric fission yield ratios has been recently published by the US National Nuclear Data Center (NNDC) [20]. Sometimes we found an experiment showing a completely different trend from the other experiments. When the appearance of such an outlier was due to a typo in the EXFOR library, we fixed it. Otherwise, we included such outliers in the present compilation without exclusion.
We also sometimes met an experiment reporting \(\sigma_{t}\) smaller than \(\sigma_{m}\); such an experiment was excluded from our compilation unless the issue was resolved by communication with the experimentalists. This is often due to the presence of a ground state production cross section without clear indication of the state in the nuclide symbol in the documentation. For example, we experienced this problem for the cross sections published by Lebeda et al. [21; 22] and tabulated as \({}^{148}\)Pm rather than \({}^{148g}\)Pm, for which the author kindly confirmed that they are not the total but the ground state production cross sections, and we were able to keep them in our compilation. The EXFOR datasets corrected and excluded in the above-mentioned procedures are summarized in NRDC technical memos [23; 24]. Another possible reason for the unexpected relation between \(\sigma_{m}\) and \(\sigma_{t}\) is large uncertainties in the cross sections (e.g., the \({}^{85}\)Rb(p,x)\({}^{84}\)Rb cross sections of Kastleiner et al. [25], where we see \(\sigma_{m}>\sigma_{t}\) at some incident energies though their error bars overlap).
E.A. Skakun et al. measured (p,n) and (p,\(\gamma\)) isomer productions below 10 MeV with the Kharkiv proton linear accelerator and published several times (_e.g._,[26; 27; 28]). Though they are compiled as independent results in EXFOR, we assumed they are from the same measurements and selected one of them for compilation as summarized in NRDC technical memo [31]. Unfortunately the isomeric ratios in its final publication [28] are compiled in EXFOR by digitization from the figure images and we did not adopt them.
A reference value (_e.g._,monitor cross section, gamma emission probability) adopted by the experimentalist may be different from the currently recommended value. We did not update the originally published data compiled in EXFOR during the present compilation, except for the proton-induced activation cross sections published by Levkovskii [29] and compiled in EXFOR entry A0511, which we renormalized by the factor 192.8/252\(\sim\)0.77, where 252 mb is the \({}^{nat}\)Mo(p,x)\({}^{96}\)Tc cross section at 30 MeV adopted by Levkovskii while 192.8 mb is the value recommended by an IAEA Coordinated Research Project [30].
### Ground and metastable state assignments
The ground and metastable state assignments may depend on the decay data adopted by the experimentalist. During the comprehensive review and improvement of isomeric flagging in EXFOR performed in 2010s [16; 32], we followed the assignment seen in Nuclear Wallet Cards [33].
Some experimentalists do not consider a short-lived metastable state as an isomer. For example, the first metastable state of \({}^{196m}\)Au (8.1 sec) is usually not detectable in an activation measurement designed for detection of \({}^{196g}\)Au (6.2 d) and \({}^{196m2}\)Au (9.6 hr) activities. Consequently, an experimentalist may report their \({}^{196m2}\)Au production cross sections just as \({}^{196m}\)Au production cross sections, which may be wrongly entered in EXFOR as 79-AU-196-M,,SIG though this must be 79-AU-196-M2,,SIG. In order to exclude such a dataset compiled with improper isomeric flagging, the reaction product code of each EXFOR dataset was checked against NUBASE [34], and the dataset was excluded when NUBASE defines two or more metastable states or no metastable state for the product nuclide. Typical examples of such nuclides are (1) the \({}^{124m2}\)Sb 20 min state (denoted as \({}^{124m}\)Sb in the literature, _e.g._,[35; 36]) and (2) the \({}^{30m}\)Al 72.5 sec state, whose production cross sections were reported in the past (_e.g._,[37; 38]) but this state is currently unknown.
### Conversion of cross sections to isomeric ratios
After extraction of the EXFOR datasets within our scope and filtered by the above-mentioned procedures, we converted the extracted data to the isomeric ratios \(\sigma_{m}/\sigma_{t}\). When an experimental work does not provide an isomeric ratio in EXFOR but provide at least two of \(\sigma_{g}\), \(\sigma_{m}\) and \(\sigma_{t}\) at the same incident energy, we converted them to \(\sigma_{m}/\sigma_{t}\) for compilation. When all these three types of the cross sections are available, we did not use \(\sigma_{g}\). If an experiment does not provide any pair of the cross sections at the same incident energy, we simply discarded the experiment.
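The pairing logic just described can be condensed into a tiny helper; the names and structure are ours and only sketch the conversion actually performed on the X4Pro output.

```python
def to_isomeric_ratio(sigma_g=None, sigma_m=None, sigma_t=None):
    """sigma_m/sigma_t from any two of (sigma_g, sigma_m, sigma_t) at the
    same incident energy; sigma_g is ignored when sigma_m and sigma_t exist."""
    if sigma_m is not None and sigma_t is not None:
        return sigma_m / sigma_t
    if sigma_m is not None and sigma_g is not None:
        return sigma_m / (sigma_m + sigma_g)
    if sigma_t is not None and sigma_g is not None:
        return (sigma_t - sigma_g) / sigma_t
    return None   # fewer than two cross sections: the experiment is discarded
```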
An experiment may report two or more data points at the same incident energy. When an average value from several measurements is reported, we adopted it while discarding the individual results. For example, Meierhofer et al. [39] report 6 \(\sigma_{t}\) and 12 \(\sigma_{m}\) values for the \({}^{74}\)Ge(n,\(\gamma\))\({}^{75}\)Ge reaction at the thermal energy, and one may construct 72 \(\sigma_{m}/\sigma_{t}\) values from various combinations of \(\sigma_{t}\) and \(\sigma_{m}\). However, they also report the average of the \(\sigma_{t}\) and \(\sigma_{m}\) values and we adopted only these averages to obtain a single \(\sigma_{m}/\sigma_{t}\) value from this measurement. When the authors report only individual results without their averages, we compiled the isomeric ratios derived from all combinations of the cross sections, and tabulated them with a flag for caution.
Filatenkov et al. performed systematic measurements of neutron activation cross sections and documented their results around 2000 [40; 41]. Later the cross sections were revised with the updated reference data (_e.g._,decay data) and published in 2016 [42]. During our compilation, we found some isomeric ratios in the two original reports are not seen in the 2016 report even in revised forms though the corresponding cross sections are there. We adopted the isomeric ratios derived from the cross sections published in the 2016 report rather than the isomeric ratios published in the original reports. (c.f. [43; 44]). Similarly, we found that an isomeric ratio derived from the high energy (above 660 MeV) cross sections measured by A.R.Balabekyan et al. at JINR (_e.g._,[45; 46; 47; 48; 49; 50]) and compiled in an EXFOR entry is often very close to an isomeric ratio compiled in another EXFOR entry. We carefully identified such pairs to avoid appearance of the isomeric ratios from the same experiment twice (c.f. [51]).
### Uncertainty
The EXFOR library may provide several types of the uncertainties such as the total uncertainty (ERR-T), statistical uncertainty (ERR-S), total systematic uncertainty (ERR-SYS), partial uncertainty (ERR-1, ERR-2 etc.) or uncertainty without further specification
(DATA-ERR). When several of them are in EXFOR, we always selected the largest one in our tabulation. When an isomeric ratio was derived from the respective cross sections, we propagated the uncertainties in the cross sections to the isomeric ratio assuming that the uncertainties in the cross sections are independent. This may overestimate the actual uncertainty since a partial uncertainty (_e.g._,uncertainty in the incident particle flux) may be shared by both cross sections and cancel when they are converted to the isomeric ratio.
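For a ratio derived from \(\sigma_{g}\) and \(\sigma_{m}\), the independence assumption amounts to the usual quadrature propagation; a minimal sketch (our own helper, not the production script) is:

```python
import math

def ratio_with_uncertainty(sm, dsm, sg, dsg):
    """r = sm/(sm+sg) with its uncertainty from independent dsm and dsg."""
    st = sm + sg
    r = sm / st
    # dr/dsm = sg/st**2 and dr/dsg = -sm/st**2, combined in quadrature
    dr = math.hypot(sg * dsm, sm * dsg) / st ** 2
    return r, dr

print(ratio_with_uncertainty(10.0, 1.0, 30.0, 3.0))   # ~(0.25, 0.027)
```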
The isomeric ratio plus (minus) its uncertainty in our compilation is sometimes higher (lower) than 1 (0). There are a few such ratios directly taken from the original publication (_e.g._, \({}^{75}\)As(n,p)\({}^{75}\)Ge isomeric ratio in Ref. [52]) but the majority of them are \(\sigma_{m}/\sigma_{t}\) values derived by us. Such values are flagged in the main table for caution.
## 3 Results
Table 2 summarizes the number of reactions and isomeric ratios for each projectile. Very few photon-induced reaction isomeric ratios were found for inclusion in the current compilation. This is because photoactivation isomeric ratio measurements are usually done with bremsstrahlung photon sources, which are not monoenergetic and hence outside the scope of our compilation.
It is not an intention of this article to discuss various findings in individual cases. Nevertheless, we discuss activation measurements of two reactions just to demonstrate what kind of discussion we can do based on the new compilation.
### \({}^{93}\)Nb(n,\(\alpha\))\({}^{90}\)Y
Figure 1 shows the \({}^{93}\)Nb(n,\(\alpha\))\({}^{90}\)Y isomeric ratios as well as the ground and metastable state production cross sections [42, 53, 54, 55, 56, 57, 58]. The 3.2 hr metastable state has two intense gamma lines at 203.53 and 479.51 keV [79] and measurement of its production cross section is straightforward. On the other hand, the 64 hr ground state does not have such a suitable gamma line, which makes measurement of the ground state production cross section difficult. Filatenkov carefully determined the isomeric ratio by decay-curve analysis of an energetic (\(E_{\rm max}\)=2280 keV) \(\beta^{-}\)-ray with an HPGe detector, considering the fact that there are few other reaction products and they do not emit \(\gamma\)-rays in the high-energy region. See Sect. 2.7.2 of Ref. [42] for more details. The isomeric ratios reported by Filatenkov are lower than the majority of the isomeric ratios published by others but consistent with the prediction by TALYS. On the other hand, the TALYS calculation with the same default parameters underestimates both ground and metastable state production cross sections.
### \({}^{197}\)Au(d,2n)\({}^{197}\)Hg
Figure 2 shows the \({}^{197}\)Au(d,2n)\({}^{197}\)Hg isomeric ratios as well as the ground and metastable state production cross sections [80, 81, 82, 83, 84, 85, 86, 87, 88]. Not only the 24 hr metastable state but also the 64 hr ground state have characteristic gamma-rays, and it is possible to measure the production cross sections of both states in principle. Similar to the \({}^{93}\)Nb(n,\(\alpha\))\({}^{90}\)Y case, however, the ground state production cross sections are more scattered than the metastable state production cross sections in the literature. Furthermore, we see several groups in the energy dependence of the isomeric ratios.
When the half-life of the ground state is longer than the half-life of the metastable state and the measurement was done after a long cooling time allowing complete decay of the co-produced metastable state to the ground state (_i.e._,\(t_{c}\gg 1/\lambda_{m}\)), the cross section derived from the measurement of the ground state activity \(\sigma_{c}=N_{g}/(f_{g}I_{\gamma g}\epsilon_{g}n\phi)\) with the sample areal density \(n\) and beam flux \(\phi\) is sometimes assumed to be \(\sigma_{c}\sim\sigma_{g}+p\sigma_{m}\). This leads to
\[\sigma_{g}\sim\sigma_{c}-p\sigma_{m} \tag{2}\]
for determination of \(\sigma_{g}\) from \(\sigma_{c}\) and \(\sigma_{m}\). Below we demonstrate that this equation is valid only when \(\lambda_{m}\gg\lambda_{g}\) or \(\sigma_{m}\ll\sigma_{g}\).
Since \(\sigma_{m}=N_{m}/(f_{m}I_{\gamma m}\varepsilon_{m}n\phi)\), Eq. (1) can be rewritten as
\[\sigma_{c}=\sigma_{g}+p\left(\frac{f_{m}}{f_{g}}\frac{\lambda_{g}}{\lambda_{g }-\lambda_{m}}-\frac{\lambda_{m}}{\lambda_{g}-\lambda_{m}}\right)\sigma_{m}. \tag{3}\]
If \(t_{c}\gg 1/\lambda_{m}\), then \(\exp(-\lambda_{m}t_{c})/\exp(-\lambda_{g}t_{c})\to 0\), namely \(f_{m}/f_{g}\to 0\), and
\[\sigma_{c}\rightarrow\sigma_{g}+p\frac{\lambda_{m}}{\lambda_{m}-\lambda_{g}} \sigma_{m}. \tag{4}\]
Therefore, one can determine \(\sigma_{g}\) after long cooling by
\[\sigma_{g}\sim\sigma_{c}-p\frac{\lambda_{m}}{\lambda_{m}-\lambda_{g}}\sigma_{ m}. \tag{5}\]
in general as long as \(\lambda_{m}>\lambda_{g}\).
It follows from Eq. (5) that Eq. (2) is valid only when (1) \(\lambda_{m}\gg\lambda_{g}\) or (2) \(\sigma_{m}\ll\sigma_{g}\), and use of Eq. (2) adds an extra term \(p[\lambda_{m}/(\lambda_{m}-\lambda_{g})-1]\sigma_{m}\) to the actual ground state production cross section in general. As \(p[\lambda_{m}/(\lambda_{m}-\lambda_{g})-1]\sim 0.53\) and \(\sigma_{m}\) is not negligible for the \({}^{197}\)Au(d,2n)\({}^{197}\)Hg reaction, some experiments showing high \(\sigma_{g}\) values in Fig. 2 may include this extra term. We notice that similar problems may occur in pairs of the metastable and ground states having close half-lives (_e.g._,\({}^{198}\)Tl, \({}^{198}\)Au) and we wish our compilation will contribute to discussion on this problem.
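To make the size of this extra term concrete, the factor \(p[\lambda_{m}/(\lambda_{m}-\lambda_{g})-1]\) can be evaluated directly from the half-lives quoted above; in the sketch below the isomeric transition probability \(p\sim0.9\) is only an indicative value chosen to reproduce the \(\sim\)0.53 mentioned in the text.

```python
import math

def extra_term_factor(t_half_m, t_half_g, p):
    """p*[lam_m/(lam_m - lam_g) - 1]: the coefficient of sigma_m that Eq. (2)
    wrongly leaves inside the deduced ground state cross section."""
    lam_m = math.log(2) / t_half_m
    lam_g = math.log(2) / t_half_g
    return p * (lam_m / (lam_m - lam_g) - 1)

# 197mHg (24 hr) and 197gHg (64 hr), indicative p ~ 0.9  ->  ~0.54
print(extra_term_factor(24.0, 64.0, 0.9))
```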
## 4 Application
Global test of reaction model codes is an immediate application of the newly prepared isomeric ratio table. We can easily check reaction model codes from the goodness-of-fit of outputs obtained using the new isomeric ratio table. As an example, we calculated isomeric ratios by TALYS-1.96 with the default parameter sets but varying the spin cut-off parameter. The spin cut-off parameter used in TALYS is
\[\sigma^{2}=R\frac{\bar{a}}{a}\frac{I_{\rm rig}T}{\hbar^{2}} \tag{6}\]
in default setting ("spincuttmodel 1"), where \(I_{\rm rig}\) is the rigid body moment of inertia, \(T\) is the nuclear temperature, and \(a\) and \(\bar{a}\) are the level density parameter and its high excitation energy limit, respectively. In "spincuttmodel 2", this is simplified to
\[\sigma^{2}=R\frac{I_{\rm rig}T}{\hbar^{2}}\equiv\frac{I_{\rm eff}T}{\hbar^{2}} \tag{7}\]
by omitting the shell effect factor \(a/\bar{a}\). \(R\) is an adjustable parameter in TALYS.1 The parameter \(\eta=I_{\rm eff}/I_{\rm rig}\) seen in the literature [89; 90; 91] is equal to \(R\) in Eq. (7). To see an appropriate choice of R, we calculated the isomeric ratios of all reactions in the present compilation from 1 eV (neutron-induced reactions) or 1 MeV (other reactions) to 200 MeV with the energy grids hardwired in TALYS. The \(R\) values were varied between 0.1 and 1.5, and calculations were done with both spin cut-off parameter models.
Footnote 1: \(R\) may be specified by Rspincut (nuclide independent) or s2adjust (nuclide dependent) in TALYS-1.96.
For \(n\sim 12,000\) experimental isomeric ratios compiled in the present work with their uncertainties and the isomeric ratios predicted by TALYS, we calculated the \(F\)-value [92]:
\[F=\exp\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\ln\left(\frac{r_{i,\rm cal}}{r_{i, \rm exp}^{\prime}}\right)\right]^{2}} \tag{8}\]
with
\[r^{\prime}_{i,\exp}=\left\{\begin{array}{ll}r_{i,\exp}-\Delta r_{i,\exp}&\text{if }r_{i,\text{cal}}<r_{i,\exp}-\Delta r_{i,\exp}\\ r_{i,\exp}+\Delta r_{i,\exp}&\text{if }r_{i,\text{cal}}>r_{i,\exp}+\Delta r_{i,\exp}\\ r_{i,\text{cal}}&\text{otherwise}\end{array}\right., \tag{9}\]
where \(r_{i,\exp}\) and \(\Delta r_{i,\exp}\) are the \(i\)th isomeric ratio in our compilation and its uncertainty, and \(r_{i,\text{cal}}\) is the corresponding isomeric ratio predicted by TALYS. Figure 3 shows the \(R\) dependence of the \(F\)-value. This figure suggests that the best fit is obtained when the spin cut-off parameter is reduced to \(\sim\)40% (\(\sim\)50%) of its default value \(R=1\) when using the "spincuttmodel 1" ("spincuttmodel 2") setting. For more sophisticated evaluation of isomeric ratios, \(R\) must be adjusted for each nuclide individually. For example, Sudar et al. [89] report that the \(\eta\) value shows a strong mass dependence when it is adjusted for each nuclide separately.
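The \(F\)-value of Eqs. (8)-(9) is straightforward to evaluate numerically; the sketch below (with made-up input arrays) clips the experimental ratio to the nearer edge of its error bar, so that points whose prediction lies inside the error bar contribute no deviation.

```python
import numpy as np

def f_value(r_exp, dr_exp, r_cal):
    """F of Eqs. (8)-(9) for arrays of experimental and calculated ratios."""
    r_exp, dr_exp, r_cal = map(np.asarray, (r_exp, dr_exp, r_cal))
    # r'_exp: the violated error-bar edge, or r_cal itself when inside the bar
    r_adj = np.clip(r_cal, r_exp - dr_exp, r_exp + dr_exp)
    log_dev = np.log(r_cal / r_adj)
    return float(np.exp(np.sqrt(np.mean(log_dev ** 2))))

# toy example with three data points
print(f_value([0.5, 0.2, 0.8], [0.05, 0.02, 0.1], [0.60, 0.21, 0.75]))
```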
Figure 4 shows distribution of the \(F\)-value for prediction by TALYS-1.96 with default setting ("spincuttmodel 1" and \(R=1\)). Among four reactions getting high \(F\)-values, \({}^{197}\)Au(n,\(\gamma\))\({}^{198}\)Au and \({}^{nat}\)Pb(p,x)\({}^{198}\)Au could be difficult ones to get \(F\sim 1\) since \(\sigma_{m}/\sigma_{t}\) is very low (\(\sim 10^{-3}\) or lower) for the former reaction, and the \(F\)-value is based on only one experimental \(\sigma_{m}/\sigma_{t}\) value at very high energy (150 MeV) for the latter reaction. On the other hand, we observe systematic deviations of experimental \(\sigma_{m}/\sigma_{t}\) values from those predicted by TALYS-1.96 for \({}^{nat}\)Ir(\(\alpha\),x)\({}^{194}\)Ir and \({}^{197}\)Au(d,p)\({}^{198}\)Au, for which the model predictions could be improved.
## 5 Summary
We extracted the experimental isomer production cross sections and isomeric ratios from the EXFOR library and compiled the isomeric ratios in the form of \(\sigma_{m}/\sigma_{t}\). Various mistakes in the EXFOR library and original publications were fixed during compilation. Preliminary experimental results and experiments reporting unphysical \(\sigma_{m}/\sigma_{t}\) values were discarded during compilation. As an application of the newly created isomeric ratio table, we studied the spin cut-off parameter dependence of the goodness of fit for the isomeric ratios predicted by TALYS-1.96.
## Data availability
The table of the compiled isomer production cross sections and isomeric ratios in a plain text file is included in the supplemental material. It is also available upon request to the authors by email or post. Their graphical comparison with evaluated data libraries is under preparation for an IAEA report [93].
## Acknowledgments
The authors are grateful to Viktor Zerkin for his special arrangement of a X4Pro SQLite database file dedicated to the present work. We also wish to thank the anonymous reviewer for careful reading of the earlier version of the manuscript. Sandor Sudar helped us to understand his adjustment of \(\eta\) values with TALYS. Ferenc Ditroi, Gyorgy Gyurky, Alex Hermanne, Mayeen Uddin Khandaker, Ondrej Lebeda, Rolf Michel, Haladhara Naik, Syed M. Qaim, Yury Titarenko and Sung Chul Yang helped us to understand the experimental data published by them. We would like to thank Oscar Cabellos for encouraging us. Last but not least, we appreciate the managers, compilers and programmers of the Nuclear Reaction Data Centres (NRDC) for maintenance and development of the EXFOR library.
## Supplemental material
The compiled cross sections and isomeric ratios in a plain text file and plots of the experimental cross sections and isomeric ratios along with those predicted by TALYS-1.96 with default setting can be found online at [https://doi.org/10.1016/j.adt.2023.xxxxxx](https://doi.org/10.1016/j.adt.2023.xxxxxx).
|
2307.14755 | Boundedness through nonlocal dampening effects in a fully parabolic
chemotaxis model with sub and superquadratic growth | This work deals with a chemotaxis model where an external source involving a
sub and superquadratic growth effect contrasted by nonlocal dampening reaction
influences the motion of a cell density attracted by a chemical signal. We
study the mechanism of the two densities once their initial configurations are
fixed in bounded impenetrable regions; in the specific, we establish that no
gathering effect for the cells can appear in time provided that the dampening
effect is strong enough. | Yutaro Chiyo, Fatma Ga mze D Düzgün, Silvia Frassu, Giuseppe Viglialoro | 2023-07-27T10:26:54Z | http://arxiv.org/abs/2307.14755v1 | Boundedness through nonlocal dampening effects in a fully parabolic chemotaxis model with sub and superquadratic growth
###### Abstract.
This work deals with a chemotaxis model where an external source involving a sub and superquadratic growth effect contrasted by nonlocal dampening reaction influences the motion of a cell density attracted by a chemical signal. We study the mechanism of the two densities once their initial configurations are fixed in bounded impenetrable regions; in the specific, we establish that no gathering effect for the cells can appear in time provided that the dampening effect is strong enough.
Mathematically, we are concerned with this problem
(\(\diamond\)) \[\begin{cases}u_{t}=\Delta u-\chi\nabla\cdot(u\nabla v)+au^{\alpha}-bu^{\alpha }\int_{\Omega}u^{\beta}&\text{in }\Omega\times(0,T_{max}),\\ \tau v_{t}=\Delta v-v+u&\text{in }\Omega\times(0,T_{max}),\\ u_{\nu}=v_{\nu}=0&\text{on }\partial\Omega\times(0,T_{max}),\\ u(x,0)=u_{0}(x)\geq 0,v(x,0)=v_{0}(x)\geq 0,&x\in\bar{\Omega},\end{cases}\]
for \(\tau=1\), \(n\in\mathbb{N}\), \(\chi,a,b>0\) and \(\alpha,\beta\geq 1\). Herein \(u\) stands for the population density, \(v\) for the chemical signal and \(T_{max}\) for the maximal time of existence of any nonnegative classical solution \((u,v)\) to system \((\diamond)\). We prove that despite any large-mass initial data \(u_{0}\), whenever
* (the subquadratic case) \(1\leq\alpha<2\quad\text{and}\quad\beta>\frac{n+4}{2}-\alpha\),
* (the superquadratic case) \(\beta>\frac{n}{2}\quad\text{and}\quad 2\leq\alpha<1+\frac{2\beta}{n}\),
actually \(T_{max}=\infty\) and \(u\) and \(v\) are uniformly bounded.
This paper is in line with the result in [4], where the same conclusion is established for the simplified parabolic-elliptic version of model \((\diamond)\), corresponding to \(\tau=0\); more exactly, this work extends the study in [4] to the fully parabolic case.
Key words and phrases: Chemotaxis, Global existence, Nonlocal growth terms, Boundedness. 2020 Mathematics Subject Classification: Primary: 35A01, 35K55, 35Q92, 34B10. Secondary: 92C17. \({}^{*}\)_Corresponding author:_ [email protected]
###### Contents
* 1 Introduction and motivations
* 1.1 Basic description of the research
* 1.2 An overview on the Keller-Segel system
* 1.3 An overview on the Keller-Segel system with logistics
* 1.4 An overview on the Keller-Segel system with nonlocal sources
* 1.5 Connection with the Fisher-KPP equation
* 2 Presentation of the main result and organization of the paper
* 2.1 Claim of the main result
## 1. Introduction and motivations
### Basic description of the research
In this paper we consider
\[\begin{cases}u_{t}=\Delta u-\chi\nabla\cdot(u\nabla v)+au^{\alpha}-bu^{\alpha} \int_{\Omega}u^{\beta}&\text{in }\Omega\times(0,T_{max}),\\ v_{t}=\Delta v-v+u&\text{in }\Omega\times(0,T_{max}),\\ u_{\nu}=v_{\nu}=0&\text{on }\partial\Omega\times(0,T_{max}),\\ u(x,0)=u_{0}(x),v(x,0)=v_{0}(x)&x\in\bar{\Omega},\end{cases} \tag{1}\]
where \(\Omega\subset\mathbb{R}^{n}\) (\(n\in\mathbb{N}\)) is a bounded domain with smooth boundary \(\partial\Omega\) (briefly, "bounded and smooth domain"); additionally, we fix \(\chi,a,b>0\), \(\alpha,\beta\geq 1\) and sufficiently regular and nonnegative initial data \(u_{0}(x),v_{0}(x)\). On the other hand, the subscript \(\nu\) in \((\cdot)_{\nu}\) indicates the outward normal derivative on \(\partial\Omega\) and \(T_{max}\) is the maximal existence time up to which solutions to the system are defined.
If properly interpreted, this model idealizes a _chemotaxis_ phenomenon, a mechanism from mathematical biology describing the directed migration of a cell in response to a chemical signal; more exactly, the movement of an organism or entity (such as somatic cells, bacteria, and other single-cell or multicellular organisms) is strongly influenced by the presence of a stimulus, and precisely the motion follows the direction of the gradient of the stimulus itself.
It is well known that the landmark model of chemotaxis was first introduced by Keller and Segel in the 1970s ([16], [15]). More expressly, by indicating with \(u=u(x,t)\) a certain cell density at the position \(x\) and at the time \(t\), and with \(v=v(x,t)\) the stimulus at the same position and time, the pioneering study reads as (1) for the specific case \(a=b=0\). The partial differential equation modeling the motion of \(u\), i.e.
\[u_{t}=\Delta u-\chi\nabla\cdot(u\nabla v)\quad\text{in}\quad\Omega\times(0,T_{ max}), \tag{2}\]
essentially describes how a chemotactical impact of the (chemo)sensitivity (\(\chi\)) provided by the chemical signal \(v\) may break the natural diffusion (associated to the Laplacian operator, \(\Delta u\)) of the cells. Indeed, the term \(-\nabla\cdot(u\chi\nabla v)\) models the transport of \(u\) in the direction \(\chi\nabla v\), the negative sign indicating the attractive effect that \(v\) has on the cells (higher for \(\chi\) larger and for an increasing amount of \(v\)). As a consequence, when \(v\) is produced by the same cells, and in such a scenario \(v\) obeys
\[v_{t}=\Delta v-v+u\quad\text{in}\quad\Omega\times(0,T_{max}), \tag{3}\]
the attractive impact may be so efficient as to lead the cell density to its chemotactic collapse (blow-up at finite time with appearance of \(\delta\)-formations in the region).
### An overview on the Keller-Segel system
Mathematically, it was proved that solutions to the initial-boundary value problem associated to equations (2) and (3), may be globally bounded in time or may blow up at finite time; this depends on the mass (i.e., \(\int_{\Omega}u_{0}(x)dx\)) of the initial data, its specific configuration, and the value of the sensitivity \(\chi\). More precisely, in one-dimensional settings, all solutions are uniformly bounded in time, whereas for \(n\geq 3\) given any arbitrarily small mass \(m=\int_{\Omega}u_{0}(x)dx>0\), it is possible to construct solutions blowing-up at finite time. On the other hand, when \(n=2\), the value \(4\pi\) separates the case where diffusion overcomes self-attraction (if \(\chi m<4\pi\)) from the opposite scenario where self-attraction dominates (if \(\chi m>4\pi\)); respectively, all solutions are global in time, and initial data producing assembling processes at finite time can be detected. A detailed discussion on such analyses can be found in [11, 22, 21, 32], which are undoubtedly classical results in this context.
### An overview on the Keller-Segel system with logistics
If the evolution of \(u\) in equation (2) is also influenced by the presence of logistic terms behaving as \(au-bu^{\beta}\), for \(\beta>1\), mathematical intuition suggests that superlinear damping effects should benefit the boundedness of solutions (this, for instance, occurs for ordinary differential equations of the type \(u^{\prime}=au-bu^{\beta}\)). Actually, the prevention of \(\delta\)-formations in the sense of finite-time blow-up for
\[u_{t}=\Delta u-\chi\nabla\cdot(u\nabla v)+au-bu^{\beta}\quad\text{in}\quad \Omega\times(0,T_{max}), \tag{4}\]
when coupled with some equation implying the segregation of \(v\) with \(u\) (for instance (3)), has been established only for large values of \(b\) (if \(\beta=2\), see [31], [33]), whereas for some value of \(\beta\) near \(1\) a blow-up scenario was detected, first for dimension \(5\) or higher [34], (see also [9] for an improvement of [34]), but later also for \(n\geq 3\), in [35].
### An overview on the Keller-Segel system with nonlocal sources
As anticipated, in this research we are interested in understanding how the introduction of external growth factors of logistic type defined in terms of the total mass of the some power of the population, and hence idealized by nonlocal external sources, may avoid blow-up mechanisms, exactly as logistics. To be precise, likewise to classical logistic effects, impacts behaving as
\[au^{\alpha}-bu^{\alpha}\int_{\Omega}u^{\beta}\quad a,b>0\text{ and }\alpha,\beta\geq 1, \tag{5}\]
model a competition between a birth contribution, favoring instabilities of the species (especially for large values of \(a\)), and a death one opportunely contrasting this instability (especially for large values of \(b\)).
In this context, some questions naturally arise.
* Can one expect that in a biological mechanism governed by the equation (6) \[u_{t}=\Delta u-\chi\nabla\cdot(u\nabla v)+au^{\alpha}-bu^{\alpha}\int_{\Omega }u^{\beta}\quad\text{in}\quad\Omega\times(0,T_{max}),\] the external dampening source suffices to enforce boundedness of solutions, even for any large initial distribution \(u_{0}\), arbitrarily small \(b>0\) and in any large dimension \(n\)? Are, conversely, some restrictions on \(n\) and/or \(a,b\), \(\alpha,\beta,u_{0}\) required?
To our knowledge, most of the analyses connected to the aforementioned questions can be found in the literature when the equation for \(v\) expressed as (or similarly to) (6) is of elliptic type, i.e. for some \(\gamma\geq 1\)
\[0=-\Delta v+v+u^{\gamma}\quad\text{in}\quad\Omega\times(0,T_{max}).\]
As a matter of fact, when the equations for the cells and the stimulus are both evolutive, we are only aware of [24], where the authors consider, for \(\tau=1=m\), \(\sigma>2,\gamma\geq 1\) and \(h=h(x,t)\equiv 0\), the initial-boundary value problem associated to this model
\[\begin{cases}u_{t}=\nabla\cdot\left((u+1)^{m-1}\nabla u-\nabla\cdot(\chi u(u+ 1)^{\sigma-2}\nabla v\right)+f(u)&\text{in}\quad\Omega\times(0,T_{max}),\\ \tau v_{t}=\Delta v-v+u^{\gamma}+h&\text{in}\quad\Omega\times(0,T_{max}). \end{cases} \tag{7}\]
Herein, the nonlocal term is
\[f(u):=u\left(a_{0}-a_{1}u^{\alpha}+a_{2}\int_{\Omega}u^{\alpha}dx\right), \tag{8}\]
where \(\alpha\geq 1\), \(a_{0},a_{1}>0\) and \(a_{2}\in\mathbb{R}\); in particular, it is worthwhile mentioning that even though problem (1) is the limit case of (7) for \(m=1=\gamma\) and \(\sigma=2\) (and \(h=0\)), these models are not directly comparable. In fact, conversely to the mechanism we are dealing with (see again model (1)), in [24] the attractive drift-sensitivity is nonlinear (i.e., \(\sigma>2\) in \(-\chi u(u+1)^{\sigma-2}\nabla v\)) and, more importantly, the nonlocal term of the reaction in (8) has both an increasing (\(a_{2}>0\)) and decreasing (\(a_{2}<0\)) effect on the cell density, whereas the dampening counterpart is of polynomial type; this contrasts with (5), where the nonlocal term is purely absorbing and the local one productive.
For model (7) the global-in-time existence of classical solutions and the convergence to the steady state are established in the same [24], under suitable regularity assumptions on the initial data and whenever the coefficients of the system satisfy
\[\alpha+1>\sigma-1+\gamma\text{ and }a_{1}-a_{2}|\Omega|>0. \tag{9}\]
(Naturally \(a_{1}-a_{2}|\Omega|>0\) is automatically satisfied if \(a_{2}\leq 0\).) Additionally, the suppression of some of the conditions in (9) might provide (at least from the numerical point of view) some blow-up solution.
As we said above, when the equation for the chemical \(v\) is elliptic (biologically this idealizes the situations where chemicals diffuse much faster than cells), some more results are available in the literature. In particular, in [23] the authors analyze, inter alia, problem (7) in the framework of what follows: \(\tau=0,\sigma=2\), \(m=\gamma=\alpha=1\) and \(h=h(x,t)\) is a uniformly bounded function with suitable properties. Similar conclusions as those of the fully parabolic case are derived.
On the other hand, when the reaction term is taken exactly as in (5), these further results dealing with uniform-in-time boundedness of classical solutions emanating from sufficiently regular initial data have been obtained for problem (7), with \(\tau=0\) and \(h\equiv 0\):
* for the special case where \(m=\gamma=a=b=1\) and \(\sigma=2\) in [4], whenever these assumptions (with \(\alpha\geq 1,\beta>1\)) are satisfied: \(n\geq 3\), and \(2\leq\alpha<1+\frac{2\beta}{n}\) or \(\frac{n+4}{2}-\beta<\alpha<2\);
* in [19] for the case \(m=a=b=1\), \(\gamma\geq 1\), \(\sigma>2\) tied by \(\gamma+\sigma-1\leq\alpha<1+\frac{2\beta}{n}\) or \(\frac{n+4}{2}-\beta<\alpha<\gamma+\sigma-1\);
* for general choices of the parameters \(m>0,\sigma\geq 1,a=b>0\), for \(\gamma=1\), under the hypotheses that \(\sigma+\frac{n}{2}(\sigma-m)-\beta<\alpha<m+\frac{2}{n}\beta\) or \(\alpha=\sigma+\frac{n}{2}(\sigma-m)-\beta\) together with \(b\) large enough (see [29]).
For completeness, we add that another indication showing how rich is effectively the study in the framework of models with stationary equations for the stimulus, is given in these papers [3, 7, 6, 20], where nonlocal problems alike those in (7) are studied in the whole space \(\mathbb{R}^{n}\). (In this context, the equation for \(v\) is the classical Poisson's equation.)
### Connection with the Fisher-KPP equation
In mathematics
\[u_{t}-\Delta u=F(u), \tag{10}\]
is known (in its original one spatial dimensional version) as the Fisher-KPP equation, and it describes a reaction-diffusion phenomenon used to model population growth and wave propagation. (See [8, 17].) In its more common form \(F\), interpretable according to what said above as the rate of growth/death of the population, has this expression \((a,b\geq 0)\):
\[F(u)=au^{\alpha}(1-u)-bu.\]
Apart from the law of the corresponding sources, it appears interesting to discuss the parallelism between equations (10) and (4): essentially, in the latter the extra transport effect \(-\nabla\cdot(u\chi\nabla v)\) appears. In the specific, for \(\chi=0\) no convection on the particle density \(u\) influences the mechanism, and pure Reaction/\(F(u)\)-Diffusion/\(\Delta u\) models (RDm) are obtained (see (10)). Oppositely, for \(\chi>0\) the population is transported in the habitat toward the direction of \(\nabla v\); in this case, equation (4) is an example of \(\text{Taxis}/\nabla\cdot(u\chi\nabla v)\)-Diffusion-Reaction models (TDRm). As a consequence, and at least intuitively, the sources being equal, TDRm are more inclined to present some instabilities with respect to RDm.
Confining our attention to reactions \(F(u)\) of nonlocal type, for a general study on initial-boundary value problems (the majority of them with a homogeneous Dirichlet boundary condition, i.e. \(u=0\) on \(\partial\Omega\)) associated to (10), we refer to [26, 28] and references therein. Conversely, for results on more similar contexts to that considered in our analysis, we mention [2], where the authors study, among other things, globality and long-time behavior of solutions to a zero-flux nonlocal Fisher-KPP type problem.
## 2. Presentation of the main result and organization of the paper
### Claim of the main result
In this research we intend to improve the degree of knowledge on chemotactic models described by two coupled partial differential equations, and with non-local logistic sources, when both are of parabolic-type. In particular, our overall analysis gives an answer to questions \(\mathcal{Q}\), in the sense that we establish that _despite any fixed small value of the dampening parameter \(b\) and arbitrarily large growth parameter, any initial data \((u_{0},v_{0})\) (even arbitrarily large) produce uniform-in-time boundedness of solutions to model (1) for both subquadratic and superquadratic growth rate \(\alpha\), by properly magnifying the impact associated to the death rate \(\beta\)_.
Formally, we will prove the following
**Theorem 2.1**.: _Let \(\Omega\subset\mathbb{R}^{n}\), \(n\in\mathbb{N}\), be a bounded domain with smooth boundary, \(\chi,a,b>0\) and \(\alpha,\beta\geq 1\). Additionally, for every \(1<q<\infty\), let \(0\leq u_{0},v_{0}\in W^{2,q}(\Omega)\) be given such that \(\partial_{\nu}u_{0}=\partial_{\nu}v_{0}=0\) on \(\partial\Omega\). Then, whenever either_
\[\text{subquadratic growth rate:}\quad 1\leq\alpha<2\quad\text{and}\quad \beta>\frac{n+4}{2}-\alpha,\]
_or_
\[\text{superquadratic growth rate:}\quad\beta>\frac{n}{2}\quad\text{and} \quad 2\leq\alpha<1+\frac{2\beta}{n},\]
_problem (1) admits a unique classical solution, global and uniformly bounded in time, in the sense that_
\[\begin{cases}u\in C^{2,1}(\bar{\Omega}\times(0,\infty))\cap C^{0}(\bar{\Omega }\times[0,\infty))\cap L^{\infty}(\bar{\Omega}\times(0,\infty)),\\ v\in C^{2,1}(\bar{\Omega}\times(0,\infty))\cap C^{0}(\bar{\Omega}\times[0, \infty))\cap L^{\infty}_{loc}([0,\infty);W^{1,q}(\Omega))\cap L^{\infty}( \bar{\Omega}\times(0,\infty)).\end{cases}\]
### Structure of the paper
The rest of the paper is structured as follows. First, in §3, we collect some necessary and preparatory materials. Then, in §4, we give some hints on the local well-posedness of model (1), so obtaining properties of related local solutions \((u,v)\) on \(\Omega\times(0,T_{max})\); additionally, through the _extensibility criterion_ we establish how to ensure globality (i.e., \(T_{max}=\infty\)) and boundedness (i.e., \(\|u(\cdot,t)\|_{L^{\infty}(\Omega)}\) finite on \((0,\infty)\)) by using their uniform-in-time \(L^{k}(\Omega)\)-boundedness, for \(k>1\). Such a bound is derived in §5, and successively used in §6 to prove Theorem 2.1.
**Remark 1** (On the difficulties of the fully parabolic analysis).: _As we will see below, conversely to the parabolic-elliptic case analyzed in [4, (2.21)], in the fully parabolic case it is no longer possible to use the equation for \(v\), so replacing \(\Delta v\) appearing in the testing procedures with \(v-u\). This complexity is circumvented by relying on Maximal Sobolev Regularity applied to the equation \(v_{t}=\Delta v-v+u\)._
## 3. Some preliminaries and auxiliary tools
We will make use of this functional relation, obtainable by manipulating the well known Gagliardo-Nirenberg inequality. We underline that for the case \(\Omega=\mathbb{R}^{n}\) the proof is given in [1, Lemma 2]; we did not find a reference covering bounded domains and henceforth herein we dedicate ourselves to this issue.
**Lemma 3.1**.: _Let \(\Omega\) be a bounded and smooth domain of \(\mathbb{R}^{n}\), with \(n\in\mathbb{N}\) and let, for \(n\geq 3\),_
\[p:=\frac{2n}{n-2}. \tag{11}\]
_Additionally, let \(q,r\) satisfy \(1\leq r<q<p\) and \(\frac{q}{r}<\frac{2}{r}+1-\frac{2}{p}\). Then for all \(\epsilon_{1},\epsilon_{2}>0\) there exists \(C_{0}=C_{0}(\epsilon_{1},\epsilon_{2})>0\) such that for all \(\varphi\in H^{1}(\Omega)\cap L^{r}(\Omega)\),_
\[\|\varphi\|_{L^{q}(\Omega)}^{q}\leq C_{0}\|\varphi\|_{L^{r}(\Omega)}^{\gamma} +\epsilon_{1}\|\nabla\varphi\|_{L^{2}(\Omega)}^{2}+\epsilon_{2}\|\varphi\|_{L^ {2}(\Omega)}^{2}, \tag{12}\]
_where_
\[\lambda:=\frac{\frac{1}{r}-\frac{1}{q}}{\frac{1}{r}-\frac{1}{p}}\in(0,1), \quad\gamma:=\frac{2(1-\lambda)q}{2-\lambda q}.\]
_The same conclusion holds for \(n\in\{1,2\}\) whenever \(q,r\) fulfill, respectively, \(1\leq r<q\) and \(\frac{q}{r}<\frac{2}{r}+2\) and \(1\leq r<q\) and \(\frac{q}{r}<\frac{2}{r}+1\)._
Proof.: Let \(n\geq 3\). From the Gagliardo-Nirenberg inequality ([25, page 126]) and this algebraic one
\[(A+B)^{l}\leq 2^{l-1}(A^{l}+B^{l})\quad\text{for all}\quad A,B\geq 0\;\;\text{ and}\;\;l\geq 1, \tag{13}\]
for any \(q,r>1\) and \(s>0\) there is some positive \(C_{GN}\) such that
\[\|\varphi\|_{L^{q}(\Omega)}^{q}\leq C_{GN}\|\nabla\varphi\|_{L^{2}(\Omega)}^{ \lambda q}\|\varphi\|_{L^{r}(\Omega)}^{(1-\lambda)q}+C_{GN}\|\varphi\|_{L^{s} (\Omega)}^{q}, \tag{14}\]
with (recall (11))
\[\lambda=\frac{\frac{1}{r}-\frac{1}{q}}{\frac{1}{r}-\frac{1}{2}+\frac{1}{n}}= \frac{\frac{1}{r}-\frac{1}{q}}{\frac{1}{r}-\frac{1}{p}}\in(0,1)\quad\text{for all}\quad 1\leq r<q<p. \tag{15}\]
Now, from the relation \(\frac{q}{r}<\frac{2}{r}+1-\frac{2}{p}\) we have \(\frac{\lambda q}{2}<1\), so that the Young inequality applied in (14) infers for every \(\epsilon_{1}>0\) some \(C_{1}=C_{1}(C_{GN},\epsilon_{1})>0\) such that
\[\|\varphi\|_{L^{q}(\Omega)}^{q}\leq\epsilon_{1}\|\nabla\varphi\|_{L^{2}(\Omega) }^{2}+C_{1}\|\varphi\|_{L^{r}(\Omega)}^{\gamma}+C_{GN}\|\varphi\|_{L^{s}(\Omega )}^{q}, \tag{16}\]
where
\[\gamma=\frac{2(1-\lambda)q}{2-\lambda q}. \tag{17}\]
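For the reader's convenience, we note that the condition \(\frac{\lambda q}{2}<1\) invoked in the Young step above is precisely equivalent to the assumption on \(q\) and \(r\): since \(\frac{1}{r}-\frac{1}{p}>0\) for \(r<p\),
\[\frac{\lambda q}{2}<1\iff q\left(\frac{1}{r}-\frac{1}{q}\right)<2\left(\frac{1}{r}-\frac{1}{p}\right)\iff\frac{q}{r}-1<\frac{2}{r}-\frac{2}{p}\iff\frac{q}{r}<\frac{2}{r}+1-\frac{2}{p}.\]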
On the other hand, for any \(q,p>1\), let \(s=\frac{2pq}{3p-2}>0\). Subsequently, the Holder inequality provides (note that \(\frac{2q}{s}=\frac{3p-2}{p}>1\))
\[C_{GN}\|\varphi\|_{L^{s}(\Omega)}^{q}=C_{GN}\left(\int_{\Omega}\varphi^{\frac{s}{q}}\varphi^{s-\frac{s}{q}}\right)^{\frac{q}{s}}\leq C_{GN}\left(\int_{\Omega}\varphi^{2}\right)^{\frac{1}{2}}\left(\int_{\Omega}\varphi^{\frac{2s(q-1)}{2q-s}}\right)^{\frac{1}{2}\left(\frac{2q}{s}-1\right)},\]
and, in turn, Young's inequality gives for any \(\epsilon_{2}>0\), some \(C_{2}=C_{2}(C_{GN},\epsilon_{2})>0\)
\[C_{GN}\|\varphi\|_{L^{s}(\Omega)}^{q}\leq\epsilon_{2}\int_{\Omega}\varphi^{2} +C_{2}\left(\int_{\Omega}\varphi^{\frac{2s(q-1)}{2q-s}}\right)^{\frac{2q}{s}- 1}. \tag{18}\]
The conclusion goes through standard but tedious computations; specifically, by inserting relation (18) into estimate (16) and by establishing that for \(s\) as above, and \(\lambda\) and \(\gamma\) as in (15) and (17) respectively, \(\frac{2s(q-1)}{2q-s}=r\) and \(\frac{2q}{s}-1=\frac{\gamma}{r}\), the proof is given with \(C_{0}=C_{1}+C_{2}\).
For \(n\in\{1,2\}\), the same arguments apply by taking respectively \(s=\frac{q}{2}\) and \(s=\frac{2q}{3}\).
In the spirit of [14, 13, 27], let us recall the following consequence of Maximal Sobolev Regularity results (like [12] or [10, Thm. 2.3]):
**Lemma 3.2**.: _Let \(n\in\mathbb{N}\), \(\Omega\subset\mathbb{R}^{n}\) be a bounded and smooth domain and \(q\in(1,\infty)\). Then there is \(C_{MR}>0\) such that the following holds: Whenever \(T\in(0,\infty]\), \(I=[0,T)\), \(f\in L^{q}(I;L^{q}(\Omega))\) and \(v_{0}\in W^{2,q}(\Omega)\) is such that \(\partial_{\nu}v_{0}=0\) on \(\partial\Omega\), every solution \(v\in W^{1,q}_{loc}(I;L^{q}(\Omega))\cap L^{q}_{loc}(I;W^{2,q}(\Omega))\) of_
\[v_{t}=\Delta v-v+f\quad\text{in}\quad\Omega\times(0,T);\quad\partial_{\nu}v=0 \quad\text{on}\quad\partial\Omega\times(0,T);\quad v(\cdot,0)=v_{0}\quad \text{on}\quad\Omega\]
_satisfies_
\[\int_{0}^{t}e^{s}\left(\int_{\Omega}|\Delta v(\cdot,s)|^{q}\right)ds\leq C_{ MR}\left[1+\int_{0}^{t}e^{s}\left(\int_{\Omega}|f(\cdot,s)|^{q}\right)ds \right]\quad\text{for }0<t<T.\]
Proof.: For \((x,t)\in\Omega\times(0,T)\), let us set \(z(x,t):=e^{\frac{t}{q}}v(x,t)\). Then easy computations establish that \(z\) solves
\[\begin{cases}z_{t}=\Delta z-\left(1-\frac{1}{q}\right)z+e^{\frac{t}{q}}f&\text{in }\Omega\times(0,T),\\ \partial_{\nu}z=0&\text{on }\partial\Omega\times(0,T),\\ z(x,0)=v_{0}(x)&x\in\Omega.\end{cases}\]
Subsequently, let us apply Maximal Sobolev Regularity ([12, (3.8)], [10, Thm. 2.3]) to \(A=\Delta-(1-\frac{1}{q})\), \(X=L^{q}(\Omega)\) and \(X_{1}=D(A)=W^{2,q}_{\partial_{\nu}}(\Omega)=\{w\in W^{2,q}(\Omega):\partial_{ \nu}w=0\text{ on }\partial\Omega\}\), which asserts that with some \(c_{1}>0\) we have for every \(t\in(0,T)\) that
\[\|\Delta z\|_{L^{q}([0,t];L^{q}(\Omega))}+\|z_{t}\|_{L^{q}([0,t];L^{q}(\Omega))} \leq c_{1}\left(\|v_{0}\|_{1-\frac{1}{q},q}+\Big{(}\int_{0}^{t}\|e^{\frac{s}{q}} f(\cdot,s)\|_{L^{q}(\Omega)}^{q}\,ds\Big{)}^{\frac{1}{q}}\right),\]
where \(\|\cdot\|_{1-\frac{1}{q},q}\) represents the norm in the interpolation space \((X,X_{1})_{1-\frac{1}{q},q}\). In turn, we have by using (13) that for \(C_{MR}=\Big{(}c_{1}\max\left\{1,\|v_{0}\|_{1-\frac{1}{q},q}\right\}\Big{)}^{q} \,2^{q-1}\)
\[\int_{0}^{t}\Big{(}\int_{\Omega}|\Delta z(\cdot,s)|^{q}\Big{)}\,ds\leq C_{MR} \left[1+\int_{0}^{t}e^{s}\left(\int_{\Omega}|f(\cdot,s)|^{q}\right)ds\right] \quad\text{for all }t\in(0,T). \tag{19}\]
We can finally obtain the claim by re-substituting \(z(\cdot,t):=e^{\frac{t}{q}}v(\cdot,t)\) into relation (19).
We will also need this comparison argument for Ordinary Differential Equations.
**Lemma 3.3**.: _Let \(T>0\) and \(\phi:(0,T)\times\mathbb{R}_{0}^{+}\to\mathbb{R}\). If \(0\leq y\in C^{0}([0,T))\cap C^{1}((0,T))\) is such that_
\[y^{\prime}\leq\phi(t,y)\quad\text{for all }t\in(0,T),\]
_and there is \(y_{1}>0\) with the property that whenever \(y>y_{1}\) for some \(t\in(0,T)\) one has that \(\phi(t,y)\leq 0\), then_
\[y\leq\max\{y_{1},y(0)\}\quad\text{on }(0,T).\]
Proof.: Setting \(y_{0}=y(0)\), let us distinguish the cases \(y_{0}<y_{1}\) and \(y_{0}\geq y_{1}\) and let us show that, respectively, the sets
\[S_{y_{1}}:=\{t\in(0,T)\mid y(t)>y_{1}\}\quad\text{and}\quad S_{y_{0}}:=\{t\in( 0,T)\mid y(t)>y_{0}\}\]
are empty. In particular, we will establish only that \(S_{y_{1}}=\emptyset\), the reasoning for \(S_{y_{0}}\) being similar.
By contradiction, if there were some \(t_{0}\in S_{y_{1}}\) then by the continuity of \(y\) and \(y_{0}<y_{1}\) we could find \(I=(\underline{t},\bar{t})\) (with possibly \(t_{0}=\bar{t}\)) such that \(y_{1}<y(\underline{t})<y(\bar{t})\), \(y_{1}<y(t)\) on \(I\); henceforth, by hypothesis, \(\phi(t,y)\leq 0\) for all \(t\in I\). At this stage, the Lagrange theorem would provide a proper \(\xi\in I\) leading to this inconsistency:
\[0<\frac{y(\bar{t})-y(\underline{t})}{\bar{t}-\underline{t}}=y^{\prime}(\xi) \leq\phi(\xi,y)\leq 0.\]
## 4. Local solutions and their main properties. A boundedness criterion
**Lemma 4.1** (Local existence and extensibility criterion).: _Let \(n\in\mathbb{N}\), \(\Omega\subset\mathbb{R}^{n}\) be a bounded and smooth domain, \(\chi,a,b>0\) and \(\alpha,\beta\geq 1\). Moreover, for every \(1<q<\infty\), let \(u_{0},v_{0}\in W^{2,q}(\Omega)\) satisfy_
\[\partial_{\nu}u_{0}=\partial_{\nu}v_{0}=0\text{ on }\partial\Omega,\text{ and }u_{0},v_{0}\geq 0 \text{ on }\bar{\Omega}.\]
_Then problem (1) has a unique and nonnegative classical solution_
\[\begin{cases}u\in C^{2,1}(\bar{\Omega}\times(0,T_{max}))\cap C^{0}(\bar{ \Omega}\times[0,T_{max})),\\ v\in C^{2,1}(\bar{\Omega}\times(0,T_{max}))\cap C^{0}(\bar{\Omega}\times[0,T_{ max}))\cap L^{\infty}_{loc}([0,T_{max});W^{1,q}(\Omega)),\end{cases}\]
_for some maximal \(T_{max}\in(0,\infty]\) which is such that_
\[\text{either }T_{max}=\infty\quad\text{or}\quad\limsup_{t\to T_{max}}\lVert u (\cdot,t)\rVert_{L^{\infty}(\Omega)}=\infty. \tag{20}\]
_Additionally, there exists \(m_{0}>0\) such that_
\[\int_{\Omega}u(x,t)\,dx\leq m_{0}\quad\text{for all }t\in(0,T_{max}). \tag{21}\]
Proof.: The first part of the proof can be obtained by adapting to the fully parabolic case the reasoning in [4, Proposition 4] developed for the simplified parabolic-elliptic scenario.
As to the boundedness of the mass, we integrate over \(\Omega\) the first equation of problem (1) so that by Holder's inequality, and \(\gamma(t):=\int_{\Omega}u^{\alpha}\geq 0\) on \((0,T_{max})\),
\[y^{\prime}(t):=\frac{d}{dt}\int_{\Omega}u=\int_{\Omega}u^{\alpha}\left(a-b \int_{\Omega}u^{\beta}\right)\leq\gamma(t)\left(a-b|\Omega|^{1-\beta}(y(t))^{ \beta}\right)\quad\text{for all }t\in(0,T_{max}).\]
Now we apply Lemma 3.3 with \(T=T_{max}\), \(\phi(t,y)=\gamma(t)\left(a-b|\Omega|^{1-\beta}(y(t))^{\beta}\right)\), \(y_{0}=y(0)=\int_{\Omega}u_{0}\) and \(y_{1}:=\left(\frac{a}{b|\Omega|^{1-\beta}}\right)^{\frac{1}{\beta}}\), so concluding with \(m_{0}=\max\{y_{0},y_{1}\}\).
Once the classical local well posedness to model (1) provided by Lemma 4.1 is ensured (in particular from now on with \((u,v)\) we refer to the local solution defined on \(\Omega\times(0,T_{max})\)), a suitable uniform-in-time boundedness criterion is required. In the specific, the next result based on an iterative method connected to the Moser-Alikakos technique addresses the issue.
**Lemma 4.2**.: _Whenever for every \(k>1\) there exists \(C>0\) such that_
\[\int_{\Omega}u^{k}\leq C\quad\text{for all }t\in(0,T_{max}),\]
_actually \(u\) is uniformly bounded, in the sense that \(u\in L^{\infty}((0,\infty);L^{\infty}(\Omega))\). Automatically, \(v\) is also uniformly bounded._
Proof.: From the first equation of problem (1) and the nonnegativity of \(u\), we have that \(u\) itself is such that \(u_{t}\leq\Delta u-\chi\nabla\cdot(u\nabla v)+au^{\alpha}\). In particular, \(u\) solves [30, (A.1)] with \(D(x,t,u)=1\), \(f(x,t)=-\chi u(x,t)\nabla v(x,t)\) and \(g(x,t)=au^{\alpha}(x,t)\). In these positions, since from our hypotheses \(u\in L^{\infty}((0,T_{max});L^{k}(\Omega))\) for all \(k>1\) (and in particular for \(k\) arbitrarily large), \(g\) belongs to \(L^{\infty}((0,T_{max});L^{k}(\Omega))\) and from parabolic regularity results ([18, IV. 5.3]) we have that also \(\nabla v\in L^{\infty}((0,T_{max});L^{k}(\Omega))\). As a by-product, \(f\in L^{\infty}((0,T_{max});L^{k}(\Omega))\) as well, and [30, Lemma A.1] ensures \(u\in L^{\infty}((0,T_{max});L^{\infty}(\Omega))\). Finally, the extensibility criterion (20) entails \(T_{max}=\infty\) and we conclude. (The boundedness of \(v\) follows from \(u\in L^{\infty}((0,\infty);L^{k}(\Omega))\) for arbitrarily large \(k>1\) and, again, parabolic regularity results and Sobolev embeddings.)
## 5. A priori estimates
Since the uniform-in-time boundedness of \(u\) is implied whenever \(u\in L^{\infty}((0,T_{max});L^{k}(\Omega))\) for some \(k>1\), hereunder we dedicate ourselves to the derivation of some _a priori_ integral estimates.
(_In the sequel we will tacitly assume that all the constants \(c_{i}\) appearing below, \(i=1,2,\ldots\) are positive._)
**Lemma 5.1**.: _For all \(k>1\), \(\chi>0\) there exist \(c_{1},c_{2}\) such that whenever \(\alpha>1\)_
\[(k-1)\chi\int_{\Omega}u^{k}\Delta v\leq\int_{\Omega}u^{k+\alpha-1}+c_{1}\int_{ \Omega}\left|\Delta v\right|^{\frac{k+\alpha-1}{\alpha-1}}\quad\text{for all }t\in(0,T_{max}), \tag{22}\]
_while if \(\alpha\geq 1\),_
\[(k-1)\chi\int_{\Omega}u^{k}\Delta v\leq\int_{\Omega}u^{k+1}+c_{2}\int_{ \Omega}\left|\Delta v\right|^{k+1}\quad\text{for all }t\in(0,T_{max}). \tag{23}\]
Proof.: The Young inequality directly provides the claim.
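In more detail, applying the \(\varepsilon\)-Young inequality pointwise with conjugate exponents \(\frac{k+\alpha-1}{k}\) and \(\frac{k+\alpha-1}{\alpha-1}\) (admissible since \(\alpha>1\)) entails

\[(k-1)\chi\,u^{k}\Delta v\leq u^{k+\alpha-1}+c_{1}|\Delta v|^{\frac{k+\alpha-1}{\alpha-1}}\quad\text{on }\Omega\times(0,T_{max}),\]

with \(c_{1}=c_{1}(k,\alpha,\chi)\), so that an integration over \(\Omega\) gives (22); estimate (23) is obtained in the same way with the conjugate pair \(\frac{k+1}{k}\) and \(k+1\).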
Let us now distinguish the analysis of the subquadratic case from the superquadratic one, exactly starting from this last situation.
### The superquadratic growth: \(\beta>\frac{n}{2}\) and \(2\leq\alpha<1+\frac{2\beta}{n}\)
**Lemma 5.2**.: _Assume that \(\alpha,\beta\geq 1\) satisfy that_
\[\beta>\frac{n}{2}\quad\text{and}\quad 2\leq\alpha<1+\frac{2\beta}{n}. \tag{24}\]
_Then there exist \(k_{0}\geq 1,L_{0}>0\) such that for all \(k>k_{0}\),_
\[\int_{\Omega}u^{k}\leq L_{0}\quad\text{for all }t\in(0,T_{max}).\]
Proof.: Let us start by fixing \(k_{0}=1\); when necessary we will enlarge this initial value. For all \(k>k_{0}\), we have from the first equation in (1) and integration by parts that
\[\begin{split}\frac{d}{dt}\int_{\Omega}u^{k}&=k\int_ {\Omega}u^{k-1}\Delta u-k\chi\int_{\Omega}u^{k-1}\nabla\cdot(u\nabla v)+ka\int _{\Omega}u^{k+\alpha-1}\\ &\quad-kb\left(\int_{\Omega}u^{k+\alpha-1}\right)\left(\int_{ \Omega}u^{\beta}\right)\\ &=-k(k-1)\int_{\Omega}u^{k-2}|\nabla u|^{2}+k(k-1)\chi\int_{ \Omega}u^{k-1}\nabla u\cdot\nabla v+ka\int_{\Omega}u^{k+\alpha-1}\\ &\quad-kb\left(\int_{\Omega}u^{k+\alpha-1}\right)\left(\int_{ \Omega}u^{\beta}\right)\\ &=-\frac{4(k-1)}{k}\int_{\Omega}|\nabla u^{\frac{k}{2}}|^{2}-(k-1 )\chi\int_{\Omega}u^{k}\Delta v+ka\int_{\Omega}u^{k+\alpha-1}\\ &\quad-kb\left(\int_{\Omega}u^{k+\alpha-1}\right)\left(\int_{ \Omega}u^{\beta}\right)\quad\text{on }(0,T_{max}).\end{split} \tag{25}\]
Here, from bound (22) in Lemma 5.1 we have that
\[-(k-1)\chi\int_{\Omega}u^{k}\Delta v\leq\int_{\Omega}u^{k+\alpha-1}+c_{1}\int _{\Omega}|\Delta v|^{\frac{k+\alpha-1}{\alpha-1}}\quad\text{for all }t\in(0,T_{max}). \tag{26}\]
A combination of relations (25) and (26) implies that for all \(t\in(0,T_{max})\)
\[\frac{d}{dt}\int_{\Omega}u^{k}+kb\left(\int_{\Omega}u^{k+\alpha-1}\right)\left( \int_{\Omega}u^{\beta}\right)\leq-\frac{4(k-1)}{k}\int_{\Omega}|\nabla u^{ \frac{k}{2}}|^{2}+c_{3}\int_{\Omega}u^{k+\alpha-1}+c_{1}\int_{\Omega}|\Delta v |^{\frac{k+\alpha-1}{\alpha-1}}. \tag{27}\]
We now estimate the second integral on the right-hand side of (27). From the identity \(\int_{\Omega}u^{k+\alpha-1}=\|u^{\frac{k}{2}}\|_{L^{\frac{2(k+\alpha-1)}{k}}(\Omega)}^{\frac{2(k+\alpha-1)}{k}}\), our aim is to exploit Lemma 3.1 with \(\varphi:=u^{\frac{k}{2}}\) and suitable \(q\) and \(r\). Specifically, for \(n\geq 3\) (the cases \(n=1\) and \(n=2\) are discussed at the end of this proof), and in order to make the forthcoming computations meaningful, let us take \(k_{0}=\max\{\beta-\alpha+1,1\}\). From the definition of \(k_{0}\) and condition (24), for any \(k>k_{0}\) it is possible to set
\[k^{\prime}:=\frac{k+\alpha+\beta-1}{2}, \tag{28}\]
which satisfies
\[\max\left\{\beta,\ \frac{k}{2},\ \frac{p(\alpha-1)}{p-2}\right\}<k^{\prime}<k+ \alpha-1. \tag{29}\]
In this way, for
\[q:=\frac{2(k+\alpha-1)}{k},\ r:=\frac{2k^{\prime}}{k}\]
a number of calculations yield \(1\leq r<q<p\) and \(\frac{q}{r}<\frac{2}{r}+1-\frac{2}{p}\). Therefore we infer from (12) that for all \(\bar{c}>0\)
\[\bar{c}\int_{\Omega}u^{k+\alpha-1}=\bar{c}\|u^{\frac{k}{2}}\|_{L^{\frac{2(k+ \alpha-1)}{k}}(\Omega)}^{\frac{2(k+\alpha-1)}{k}}\leq\frac{2(k-1)}{k}\int_{ \Omega}|\nabla u^{\frac{k}{2}}|^{2}+\int_{\Omega}u^{k}+c_{4}\left(\int_{ \Omega}u^{k^{\prime}}\right)^{\frac{\gamma}{r}}\quad\text{for all }t\in(0,T_{max}). \tag{30}\]
Here, the interpolation inequality (see [5, page 93]) yields for all \(t\in(0,T_{max})\),
\[\begin{split}\left(\int_{\Omega}u^{k^{\prime}}\right)^{\frac{ \gamma}{r}}=\|u\|_{L^{k^{\prime}}(\Omega)}^{b_{1}}&\leq\|u\|_{L^ {\beta}(\Omega)}^{a_{1}b_{1}}\|u\|_{L^{k+\alpha-1}(\Omega)}^{(1-a_{1})b_{1}}\\ &=\left(\|u\|_{L^{\beta}(\Omega)}^{\beta}\|u\|_{L^{k+\alpha-1}( \Omega)}^{k+\alpha-1}\right)^{\frac{(1-a_{1})b_{1}}{k+\alpha-1}}\|u\|_{L^{ \beta}(\Omega)}^{\left[a_{1}-\frac{\beta(1-a_{1})}{k+\alpha-1}\right]b_{1}}, \end{split} \tag{31}\]
where
\[b_{1}=b_{1}(q):=\frac{k^{\prime}\gamma(q)}{r}=\frac{k^{\prime}\gamma}{r},\quad a_{1}:=\frac{\frac{1}{k^{\prime}}-\frac{1}{k+\alpha-1}}{\frac{1}{\beta}-\frac{1}{k+\alpha-1}}\in(0,1). \tag{32}\]
We note that recalling the expression of \(k^{\prime}\) in (28) and the range of \(\alpha\) in (24), some computations provide
\[\left[a_{1}-\frac{\beta(1-a_{1})}{k+\alpha-1}\right]b_{1}=0\quad\text{and} \quad\frac{(1-a_{1})b_{1}}{k+\alpha-1}<1.\]
As a consequence, we can invoke Young's inequality so that relation (31) reads
\[c_{4}\left(\int_{\Omega}u^{k^{\prime}}\right)^{\frac{\gamma}{r}}\leq c_{4} \left(\|u\|_{L^{\beta}(\Omega)}^{\beta}\|u\|_{L^{k+\alpha-1}(\Omega)}^{k+ \alpha-1}\right)^{\frac{(1-a_{1})b_{1}}{k+\alpha-1}}\leq kb\left(\int_{\Omega} u^{k+\alpha-1}\right)\left(\int_{\Omega}u^{\beta}\right)+c_{5}\quad\text{for all }t\in(0,T_{max}),\]
which in conjunction with (30) implies for all \(t\in(0,T_{max})\),
\[c_{3}\int_{\Omega}u^{k+\alpha-1}\leq\frac{2(k-1)}{k}\int_{\Omega}|\nabla u^{ \frac{k}{2}}|^{2}+\int_{\Omega}u^{k}+kb\left(\int_{\Omega}u^{k+\alpha-1} \right)\left(\int_{\Omega}u^{\beta}\right)+c_{5}. \tag{33}\]
Now we focus on the second integral on the right-hand side: the Gagliardo-Nirenberg inequality and (21) produce for
\[\theta_{1}:=\frac{\frac{k}{2}-\frac{1}{2}}{\frac{k}{2}+\frac{1}{n}-\frac{1}{2} }\in(0,1),\]
this bound on \((0,T_{max})\):
\[\int_{\Omega}u^{k}=\|u^{\frac{k}{2}}\|_{L^{2}(\Omega)}^{2}\leq c_{6}\|\nabla u^ {\frac{k}{2}}\|_{L^{2}(\Omega)}^{2\theta_{1}}\|u^{\frac{k}{2}}\|_{L^{\frac{2}{ k}}(\Omega)}^{2(1-\theta_{1})}+c_{6}\|u^{\frac{k}{2}}\|_{L^{\frac{2}{k}}(\Omega)}^{2} \leq c_{7}\left(\int_{\Omega}|\nabla u^{\frac{k}{2}}|^{2}\right)^{\theta_{1}}+c_ {7}.\]
In turn, we have from the Young inequality that for all \(\hat{c}>0\)
\[\hat{c}\int_{\Omega}u^{k}\leq\frac{2(k-1)}{k}\int_{\Omega}|\nabla u^{\frac{k}{2}} |^{2}+c_{8}\quad\text{for all }t\in(0,T_{max}). \tag{34}\]
Coming back to (27), in order to estimate the term \(c_{1}\int_{\Omega}|\Delta v|^{\frac{k+\alpha-1}{\alpha-1}}\), let us exploit Lemma 3.2 with \(q=\frac{k+\alpha-1}{\alpha-1}\): we have
\[c_{1}\int_{0}^{t}e^{s}\left(\int_{\Omega}|\Delta v(\cdot,s)|^{\frac{k+\alpha-1 }{\alpha-1}}\right)ds\leq c_{1}C_{MR}\left[1+\int_{0}^{t}e^{s}\left(\int_{ \Omega}u(\cdot,s)^{\frac{k+\alpha-1}{\alpha-1}}\right)ds\right]\quad\text{for all }t\in(0,T_{max}). \tag{35}\]
Since from the condition \(\alpha\geq 2\) we have that \(\frac{k+\alpha-1}{\alpha-1}\leq k+\alpha-1\), the Young inequality leads to
\[c_{1}C_{MR}\int_{\Omega}u^{\frac{k+\alpha-1}{\alpha-1}}\leq c_{1}C_{MR}\int_{ \Omega}u^{k+\alpha-1}+c_{9}\quad\text{for all }t\in(0,T_{max}). \tag{36}\]
(Naturally for the limit case \(\alpha=2\), the constant \(c_{9}\) can be taken equal to \(0\).) We now add to both sides of (27) the term \(\int_{\Omega}u^{k}\) and then we multiply by \(e^{t}\). Since \(e^{t}\frac{d}{dt}\int_{\Omega}u^{k}+e^{t}\int_{\Omega}u^{k}=\frac{d}{dt}\left( e^{t}\int_{\Omega}u^{k}\right)\), an integration over \((0,t)\) provides for all \(t\in(0,T_{max})\)
\[\begin{split}& e^{t}\int_{\Omega}u^{k}-\int_{\Omega}u^{k}_{0}+kb \int_{0}^{t}e^{s}\left(\int_{\Omega}u^{k+\alpha-1}\right)\left(\int_{\Omega}u ^{\beta}\right)\,ds\leq-\frac{4(k-1)}{k}\int_{0}^{t}e^{s}\left(\int_{\Omega}| \nabla u^{\frac{k}{2}}|^{2}\right)\,ds\\ &+\int_{0}^{t}e^{s}\left(\int_{\Omega}u^{k}\right)\,ds+c_{3}\int _{0}^{t}e^{s}\left(\int_{\Omega}u^{k+\alpha-1}\right)\,ds+c_{1}\int_{0}^{t}e^{ s}\left(\int_{\Omega}|\Delta v|^{\frac{k+\alpha-1}{\alpha-1}}\right)\,ds.\end{split} \tag{37}\]
By inserting estimate (35) into (37) and taking into account bounds (36), (33) and (34), we arrive at
\[e^{t}\int_{\Omega}u^{k}\leq\int_{\Omega}u^{k}_{0}+c_{10}e^{t}+c_{11}\quad \text{on }(0,T_{max}),\]
which implies
\[\int_{\Omega}u^{k}\leq L_{0}\quad\text{for all }t\in(0,T_{max})\]
with \(L_{0}:=c_{12}+\int_{\Omega}u^{k}_{0}\), so the claim is proved.
For \(n\in\{1,2\}\) the arguments are similar once relation (29) is, respectively, replaced by
\[\max\left\{\beta,\frac{k}{2},\frac{\alpha-1}{2}\right\}<k^{\prime}<k+\alpha-1 \quad\text{and}\quad\max\left\{\beta,\frac{k}{2},\alpha-1\right\}<k^{\prime}< k+\alpha-1.\]
### The subquadratic growth: \(1\leq\alpha<2\) and \(\beta>\frac{n+4}{2}-\alpha\)
**Lemma 5.3**.: _Assume that \(\alpha,\beta\geq 1\) satisfy_
\[1\leq\alpha<2\quad\text{and}\quad\beta>\frac{n+4}{2}-\alpha. \tag{38}\]
_Then there exist \(k_{1}\geq 1,L_{1}>0\) such that for all \(k>k_{1}\),_
\[\int_{\Omega}u^{k}\leq L_{1}\quad\text{for all }t\in(0,T_{max}).\]
Proof.: Let us consider \(k_{1}=1\); as done before, we will enlarge this initial value when necessary. By following the same argument as in Lemma 5.2, for all \(k>k_{1}\) we arrive, for all \(t\in(0,T_{max})\), at
\[\frac{d}{dt}\int_{\Omega}u^{k}=-\frac{4(k-1)}{k}\int_{\Omega}|\nabla u^{\frac{ k}{2}}|^{2}-(k-1)\chi\int_{\Omega}u^{k}\Delta v+ka\int_{\Omega}u^{k+\alpha-1}- kb\left(\int_{\Omega}u^{k+\alpha-1}\right)\left(\int_{\Omega}u^{\beta}\right). \tag{39}\]
Since \(\alpha\geq 1\), an application of relation (23) of Lemma 5.1 to the second integral on the right-hand side of (39) gives
\[-(k-1)\chi\int_{\Omega}u^{k}\Delta v\leq\int_{\Omega}u^{k+1}+c_{2}\int_{\Omega }|\Delta v|^{k+1}\quad\text{for all }t\in(0,T_{max}), \tag{40}\]
whereas from the condition \(\alpha<2\), the Young inequality leads to
\[ka\int_{\Omega}u^{k+\alpha-1}\leq\int_{\Omega}u^{k+1}+c_{13}\quad\text{for all }t\in(0,T_{max}). \tag{41}\]
Combining estimates (40) and (41) with bound (39), we have for all \(t\in(0,T_{max})\),
\[\frac{d}{dt}\int_{\Omega}u^{k}+kb\left(\int_{\Omega}u^{k+\alpha-1}\right)\left( \int_{\Omega}u^{\beta}\right)\leq-\frac{4(k-1)}{k}\int_{\Omega}|\nabla u^{\frac {k}{2}}|^{2}+2\int_{\Omega}u^{k+1}+c_{2}\int_{\Omega}|\Delta v|^{k+1}+c_{13}. \tag{42}\]
Now let us focus on the second integral on the right-hand side of (42). Since \(\int_{\Omega}u^{k+1}=\|u^{\frac{k}{2}}\|_{L^{\frac{2(k+1)}{k}}(\Omega)}^{\frac{2(k+1)}{k}}\), we can apply Lemma 3.1 with \(\varphi:=u^{\frac{k}{2}}\) and suitable \(q\) and \(r\). Specifically, for any
\[k>k_{1}:=\max\left\{1,1-\alpha+\beta\right\},\]
by posing
\[k^{\prime}:=\frac{k+\alpha+\beta-1}{2},\]
it is possible to check that
\[\max\left\{\beta,\frac{k}{2},\frac{p}{p-2}\right\}<k^{\prime}<k+\alpha-1. \tag{43}\]
In this way, and for \(n\geq 3\), letting
\[q:=\frac{2(k+1)}{k},\ r:=\frac{2k^{\prime}}{k}\]
we can establish that \(1\leq r<q<p\) and \(\frac{q}{r}<\frac{2}{r}+1-\frac{2}{p}\). Consequently, we deduce from (12) that for all \(\tilde{c}>0\)
\[\tilde{c}\|u^{\frac{k}{2}}\|_{L^{\frac{2(k+1)}{k}}(\Omega)}^{\frac{2(k+1)}{k} }\leq\frac{2(k-1)}{k}\int_{\Omega}|\nabla u^{\frac{k}{2}}|^{2}+\int_{\Omega}u ^{k}+c_{14}\left(\int_{\Omega}u^{k^{\prime}}\right)^{\frac{\gamma}{r}}\quad \text{for all }t\in(0,T_{max}). \tag{44}\]
Now an application of the interpolation inequality yields for all \(t\in(0,T_{max})\),
\[\left(\int_{\Omega}u^{k^{\prime}}\right)^{\frac{\gamma}{r}}=\|u \|_{L^{\prime}(\Omega)}^{b_{2}} \leq\|u\|_{L^{p}(\Omega)}^{a_{2}b_{2}}\|u\|_{L^{k+\alpha-1}( \Omega)}^{(1-a_{2})b_{2}}\] \[=\left(\|u\|_{L^{\beta}(\Omega)}^{\beta}\|u\|_{L^{k+\alpha-1}( \Omega)}^{k+\alpha-1}\right)^{\frac{a_{2}b_{2}}{\beta}}\|u\|_{L^{k+\alpha-1}( \Omega)}^{\left[1-a_{2}-\frac{a_{2}(k+\alpha-1)}{\beta}\right]b_{2}},\]
where
\[b_{2}=b_{2}(q):=\frac{k^{\prime}\gamma(q)}{r}=\frac{k^{\prime}\gamma}{r},\quad a_{2}:=\frac{\frac{1}{k^{\prime}}-\frac{1}{k+\alpha-1}}{\frac{1}{\beta}-\frac{1}{k+\alpha-1}}\in(0,1).\]
(A comparison between the couple \((a_{2},b_{2})\) above and \((a_{1},b_{1})\) in (32) shows that \(a_{1}=a_{2}\), whereas \(b_{i}\), \(i=1,2\) depends on \(q\).) From straightforward calculations and the condition (38), we observe that
\[\left[1-a_{2}-\frac{a_{2}(k+\alpha-1)}{\beta}\right]b_{2}=0\quad\text{and} \quad\frac{a_{2}b_{2}}{\beta}<1.\]
Subsequently, we can exploit the Young inequality entailing
\[c_{14}\left(\int_{\Omega}u^{k^{\prime}}\right)^{\frac{\gamma}{r}}\leq c_{14} \left(\|u\|_{L^{\beta}(\Omega)}^{\beta}\|u\|_{L^{k+\alpha-1}(\Omega)}^{k+ \alpha-1}\right)^{\frac{a_{2}b_{2}}{\beta}}\leq kb\left(\int_{\Omega}u^{k+ \alpha-1}\right)\left(\int_{\Omega}u^{\beta}\right)+c_{15}\quad\text{on }\ (0,T_{max}).\]
This, in conjunction with (44), implies that for all \(t\in(0,T_{max})\),
\[\tilde{c}\int_{\Omega}u^{k+1}\leq\frac{2(k-1)}{k}\int_{\Omega}|\nabla u^{ \frac{k}{2}}|^{2}+\int_{\Omega}u^{k}+kb\left(\int_{\Omega}u^{k+\alpha-1} \right)\left(\int_{\Omega}u^{\beta}\right)+c_{15}. \tag{45}\]
As to the term \(\int_{\Omega}|\Delta v|^{k+1}\) in expression (42), by exploiting Lemma 3.2 with \(q=k+1\), we obtain
\[c_{2}\int_{0}^{t}e^{s}\left(\int_{\Omega}|\Delta v(\cdot,s)|^{k+1}\right)ds\leq c _{2}C_{MR}\left[1+\int_{0}^{t}e^{s}\left(\int_{\Omega}u(\cdot,s)^{k+1}\right) ds\right]\quad\text{for all }t\in(0,T_{max}). \tag{46}\]
On the other hand, by adding \(\int_{\Omega}u^{k}\) to both sides of estimate (42), multiplying the resulting expression by \(e^{t}\), and integrating over \((0,t)\), we obtain
\[\begin{split}& e^{t}\int_{\Omega}u^{k}-\int_{\Omega}u_{0}^{k}+kb\int_ {0}^{t}e^{s}\left(\int_{\Omega}u^{k+\alpha-1}\right)\left(\int_{\Omega}u^{ \beta}\right)\,ds\\ &\leq-\frac{4(k-1)}{k}\int_{0}^{t}e^{s}\left(\int_{\Omega}|\nabla u ^{\frac{k}{2}}|^{2}\right)\,ds+2\int_{0}^{t}e^{s}\left(\int_{\Omega}u^{k+1} \right)\,ds+\int_{0}^{t}e^{s}\left(\int_{\Omega}u^{k}\right)\,ds\\ &\quad+c_{2}\int_{0}^{t}e^{s}\left(\int_{\Omega}|\Delta v|^{k+1} \right)\,ds+c_{16}e^{t}\quad\text{for all }t\in(0,T_{max}).\end{split} \tag{47}\]
By rearranging bound (47) by virtue of estimates (46), (45) and (34), we obtain
\[e^{t}\int_{\Omega}u^{k}\leq\int_{\Omega}u_{0}^{k}+c_{17}e^{t}+c_{18}\quad \text{on}\,\,\,(0,T_{max}),\]
which gives
\[\int_{\Omega}u^{k}\leq L_{1}\quad\text{for all }t\in(0,T_{max})\]
with \(L_{1}:=c_{19}+\int_{\Omega}u_{0}^{k}\), so proving the claim.
To establish the claim for \(n\in\{1,2\}\), relation (43) has to be taken as
\[\max\left\{\beta,\frac{k}{2}\right\}<k^{\prime}<k+\alpha-1.\]
## 6. Proof of Theorem 2.1
We apply Lemma 5.2 together with Lemma 4.2 to give the proof in the superquadratic case, and Lemma 5.3 together with Lemma 4.2 in the subquadratic case.
_Acknowledgments._ SF and GV are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilita e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM), and are partially supported by the research project _Analysis of PDEs in connection with real phenomena_ (2021, Grant Number: F73C22001130007), funded by Fondazione di Sardegna. GV is also supported by MIUR (Italian Ministry of Education, University and Research) Prin 2022 _Nonlinear differential problems with applications to real phenomena_ (Grant Number: 2022ZXZTN2), and acknowledges financial support under the National Recovery and Resilience Plan (NRRP), Mission 4 Component 2 Investment 1.5 - Call for tender No.3277 published on December 30, 2021 by the Italian Ministry of University and Research (MUR) funded by the European Union - NextGenerationEU. Project Code ECS0000038 - Project Title eINS Ecosystem of Innovation for Next Generation Sardinia - CUP F53C22000430001- Grant Assignment Decree No. 1056 adopted on June 23, 2022 by the Italian Ministry of University and Research (MUR).
|
2301.01968 | Learning Neural Force Manifolds for Sim2Real Robotic Symmetrical Paper
Folding | Robotic manipulation of slender objects is challenging, especially when the
induced deformations are large and nonlinear. Traditionally, learning-based
control approaches, such as imitation learning, have been used to address
deformable material manipulation. These approaches lack generality and often
suffer critical failure from a simple switch of material, geometric, and/or
environmental (e.g., friction) properties. This article tackles a fundamental
but difficult deformable manipulation task: forming a predefined fold in paper
with only a single manipulator. A sim2real framework combining
physically-accurate simulation and machine learning is used to train a deep
neural network capable of predicting the external forces induced on the
manipulated paper given a grasp position. We frame the problem using scaling
analysis, resulting in a control framework robust against material and
geometric changes. Path planning is then carried out over the generated
``neural force manifold'' to produce robot manipulation trajectories optimized
to prevent sliding, with offline trajectory generation finishing 15$\times$
faster than previous physics-based folding methods. The inference speed of the
trained model enables the incorporation of real-time visual feedback to achieve
closed-loop model-predictive control. Real-world experiments demonstrate that
our framework can greatly improve robotic manipulation performance compared to
state-of-the-art folding strategies, even when manipulating paper objects of
various materials and shapes. | Andrew Choi, Dezhong Tong, Demetri Terzopoulos, Jungseock Joo, M. Khalid Jawed | 2023-01-05T09:09:01Z | http://arxiv.org/abs/2301.01968v4 | # Deep Learning of Force Manifolds from the Simulated Physics of Robotic Paper Folding
###### Abstract
Robotic manipulation of slender objects is challenging, especially when the induced deformations are large and nonlinear. Traditionally, learning-based control approaches, e.g., imitation learning, have been used to tackle deformable material manipulation. Such approaches lack generality and often suffer critical failure from a simple switch of material, geometric, and/or environmental (e.g., friction) properties. In this article, we address a fundamental but difficult step of robotic origami: forming a predefined fold in paper with only a single manipulator. A data-driven framework combining physically-accurate simulation and machine learning is used to train deep neural network models capable of predicting the external forces induced on the paper given a grasp position. We frame the problem using scaling analysis, resulting in a control framework robust against material and geometric changes. Path planning is carried out over the generated manifold to produce robot manipulation trajectories optimized to prevent sliding. Furthermore, the inference speed of the trained model enables the incorporation of real-time visual feedback to achieve closed-loop sensorimotor control. Real-world experiments demonstrate that our framework can greatly improve robotic manipulation performance compared against natural paper folding strategies, even when manipulating paper objects of various materials and shapes.
robotic manipulation, deformable material manipulation, deep neural networks, data-driven models, closed-loop sensorimotor control
## I Introduction
From shoelaces to clothes, we encounter flexible slender structures throughout our everyday lives. These structures are often characterized by their ability to undergo large deformations when subjected even to moderate forces, such as gravity. People possess an incredible innate understanding of the dynamics of such deformable objects; e.g., we can use gravity to perfectly manipulate a shirt over our heads. Instilling such intuition into robots remains an important research problem and has the potential to breed numerous applications with considerable economic and humanitarian potential. Some examples include preparing deformable products in the food industry [1, 2], assisting in the medical field [3, 4, 5], and providing caregiving assistance to elderly and disabled communities, including with respect to dressing [6, 7, 8, 9, 10] and feeding [11, 12]. However, the robotic manipulation of deformable objects is highly nontrivial as a robot must be able to take into account future deformations of the manipulated object to complete manipulation tasks successfully.
Prior research has focused primarily on manipulating either cloth [13, 14, 15, 16, 17, 18] or ropes [19, 20, 21, 22, 23, 24, 25] and as a result, the robotic manipulation of many other deformable objects still lacks robust solutions. In this article, we address a particularly difficult deformable manipulation task -- folding paper. Paper is similar to cloth but typically has a much larger bending stiffness and a slippery surface. Therefore, compared with folding garments and fabrics, more delicate and insightful manipulations are required for folding sheets of paper.
### _Our Approach_
We propose a framework that combines physically accurate simulation, scaling analysis, and machine learning to generate folding trajectories optimized to prevent sliding. With scaling analysis, we make the problem non-dimensional, resulting in both dimensionality reduction and generality. We then train neural networks, whose outputs are referred to as neural force manifolds (NFM), to continuously approximate a scaled force manifold sampled purely from simulation. Compared to numerical models that require the entire geometric configuration
Fig. 1: Half valley folding for A4 paper with (a) intuitive manipulation and (b) our designed optimal manipulation. An intuitive manipulation scheme such as tracing a semi-circle experiences significant sliding due to the bending stiffness of the paper, resulting in a poor fold. By contrast, our optimal manipulation approach achieves an excellent fold by taking into consideration the paper’s deformation and thus minimizing sliding.
of the paper, NFMs map the external forces of the paper given only the grasp position. Therefore, we can generate trajectories optimized to minimize forces (and thus minimize sliding) by applying path planning algorithms in near real-time. We show that our approach is capable of folding paper on extremely slick surfaces with little-to-no sliding (Fig. 1(b)).
Our main contributions are as follows: (1) we formulate a solution to the folding problem in a physically robust manner using scaling analysis, resulting in complete generality with respect to material, geometric, and environmental properties; (2) we train a neural network with non-dimensional simulation data forming a fast and accurate model that can generate a descriptive force manifold for trajectory optimization; (3) we utilize the high inference speed of our trained model with a perception system to construct a robust and efficient closed-loop sensorimotor control algorithm for the folding task, and finally (4) we demonstrate full sim2real realization through an extensive robotic case study featuring 210 experiments across paper sheets of various materials and shapes. While several previous works have trained their policies purely from simulation data [19, 26, 27, 28, 7], these works lacked real world validation. To our knowledge, our framework is the first to provide optimal folding trajectories with complete generality.
We release supplementary videos as well as all source code and CAD files as open source at [https://github.com/StructuresComp/deep-robotic-paper-folding](https://github.com/StructuresComp/deep-robotic-paper-folding).
### _Overview_
The remainder of the article is organized as follows: We begin with a review of related work in Sec. II. A brief description of the folding problem is presented in Sec. III. The formulation of a reduced-order physics-based model is discussed in Sec. IV, where we formulate the folding problem using scaling analysis. In Sec. V, we formulate our learning framework as well as algorithms for optimal path planning. Next, in Sec. VI, we introduce our robotic system as well as formulate our closed-loop visual feedback pipeline. Experimental results for a robot case study and analysis of the results are given in Sec. VII. Finally, we provide concluding remarks and discuss the potential of future research avenues in Sec. VIII.
## II Related Work
The majority of prior works tackling the folding problem can be roughly divided into four categories: mechanical design-based solutions, vision-based solutions, learning-based solutions, and model-based solutions.
Mechanical design-based approaches typically involve solving the folding problem through highly specialized manipulators or end effectors. Early approaches involve specialized punches and dies for sheet metal bending [29]. More recently, highly specialized manipulators for robotic origami folding have also been developed [30]. Such methods can reliably produce repeatable folding but are often limited to a highly specific fold, geometry, and/or material.
Vision-based approaches involve folding deformable materials by generating folding motions purely from visual input. These approaches are common for folding clothes [14, 16, 31], as cloth is extremely soft, which makes its deformation state easy to predict given a particular action. Such approaches can be effective and rather simple to implement, but do not transfer well to paper folding as paper possesses a much higher stiffness than fabric and will attempt to restore its natural, undeformed state if not properly handled.
Learning-based approaches involve the robot learning how to fold through training data. The most popular has been to learn control policies from human demonstrations, also known as learning from demonstrations (LfD). Prior research has demonstrated flattening and folding towels [32, 33]. Teleop demonstrations are a popular avenue for training policies and have been used to learn how to manipulate deformable linear objects (DLOs) [34] as well as folding fabric [35]. To eliminate the need for expensive human-labeled data, researchers have also focused on tackling the sim2real problem for robotic folding, where reinforcement learning has been used to train robots to fold fabrics and cloths completely from simulation [26, 28, 36]. More recently, Zheng et al. [37] used reinforcement learning to train a robot to flip pages in a binder through tactile feedback. Pure learning-based methods have shown promising performance, but only for specific tasks whose state distribution matches the training data. Such methods tend to generalize quite poorly; e.g., when the material or geometric properties change drastically.
Model-based approaches, where the model can either be known or learned, often use model predictive control to manipulate the deformable object. They involve learning the natural dynamics of deformable objects through random perturbations [38]. These models are generally fast, but they can be inaccurate when experiencing new states. Known models are often formulated to be as physically accurate as possible. They can be referred to as physics-based (as opposed to simulated). Their physical accuracy allows for the direct application of their predictive capabilities in the real world. Examples are published for rectangular cloth folding [39], strip folding [40], and garment folding [41]. Still, known models are usually quite expensive to run and must often face a trade-off between accuracy and efficiency.
Despite the large quantity of prior research focusing on 2D deformable object manipulation, the majority of these efforts have limited their scope to soft materials such as towels and cloth. Such materials are highly compliant and often do not exhibit complicated nonlinear deformations, thus allowing for solutions lacking physical insight. We instead tackle the scenario of folding papers of various stiffnesses with a single manipulator. Because of its relatively high bending stiffness and slippery surface, paper is significantly more difficult to manipulate since large deformations will cause sliding of the paper on the substrate. Such an example can be observed in Fig. 1(a), where intuitive folding trajectories that may work on towels and cloth fail for paper due to undesired sliding.
However, a few works have attempted to solve the paper folding problem. For example, Elbrecht et al. [42] demonstrated paper folding using visual tracking and real-time physics-based modeling with impressive results, but they required expensive
end effectors (two Shadow Dexterous Hands), one end effector to hold the paper down while folding at all times, and the paper to have AR tags for visual tracking. Similarly, Namiki et al. [43] also achieved paper folding through dynamic motion primitives and used physics-based simulations to estimate the deformation of the paper sheet, also requiring highly specialized manipulators and an end effector to hold the paper down while folding. By contrast, our method can fold papers reliably without any need for holding down the paper during the folding operation and requires only an extremely simple 3D printed gripper. Other approaches have also attempted to fold with a single manipulator while minimizing sliding [36, 40], but these methods focused on fabrics whose ends were taped down to the substrate.
## III Problem Statement
This article studies a simple but challenging task in robotic folding: creating a predefined crease on a sheet of paper of typical geometry (e.g., rectangular, diamond, etc.) as is illustrated in Fig. 2. Only one end of the paper is manipulated while the other end is left free. Thus, extra fixtures are unnecessary and the folding task can be completed by a single manipulator, which simplifies the workspace, but slippage of the paper against the substrate must be mitigated during manipulation, which is a challenge.
The task can be divided into two sub-tasks and three states. The first sub-task is manipulating one end of the paper from the initial flat state (Fig. 2(a)) to the folding state (Fig. 2(b)), with the goal that the manipulated edge or point should overlap precisely with the crease target line or point \(C\) as shown in the figure. With the manipulated edge of the paper at the origin, the manipulator moves in the \(x\) direction. Since the manipulated paper usually has relatively high bending stiffness, large nonlinear elastic deformations are induced in the folding state. In the second sub-task, the paper must be permanently deformed to form the desired crease at \(C/2\), thus achieving the final folded state (Fig. 2(c)).
## IV Physics-based Model and Analysis
We next present the numerical framework for studying the underlying physics of the paper folding process. First, we analyze the main deformations of the manipulated paper and prove that a 2D model is sufficient to learn the behaviors of the manipulated paper so long as the sheet is symmetrical. Second, we briefly introduce a physically accurate numerical model based on prior work in computer graphics [44]. Third, we formulate a generalized strategy for paper folding using scaling analysis.
### _Reduced-Order Model Representation_
Paper is a unique deformable object. Unlike cloth, its surface is developable [45]; i.e., the surface can bend but not stretch. Furthermore, shear deformations are not of particular importance as the geometry of the manipulated paper is symmetrical. Therefore, the primary nonlinear deformation when folding paper in our scenario is bending deformation. We postulate that the nonlinear behaviors of paper arise primarily from a balance of bending and gravitational energies: \(\epsilon_{b}\sim\epsilon_{g}\).
To further understand the energy balance of the manipulated paper, we analyze an arbitrary piece in the paper, as shown in Fig. 3(b). The bending energy of this piece can be written as
\[\epsilon_{b}=\frac{1}{2}k_{b}\kappa^{2}l, \tag{1}\]
where \(l\) is the undeformed length of the piece, \(\kappa\) is its curvature, and its bending stiffness is
\[k_{b}=\frac{1}{12}Ewh^{3}, \tag{2}\]
where \(w\) is its undeformed width, \(h\) is its thickness, and \(E\) is its Young's modulus. The gravitational energy of the piece is
\[\epsilon_{g}=\rho whlgH, \tag{3}\]
where \(\rho\) is its volume density and \(H\) is its vertical height above the rigid substrate.
From the above equations, we obtain a characteristic length called the gravito-bending length, which encapsulates the influence of bending and gravity:
\[L_{gb}=\left(\frac{Eh^{2}}{24\rho g}\right)^{\frac{1}{3}}\sim\left(\frac{h}{ \kappa^{2}}\right)^{\frac{1}{3}}. \tag{4}\]
The length is in units of meters, and we can observe that it scales proportionally to the ratio of thickness to curvature squared, which are the key quantities describing the deformed configuration of the manipulated paper. Note that the formulation of \(L_{gb}\) contains only one geometric parameter, the paper thickness \(h\), which means that other geometric quantities (i.e., length \(l\) and width \(w\)) have no influence on the deformed configuration.
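As a quick numerical illustration of (4), the short sketch below evaluates \(L_{gb}\) for assumed, printer-paper-like parameter values; the specific numbers are our own illustrative choices, not measurements reported in this work.

```python
def gravito_bending_length(E, h, rho, g=9.81):
    """Gravito-bending length L_gb = (E h^2 / (24 rho g))^(1/3) in meters, per Eq. (4)."""
    return (E * h**2 / (24.0 * rho * g)) ** (1.0 / 3.0)

# Assumed, printer-paper-like values (illustrative only):
E = 2.0e9     # Young's modulus [Pa]
h = 1.0e-4    # thickness [m]
rho = 800.0   # volume density [kg/m^3]
print(f"L_gb = {gravito_bending_length(E, h, rho):.3f} m")
```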
Fig. 2: Folding sheets of paper. The manipulation process involves (a) the initial state, where the paper lies flat on the substrate, followed by (b) the folding state, where the manipulated end is moved to the “crease target” line \(C\), and finally (c) the folded state, which involves forming the desired crease on the paper.
Additionally, due to the symmetrical geometry of the paper, curvature \(\kappa\) should be identical for all regions at the same height \(H\). Therefore, we can simply use the centerline of the paper, as shown in Fig. 3(a), to express the paper's configuration. We model this centerline as a 2D planar rod since deformations are limited to the \(x,z\) plane. We implement a discrete-differential-geometry (DDG)-based numerical simulation to simulate the 2D planar rod. We present the details of this numerical framework in the next section.
### _Discrete Differential Geometry Numerical Model_
Following pioneering work on physics-based modeling and simulation of deformable curves, surfaces, and solids [46, 47, 48], the computer graphics community has shown impressive results using DDG-based simulation frameworks. For example, the Discrete Elastic Rods (DER) [44] framework has shown efficient and physically accurate simulation of deformable linear objects in various scenarios including knot tying [49], helix bifurcations [50], coiling of rods [51], and flagella buckling [52]. Given this success, we use DER to model the centerline of the paper as a 2D planar rod undergoing bending deformations.
As shown in Fig. 3(c), the discrete model is comprised of \(N+1\) nodes, \(\mathbf{q}_{i}\) (\(0\leq i\leq N\)). Each node, \(\mathbf{q}_{i}\), represents two degrees of freedom (DOF): position along the \(x\) and the \(z\) axes. This results in a \(2N+2\)-sized DOF vector representing the configuration of the sheet, \(\mathbf{q}=[\mathbf{q}_{0},\mathbf{q}_{1},...,\mathbf{q}_{N}]^{T}\), where \({}^{T}\) is the transpose operator. Initially, all the nodes of the paper are located in a line along the \(x\)-axis in the paper's undeformed state. As the robotic manipulator imposes boundary conditions on the end node \(\mathbf{q}_{N}\), portions of the paper deform against the substrate as shown in Fig. 4(a). We compute the DOFs as a function of time \(\mathbf{q}(t)\) by integrating the equations of motion (EOM) at each DOF.
Before describing the EOM, we first outline the elastic energies of the rod as a function of \(\mathbf{q}\). Kirchhoff's rod theory tells us that the elastic energies of a rod can be divided into stretching \(E_{s}\), bending \(E_{b}\), and twisting \(E_{t}\) energies. First, the stretching elastic energy is
\[E_{s}=\frac{1}{2}k_{s}\sum_{i=0}^{N-1}\left(1-\frac{\|\mathbf{q}_{i+1}- \mathbf{q}_{i}\|}{\Delta l}\right)^{2}\Delta l, \tag{5}\]
where \(k_{s}=EA\) is the stretching stiffness; \(E\) is Young's modulus; \(A=wh\) is the cross-sectional area, and \(\Delta l\) is the undeformed length of each edge (segment between two nodes). The bending energy is
\[E_{b}=\frac{1}{2}k_{b}\sum_{i=2}^{N-1}\left(2\tan\frac{\phi_{i}}{2}-2\tan \frac{\phi_{i}^{0}}{2}\right)^{2}\frac{1}{\Delta l}, \tag{6}\]
where \(k_{b}=\frac{Ewh^{3}}{12}\) is the bending stiffness; \(w\) and \(h\) are the width and thickness respectively; \(\phi_{i}\) is the "turning angle" at a node as shown in Fig. 3(d), and \(\phi_{i}^{0}\) is the undeformed turning angle (\(0\) for paper). Finally, since we limit our system to a 2D plane, we can forgo twisting energies entirely. The total elastic energy is then simply \(E_{el}=E_{s}+E_{b}\).
Indeed, a ratio \(k_{s}/k_{b}\sim w/h^{2}\gg 1\) indicates that stretching strains will be minimal, which matches our intuition that paper is easy to bend but hard to stretch. Therefore, the stretching energy term in (5) effectively acts as a constraint that prevents appreciable stretching of the modeled planar rod.
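As a rough illustration of how the discrete energies (5) and (6) can be evaluated for a given nodal configuration, consider the sketch below; the array layout and variable names (`nodes`, `dl`) are our own choices rather than the authors' DER implementation.

```python
import numpy as np

def elastic_energies(nodes, dl, ks, kb):
    """Discrete stretching (5) and bending (6) energies of a 2D planar rod.

    nodes : (N+1, 2) array of x-z nodal positions
    dl    : undeformed edge length
    ks,kb : stretching and bending stiffnesses
    """
    edges = np.diff(nodes, axis=0)                  # edge vectors between nodes
    lengths = np.linalg.norm(edges, axis=1)

    # Stretching energy, Eq. (5): penalizes deviation of each edge length from dl.
    E_s = 0.5 * ks * np.sum((1.0 - lengths / dl) ** 2) * dl

    # Bending energy, Eq. (6): uses the turning angle phi between adjacent edges
    # (the undeformed turning angle is zero for an initially flat sheet).
    t = edges / lengths[:, None]                    # unit tangents
    cross = t[:-1, 0] * t[1:, 1] - t[:-1, 1] * t[1:, 0]
    dot = np.sum(t[:-1] * t[1:], axis=1)
    phi = np.arctan2(cross, dot)                    # signed turning angles
    E_b = 0.5 * kb * np.sum((2.0 * np.tan(phi / 2.0)) ** 2) / dl

    return E_s, E_b
```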
We can now construct our EOM as a simple force balance
\[\mathcal{P}(\mathbf{q})\equiv\mathbb{M}\ddot{\mathbf{q}}+\frac{\partial E_{ el}}{\partial\mathbf{q}}-\mathbf{F}^{\text{ext}}=0, \tag{7}\]
where \(\mathbb{M}\) is the diagonal lumped mass matrix; overdots denote differentiation with respect to time; \(-\frac{\partial E_{el}}{\partial\mathbf{q}}\) is the elastic force vector, and \(\mathbf{F}^{\text{ext}}\) is the vector of external forces applied on the paper. Note that (7) can be solved using Newton's method, allowing for full simulation of the 2D planar rod under manipulation.
### _Generalized Solution and Scaling Analysis_
As mentioned in Sec. III, the core of the folding task is to manipulate the end \(\mathbf{q}_{N}\) to the target position \(C\) starting from an initially flat state shown in Fig. 4(a). To do so, we analyze the physical system in order to achieve a solution capable of minimizing sliding during manipulation.
We first denote several quantities to describe the deformed configuration of the paper. Here, we introduce a point \(\mathbf{q}_{C}\), which is the node that connects the suspended (\(z>0\)) and unsuspended regions (\(z=0\)) of the paper. We focus primarily on the suspended region as deformations occur solely in this region. An origin \(\mathbf{o}\) is defined for our 2D plane which is located at the initial manipulated end \(\mathbf{q}_{N}\) as shown in Fig. 4(a). For the manipulated end, the robot end-effector imposes a position \(\mathbf{q}_{N}=(x,z)\) and an orientation angle \(\alpha\) to control the pose of the manipulated end as shown in Fig. 4(a). On the connective node \(\mathbf{q}_{C}\), the tangent is always along the \(x\)-director. Here, we impose a constraint that the curvature at the manipulated end is always zero so that sharp bending deformations are prevented, which is crucial to preventing permanent deformations during
Fig. 3: (a) Schematic of a paper during the folding state. (b) Bending deformations of a small piece in the paper. (c) Reduced-order discrete model (planer rod) representation of our paper. (d) Notations in the discrete model.
the folding process. With these definitions, we can now modify (7) with the following constraints:
\[\mathcal{P}(\mathbf{q}) =0,\] (8) s.t. \[\mathbf{q}_{N} =(x,z),\] \[\frac{\mathrm{d}\mathbf{q}_{C}}{\mathrm{d}s} =(-1,0),\] \[M_{N} =0,\] \[l_{s}\equiv\int_{\mathbf{q}_{C}}^{\mathbf{q}_{N}}\mathrm{d}s =\mathbf{q}_{C}\cdot\hat{\mathbf{x}},\]
where \(M_{N}\) is the external moment applied on the manipulated end; \(s\) is the arc length of the paper's centerline, and \(l_{s}\) is the arc length of the suspended region (from \(\mathbf{q}_{C}\) to \(\mathbf{q}_{N}\)).
We can solve (8) with the numerical framework presented in Sec. IV-B resulting in a unique DOF vector \(\mathbf{q}\). Note that when \(\mathbf{q}\) is determined, we can then obtain the external forces from the substrate along the paper \(\mathbf{F}_{\text{substrate}}=\mathbf{F}_{x}+\mathbf{F}_{z}\), orientation angle \(\alpha\) of the manipulated end, and the suspended length \(l_{s}\). Recall that through (4), Young's modulus \(E\), thickness \(h\), and density \(\rho\) were determined to be the main material and geometric properties of the paper. Therefore, we can outline the following physical relationship relating all our quantities:
\[\lambda =\frac{\|\mathbf{F}_{x}\|}{\|\mathbf{F}_{z}\|}, \tag{9}\] \[(\lambda,\alpha,l_{s}) =f\left(E,h,\rho,x,z\right),\]
where \(f\) is an unknown relationship. It is then trivial to see that to prevent sliding the relationship
\[\lambda\leq\mu_{s} \tag{10}\]
must be satisfied, where \(\mu_{s}\) is the static friction coefficient between the paper and the substrate. Therefore, a trajectory that minimizes sliding is one that minimizes \(\lambda\) along its path.
One glaring problem remains in that the relation \(f\) must be known to generate any sort of trajectory. In the absence of an analytical solution, the numerical framework from Sec. IV-B can be used to exhaustively find mappings between the inputs and outputs of \(f\). However, generating tuples in this fashion requires solving the high-dimensional problem in (8). Such a method would be horribly inefficient and would make real-time operation infeasible. Instead, we opt to obtain an analytical approximation of \(f\) by fitting a neural network on simulation data. Currently, this approach has several shortcomings. For one, directly learning \(f\) is difficult given that (9) currently depends on five parameters as input, resulting in a high dimensional relationship. Furthermore, since the formulation directly depends on intrinsic parameters of the paper (\(E\), \(\rho\), and \(h\)), an enormously exhaustive range of simulations must be run to gather enough data to accurately learn \(f\).
To solve all the aforementioned shortcomings, we reduce the dimensionality of the problem by applying scaling analysis. According to Buckingham \(\pi\) theorem, we construct five dimensionless groups: \(\bar{x}=x/L_{gb}\); \(\bar{z}=z/L_{gb}\); \(\bar{l}_{s}=l_{s}/L_{gb}\); \(\alpha\), and \(\lambda=F_{t}/F_{n}\), where \(L_{gb}\) is the gravito-bending length from (4). This results in a non-dimensionalized formulation of (9) which is expressed as
\[(\lambda,\alpha,\bar{l}_{s})=\mathcal{F}\left(\bar{x},\bar{z}\right). \tag{11}\]
Note that the mapping relationship \(\mathcal{F}\) is now independent of quantities with units, e.g., the material and geometric properties of the paper. As the dimensionality of our problem has been reduced significantly, we can now express \(\lambda\) as a function of just two parameters \(\bar{x},\bar{z}\). Therefore, training a neural network to model \(\mathcal{F}\) is now trivial as non-dimensionalized simulation data from a single type of paper can be used. Furthermore, the low dimensionality of \(\mathcal{F}\) allows us to easily visualize the \(\lambda\) landscape over a non-dimensional 2D plane. In the next section, we go over the steps to model \(\mathcal{F}\).
## V Deep Learning and Optimization
### _Data Generation_
In order to learn the force manifold, we solve (8) for several sampled \((x,z)\) points. An example of the partial force manifold produced from this sampling can be observed for a single suspended length in Fig. 4(b). For a specific \((x,z)\) location, we apply incremental rotations along the y-axis and find the optimal rotation angle \(\alpha\) that results in \(M_{N}=0\) on the manipulated end. For a particular configuration \((x,z,\alpha)\), we then record the suspended length \(l_{s}\) as well as the tangential and normal forces experienced on the clamped end. This leads to a training dataset \(\mathcal{D}\) consisting of six element tuples \((F_{t},F_{n},\alpha,l_{s},x,z)\). We then non-dimensionalize this dataset to the form \((\lambda,\alpha,\bar{l}_{s},\bar{x},\bar{z})\). A total of 95796 training samples were used within a normalized suspended length of \(\bar{l}_{s}\leq 6.84\), which adequately includes the workspace of most papers.
### _Learning Force and Optimal Grasp Orientation_
We can now train on our dataset \(\mathcal{D}\) to obtain a generalized neural network modeling \(\mathcal{F}\):
\[(\lambda,\alpha,\bar{l}_{s})=\mathcal{F}_{\text{NN}}(\bar{x},\bar{z}). \tag{12}\]
To obtain the above function, a simple fully-connected feed-forward nonlinear regression network is trained with 4 hidden layers, each containing 392 nodes. Aside from the final output layer, each layer is followed by a rectified linear unit (ReLU) activation. In addition, we preprocess all inputs through the standardization
\[\mathbf{x}^{\prime}=\frac{\mathbf{x}-\bar{\mathbf{x}}_{\mathcal{D}}}{\mathbf{ \sigma}_{\mathcal{D}}}, \tag{13}\]
Fig. 4: (a) Side view of a symmetrical paper during folding with coordinate frame and relevant notations. (b) Sampled \(\lambda\) forces for a particular \(\bar{l}_{s}\) of 4.10. This showcases one of the sampled “partial” force manifolds on which we train our neural network.
where \(\mathbf{x}\) is the original input, \(\bar{\mathbf{x}}_{\mathcal{D}}\) is the mean of the dataset \(\mathcal{D}\), and \(\boldsymbol{\sigma}_{\mathcal{D}}\) is the standard deviation of \(\mathcal{D}\).
We use an initial 80-20 train-val split on the dataset \(\mathcal{D}\) with a batch size of 128. Mean absolute error (MAE) is used as the error. We alternate between stochastic gradient descent (SGD) and Adam whenever training stalls. Furthermore, we gradually increase the batch size up to 4096 and train on the entire dataset once MAE reaches \(<0.001\). Using this scheme, we achieve an MAE of \(<0.0005\).
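A minimal PyTorch sketch consistent with the architecture described above (four hidden layers of 392 units, ReLU activations, standardized inputs, MAE loss) might look as follows; the layer organization, variable names, learning rate, and placeholder standardization constants are our own assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class NFMNet(nn.Module):
    """(x_bar, z_bar) -> (lambda, alpha, l_s_bar) regression network."""
    def __init__(self, mean, std, hidden=392):
        super().__init__()
        # Input standardization constants; in practice computed from the training set.
        self.register_buffer("mean", torch.as_tensor(mean, dtype=torch.float32))
        self.register_buffer("std", torch.as_tensor(std, dtype=torch.float32))
        layers, dim = [], 2
        for _ in range(4):                       # 4 hidden layers of 392 nodes
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers.append(nn.Linear(dim, 3))         # outputs: lambda, alpha, l_s_bar
        self.net = nn.Sequential(*layers)

    def forward(self, xz):
        return self.net((xz - self.mean) / self.std)

# Training setup sketch with MAE (L1) loss:
model = NFMNet(mean=[0.0, 0.0], std=[1.0, 1.0])   # placeholder statistics
loss_fn = nn.L1Loss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```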
### _Constructing the Neural Force Manifold_
The neural force manifold (i.e. \(\lambda\) outputs of \(\mathcal{F}_{\text{NN}}\) for the workspace set) is discretized into a rectangular grid consisting of \(\bar{\delta}\times\bar{\delta}\) blocks, where \(\bar{\delta}=\delta/L_{gb}\). For each of the blocks, we obtain and store a single \(\lambda\) value using the midpoint of the block. This results in a discretized neural force manifold \(\mathcal{M}\) represented as a \(m\times n\) matrix. For the purposes of path planning, we add two components to our manifold. First, we do not allow exploration into any region not belonging to our dataset distribution (\(\bar{l}_{s}>6.84\)). We do so by defining a workspace \(\mathcal{W}\) as all \((\bar{x},\bar{z})\) pairs within the concave hull of the input portion of the dataset \(\mathcal{D}\). Secondly, we also exclude regions within a certain \(\bar{l}_{s}\) threshold. This is done as positions with small suspended lengths and large \(\alpha\) angles may result in high curvatures that could cause collision with our gripper and/or plastic deformation, both of which we wish to avoid. We denote this region as the penalty region \(\mathcal{L}_{s}\). A visualization of \(\mathcal{M}\) with the workspace \(\mathcal{W}\) and penalty boundary \(\mathcal{L}_{s}\) regions can be seen in Fig. 5(a). The \(\alpha\) values corresponding to the manifold are also shown in Fig. 5(b).
### _Path Planning over the Neural Force Manifold_
Given the discretized manifold \(\mathcal{M}\), we can now generate optimal trajectories through traditional path planning algorithms. We define an optimal trajectory \(\tau^{*}\) as one that gets to the goal state while minimizing the sum of \(\lambda\):
\[\tau^{*}=\operatorname*{arg\,min}_{\tau\in\mathcal{T}}\sum_{i=0}^{i=L-1} \lambda_{i}, \tag{14}\]
where \(L\) is the length of the trajectory and \(\mathcal{T}\) is the set of all valid trajectories from the desired start to goal state. We define a valid trajectory as one that is contained within the acceptable region
\[(x_{i},z_{i})\in\mathcal{W}\setminus\mathcal{L}_{s}\ \forall\ (x_{i},z_{i})\in\tau,\]
and whose consecutive states are adjacent grid locations. Given the discretization of the NFM, we can treat \(\mathcal{M}\) as a graph whose edge weights consist of \(\lambda\). Therefore, we can use uniform cost search to obtain \(\tau^{*}\). The pseudocode of the path planning algorithm can be seen in Alg. 1.
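As an illustration of this planner (a sketch in the spirit of Alg. 1, not the authors' exact implementation), uniform cost search over the discretized manifold can be written as follows, assuming 4-connected adjacency and taking the \(\lambda\) value of the entered cell as the edge weight.

```python
import heapq
import itertools
import numpy as np

def uniform_cost_search(cost, valid, start, goal):
    """Minimum-cumulative-lambda path on a discretized force manifold.

    cost  : (m, n) array of lambda values (edge weight = lambda of the entered cell)
    valid : (m, n) boolean mask, True inside the acceptable region
    start, goal : (row, col) grid indices
    """
    counter = itertools.count()              # tie-breaker so the heap never compares nodes
    frontier = [(0.0, next(counter), start, None)]
    parent, best = {}, {start: 0.0}
    while frontier:
        g, _, node, prev = heapq.heappop(frontier)
        if node in parent:
            continue                         # already expanded with a lower cost
        parent[node] = prev
        if node == goal:
            break
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < cost.shape[0] and 0 <= nc < cost.shape[1] and valid[nr, nc]:
                new_g = g + cost[nr, nc]
                if new_g < best.get((nr, nc), np.inf):
                    best[(nr, nc)] = new_g
                    heapq.heappush(frontier, (new_g, next(counter), (nr, nc), node))
    if goal not in parent:
        return []                            # goal unreachable inside the acceptable region
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]
```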
## VI Robotic System
### _Dual Manipulator Setup_
For our experiments, we use two Rethink Robotics' Sawyer manipulators as shown in Fig. 7. One arm has an elongated gripper designed for folding, while the other arm has a spring compliant roller for creasing and an Intel Realsense D435 camera for vision feedback. The elongated gripper has rubber attached to the insides of the fingers for tight gripping.
Fig. 5: (a) Visualization of the trained neural network’s non-dimensionalized \(\lambda\) force manifold \(\mathcal{M}\) and (b) \(\alpha\) manifold. An extremely low \(\bar{\delta}\) discretization is used to showcase smoothness. For the force manifold, we observe two distinct local minima. Note that regions outside the workspace \(\mathcal{W}\) are physically inaccurate but are of no consequence to us as they are ignored. For the \(\alpha\) manifold, we observe continuous smooth interpolation throughout, which is key for producing feasible trajectories. Both manifolds show the trajectories used in the experiments for folding paper in half for \(L_{gb}\in[0.048,0.060,0.132]\). (c) The three trajectories in (a) and (b) scaled back to real space; these are the actual trajectories used by the robot. (d) Arbitrary trajectories for various \(L_{gb}\) with identical start and goal states are shown to highlight the effect of the material property on our control policy.
### _Perception System_
For our perception, we take an eye-in-hand approach by attaching an Intel Realsense D435 to the roller arm. We do not use the depth component of the camera as we align the camera to be pointing down along the world \(z\)-axis and the distance from the camera to the table is known. To detect the pose of the paper, we use simple color detection to segment the paper and then use Shi-Tomasi corner detection [53] to obtain the position of the bottom edge. An example of the top-down view as well as detected poses produced by the camera can be seen in Fig. 6.
### _Vision-feedback Control_
Although we minimize \(\lambda\) with our proposed framework, sliding could still happen due to a substrate's low friction surface and/or jittering of the robot's end-effector. Notice that the generated optimal trajectory \(\tau^{*}\) from Sec. V-D assumes that the origin \(\mathbf{o}\) of our coordinate system shown in Fig. 4(a) is fixed. We can define the origin as \(\mathbf{o}=\mathbf{q}_{0}-L\hat{\mathbf{x}}\) where \(L\) is the total length of the paper. Any amount of sliding indicates that \(\mathbf{q}_{0}\) is moving along the \(x\)-axis and therefore, the origin \(\mathbf{o}\) also moves an identical amount. When this occurs, our position within the manifold during traversal deviates from the optimal trajectory. Furthermore, without adaptive replanning, the amount of sliding \(\Delta x\) will directly result in \(\Delta x\) amount of error when creasing. To circumvent this, we introduce a vision-feedback approach that mitigates the effects of sliding.
We perform vision-feedback at \(N\) evenly spaced out intervals
Fig. 6: Example of our perception system with a top-down view of the folding procedure. (a) The intuitive baseline and (b) our open-loop algorithm for \(L_{gb}=0.048\) and \(C=0.25\)m. Similar to Fig. 2, the solid green line indicates the desired end-effector position while the dashed blue line indicates the crease location. We observe that the intuitive baseline has considerable sliding while our open-loop algorithm has near-perfect performance for this case.
Fig. 7: Experimental apparatus: Two robot manipulators, one for folding (1) and the other for creasing (3). An elongated gripper (2) is used for grabbing the manipulated end of the folding paper. A roller (5) with compliant springs (6) is used for forming the crease. An Intel Realsense D435 camera (4) is attached to the creasing arm to offer vision feedback during the folding procedure. All gripper attachments were 3D printed.
of the trajectory \(\tau^{*}\) as shown in Fig. 8. To do so, we first split up \(\tau^{*}\) into \(N\) partial trajectories. Aside from the first partial trajectory \(\tau_{0}^{*}\), we extract the start and goal states of the other \(1\leq i\leq N\) partial trajectories resulting in a sequence of \(N\) evenly spaced out states \(\mathcal{S}=\{(x_{1},z_{1},\alpha_{1}),...,(x_{N},z_{N},\alpha_{N})\}\) when accounting for overlaps. After carrying out \(\tau_{0}^{*}\), we detect the amount of sliding \(\Delta x\) and incorporate this error by updating the start state and non-dimensionalizing as
\[\bar{x}_{i}^{c}=\frac{x_{i}-\Delta x}{L_{gb}}.\]
We then replan a partial trajectory \(\tau_{i}^{*}\) from the updated start state \((x_{i}^{c},z_{i})\) to the next state \((x_{i+1},z_{i+1})\) in the sequence and carry out this updated trajectory. This is repeated until reaching the goal state. By properly accounting for sliding, we ensure that the traversal through the NFM is as accurate as possible. We note that this scheme allows us obtain corrected partial trajectories in near real time once \(N\) becomes sufficiently large as each partial trajectory's goal state becomes increasingly close to its start state, allowing for uniform cost search to conclude rapidly. We direct the reader to the supplementary videos mentioned in Sec. I which showcase the speed of the feedback loop.
Rectifying the sliding \(\Delta x\) is not the only error we must address. Recall that we assume an optimal grasp orientation \(\alpha\) for each position within the manifold. When the origin of our NFM moves, our true position does not match the intended position, also resulting in an angular error
\[\alpha_{i}^{c} =\mathcal{F}_{\text{NN}}(\bar{x}_{i}^{c},\bar{z}_{i}),\] \[\Delta\alpha =\alpha_{i}-\alpha_{i}^{c}.\]
Simply applying a \(-\Delta\alpha\) update to the first point in a partial trajectory results in a large rotational jump that only exacerbates the sliding issue. Furthermore, we postulate that so long as sliding is not extremely large, the incorrect \(\alpha\) at the current position within the manifold is still fairly optimal. Therefore, the \(\Delta\alpha\) error is incorporated into the trajectory gradually:
\[\tau_{i}^{*} =\text{UCS}(\bar{x}_{i}^{c},\bar{z}_{i},\bar{x}_{i+1},\bar{z}_{i +1},\mathcal{M}),\] \[\boldsymbol{\alpha}_{i} =\mathcal{F}_{\text{NN}}(\tau_{i}^{*}),\] \[\boldsymbol{\alpha}_{i}^{c} =\boldsymbol{\alpha}_{i}+\Delta\alpha[1,(L-1)/L,...,1/L,0]^{T},\]
where UCS stands for uniform cost search and \(L\) is the length of the trajectory \(\tau_{i}^{*}\). This gradual correction ensures that we minimize sliding while maintaining smoothness of the trajectory. The pseudocode for our full closed-loop algorithm can be seen in Alg. 2.
```
Input: \((x_{s},z_{s}),(x_{g},z_{g}),L_{gb},\delta,N,\mathcal{F}_{\text{NN}}\)
1: \(\mathcal{M}\leftarrow\) DiscretizeManifold\((\mathcal{F}_{\text{NN}},\delta)\)
2: \(\bar{x}_{s},\bar{z}_{s},\bar{x}_{g},\bar{z}_{g}\leftarrow\) non-dimensionalize with \(L_{gb}\)
3: \(\bar{\tau}^{*}\leftarrow\) UCS\((\bar{x}_{s},\bar{z}_{s},\bar{x}_{g},\bar{z}_{g},\mathcal{M})\)
4: update \(\bar{\tau}^{*}\) with \(\alpha\) values using \(\mathcal{F}_{\text{NN}}\)
5: \(\tau^{*}\leftarrow\) convert \(\bar{\tau}^{*}\) to real space with \(L_{gb}\)
6: \(\tau_{0}^{*},...,\tau_{N-1}^{*}\leftarrow\) SplitTrajectory\((\tau^{*},N)\)
7: \(\mathcal{S}\leftarrow\) extract start and goal states
8: carry out \(\tau_{0}^{*}\) on robot
9: for \((x_{i},z_{i},\alpha_{i})\) and \((x_{i+1},z_{i+1},\alpha_{i+1})\in\mathcal{S}\) do
10:   \(\Delta x\leftarrow\) detect sliding of paper
11:   \(x_{i}^{c}\leftarrow x_{i}-\Delta x\)
12:   \(\bar{x}_{i}^{c},\bar{z}_{i},\bar{x}_{i+1},\bar{z}_{i+1}\leftarrow\) non-dimensionalize with \(L_{gb}\)
13:   \(\alpha_{i}^{c}\leftarrow\mathcal{F}_{\text{NN}}(\bar{x}_{i}^{c},\bar{z}_{i})\)
14:   \(\Delta\alpha\leftarrow\alpha_{i}-\alpha_{i}^{c}\)
15:   \(\bar{\tau}_{i}^{*}\leftarrow\) UCS\((\bar{x}_{i}^{c},\bar{z}_{i},\bar{x}_{i+1},\bar{z}_{i+1},\mathcal{M})\)
16:   \(L\leftarrow\) len\((\bar{\tau}_{i}^{*})\)
17:   \(\boldsymbol{\alpha}_{i}\leftarrow\) obtain \(\alpha\) values of \(\bar{\tau}_{i}^{*}\) using \(\mathcal{F}_{\text{NN}}\)
18:   \(\boldsymbol{\alpha}_{i}^{c}\leftarrow\boldsymbol{\alpha}_{i}+\Delta\alpha[1,(L-1)/L,...,1/L,0]^{T}\)
19:   append \(\bar{\tau}_{i}^{*}\) with \(\boldsymbol{\alpha}_{i}^{c}\)
20:   \(\tau_{i}^{*}\leftarrow\) convert \(\bar{\tau}_{i}^{*}\) to real space with \(L_{gb}\)
21:   carry out \(\tau_{i}^{*}\) on robot
22: crease paper with roller
```
**Algorithm 2** Closed-loop Control Pseudocode
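The replanning portion of Alg. 2 (lines 10-20) can be sketched as a single feedback iteration as below; `uniform_cost_search` refers to the planning sketch given earlier, and the grid-conversion helpers `to_grid`/`from_grid`, the argument names, and the linear blending vector are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

def feedback_step(x_i, z_i, x_next, z_next, alpha_i, delta_x, L_gb,
                  nfm_alpha, cost, valid, to_grid, from_grid):
    """One closed-loop correction iteration (cf. Alg. 2, lines 10-20).

    nfm_alpha          : callable (x_bar, z_bar) -> alpha from the trained network
    cost, valid        : discretized lambda manifold and its acceptable-region mask
    to_grid, from_grid : maps between (x_bar, z_bar) and grid indices
    """
    # Shift the start state by the detected sliding and non-dimensionalize.
    x_c = x_i - delta_x
    start = to_grid(x_c / L_gb, z_i / L_gb)
    goal = to_grid(x_next / L_gb, z_next / L_gb)

    # Angular error at the corrected position within the manifold.
    d_alpha = alpha_i - nfm_alpha(x_c / L_gb, z_i / L_gb)

    # Replan the partial trajectory over the manifold (see the UCS sketch above).
    path = uniform_cost_search(cost, valid, start, goal)
    alphas = np.array([nfm_alpha(*from_grid(cell)) for cell in path])

    # Blend the angular error out linearly along the partial trajectory.
    alphas = alphas + d_alpha * np.linspace(1.0, 0.0, len(path))
    return path, alphas
```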
## VII Experiments and Analysis
### _Measuring the Material Property of Paper_
To use our framework, we must develop a way to accurately measure the parameter \(L_{gb}\) for a particular piece of paper. As mentioned previously, \(L_{gb}\) encapsulates the influence of bending and gravity. With this in mind, we propose a simple way to measure the parameter.
As shown in Fig. 10(a), when one end of the paper is fixed, the paper will deform due to the coupling of bending
Fig. 8: An overview of our folding pipeline. The top row showcases offline components while the bottom row shows online ones. On the offline side, we use our trained neural network to generate the necessary force manifold for planning. Then, given an input tuple \((x_{s},z_{s},x_{g},z_{g},L_{gb})\), we generate an end-to-end trajectory using uniform cost search. This end-to-end trajectory is then split up into partial trajectories that are carried out by the robot. At the conclusion of each partial trajectory, we measure paper sliding and replan the next partial trajectory to rectify the error.
and gravitational energy. Therefore, the following mapping relationship exists:
\[\bar{L} =\mathcal{L}(\epsilon), \tag{15}\] \[\bar{L} =\frac{L}{L_{gb}},\] \[\epsilon =\frac{l_{h}}{L},\]
where \(l_{h}\) is the vertical distance from the free end to the fixed end and \(L\) is the total length of the paper. We can obtain the mapping relationship \(\mathcal{L}(\epsilon)\) using numerical simulations, as shown in Fig. 10(b). With this mapping known, simple algebra can be performed to obtain \(L_{gb}\). First, we measure the ratio \(\epsilon=l_{h}/L\) for a particular paper to obtain its corresponding normalized total length \(\bar{L}\). Then, the value of \(L_{gb}\) can be calculated simply by \(L_{gb}=L/\bar{L}\). Once we obtain \(L_{gb}\), we can use the non-dimensionalized mapping relationship in (11) to find the optimal path for manipulating the paper.
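A minimal sketch of this measurement, assuming the mapping \(\mathcal{L}(\epsilon)\) from Fig. 10(b) has been exported from simulation as a lookup table (the numerical table values below are placeholders for illustration only):

```python
import numpy as np

# placeholder (epsilon, Lbar) samples of the simulated mapping in Fig. 10(b)
eps_table = np.array([0.05, 0.10, 0.20, 0.40, 0.60, 0.80])
lbar_table = np.array([0.6, 0.9, 1.2, 1.9, 2.8, 4.1])  # illustrative values only

def measure_Lgb(l_h, L):
    """Estimate the material parameter L_gb from one hanging test.
    l_h: vertical drop of the free end [m]; L: total sheet length [m]."""
    eps = l_h / L
    Lbar = np.interp(eps, eps_table, lbar_table)  # evaluate Lbar = L(eps)
    return L / Lbar

# e.g. a 0.279 m sheet whose free end hangs 0.15 m below the clamp
print(measure_Lgb(0.15, 0.279))
```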
### _Experimental Setup_
For our experiments, we tested folding on 4 distinct types of paper:
1. A4 paper, \(L_{gb}=0.048\)m,
2. US Letter paper, \(L_{gb}=0.060\)m,
3. Cardboard paper (US Letter dimensions), \(L_{gb}=0.132\)m,
4. Square origami paper, \(L_{gb}=0.043\)m.
For the rectangular papers (1-3), we do two sets of experiments. The first involves folding the papers to an arbitrary crease location (\(C=0.25\)m for A4 and \(C=0.20\)m for US Letter and cardboard), while the second involves folding the papers in half. For the square origami paper, we choose an arbitrary crease location of \(C=0.30\)m. This results in a total of 7 folding scenarios. For each of the scenarios, we conduct experiments using 3 different algorithms (an intuitive baseline, our open-loop approach, and our closed-loop approach). We complete 10 trials for each of these algorithms, resulting in a total of 210 experiments.
### _Baseline Algorithm_
To showcase the benefits of our folding algorithm, we compare our algorithm to an intuitive baseline. We can think of an intuitive baseline algorithm as one that would work if the opposite end of the paper were fixed to the substrate. Naturally, such a trajectory would be one that grabs the edge of the paper and traces the half perimeter of a circle with radius \(R=C/2\):
\[\text{d}\theta =\pi/M, \tag{16}\] \[\tau_{B} =\{(R\cos(i\text{d}\theta),R\sin(i\text{d}\theta),i\text{d}\theta) \ \forall\ i\in[0,M]\},\]
where \(M\) is an arbitrary number of points used as the resolution of trajectory. We choose \(M=250\) for all experiments.
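A short sketch of Eq. (16); the waypoints \((x, z, \theta)\) would be executed directly by the manipulator:

```python
import numpy as np

def baseline_trajectory(C, M=250):
    """Intuitive baseline: grab the paper edge and trace the half perimeter of a
    circle of radius R = C/2, rotating the gripper by the same angle."""
    R = C / 2.0
    dtheta = np.pi / M
    return [(R * np.cos(i * dtheta), R * np.sin(i * dtheta), i * dtheta)
            for i in range(M + 1)]

waypoints = baseline_trajectory(C=0.20)  # e.g. a 0.20 m crease target
```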
Fig. 10: (a) Schematic of a hanging plate. The manipulation edge is fixed horizontally; (b) Relationship between the ratio \(\epsilon=l_{h}/L\) and the normalized total length of the paper \(\bar{L}=L/L_{gb}\).
Fig. 9: Experimental results for all folding scenarios. Each column indicates a folding scenario, while the top row (a) showcases the fold length and the bottom row (b) showcases the spin error. Boxplot results are shown color coded for the intuitive baseline, open-loop control, and closed-loop control algorithms. Medians are shown as orange lines, means are shown as turquoise circles, and the desired target value is shown as a light blue horizontal line. We note that both our open-loop and closed-loop algorithms have significant improvements over the intuitive baseline as shown by the broken axis in (a). Our algorithms also have significantly less variance.
### _Metrics_
The metrics used for the experiments were the average fold length and the spin error. The average fold length was calculated by simply taking the average of the left and right side lengths up until the crease. The spin error was calculated as the angle \(\theta_{\text{err}}\) that results in the difference between the left and right side lengths. For square papers, the fold length was defined as the perpendicular length from the tip to the crease and the spin error was the angular deviation from this line to the true diagonal.
### _Parameters_
The neural force manifold \(\mathcal{M}\) was discretized using a \(\bar{\delta}\) corresponding to \(\delta=2\)mm (so the exact \(\bar{\delta}\) depends on the material), as we found this discretization to be a good compromise between accuracy and computational speed. All rectangular papers used a penalty region \(\mathcal{L}_{s}\) defined by \(\bar{l}_{s}<0.958\) while the square paper used one defined by \(\bar{l}_{s}<1.137\). This discrepancy is due to the fact that the diagonal paper has a smaller yield strength compared to the rectangular paper, i.e., to prevent extremely high curvatures, a larger suspended length \(\bar{l}_{s}\) range must be avoided.
For closed-loop control, we chose to split all trajectories into \(N=5\) intervals regardless of trajectory length. Furthermore, we use an extremely slick (i.e. low friction) table to showcase the robustness of our method. Using an empirical method, we measured the static coefficient of friction of our papers and the substrate to be approximately \(\mu_{s}=0.12\). For comparison, the static coefficient of friction for steel on steel (both lubricated with castor oil) is \(\mu_{s}=0.15\).
### _Results and Analysis_
All experimental results can be seen expressed as box plots where we showcase achieved fold lengths and spin errors in Fig. 9(a) and (b), respectively. When observing the achieved fold lengths, we see significant improvement over the baseline for all folding scenarios. Due to the large gap in performance, broken axes are used to properly display the variance of the recorded data. We note that not only do our algorithms achieve significantly better performance on average, the variance of our approaches is also much lower as shown by the decreased y-axis resolution after the axis break. We attribute the high variance of the baseline method to the increased influence of friction, which can often cause chaotic, unpredictable results. In other words, truly deterministic folding can only be achieved when sliding is nonexistent.
For a vast majority of cases, we observe a clear improvement over the open-loop algorithm when incorporating vision-feedback. Intuitively, we observe a trend where the performance gap between our open-loop and closed-loop algorithms grow as the material stiffness increases for rectangular folding. For softer materials (\(L_{gb}=0.048\)), the open-loop algorithm has near perfect performance as shown when folding a paper in half in Fig. 11(a2). In comparison, Fig. 11(a1) showcases the baseline algorithm failing with significant sliding.
The sliding problem is only exacerbated by increasing the stiffness of the material (\(L_{gb}=0.132\)), where Fig. 12(a) showcases the baseline algorithm failing to fold the cardboard paper in half by a margin almost as long as the paper itself. In comparison, our open-loop algorithm is capable of folding the cardboard with significantly better results, albeit with some visible sliding as shown in Fig. 12(b). As the material stiffness increases, the benefits of the incorporated vision-feedback are more clearly seen as we are able to achieve near perfect
Fig. 11: Isometric views of different folding scenarios. (a1-2) showcases \(C=\) Half folding for \(L_{gb}=0.048\) paper with the intuitive baseline and our open-loop algorithm, respectively. (b1-2) showcases \(C=0.30\)m diagonal folding for \(L_{gb}=0.043\) with the intuitive baseline and our closed-loop algorithm, respectively.
folding for cardboard in Fig. 12(c). All of our findings for rectangular folding also match the results of our diagonal folding experiment shown in Fig. 11(b1-b2), where closed-loop once again achieves minimal sliding when compared to the baseline. Overall, the matching findings across all of our experiments showcase the robustness of our formulation against material and geometric factors.
We observe one oddity for the folding scenario of \(L_{gb}=0.048\) and \(C=\) Half where the open-loop algorithm outperformed our closed-loop variant. Still, we wish to point out that this decrease in performance is only on average 1mm, which can easily be attributed to repetitive discretization error caused by \(N=5\) replanning. In fact, as we use a discretization of \(\delta=2\)mm for the manifold, compounding rounding errors can easily cause 1-2mm errors. With this in mind, our closed-loop method achieves an average fold length performance within a 1-2mm tolerance across all experiments.
In terms of spin error, we found that softer materials had the greatest error. As the frictional surface of the table is not perfectly even, any amount of sliding will directly result in uneven spin as shown in Fig. 11(a). As the material stiffness increases, the spin errors became more uniform across the methods as the influence of friction is not enough to deform the paper. Still, we can see that our open and closed-loop algorithms had less sliding than the baseline on average.
## VIII Conclusion
We have introduced a novel control strategy capable of robustly folding sheets of paper of varying materials and geometries with only a single manipulator. Our framework incorporates a combination of techniques spanning several disciplines, including physical simulation, machine learning, scaling analysis, and path planning. The effectiveness of our framework was showcased through extensive real world experiments against an intuitive baseline. Furthermore, an efficient near real-time visual-feedback algorithm was implemented that further minimizes folding error. Our closed-loop sensorimotor control algorithm successfully accomplished challenging scenarios such as folding stiff cardboard with repeatable accuracy.
For future work, we hope to tackle the difficult problem of creating arbitrary creases along sheets of paper with non-symmetric centerlines. Such non-symmetric papers can no longer be represented as a reduced-order model of a 2D elastic rod, thus requiring a different formulation. Additionally, folding along regions of paper with preexisting creases will also be a crucial step to achieving elegant folding tasks such as robotic origami. Moving forward, we anticipate exploring solutions to such problems that take advantage of generalized problem formulations with data-driven control schemes such as reinforcement learning.
We acknowledge financial support from the National Science Foundation under Grant numbers IIS-1925360, CAREER-2047663, and OAC-2209782.
|
2307.05880 | Constraints on Self-Interacting dark matter from relaxed galaxy groups | Self-interacting dark matter (SIDM) has been proposed as an alternative to
the standard collisionless cold dark matter to explain the diversity of
galactic rotation curves and core-cusp problems seen at small scales. Here, we
estimate the constraints on SIDM for a sample of 11 relaxed galaxy groups with
X-ray observations from Chandra and XMM-Newton. We fit the dark matter density
distribution to the Einasto profile and use the estimated Einasto $\alpha$
parameter to constrain the SIDM cross-section, based on the empirical relation
between the two, which was obtained in Eckert et al (2022). We obtain a
non-zero central estimate for the cross-section per unit mass ($\sigma/m$) for
seven groups, with the most precise estimate obtained for NGC 5044, given by
$\sigma/m=0.165 \pm 0.025~\rm{cm^2/g}$, for dark matter velocity dispersion of
about 300 km/sec. For the remaining four groups, we obtain 95% c.l. upper
limits on $\sigma/m < 0.16-6.61~\rm{cm^2/g}$ with dark matter velocity
dispersions between 200-500 km/sec, with the most stringent limit for our
sample obtained for the group MKW 4, given by $\sigma/m< 0.16~\rm{cm^2/g}$ for
dark matter velocity dispersion of about 350 km/sec. | Gopika K., Shantanu Desai | 2023-07-12T02:52:38Z | http://arxiv.org/abs/2307.05880v1 | # Constraints on Self-Interacting dark matter from relaxed galaxy groups
###### Abstract
Self-interacting dark matter (SIDM) has been proposed as an alternative to the standard collisionless cold dark matter to explain the diversity of galactic rotation curves and core-cusp problems seen at small scales. Here, we estimate the constraints on SIDM for a sample of 11 relaxed galaxy groups with X-ray observations from Chandra and XMM-Newton. We fit the dark matter density distribution to the Einasto profile and use the estimated Einasto \(\alpha\) parameter to constrain the SIDM cross-section, based on the empirical relation between the two, which was obtained in [1]. We obtain a non-zero central estimate for the cross-section per unit mass (\(\sigma/m\)) for seven groups, with the most precise estimate obtained for NGC 5044, given by \(\sigma/m=0.165\pm 0.025\;\rm cm^{2}/g\), for dark matter velocity dispersion of about 300 km/sec. For the remaining four groups, we obtain 95% c.l. upper limits on \(\sigma/m<0.16-6.61\;\rm cm^{2}/g\) with dark matter velocity dispersions between 200-500 km/sec, with the most stringent limit for our sample obtained for the group MKW 4, given by \(\sigma/m<0.16\;\rm cm^{2}/g\) for dark matter velocity dispersion of about 350 km/sec.
## I Introduction
The dark matter problem is one of the most vexing problems in modern physics and astrophysics. Although dark matter constitutes about 25% of total energy density of the universe and is a basic tenet of the \(\Lambda\)CDM model [2], its identity is still unknown, despite close to 100 years of evidence [3]. There is also no laboratory evidence for some of the most well motivated dark matter candidates such as Weakly Interacting Massive Particles (WIMPs) or axions or through indirect searches, which have only reported null results [4; 5; 6; 7; 8; 9; 10; 11]. Furthermore, the currently established \(\Lambda\)CDM model has also faced some tensions at scales smaller than 1 Mpc, such as the core/cusp problem [12; 13; 14], missing satellite problem [15; 16], too big to fail problem [17], and satellites plane problem [18]. Data on galactic scales for spiral galaxies have also revealed some intriguing deterministic scaling relations or correlations such as the Radial Acceleration Relation (RAR) [19; 20], mass acceleration discrepancy relation [21], the constancy of dark matter halo surface density [22], linearity of dark matter to baryonic matter ratio [23], which remain an enigma, although recent works have shown that some of these regularities could be explained within \(\Lambda\)CDM [24; 25; 26], while some other observations such as the RAR [20; 27; 28; 29; 30], constancy of dark matter surface density [31; 32] and linearity of dark to baryonic matter ratio [33] are not universal. Therefore a large number of alternatives to the standard \(\Lambda\)CDM model have emerged such as Self-Interacting Dark Matter (SIDM, hereafter) [34], Superfluid dark matter [35], Warm Dark Matter [36], Wave (or Fuzzy) Dark Matter [37], Flavor Mixed Dark Matter [38], modified gravity (which obviates the need for dark matter) [18; 21], etc.
The original motivation of SIDM about twenty years ago was to resolve the core/cusp and missing satellite problems [39]. Although the missing satellite problem is no longer a major point of contention given the discovery of many satellite galaxies of the Milky way from wide-field sky surveys [34; 40], the core/cusp problem is still an issue [41]. Furthermore, another acute problem is the diversity in the dark matter density profiles inferred from rotation curves, which cannot be resolved using feedback models [42]. SIDM can resolve these problems and also some of the aforementioned anomalies at the galactic scale such as RAR and constant dark matter surface density [43; 44; 45; 34]. SIDM has been applied to a whole suite of astrophysical observations from dwarf galaxies to galaxy clusters (See [34; 43] for recent reviews). Current observations are consistent with a velocity-dependent SIDM cross-sections with values of \(\sigma/m\sim 2\;\rm cm^{2}/g\) on single galaxy scales to \(\sigma/m\sim 0.1\;\rm cm^{2}/g\) on galaxy cluster scales [46].
A large number of works have obtained results on SIDM cross-sections using galaxy clusters [47; 48; 49; 1; 50; 1] as well as a combination of isolated galaxies and brightest cluster galaxies (BCGs) within clusters [61]. These include looking at the shape distribution of galaxy cluster arcs [47], dark matter-galaxy offsets [50; 57], spherical Jeans modeling [60], subhalo evaporation of elliptical galaxies in clusters [48], fitting the dark profiles to cored isothermal [62] or Einasto profiles [1], offset between dark matter and stars [53; 55; 56], baryon-dark matter separation [54], offset between X-ray gas and galaxies [51], offset between X-ray center and optical center [63], ellipticities of clusters [52], wobbling of BCG around the center of dark matter [58], effect on the profiles at the outskirts near the splashback radius [59]. The most stringent constraints come from the core densities obtained using cluster strong lensing with limits ranging from \(\sigma/m<\ 0.13\;\rm cm^{2}/g\)[64] to \(\sigma/m<\ 0.35\;\rm cm^{2}/g\)[60]. We also note that some
of the earliest stringent limits obtained on the SIDM cross-section, for example \(\sigma/m<0.02\) cm\({}^{2}\)/g from MS 2137-23 [65] had to be revised with the advent of SIDM simulations, and subsequently became much more relaxed with \(\sigma/m<1\) cm\({}^{2}\)/g [52]. Evidence for non-zero cross-sections have been reported at cluster scales with \(\sigma/m\approx 0.19\pm 0.09\) cm\({}^{2}\)/g [60]. Most recently, an analysis of the offsets between the X-ray and central galaxy position for 23 clusters from DES and SDSS is tentatively consistent with \(\sigma/m\sim 1\) cm\({}^{2}\)/g [63]. Similarly, evidence for a non-zero cross section from galaxy groups has also been found with \(\sigma/m=0.5\pm 0.2\) cm\({}^{2}\)/g [60].
In this work we obtain constraints on the SIDM cross-section using a sample of relaxed galaxy groups and low mass galaxy clusters imaged with Chandra and XMM-Newton [66]. Galaxy groups are dark-matter-dominated systems with masses in the range \(10^{13}-10^{14}M_{\odot}\) containing fewer than 50 galaxies [67; 68; 69]. The exact demarcation between a galaxy cluster and a galaxy group is not well defined [68; 69]. On group scales, \(\sigma/m\) is expected to be \(0.1-1\) cm\({}^{2}\)/g [60; 46]. We have previously used these same systems to test the invariance of dark matter halo surface density and to test the RAR [32]. To obtain constraints on SIDM cross-sections, we follow the same methodology as [1] (E22 hereafter).
The outline of this manuscript is as follows. We recap the E22 analysis in Sect. II. The data sample used for this work along with analysis procedure and results are outlined in Sect. III. We conclude in Sect. IV.
## II E22 analysis
Here, we provide an abridged summary of the method used in E22 to obtain limits on SIDM cross-section using galaxy clusters. For their analysis, they considered mock clusters with \(M_{200}>3\times 10^{14}M_{\odot}\) from the Bahamas-SIDM hydrodynamical cosmological simulations describing cluster formation [70; 71]. These simulations have been carried out for four distinct values of \(\sigma/m\) ranging from 0 (corresponding to CDM) to 1.0 cm\({}^{2}\)/g with a total of 230 clusters. These simulations also include baryonic feedback effects such as cooling, star formation, stellar as well as AGN feedback [70]. However, since there is no ab-initio understanding of how the first black holes form, for these simulations black hole seeds are injected by hand into the dark matter halos, and the growth of black holes is governed by Bondi accretion following the prescription in [72]. These simulations do not model the cold interstellar medium, which could underestimate the accretion rate onto black holes. These simulations have been shown to reproduce the galactic stellar mass fraction including dependence on galaxy type, while also matching the hot gas fraction in groups and clusters [70]. However, in these simulations, the stellar density profiles are sensitive to the resolution and details of the AGN feedback processes [71]. A full listing of all the successes and shortcomings of these simulations have been summarized in [70; 71]. The synthetic clusters from these simulations were then fitted to an Einasto profile [73]:
\[\rho_{Ein}(r)=\rho_{s}\exp\bigg{[}-\frac{2}{\alpha}\bigg{(}\bigg{(}\frac{r}{r_ {s}}\bigg{)}^{\alpha}-1\bigg{)}\bigg{]} \tag{1}\]
where \(r\) is the radial distance; \(r_{s}\) is the scale radius [74]; \(\rho_{s}\) is the scale density corresponding to the mass density at \(r=r_{s}\); and \(\alpha\) is the Einasto index. The Einasto profile provided a good fit to the synthetic clusters. Based on these fits E22 constructed an empirical relation between \(\alpha\) and \(\sigma/m\) which can be written as follows:
\[\alpha=\alpha_{0}+\alpha_{1}\left(\frac{\sigma/m}{1cm^{2}/g}\right)^{\gamma} \tag{2}\]
where \(\alpha_{0}\) is the mean value obtained for CDM, while \(\alpha_{1}\) and \(\gamma\) denote the scaling parameters that encode the dependence of \(\alpha\) on the SIDM cross section/unit mass. For relaxed clusters in hydrostatic equilibrium, E22 obtained \(\alpha_{0}=0.178\), \(\alpha_{1}=0.20\), and \(\gamma=0.63\). This relation was found to be robust with respect to the choice of subgrid physics, which incorporates different models of cooling, star formation, AGN and supernova feedback [70]. This empirical relation was then applied to the clusters in the XMM-Newton Cluster Outskirts Project (X-COP) sample [75], and a combined best fit value of \(\alpha\) was used to constrain \(\sigma/m\). The estimated value of \(\sigma/m\) from this stacking analysis was found by E22 to be \(\sigma/m<0.19\) cm\({}^{2}\)/g at 95% c.l. [1] at an assumed dark matter collision velocity of 1000 km/sec. We should also point out that the isothermal Jeans profile [46] could also adequately model the simulated SIDM halos [76]. However, in order to obtain constraints on SIDM cross-sections from isothermal profiles, one needs to make additional assumptions such as the age of the halo, etc [76].
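For reference, Eq. 2 with the relaxed-cluster calibration quoted above reduces to a one-line function (a sketch, not code from E22):

```python
def einasto_alpha(sigma_m, alpha0=0.178, alpha1=0.20, gamma=0.63):
    """E22 empirical relation: Einasto shape parameter alpha as a function of
    the SIDM cross-section per unit mass sigma_m (in cm^2/g)."""
    return alpha0 + alpha1 * sigma_m**gamma

# the CDM limit sigma_m = 0 recovers alpha0 = 0.178
print(einasto_alpha(0.0), einasto_alpha(1.0))
```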
## III Analysis and results
We use X-ray observations of 17 relaxed galaxy groups imaged with Chandra and/or XMM-Newton, with redshifts up to z = 0.08. The observational details for A2589 can be found in [77], while details for all the remaining groups can be found in [66]. These groups have masses between \(10^{13}\) and \(10^{14}M_{\odot}\) and span the range between galaxies and clusters, with temperatures in the range 1-3 keV. This suite of galaxy groups was used to test the constancy of dark matter density and the radial acceleration relation for groups in [32], as well as the MOND paradigm [78].
### Einasto fits to DM halos
The reconstruction of the dark matter density profiles for the group sample considered in this work can be found
in [32]. We only provide a brief summary here. We first estimate the total group mass after assuming that the groups are in hydrostatic equilibrium, since we are dealing with relaxed systems. The gas mass was estimated from the X-ray temperature and surface-brightness data. We then obtain the dark matter mass by subtracting the gas mass (using the fitted gas density profiles) and the stellar mass obtained from the \(K\)-band luminosity for the brightest group galaxy. The dark matter density profile was then estimated from the dark matter mass assuming spherical symmetry. The errors in the density profile have been obtained using error propagation based on the errors in the temperature. This error budget does not include systematic errors due to hydrostatic equilibrium and hydrostatic bias, spherical symmetry, as well as uncertainties in the stellar mass. These hydrostatic bias values range from no bias [79] to about 40% [80] (see Ref. [81] for a recent compilation of hydrostatic bias measurements in the literature). The latest state-of-the-art cosmological simulations predict a bias of about 20% [82]. In order to estimate the hydrostatic bias on a group by group basis, we would need robust lensing masses for all our objects, which are not available at the moment. E22 has shown that the Einasto \(\alpha\) parameter gets overestimated by 2-8% based on the hydrostatic equilibrium assumption, depending on the model for non-thermal pressure support. The dynamical modeling in [66] as well as in [78] has been done assuming spherical symmetry. The validity of the spherical symmetry assumption is discussed in detail in one of our previous works; departures from it could cause systematic errors of about 5% in the total mass determination [31]. It is also not straightforward to estimate this error for every group, since the X-ray images are intrinsically two dimensional in nature. Finally, since the stellar mass is sub-dominant compared to the gas and dark matter contributions, we neglect the uncertainties in the estimates of stellar mass. Other causes of systematic errors could be inadequate background modelling and subtraction, whose magnitude could be about 5% [66].
Similar to E22, we fit these density profile data for the groups to the Einasto profile (Eq. 1) using three free parameters: \(\rho_{s}\), \(r_{s}\), and \(\alpha\). We maximize the log-likelihood function using the emcee MCMC sampler [83]. The likelihood used for getting the best-fit parameters can be found in Eq. 1 in the Appendix. The priors for \(\rho_{s}\) are in the range \(10^{11}<\rho_{s}<10^{16}\,M_{\odot}\,\text{Mpc}^{-3}\), \(r_{s}\) between 0 kpc and 300 kpc, and \(\alpha\) spans from 0 to 0.5. For the MCMC runs, the number of walkers was set to 200 and the total number of iterations to 5000, which attained a mean acceptance fraction of approximately 0.52 for these fits, where the acceptance fraction is the fraction of proposed steps that are accepted while sampling the posterior pdf. The corner plots showing the 68%, 90%, and 99% credible intervals for the best-fit parameters associated with the 11 galaxy groups used in this analysis can be found in the Appendix. We have excluded the groups ESO 3060170, MS 0116.3-0115, NGC 1550, NGC 4325, NGC 533, and RX J1159.8+5531 from this analysis, as their fits did not yield closed marginalized contours for all three Einasto parameters. For these groups, at least one of the parameters showed only one-sided marginalized contours at the 68% credible interval. The dark matter density profiles for the remaining galaxy groups along with the best-fit Einasto parameters are shown in Figure 1. For every group, we also show the normalized residuals in the bottom panel, where the residuals are normalized by the error in each data point. We find that two groups have a reduced \(\chi^{2}>2\). Table 1 summarizes the parameters obtained from fitting the dark matter density profiles of the galaxy groups with an Einasto model along with their reduced \(\chi^{2}\), which gives a measure of the efficacy of the fit. A graphical summary of the values of \(\alpha\) for every group can be found in Fig. 2. The estimated values of \(\alpha\) for all the systems are in the range of approximately 0.12-0.49. This agrees with the values obtained in E22 for the X-COP sample, which had found \(\alpha\sim 0.19\).
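A minimal sketch of this fitting step, assuming a simple Gaussian likelihood for the binned density profile (the exact likelihood is the one given in the Appendix) and the flat priors listed above; `r`, `rho_obs`, and `rho_err` stand for the reconstructed profile of a single group:

```python
import numpy as np
import emcee

def einasto(r, rho_s, r_s, alpha):
    """Einasto profile of Eq. 1 (r and r_s in kpc, rho_s in Msun/Mpc^3)."""
    return rho_s * np.exp(-2.0 / alpha * ((r / r_s)**alpha - 1.0))

def log_prob(theta, r, rho_obs, rho_err):
    rho_s, r_s, alpha = theta
    # flat priors: 1e11 < rho_s < 1e16, 0 < r_s < 300 kpc, 0 < alpha < 0.5
    if not (1e11 < rho_s < 1e16 and 0.0 < r_s < 300.0 and 0.0 < alpha < 0.5):
        return -np.inf
    model = einasto(r, rho_s, r_s, alpha)
    return -0.5 * np.sum(((rho_obs - model) / rho_err)**2)

ndim, nwalkers = 3, 200  # 200 walkers and 5000 iterations, as in the text
p0 = np.column_stack([10**np.random.uniform(13, 15, nwalkers),
                      np.random.uniform(50, 250, nwalkers),
                      np.random.uniform(0.1, 0.4, nwalkers)])
# with the profile data of a group loaded into r, rho_obs, rho_err:
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(r, rho_obs, rho_err))
# sampler.run_mcmc(p0, 5000)
```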
### Results for \(\sigma/m\)
Once we have determined the best-fit \(\alpha\) for each group, we obtain an estimate for \(\sigma/m\) by comparing with Eq. 2. We assume that this equation is also valid for low mass clusters and groups with masses in the range \(10^{13}-10^{14}M_{\odot}\), as no new physics is expected to kick in below \(10^{14}M_{\odot}\). Furthermore, our current sample also contains five objects with \(M_{vir}>10^{14}M_{\odot}\). If the observed \(\alpha\) for a given group is consistent (within errors) with the CDM value \(\alpha_{0}\), we set an upper limit on \(\sigma/m\); otherwise, one can obtain a bounded central estimate for \(\sigma/m\). For this purpose, we calculate \(\chi^{2}\) as a function of \(\sigma/m\) as follows:
\[\chi^{2}=\left[\frac{\alpha_{obs}-\alpha_{th}(\alpha_{0},\alpha_{1},\gamma, \sigma)}{\sigma_{\alpha}}\right]^{2}, \tag{3}\]
where \(\alpha_{th}(\alpha_{0},\alpha_{1},\gamma,\sigma)\) is given by Eq. 2, \(\alpha_{obs}\) is the observed value for each group and \(\sigma_{\alpha}\) its associated error. If the \(\chi^{2}\) functional is shaped like a parabola with a well-defined minimum, one could get bounded c.l. intervals for \(\sigma/m\) based on fixed \(\Delta\chi^{2}\) values compared to the minimum [84], as long as the lower point of intersection is greater than 0.
For the cases when the reduced \(\chi^{2}>1\) in the estimation of the SIDM cross-section, we assume that there are unaccounted for systematic effects, and rescale
Figure 1: Dark matter density profile for the galaxy groups fitted with an Einasto model. The top panel shows the dark matter density profile \(\rho\) as a function of radius (\(r\)) from the group center along with the best-fit Einasto model parameters given in Table 1. The bottom panel in each plot shows the normalized residual given by the difference in the data and model divided by the error in the data.
Figure 1: Dark matter density profile for the galaxy groups fitted with an Einasto model (contd). The plot description is the same as for the other groups in previous page.
\(\sigma_{\alpha}\) (in Eq. 3) by \(\sqrt{\chi^{2}_{red}}\) following Ref. [85; 86; 87; 88].1 Therefore, for our analysis, in case the reduced \(\chi^{2}\) for any group is greater than 1, we rescaled \(\sigma_{\alpha}\) (in Eq. 3) by \(\sqrt{\chi^{2}_{red}}\) when estimating the SIDM cross-section. The \(\chi^{2}\) curves as a function of \(\sigma/m\) are shown in Fig. 3. We have a total of two groups with reduced \(\chi^{2}>2\), so this rescaling would at most drastically affect the results of only two groups.
Footnote 1: This procedure has also been criticized in literature [89]. In the case of linear regression, one can show that the errors in the estimated parameters need to be rescaled by \(\sqrt{\chi^{2}_{red}}\)[84]. Although such a rescaling may not be exact for non-linear regression or if the errors are not Gaussian, this rescaling procedure has been applied in a number of studies from Particle Physics (where it is routinely used by Particle Data Group to rescale the errors in the masses or lifetimes of elementary particles [85; 88]) to pulsar timing, (where the errors in the pulsar times of arrival have been rescaled based on reduced \(\chi^{2}\)[87].)
Since all other works in the literature have obtained limits on SIDM at 95% c.l., we also obtain limits (or central estimates) at 95% c.l. These are obtained by finding the maximum value of \(\sigma/m\) for which \(\Delta\chi^{2}<4\)[84]. In case the \(\chi^{2}\) curve shows a minimum for \(\sigma/m>0\), we report a 95% central estimate if the lower value of \(\sigma/m\) for which \(\Delta\chi^{2}=4\) is greater than 0. A comprehensive summary of our results for the 11 groups can be found in Table 2. We find non-zero central estimates for \(\sigma/m\) for seven groups, whereas for the remaining four groups we report upper limits at 95% c.l. However, among these seven groups, only one group, viz. NGC 5044, has a fractional error \(<\) 20%, given by \(\sigma/m=0.165\pm 0.025\) cm\({}^{2}\)/g. The upper limits on \(\sigma/m\) which we obtain for the remaining groups are between \(0.16-6.61\) cm\({}^{2}\)/g. The most stringent limit is obtained for the group MKW 4, which has \(\sigma/m<0.16\) cm\({}^{2}\)/g.
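The limit-setting step itself can be sketched as a simple grid scan (the values of \(\alpha_{obs}\), \(\sigma_{\alpha}\), and \(\chi^{2}_{red}\) would be taken from Table 1 for each group; the example call is illustrative and does not reproduce the full pipeline):

```python
import numpy as np

def sigma_m_interval(alpha_obs, sigma_alpha, chi2_red=1.0,
                     alpha0=0.178, alpha1=0.20, gamma=0.63):
    """Scan chi^2(sigma/m) from Eq. 3 and return the 95% c.l. range defined by
    Delta chi^2 < 4; a lower edge at 0 is interpreted as an upper limit."""
    if chi2_red > 1.0:
        sigma_alpha = sigma_alpha * np.sqrt(chi2_red)   # error rescaling from the text
    grid = np.linspace(0.0, 10.0, 20001)                # sigma/m in cm^2/g
    chi2 = ((alpha_obs - (alpha0 + alpha1 * grid**gamma)) / sigma_alpha)**2
    allowed = grid[chi2 - chi2.min() < 4.0]
    return allowed.min(), allowed.max()

print(sigma_m_interval(0.18, 0.02, chi2_red=1.74))      # illustrative MKW 4-like input
```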
In addition to estimating the limits (or central intervals) on the SIDM cross-section, we have also estimated the dark matter collision velocity for each of the groups, so that a comparison can be made with the currently viable models of SIDM cross-section decreasing with velocity [34]. For this purpose, we used the following scaling relation between line of sight velocity dispersion of galaxies (\(\sigma_{LOS}\)) in the cluster and \(M_{200}\) from [90], (which was also used in [60]):
\[\log(h(z)M_{200})=13.98+2.75\log\left[\frac{\sigma_{LOS}}{500\ (\text{km/ sec})}\right] \tag{4}\]
To estimate \(\sigma_{LOS}\) and its uncertainty, we used the values of \(M_{200}\) and its error from [66; 77]. We note that there are also variations of order 10% reported in [90] in the exponent and slope of Eq. 4, depending on the simulation set used. We do not take this into account while estimating the uncertainty; however, this uncertainty would uniformly affect all the groups. These values of \(\sigma_{LOS}\) can be found in Table 2. Therefore, the groups
Figure 2: The Einasto shape parameter (\(\alpha\)) calculated for all the galaxy groups analyzed in this work.
| **Group** | \(\rho_{s}\) \((10^{14}M_{\odot}\,\mathrm{Mpc^{-3}})\) | \(r_{s}\) (kpc) | \(\alpha\) | DOF | \(\chi^{2}_{red}\) |
| --- | --- | --- | --- | --- | --- |
| A2589 | \(6.66\pm 0.35\) | \(130.74\pm 3.94\) | \(0.29\pm 0.02\) | 6 | 1.34 |
| A262 | \(4.39\pm 0.75\) | \(161.16\pm 17.16\) | \(0.25\pm 0.02\) | 16 | 0.002 |
| A2717 | \(2.22\pm 0.25\) | \(253.23\pm 13.38\) | \(0.29\pm 0.03\) | 4 | 0.90 |
| AWM 4 | \(3.88\pm 0.98\) | \(204.42\pm 29.99\) | \(0.31\pm 0.05\) | 5 | 0.05 |
| ESO 5520200 | \(2.88\pm 0.31\) | \(208.57\pm 12.32\) | \(0.21\pm 0.04\) | 4 | 0.27 |
| MKW 4 | \(17.7\pm 2.02\) | \(68.24\pm 3.78\) | \(0.18\pm 0.02\) | 13 | 1.74 |
| NGC 2563 | \(11.1\pm 3.05\) | \(61.47\pm 7.71\) | \(0.19\pm 0.06\) | 4 | 9.02 |
| NGC 5129 | \(7.75\pm 1.15\) | \(71.86\pm 5.38\) | \(0.49\pm 0.12\) | 2 | 2.21 |
| RGH 80 | \(8.76\pm 0.30\) | \(70.24\pm 1.22\) | \(0.23\pm 0.01\) | 9 | 1.01 |
| IC 1860 | \(5.60\pm 0.75\) | \(117.53\pm 9.07\) | \(0.40\pm 0.04\) | 4 | 1.16 |
| NGC 5044 | \(8.62\pm 0.1\) | \(84.42\pm 0.18\) | \(0.24\pm 0.003\) | 11 | 1.19 |

Table 1: This table shows the best-fit parameter values for the Einasto profile (cf. Eq. 1) to the group halos along with the degrees of freedom (DOF). The efficacy of the fit can be quantified by the reduced \(\chi^{2}\) (shown in the last column). The data for A2589 was obtained from [77] and for all other groups from [66].
with 95% upper limits between \(0.16-6.61\) cm\({}^{2}\)/g have dark matter collision velocity between approximately 200-500 km/sec with the most stringent limit for MKW 4 (\(\sigma/m<0.16\) cm\({}^{2}\)/g) at a velocity dispersion of \(\approx 350\) km/sec. Conversely, the group with the most precise non-zero estimate for \(\sigma\)/m, viz. NGC 5044 has dark matter velocity dispersion of about 300 km/sec.
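A short sketch of this conversion, obtained by inverting the \(\sigma_{LOS}\)-\(M_{200}\) scaling of Eq. 4 (here \(M_{200}\) is in \(M_{\odot}\) and `hz` is the dimensionless Hubble rate \(h(z)\) at the group redshift; the example numbers are illustrative):

```python
def sigma_los(M200, hz):
    """Line-of-sight velocity dispersion (km/s) from M200, inverting Eq. 4."""
    return 500.0 * (hz * M200 / 10**13.98)**(1.0 / 2.75)

print(sigma_los(1e14, 0.7))   # a ~1e14 Msun group gives roughly 450 km/s
```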
Therefore, our results are consistent with the constraints on SIDM using galaxy groups obtained in [60] and agree with predictions of velocity-dependent SIDM cross-sections on group/cluster scales [46]. The constraints on \(\sigma/m\), which we obtain are within the ballpark of \(0.1-1\) cm\({}^{2}\)/g needed to solve the core-cusp and too big to fail problems [43].
## IV Conclusions
In this work, we obtain constraints on the SIDM cross-sections using a sample of 11 relaxed galaxy groups with X-ray observations, which we have previously used to test the invariance of the dark matter halo surface density and the RAR [32]. For this purpose, we follow the same prescription as E22, which derived an empirical relation between the SIDM cross-section (\(\sigma/m\)) and the \(\alpha\) parameter of the Einasto profile using simulated SIDM cluster halos from the Bahamas-SIDM set of simulations [1]. This empirical relation between the Einasto \(\alpha\) parameter and the SIDM \(\sigma/m\) can be found in Eq. 2.
We tried to fit the density profiles of 17 galaxy groups to the Einasto parameterization. We were able to obtain closed contours for the Einasto parameters for 11 of these groups. The best-fit Einasto parameters for all these groups along with the reduced \(\chi^{2}\) can be found in Table 1. The values of \(\alpha\) which we get are between 0.12-0.49. To obtain constraints on the SIDM cross section, we used the aforementioned 11 groups and rescaled the errors in \(\alpha\) by \(\sqrt{\chi^{2}_{red}}\) for the groups with \(\chi^{2}_{red}>1\). Possible reasons for the large reduced \(\chi^{2}\) for some of these groups could be due to AGN feedback in the core of the group or incomplete relaxation [66]. We then obtain the 95% c.l. upper limits (or central estimates) on the SIDM cross-section, by finding the maximum value of \(\sigma/m\) for which \(\Delta\chi^{2}=4\), where \(\chi^{2}\) has been obtained using Eq. 3.
Our results for SIDM cross-sections for the 11 relaxed groups can be found in Table 2. We obtain a non-zero value for seven groups, with the most precise estimate (fractional error \(<20\%\)) for \(\sigma/m\) for one group (NGC 5044), given by \(\sigma/m=0.165\pm 0.025\) cm\({}^{2}\)/g at 95% c.l. at a velocity dispersion of about 300 km/sec. For the remaining four other groups, we estimate 95% c.l. upper limits on \(\sigma/m\) in the range of \(0.16-6.61\) cm\({}^{2}\)/g with dark matter velocity dispersions in the range of 200-500 km/sec. The most stringent limit which we obtain is for MKW 4 for which \(\sigma/m<0.16\) cm\({}^{2}\)/g at velocity dispersion of \(\approx 350\) km/sec. Our results for \(\sigma/m\) for all the groups in our sample are in the same ballpark as those found in [60], (which were consistent with \(\sigma/m=0.5\pm 0.2\) cm\({}^{2}\)/g or \(\sigma/m<0.9\) cm\({}^{2}\)/g, if interpreted as an upper limit). Our results are also consistent with the predictions of velocity-dependent SIDM cross-sections at group/cluster scales [46].
## Acknowledgements
We are grateful to Fabio Gastaldello and Dominique Eckert for useful correspondence related to the galaxy group data used for this work and E22, respectively. We also thank the anonymous referee for several useful and constructive comments on the manuscript. GK also acknowledges the Ministry of Education (MoE), Government of India for the Senior Research Fellowship.
|
2304.12674 | Compressing Sentence Representation with maximum Coding Rate Reduction | In most natural language inference problems, sentence representation is
needed for semantic retrieval tasks. In recent years, pre-trained large
language models have been quite effective for computing such representations.
These models produce high-dimensional sentence embeddings. An evident
performance gap between large and small models exists in practice. Hence, due
to space and time hardware limitations, there is a need to attain comparable
results when using the smaller model, which is usually a distilled version of
the large language model. In this paper, we assess the model distillation of
the sentence representation model Sentence-BERT by augmenting the pre-trained
distilled model with a projection layer additionally learned on the Maximum
Coding Rate Reduction (MCR2) objective, a novel approach developed for
general-purpose manifold clustering. We demonstrate that the new language model
with reduced complexity and sentence embedding size can achieve comparable
results on semantic retrieval benchmarks. | Domagoj Ševerdija, Tomislav Prusina, Antonio Jovanović, Luka Borozan, Jurica Maltar, Domagoj Matijević | 2023-04-25T09:23:43Z | http://arxiv.org/abs/2304.12674v1 | # Compressing Sentence Representation with Maximum Coding Rate Reduction
###### Abstract
In most natural language inference problems, sentence representation is needed for semantic retrieval tasks. In recent years, pre-trained large language models have been quite effective for computing such representations. These models produce high-dimensional sentence embeddings. An evident performance gap between large and small models exists in practice. Hence, due to space and time hardware limitations, there is a need to attain comparable results when using the smaller model, which is usually a distilled version of the large language model. In this paper, we assess the model distillation of the sentence representation model Sentence-BERT by augmenting the pre-trained distilled model with a projection layer additionally learned on the Maximum Coding Rate Reduction (MCR\({}^{2}\)) objective, a novel approach developed for general purpose manifold clustering.
We demonstrate that the new language model with reduced complexity and sentence embedding size can achieve comparable results on semantic retrieval benchmarks.
**Keywords:**_Sentence embeddings, model distillation, Maximum Coding Rate Reduction, semantic retrieval_
## 1 Introduction
Dense vector representations of words, or word embeddings, form the backbone of most NLP applications and can be constructed using context-free (see [2], [19], [22]) or contextualized methods (see [9], [23] for more details). In
practice, quite a few NLP applications benefit from having sentence or document representations in addition to word embeddings. In most cases, one can use the weighted average (aka pooling) over some or all of the word embeddings from a sentence or document. Although it disregards word order while pooling, this approach has been shown to be reasonably performant [1]. Pre-trained language models like BERT have shown success on many NLP tasks through fine-tuning. Unfortunately, using contextualized word vectors from these models as a sentence representation is significantly inferior in terms of semantic textual similarity compared to approaches that use non-contextualized word vectors, which are trained with a much simpler model (see [24] for more details). Therefore, more sophisticated methods were derived to find efficient and performant universal sentence encoders. Reimers et al. in [24] developed the Sentence-BERT model by fine-tuning the pre-trained BERT architecture on sentence pair scoring tasks using a Siamese architecture to learn better sentence representations, showing much improvement in downstream NLP tasks. Their approach ended up with a relatively large model size (hundreds of millions to billions of parameters) and a sentence embedding dimension of 768, which is relatively large for efficient search and retrieval operations over databases. In this paper, we focus on reducing the dimensionality of sentence embeddings by up to 50%-70% while still achieving comparable results across the board of NLP benchmarks. This opens up the possibility of deploying AI models on smaller-scale computer systems like embedded systems.
### Related Work
Following the distributional hypothesis, Mikolov et al. in [19] showed that computing dense, lower-dimensional word embeddings gives rise to interesting mathematical properties of words. Inspired by the same idea, Kiros et al. [14] and Lee et al. [17] tried to derive a model which predicts surrounding sentences. Sent2Vec [20] generates context-free sentence embeddings as averages of word vectors and \(n\)-gram vectors (similar to FastText [3] for words). Conneau et al. [7] computed contextualized sentence embeddings using a BiLSTM Siamese network that was fine-tuned on pairs of semantically similar sentences. This approach was extended to fine-tuning pre-trained language models like BERT in [24]. Recently, Gao et al. [10] improved this approach by suggesting a contrastive learning method and achieved state-of-the-art results. Projecting sentence embeddings to lower dimensions was motivated by projecting word vectors. In most cases, PCA methods gave surprisingly good results and even retrofitted the word vectors in such a way that they became more isotropic, which had a positive impact on NLP benchmarks. Li et al. [15] showed that this anomaly is also apparent in sentence vectors and gave a normalizing flow method to retrofit such vectors. Recent work [27] introduced Maximum Coding Rate Reduction (MCR2), a novel learning objective that enables learning a subspace representation given the clustering1. They also demonstrated how to extend the
approach to the problem of unsupervised clustering.
### Our contribution
We use a pre-trained sentence embedding model like Sentence-BERT (SBERT) as a sentence encoder and train a non-linear mapper atop the encoder using a Maximal Coding Rate Reduction as a training objective for learning discriminative low-dimensional structures that preserve all the essential information encoded into the high-dimensional data. This approach allows for more robust training than standard training objectives like cross-entropy and produces clusters in the embedding space. The main contribution of our paper is a sentence embedding compression technique that achieves comparable results with smaller sentence embedding sizes on semantic NLP benchmarks compared to the baseline sentence encoder.
The paper is organized as follows. In Section 2 we describe Maximum Rate Coding Reduction training objective for computing subspace embedding space. Furthermore, SBERT architecture is described as a sentence encoder followed by a definition of the projection layer. In Section 3 we experimentally evaluate our method and conclude with a results discussion.
## 2 Method
For a given set of sentences \(S\) and for each sentence
\[(word_{1},word_{2},\ldots,word_{n_{i}})\in S\]
our task is to construct a lower dimensional embedding \(z_{i}\in\mathbb{R}^{d}\) that contains the important semantic information characteristic of that sentence. Our idea is to extend SBERT and, from its embedding, compute a small projector that reduces the dimension, i.e. given the set of SBERT's embeddings \(Z\) of the dataset \(S\), find a lower-dimensional \(\hat{Z}\in\mathbb{R}^{d\times n}\) that retains the semantic information extracted by SBERT.
### Learning a subspace representation with MCR\({}^{2}\)
Using the idea from Li et al. [16] we aim to minimize the angle between similar sentences and maximize the entropy of the whole dataset. For two representations \(\hat{z}_{1},\hat{z}_{2}\in\mathbb{R}^{d}\) of two sentences we measure how similar they are by cosine similarity
\[D\left(\hat{z}_{1},\hat{z}_{2}\right)=\frac{\hat{z}_{1}^{\top}\hat{z}_{2}}{\|\hat{z}_{1}\|_{2}\|\hat{z}_{2}\|_{2}}.\]
For two sets \(\hat{Z}_{1},\hat{Z}_{2}\in\mathbb{R}^{d\times b}\) we define this function as
\[D(\hat{Z}_{1},\hat{Z}_{2})=\frac{1}{b}\sum_{i=1}^{b}D\left(\hat{z}_{1,i},\hat{ z}_{2,i}\right) \tag{1}\]
where \(\hat{z}_{1,i}\) is the \(i\)-th element of \(\hat{Z}_{1}\) and \(\hat{z}_{2,i}\) is the \(i\)-th element of \(\hat{Z}_{2}\). Given pairs of similar sentences we want them to have the \(D\) score as large as possible.
For a set of representations \(\hat{Z}\in\mathbb{R}^{d\times n}\) with \(n\) elements, its entropy is defined as
\[R_{\varepsilon}(\hat{Z})=\frac{1}{2}\log\det\left(I+\frac{d}{n\varepsilon^{2}} \hat{Z}\hat{Z}^{\top}\right) \tag{2}\]
for a given parameter \(\varepsilon\) and identity matrix \(I\). This function is approximately the Shannon coding rate function for a multivariate Gaussian distribution given average distortion \(\varepsilon\) [8]. By maximizing (2) we maximize the volume of the ball in which the embeddings are packed. The theory behind this is well beyond the scope of this paper; it is given in the paper by Ma et al. [18], where they explore rate distortion, \(\varepsilon\)-ball packing and lossy encoding with normally distributed data. By optimizing it in parallel with (1), we try to distance each sentence from the others, except for the similar pairs that we try to keep close. Additionally, given cluster assignments, we can measure the entropy of each cluster with
\[R_{\varepsilon}\left(\hat{Z},\Pi_{k}\right)=\frac{n_{k}}{2n}\log\det\left(I+ \frac{d}{n_{k}\varepsilon^{2}}\hat{Z}\Pi_{k}\hat{Z}^{\top}\right) \tag{3}\]
where \(\Pi_{k}\) is a diagonal matrix with \(i\)-th entry being \(1\) if the \(i\)-th sentence belongs to cluster \(k\), otherwise \(0\), and \(n_{k}=\operatorname{tr}\left(\Pi_{k}\right)\), the trace of matrix \(\Pi_{k}\), i.e. the number of points in this cluster. Combining functions (1), (2) and (3) into one, we get the MCR\({}^{2}\) loss function defined as
\[L(\hat{Z},\Pi)=-R_{\varepsilon}(\hat{Z})+\sum\limits_{i=1}^{k}R_{\varepsilon}( \hat{Z},\Pi_{i})-\lambda D(\hat{Z}_{1},\hat{Z}_{2}) \tag{4}\]
for some hyperparameter \(\lambda\) and pairs of similar sentences respectively divided into two sets \(\hat{Z}_{1},\hat{Z}_{2}\). \(\Pi\) denoted in \(L(\hat{Z},\Pi)\) is the clustering of data given by the user or learned by the architecture. The choice of \(\lambda\) depends on how close we want to keep similar sentences in our projection. For larger values of \(\lambda\) the network focuses on collapsing similar pairs into the same vector which, if one is not careful enough, can lead to collapsing all vectors into one. For smaller values of \(\lambda\) the network has more freedom to decide which vector embeddings to keep close. This, on the other hand, can lead to an unwanted vector representation that tends to maximally distance vectors from each other. By minimizing (4) we
* maximize the volume of all embeddings, \(R_{\varepsilon}(\hat{Z})\),
* minimize the volume of each cluster, \(\sum\limits_{i=1}^{k}R_{\varepsilon}(\hat{Z},\Pi_{i})\),
* maximize the cosine similarity of pairs of similar sentences, \(\lambda D(\hat{Z}_{1},\hat{Z}_{2})\).
The consequence of this is that after the minimization we have an embedding in which different clusters are orthogonal to each other (see [27] for more details), i.e.
\[i\neq j\implies\hat{Z}_{i}\hat{Z}_{j}^{\top}=0. \tag{5}\]
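A compact PyTorch sketch of the objective in (4); the soft-assignment weighting of the per-cluster term and the default value of `eps` are our illustrative choices (only \(\lambda=2000\) matches a setting quoted later in the text), not the exact training configuration:

```python
import torch
import torch.nn.functional as F

def mcr2_loss(Z, Pi, Z1, Z2, eps=0.5, lam=2000.0):
    """Loss of Eq. (4). Z: (d, n) embeddings, Pi: (n, k) soft cluster assignments,
    Z1, Z2: (d, b) embeddings of the paired similar sentences."""
    d, n = Z.shape
    I = torch.eye(d, device=Z.device)
    R = 0.5 * torch.logdet(I + d / (n * eps**2) * Z @ Z.T)            # Eq. (2)
    Rc = 0.0
    for k in range(Pi.shape[1]):                                       # Eq. (3), summed over clusters
        nk = Pi[:, k].sum() + 1e-8
        ZPk = Z * Pi[:, k]                                             # columns weighted by membership
        Rc = Rc + nk / (2 * n) * torch.logdet(I + d / (nk * eps**2) * ZPk @ Z.T)
    D = F.cosine_similarity(Z1, Z2, dim=0).mean()                      # Eq. (1)
    return -R + Rc - lam * D
```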
### Architecture
Our model receives as input a batch of sentences \(S\), encodes sentence representations \(Z\), and outputs projected sentence representations \(\hat{Z}\) together with cluster assignments \(\Pi\) for \(S\). The overall architecture is shown in Fig. 1.
#### 2.2.1 Sentence encoder
BERT [9] and its variants have set new state-of-the-art performance on sentence-pair regression and classification tasks. Unfortunately, they require that both sentences are fed into the network, causing a computational overhead that renders simple tasks, such as finding similar sentence pairs in large datasets, costly. Therefore, SBERT [24] was proposed as a modification of the BERT network that uses a Siamese architecture able to derive semantically meaningful sentence representations. The model consists of BERT as a pre-trained encoder and a pooling layer that computes the sentence representation as an average of the hidden states from the last layer of BERT. SBERT is trained on the combination of the SNLI [4] and MultiNLI [26] datasets.
#### 2.2.2 Projection layer
Following Li et al. [16], we use the above-mentioned SBERT as a backbone with two final linear heads that produce features and cluster logits. The features are additionally normalized to the unit sphere, and the clusters are learned from the given pairs of similar sentences. The whole architecture is described in Fig. 1, where blue denotes the SBERT model and gray denotes a feed-forward neural network that we call the projection layer. This projection layer has two heads. The first head, colored in red, is a single linear layer that collects information about the clusters and applies Gumbel-Softmax [12]. The second head, colored in green, is again a single linear layer that outputs features, which are in turn normalized to zero mean and unit variance. The ELU activation function is used due to its good properties [5].
Figure 1: The overall architecture
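A sketch of the projection layer; the hidden width, the number of clusters, the Gumbel-Softmax temperature, and the exact placement of the ELU nonlinearity are illustrative assumptions rather than the authors' exact configuration:

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionLayer(nn.Module):
    """Maps frozen SBERT embeddings to low-dimensional features and soft
    cluster assignments, following the two-headed design of Fig. 1."""
    def __init__(self, in_dim=768, out_dim=200, n_clusters=64, hidden=512):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ELU())
        self.feature_head = nn.Linear(hidden, out_dim)      # green head: features
        self.cluster_head = nn.Linear(hidden, n_clusters)   # red head: cluster logits

    def forward(self, z, tau=1.0):
        h = self.trunk(z)
        feats = F.normalize(self.feature_head(h), dim=-1)              # unit-norm features
        assign = F.gumbel_softmax(self.cluster_head(h), tau=tau, dim=-1)
        return feats, assign
```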
## 3 Experiments
We trained our model on StackExchange duplicate questions as title/title pairs, used from CQADupStack [11]. The pipeline from Sentence Transformers2 for SBERT and projection layer was used with default settings, 256 batch size and 50 epochs. The backbone _all-mpnet-base-v2_ and distilled model _all-MiniLM-L6-v2_[25] as pre-trained SBERT were frozen and the only trained part was the projection layer. We refer to the former as MPNET and the latter as MiniLM. Hyperparameter \(\lambda\) from equation (4) was set to 2000 for dimensions 50 and 100, and to 4000 for all other dimensions. Our model is evaluated on several downstream NLP tasks. First of all, we test our model on those benchmarks that can include clustering, namely, semantic retrieval tasks. We also show that computed low-dimension sentence representations behave reasonably well on other semantic benchmarks. The sizes of these dimensions are motivated by experimental observation of suitable word vector sizes from [21] and [15] in which a connection between word vectors and sentence embeddings is established. For downstream NLP tasks such as standard textual similarity, sentiment analysis and question-type classification tasks we use available datasets from SentEval evaluation toolkit [6] for sentence embeddings. See [6] and references therein for dataset descriptions.
Footnote 2: [https://github.com/UKPLab/sentence-transformers](https://github.com/UKPLab/sentence-transformers)
All our experiments were evaluated on AMD Ryzen Threadripper 3990X 64-Core Processor @ 4.3GHz, Nvidia GeForce RTX 3090 GPU, CUDA 11.6 with PyTorch implementation 1.9.1.
### Semantic Retrieval (SR) Task
The semantic retrieval (SR) task is to find all sentences in the retrieval corpus that are semantically similar to the query sentence. The basic framework is to
| model | accuracy | encoding time | clustering time | total time |
| --- | --- | --- | --- | --- |
| all-mpnet-base-v2 + MCR50 | 0.562 | 00:04:15 | - | 00:04:15 |
| all-mpnet-base-v2 + MCR100 | 0.545 | 00:04:15 | - | 00:04:15 |
| **all-mpnet-base-v2 + MCR200** | 0.645 | 00:04:16 | - | 00:04:16 |
| all-mpnet-base-v2 + MCR300 | 0.632 | 00:04:17 | - | 00:04:17 |
| all-mpnet-base-v2 + MCR50 + kmeans | 0.671 | 00:04:15 | 00:07:08 | 00:11:23 |
| all-mpnet-base-v2 + MCR100 + kmeans | 0.650 | 00:04:15 | 00:07:22 | 00:11:37 |
| all-mpnet-base-v2 + MCR200 + kmeans* | 0.635 | 00:04:16 | 00:09:08 | 00:13:24 |
| all-mpnet-base-v2 + MCR300 + kmeans | 0.631 | 00:04:17 | 00:11:09 | 00:15:26 |
| all-mpnet-base-v2 + kmeans (768)** | 0.648 | 00:04:17 | 00:59:57 | 01:04:12 |
| all-mpnet-base-v2 + MCR768 + kmeans | 0.630 | 00:04:17 | 00:18:15 | 00:22:32 |

Table 1: For Semantic Retrieval (SR) tasks, the all-mpnet-base-v2 SBERT model with MCR\({}^{2}\) projection to dimension 200 achieves the best accuracy with no additional time spent on clustering, unlike the same setup with \(k\)-means (denoted with *). Clustering backbone sentence embeddings from SBERT with \(k\)-means (denoted with **) took almost an hour.
compute sentence embeddings for the retrieval corpus and the query sentence. The goal is to find the points in the retrieval-corpus embedding space that are closest to the query. Sometimes, to speed up the process [13], one can cluster the sentences of the retrieval corpus into \(k\) clusters in embedding space and use the query sentence to find the closest cluster of sentences. The Quora Duplicate Question Dataset3 is used to evaluate our method. This dataset consists of 500k sentences with over 400k question pairs annotated as duplicates or not.
Footnote 3: [https://www.kaggle.com/datasets/sambit7/first-quora-dataset](https://www.kaggle.com/datasets/sambit7/first-quora-dataset)
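A sketch of this retrieval pipeline with the optional clustering speed-up (the projection layer is omitted for brevity; the tiny placeholder corpus, the number of clusters, and the top-\(n\) cutoff are illustrative only):

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("all-mpnet-base-v2")
corpus = ["How do I reset my password?",            # placeholder corpus; in practice the
          "What is the capital of France?",         # 500k Quora questions would go here
          "How can I recover my account password?",
          "Which city is the capital of France?"]
corpus_emb = model.encode(corpus, normalize_embeddings=True)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(corpus_emb)

def retrieve(query, top_n=2):
    q = model.encode([query], normalize_embeddings=True)[0]
    cluster = np.argmin(np.linalg.norm(km.cluster_centers_ - q, axis=1))
    idx = np.where(km.labels_ == cluster)[0]        # restrict search to the closest cluster
    scores = corpus_emb[idx] @ q                    # cosine similarity (unit-norm vectors)
    order = np.argsort(-scores)[:top_n]
    return [(corpus[idx[j]], float(scores[j])) for j in order]

print(retrieve("I forgot my password"))
```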
### Semantic Textual Similarity (STS) Task
One of the baseline benchmarks in natural language processing is the semantic textual similarity (STS) task, which assesses the semantic similarity between two sentences (i.e., text snippets). Our model is evaluated by computing the cosine similarity between sentence pair embeddings on standard STS tasks: STS 2012-2016 and the STS Benchmark available in SentEval. These datasets were labeled with scores between 0 and 5 indicating the semantic relatedness of sentence
| model | STSb | STS12 | STS13 | STS14 | STS15 | STS16 |
| --- | --- | --- | --- | --- | --- | --- |
| all-mpnet-base-v2 + MCR50 | 0.749 | 0.666 | 0.739 | 0.713 | 0.754 | 0.768 |
| all-mpnet-base-v2 + MCR100 | 0.788 | 0.696 | 0.782 | 0.753 | 0.791 | 0.793 |
| all-mpnet-base-v2 + MCR200 | 0.818 | 0.712 | 0.812 | 0.779 | 0.819 | 0.816 |
| all-mpnet-base-v2 + MCR300 | 0.821 | 0.718 | 0.817 | 0.783 | 0.827 | 0.823 |
| **all-mpnet-base-v2 (768)** | 0.836 | 0.722 | 0.821 | 0.790 | 0.838 | 0.831 |
| all-MiniLM-L6-v2 + MCR50 | 0.752 | 0.654 | 0.690 | 0.682 | 0.741 | 0.737 |
| all-MiniLM-L6-v2 + MCR100 | 0.778 | 0.685 | 0.742 | 0.721 | 0.780 | 0.777 |
| all-MiniLM-L6-v2 + MCR200 | 0.810 | 0.705 | 0.773 | 0.751 | 0.813 | 0.792 |
| all-MiniLM-L6-v2 + MCR300 | 0.813 | 0.710 | 0.780 | 0.759 | 0.826 | 0.800 |
| **all-MiniLM-L6-v2 (384)** | 0.824 | 0.711 | 0.790 | 0.772 | 0.838 | 0.812 |

Table 2: For Semantic Textual Similarity (STS) tasks the backbone model achieves the best results (bolded) for the Spearman rank correlation coefficient on multiple benchmarks, although we observe comparable results of our method compared to the backbone and the distilled model all-MiniLM-L6-v2.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline model & SST2 & SST5 & MR & CR & SUBJ & MPQA & TREC \\ \hline all-mpnet-base-v2 + MCR50 & 75.45 & 36.43 & 69.67 & 63.76 & 79.16 & 68.84 & 51.6 \\ all-mpnet-base-v2 + MCR100 & 82.54 & 39.19 & 75.85 & 63.76 & 81.86 & 68.77 & 60.0 \\ all-mpnet-base-v2 + MCR200 & 86.55 & 42.76 & 80.62 & 72.77 & 88.28 & 82.27 & 71.0 \\ all-mpnet-base-v2 + MCR300 & 87.59 & 44.66 & 82.33 & 79.71 & 90.73 & 85.76 & 79.8 \\
**all-mpnet-base-v2 (768)** & 88.74 & 49.00 & 85.05 & 86.84 & 93.97 & 89.32 & 94.0 \\ \hline all-MiniLM-L6-v2 + MCR50 & 65.95 & 31.76 & 61.61 & 63.76 & 79.19 & 68.77 & 41.0 \\ all-MiniLM-L6-v2 + MCR100 & 72.27 & 33.94 & 66.38 & 63.82 & 83.97 & 76.58 & 64.0 \\ all-MiniLM-L6-v2 + MCR200 & 77.54 & 37.10 & 70.06 & 69.17 & 86.87 & 81.83 & 69.2 \\ all-MiniLM-L6-v2 + MCR300 & 79.35 & 39.50 & 72.95 & 75.07 & 88.47 & 84.13 & 72.0 \\
**all-MiniLM-L6-v2 (384)** & 81.44 & 42.99 & 75.98 & 80.56 & 91.80 & 87.38 & 90.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: For Sentence Classification (SC) tasks backbone model achieves the best results (bolded) for accuracy on multiple benchmarks, although we observe comparable results of our method compared to the backbone and distilled model.
pairs. Evaluation on these datasets is conducted using Spearman rank correlation which measures the correlation quality between calculated and human labeled similarity. It is valued from -1 and 1 which will be high if the ranks of predicted similarities and human labels are similar.
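A minimal sketch of the STS evaluation protocol described above (the sentence pairs and gold scores are illustrative placeholders for an STS benchmark file):

```python
from sentence_transformers import SentenceTransformer
from scipy.stats import spearmanr
import numpy as np

# Illustrative sentence pairs with gold similarity scores in [0, 5].
pairs = [("A man is playing a guitar.", "A person plays an instrument."),
         ("A woman is slicing an onion.", "Someone is cutting a vegetable."),
         ("A dog runs in the park.", "The stock market fell today.")]
gold = [4.2, 3.8, 0.2]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb1 = model.encode([a for a, _ in pairs], normalize_embeddings=True)
emb2 = model.encode([b for _, b in pairs], normalize_embeddings=True)

# Cosine similarity of each pair (embeddings are unit-normalized).
pred = np.sum(emb1 * emb2, axis=1)

# Spearman rank correlation between predicted similarities and human labels.
rho, _ = spearmanr(pred, gold)
print(f"Spearman rank correlation: {rho:.3f}")
```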
### Sentence Classification (SC) Task
Sentiment classification tasks involve assigning a sentiment label to a snippet of text. This is formulated as a classification of the text into two or more sentiment classes, e.g., negative, positive, neutral, or something in-between. The SST, SUBJ, CR, and MR datasets are typical benchmarks for sentiment analysis. Another example of a sentence classification task is assigning a question type to a question, as in the TREC task. In the paraphrase detection problem (as in MRPC), one must classify whether one sentence is a paraphrase of the other. The MPQA dataset is an example of an opinion classification task. The performance metric for these benchmarks is accuracy. All of these datasets are available in the SentEval toolkit.
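SentEval's standard recipe for these tasks is to train a simple classifier (typically logistic regression) on top of frozen sentence embeddings and report accuracy; a minimal sketch of that idea, with a toy dataset standing in for the benchmarks above:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for a sentiment benchmark: sentences with binary labels.
texts = ["I loved this movie.", "Absolutely terrible film.",
         "A delightful experience.", "Boring and predictable."]
labels = [1, 0, 1, 0]

model = SentenceTransformer("all-mpnet-base-v2")
X = model.encode(texts)                      # frozen sentence embeddings

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5,
                                          random_state=0, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```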
Figure 2: Performance comparison on SR task
Figure 3: Relative error in STS and SC benchmarks
## 4 Results
This section evaluates our method both as a clustering algorithm and as a compression algorithm. In Table 1, we compare how our clustering competes with the \(k\)-means4 algorithm and report time performance (the time needed for encoding vectors, for clustering, and in total). In the second part, we test our compression on semantic relatedness tasks. The results are reported in Tables 1, 2 and 3. Model names in these tables are structured as follows: the SBERT pretrained model name; the abbreviation MCR, indicating that MCR\({}^{2}\) is used as a projection; the number that follows, which is the projection dimension; and, optionally, whether \(k\)-means is used. The number given in parentheses is the default embedding size.
Footnote 4: implemented in _scikit-learn_ Python package
**Results on SR tasks.** We evaluate the clustering capacity of our projection layer against \(k\)-means clustering in the retrieval space of sentence embeddings. The query sentence is assigned to a cluster of semantically related sentences, and we check whether the ground-truth duplicate belongs to that cluster, reported as an accuracy score. In all our experiments, the number of clusters was up to 128 (chosen empirically). Table 1 and Fig. 2 present accuracy scores and the overall computation time (i.e., encoding of sentence embeddings plus clustering) depending on the embedding size and the type of model (MCR\({}^{2}\) with implicit clustering, or \(k\)-means). Our method is comparable to the \(k\)-means algorithm down to a certain dimension. For dimensions below 200, \(k\)-means performed slightly better, mainly because we did not put much effort into finding suitable \(\lambda\) values (the suggested values of \(\lambda\) are from [16]). Our method computes clusters during inference, which is much faster than running \(k\)-means. It is also worth noting that our projection layer, used as a non-linear mapping in the original space (i.e., without any dimensionality reduction), retrofits the sentence embeddings and enables faster convergence of the \(k\)-means algorithm (shown in the last row of Table 1).
**Results on STS tasks.** Table 2 presents results for the baseline (MPNET) and distilled model (MiniLM) coupled with the projection layer (MCR) at various embedding sizes. A relative error of up to 13% in Spearman rank correlation is incurred when the sentence embedding dimension is as low as 6% of the original embedding size. We conclude that, due to the projection layer's ability to preserve cosine distances in the lower-dimensional space, the neighborhood of points is preserved, resulting in less performance degradation. This trend is visible on all STS benchmarks with both models. The relative error in the Spearman rank correlation coefficient with respect to the projection dimension is shown in the first row of Fig. 3 for both the baseline and the distilled model.
**Results on SC tasks.** As seen in Table 3, per-sentence classification problems like SST2 and MRPC suffer less performance degradation than per-token sentence classification problems like MPQA and per-sentence multi-class problems like TREC, respectively. This is because fine-grained semantics for such tasks cannot be preserved as well during projection. In the worst case, the performance degradation reaches 45% for the baseline model and 60% for the distilled model at 6% of the original embedding size. The relative error in accuracy with respect to the projection dimension is shown in the second row of Fig. 3 for both the baseline and the distilled model.
## 5 Conclusion
In this paper, we demonstrated how the MCR\({}^{2}\) technique can be used to obtain lower-dimensional sentence embeddings for fast semantic retrieval tasks, at up to 70% of the original size. We also argued that these embeddings are comparable with SBERT results on standard semantic NLP benchmarks. Due to the projection layer's ability to cluster data, we were able to cluster our sentences without any extra time cost and to further reduce the sentence representations to a reasonable dimension without significant loss of the important semantic features. We hope our approach gives new insights for possible applications in deploying AI models on smaller-scale computer systems.
|
2303.08853 | Finite Transforms with Applications to Bessel Differential Equations of
Order Higher than Two | A finite transformation method is introduced. This method is equivalent to
the $Z$ transform method to a certain extent but generalizes it. By applying
the presented method to the Bessel functions, it is possible to solve related
ordinary differential equations of order higher than two with given initial
conditions. | Gabriel López Garza | 2023-03-15T18:12:29Z | http://arxiv.org/abs/2303.08853v1 | # Finite Transforms with Applications to Bessel Differential Equations of Order Higher than Two
###### Abstract
A finite transformation method is introduced. This method is equivalent to the \(Z\) transform method to a certain extent but generalizes it. By applying the presented method to the Bessel functions, it is possible to solve related ordinary differential equations of order higher than two with given initial conditions.
keywords: Finite Transforms, Bessel Functions, Operational Calculus†
Footnote †: journal: Integral Transforms and Special Functions
## 1 Introduction
The best-known transformation defined by sequences, among the so-called "Finite Transforms", is probably the \(Z\) transform (for an extensive study see [13]). Given a sequence \(\{f_{n}\}\), the Z transform is defined by \(F(z)=\sum_{n=0}^{\infty}\frac{f_{n}}{z^{n}}\). The main difference between the \(Z\) transform and the transforms introduced in this paper is that, whereas a sequence is \(Z\)-transformable only if the series in the definition converges for at least one value of \(z\), for the transforms that we study convergence is not required. In section 4 it is shown how the zeta transform is a special case of the transforms studied in this paper. Other well-known examples of finite transforms are the Sturm-Liouville transforms (see for instance [4]). Basically, in order to solve some differential equations (ordinary or even partial differential equations), a sequence can be associated to certain suitable functions. As is known, the eigenfunctions \(\{\phi_{n}(t)\}\) of self-adjoint operators satisfying certain boundary conditions (i.e., the solutions of a given Sturm-Liouville problem) provide a generalized Fourier expansion for a function \(f\) (which satisfies certain conditions depending on
the problem under consideration), given by
\[f(t)=\sum_{n=0}^{\infty}a_{n}\phi_{n}(t),\quad a_{n}=\langle f,\phi_{n}\rangle,\]
where \(\langle\cdot,\cdot\rangle\) is the inner product defined for each Sturm-Liouville problem. So, in this example of a finite transform, the sequence \(\{a_{n}\}\) may be defined as the transform, say \(\mathcal{T}\), of a function \(f(t)\) by
\[\mathcal{T}[f(t)]\stackrel{{ def}}{{=}}\{a_{n}\},\,n\geq 0.\]
The transform so defined is useful for solving differential equations, but only to some extent: the method induced by this transform (for instance in [4]) does not provide a complete operational calculus in Mikusinski's sense as defined in [11]. This is because the convolution product induced by the inner product has non-zero divisors of zero (for instance, for any \(n\neq m\), \(\langle\phi_{n},\phi_{m}\rangle=0\), given that the eigenfunctions of Sturm-Liouville problems are orthogonal and, of course, different from zero). This limitation (a limitation from the heuristic standpoint) is nevertheless a rich source of examples for convolution algebras [8]. The main interest of this paper is to extend the finite transform method as much as possible, if not to a complete operational calculus in the Mikusinski sense, then at least to a method that allows solving many ordinary or partial differential equations involving Bessel and other operators in various practical problems.
Examples of finite transforms are the Legendre transform [3] and the Laguerre transform [10]. Still, there are many other ways of associating a sequence to a given function besides the Sturm-Liouville transform, for instance the Neumann series [14, Chapter XVI]. In our approach, a transform is constructed through a differential operator which is, in some sense, an extension of the Maclaurin series for Bessel functions, as will be explained.
The transform method studied in this paper consists in associating to a certain set of suitable functions a sequence \(\{a_{n}\}\), \(a_{n}\in\mathbb{C}\). Since the set of sequences is not an algebraic field, the study is restricted to sequences that are invertible for the Cauchy product. The fact that not every sequence different from zero is invertible is the main reason why our study cannot be extended to a complete operational calculus in Mikusinski's sense. Our study therefore shares more similarities with the traditional Laplace transform method than with Mikusinski's operational calculus as studied in [1]. In fact, to certain differential equations with given initial conditions a polynomial may be associated via the finite transformation, so that an equation for the transform may be solved in terms of a partial fraction decomposition. The partial fractions may then be matched with known transforms or with Cauchy products of known transforms. Finally, the inverse transform is taken, which solves the problem, as usual.
The article is organized as follows: in section 2 the operations used on the set of transforms are described and the finite transform used is defined (subsection 2.1). In section 3 some higher-order Bessel equations are solved and the equivalence to the Laplace transform is shown; it is worth noticing that the transforms are defined by differential rather than integral operators, as in the case of the classical Laplace transform. In the final section, the striking similarities that have long been noted [5, p. 136] between the functions ber and bei and the functions \(\cos\) and \(\sin\), respectively, are fully explained within the context of the transform method studied in this article.
## 2 Mathematical setting
Given two sequences \(\{a_{n}\},\{b_{n}\}\) we define the product (the so-called Cauchy product) by
\[\{a_{n}\}\{b_{n}\}=\left\{\sum_{\tau=0}^{n}a_{\tau}b_{n-\tau}\right\}=\{a_{0}b_{0},\,a_{0}b_{1}+a_{1}b_{0},\dots\}.\]
In [2, p. 721] it is shown that the Cauchy product of two sequences is zero if and only if one of the sequences is the zero sequence \(a_{n}=0\), \(n\geq 0\), so that a quotient field of sequences can be constructed. Nevertheless, not every sequence different from the zero sequence is invertible: in order to be invertible, the first term of a given sequence is required to be different from zero. Indeed, given a sequence \(\{a_{n}\}\), \(a_{n}\in\mathbb{C}\), \(a_{0}\neq 0\), the multiplicative inverse \(\{b_{n}\}\) with respect to the Cauchy product is easily calculated recursively, since
\(\{a_{n}\}\{b_{n}\}=\{1,0,0,\dots\}\) implies
\[b_{0}=\frac{1}{a_{0}},\qquad b_{1}=-\frac{a_{1}}{a_{0}^{2}},\qquad b_{2}=\frac{a_{1}^{2}}{a_{0}^{3}}-\frac{a_{2}}{a_{0}^{2}},\qquad\dots,\qquad b_{n}=\frac{-1}{a_{0}}\left(a_{1}b_{n-1}+\dots+a_{n}b_{0}\right).\]
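The recursion above is immediate to implement; a minimal Python sketch with truncated sequences, which also confirms that the inverse of \(\{1,1,1,\dots\}\) is \(\{1,-1,0,0,\dots\}=1-s\) (formula (3) below):

```python
def cauchy_product(a, b):
    """Cauchy product of two truncated sequences a and b (lists of numbers)."""
    n = min(len(a), len(b))
    return [sum(a[t] * b[k - t] for t in range(k + 1)) for k in range(n)]

def cauchy_inverse(a, n_terms):
    """First n_terms of the multiplicative inverse of a, via the recursion above (requires a[0] != 0)."""
    assert a[0] != 0, "only sequences with a nonzero first term are invertible"
    b = [1 / a[0]]
    for n in range(1, n_terms):
        s = sum(a[t] * b[n - t] for t in range(1, n + 1) if t < len(a))
        b.append(-s / a[0])
    return b

ones = [1.0] * 8                                       # the sequence {1, 1, 1, ...}, truncated
print(cauchy_inverse(ones, 8))                         # [1.0, -1.0, 0.0, 0.0, ...]  i.e. 1 - s
print(cauchy_product(ones, cauchy_inverse(ones, 8)))   # [1.0, 0.0, 0.0, ...]
```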
The fact that not every sequence different from zero has a multiplicative inverse restricts the construction of an operational calculus in Mikusinski's sense [11], but, as we will see, from the invertible sequences used in this paper it is possible to build many transforms and to operate with them as in many other transform methods used to solve differential equations. In view of these considerations we define the set of suitable sequences \({\cal A}=\{\{a_{n}\}:a_{0}\neq 0\}\).
With the Cauchy product, it is possible to construct many operators \(T:{\cal A}\to{\cal A}\). An important example is the _right shift_, the operator defined by the Cauchy product with the sequence \(s=\{0,1,0,0,\dots\}\): if \(\{a_{n}\}\), \(n\geq 0\), is any sequence with \(a_{n}\in\mathbb{C}\), we have
\[s\{a_{0},a_{1},a_{2},\dots\}=\{0,1,0,0,\dots\}\{a_{0},a_{1},a_{2},\dots\}=\{0, a_{0},a_{1},a_{2},\dots\}.\]
So the right shift operator \(S:{\cal A}\to{\cal A}\) is defined by \(S[\{a_{n}\}]\stackrel{{ def}}{{=}}s\{a_{n}\}\). The notation \(S[\{a_{n}\}]\) is not in use in most papers; calling the operator \(s\) instead of \(S\) is the standard procedure (see for instance [2]), and we will follow this practice throughout the paper to avoid conflict with established usage.
It follows that \(s^{2}=ss=\{0,0,1,0,0,\dots\}\) and, in general
\[s^{n}=\{0,0,\dots,0,1,0,\dots\},\]
is the sequence with zeros everywhere except at place \(n+1\), where there is a one. With the powers of \(s\) the following notation is standard
\[\{a_{0},a_{1},a_{2},\dots\}\stackrel{{ def}}{{=}}a_{0}s^{0}+a_{1} s+a_{2}s^{2}+\cdots, \tag{2}\]
where \(s^{0}=\{1,0,0,\dots\}\); subsequently we write simply \(a_{0}s^{0}=a_{0}\), and any constant \(c\) in our calculus represents the sequence \(c\stackrel{{ def}}{{=}}\{c,0,0,\dots\}\). We recall that the relation (2) is purely formal and does not involve the concept of convergence at all [2, p. 723].
Notice that \(s=\{0,1,0,\dots\}\) is not invertible with respect to the Cauchy product, but the sequence \(\{1,0,0,\dots\}+s=\{1,1,0,0,\dots\}\), actually is, and the following formulas are derived easily [2]
\[\{1\}=\frac{1}{1-s},\tag{3}\]
\[\{r^{n}\}=\frac{1}{1-rs},\tag{4}\]
\[\left\{\cos\frac{\pi}{2}n\right\}=\{1,0,-1,0,\dots\}=\frac{1}{1+s^{2}},\tag{5}\]
\[\left\{\sin\frac{\pi}{2}n\right\}=\{0,1,0,-1,\dots\}=\frac{s}{1+s^{2}}.\tag{6}\]
That is, the sequence \(1-s\) is the multiplicative inverse of the sequence \(\{1\}=\{1,1,1,\dots\}\) for the Cauchy product or, more properly speaking, the sequence \(1-s\) is a representative of the class of inverse sequences of \(\{1\}\); the symbols \(1/G(s)\) in the identities (4) to (6) have a similar meaning.
The _left shift_\(l\) of a sequence \(\{y_{n}\}\) is defined by
\[l\{y_{n}\}=l\{y_{0},y_{1},y_{2},\dots\}\stackrel{{ def}}{{=}}\{y_{1},y_{2},y_{3},\dots\}=\{y_{n+1}\},\qquad l^{m}\{y_{n}\}\stackrel{{ def}}{{=}}\{y_{n+m}\}. \tag{7}\]
The following formula [2, p. 724] will be relevant for solving differential equations related to Bessel and other operators in this paper:
\[s^{m}(l^{m}\{y_{n}\})=s^{m}\{y_{n+m}\}=\{y_{n}\}-y_{0}-sy_{1}-\dots-s^{m-1}y_{ m-1}. \tag{8}\]
Notice that formula (8) is similar to the Laplace transform formula for the derivative of order \(m\) of a given function.
### Finite Transform definition
Given a series \(g(t)=a_{0}f_{0}(t)+a_{1}f_{1}(t)+a_{2}f_{2}(t)+\cdots\), where \(a_{n}\in\mathbb{C}\) and \(f_{n}(t)\) are given functions, we define the transform \(\mathcal{T}[g(t)]=G(s)\) by the
formula
\[\mathcal{T}[g(t)]\stackrel{{ def}}{{=}}\{a_{n}\}=\{a_{0},a_{1},a_{2},\dots\}\tag{9}\]
\[=a_{0}+a_{1}s+a_{2}s^{2}+\dots=G(s),\tag{10}\]
if and only if there exists an operator \(L\) such that \(L[f_{n}(t)]=f_{n-1}(t)\), \(n\geq 1\). The operator \(L\) corresponds to the left shift \(s\) and is called the _concrete realization_ of the shift for the sequence \(\{f_{n}(t)\}\), \(n\geq 0\).
The \(a_{n}\) will be given by different instances of differential operators applied to given functions, as seen in the following examples. Notice that (10) is a purely formal expression in powers of \(s\); this notation has no bearing on the question of convergence.
## 3 Examples of applications
1. _Transform induced by Bessel functions_. For Bessel functions of order \(\nu\geq 0\) we consider the monomials of the form \(f_{n,\nu}(t)=\frac{(t/2)^{2n+\nu}}{\Gamma(\nu+n+1)n!}\). If \(L_{\nu}\stackrel{{ def}}{{=}}\frac{1}{t}DtD-\frac{\nu^{2}}{t^{2}}\) is a differential operator, where \(D\) denotes the derivative with respect to \(t\), then a direct calculation shows that \[L_{\nu}f_{n,\nu}(t)=f_{n-1,\nu}(t),\mbox{ for }n\neq 0,\tag{11}\] \[L_{\nu}f_{0,\nu}(t)=0,\mbox{ for }n=0.\tag{12}\] In this way \(L_{\nu}\) corresponds to the left shift \(s\) in this concrete realization. In fact, if \(f(t)\) is a function for which \[f(t)=a_{0}f_{0,\nu}(t)+a_{1}f_{1,\nu}(t)+a_{2}f_{2,\nu}(t)+\dots,\quad a_{n}\in\mathbb{C},\] then we define the Bessel transform \(\mathcal{T}_{B_{\nu}}\) by \[\mathcal{T}_{B_{\nu}}[f(t)]=\{a_{0},a_{1},a_{2},\dots\}\] and, since \[L_{\nu}f(t)=a_{1}f_{0,\nu}(t)+a_{2}f_{1,\nu}(t)+\dots,\] by (11) and (12) we have \[\mathcal{T}_{B_{\nu}}[L_{\nu}f(t)]=\{a_{1},a_{2},a_{3},\dots\},\] so that applying \(L_{\nu}\) to a function \(f(t)\) corresponds to the left shift applied to \(\mathcal{T}[f(t)]\), as we claimed.
Examples of transforms for fixed \(\nu\geq 0\) are
\[{\cal T}_{B_{\nu}}[J_{\nu}(t)]=\{1,-1,1,-1,\dots\},\tag{13}\]
\[{\cal T}_{B_{\nu}}[I_{\nu}(t)]=\{1,1,1,\dots\},\tag{14}\]
\[{\cal T}_{B_{\nu}}[{\rm Ber}_{\nu}(t)]=\left\{\cos\frac{(3\nu+2n)\pi}{4}\right\},\tag{15}\]
\[{\cal T}_{B_{\nu}}[{\rm Bei}_{\nu}(t)]=\left\{\sin\frac{(3\nu+2n)\pi}{4}\right\}.\tag{16}\]
where \(J_{\nu}(t)\) are the well-known Bessel functions of order \(\nu\geq 0\) and \(I_{\nu}(t)\) are the modified Bessel functions of the first kind. Particularly interesting cases of (15) and (16) occur when \(\nu=0\), for which
\[{\cal T}_{B_{0}}[{\rm Ber}(t)]=\{1,0,-1,0,\dots\},\tag{17}\]
\[{\cal T}_{B_{0}}[{\rm Bei}(t)]=\{0,-1,0,1,\dots\}.\tag{18}\]
Formulas (13) to (18) are well-known (see for instance [12] p. 140 for formulas (15) and (16)). For general functions, the Transform \({\cal T}_{B_{\nu}}\) can be calculated by iterating the operator \(L_{\nu}\). Let \(L_{\nu}^{m}=L_{\nu}(L_{\nu}^{m-1}),m\geq 2\) be the order \(m\) operator, then given a function \(f(t)\) for which the limit
\[\lim_{t\to 0}(L^{m}f(t))=a_{m} \tag{19}\]
does exist we will denote \(\lim_{t\to 0}(L_{\nu}^{m}f(t))=L_{\nu}^{m}f(0)=a_{m}\), so for the class of functions for which that limit exists for any \(m\in\mathbb{N}\) we define
\[{\cal T}_{B_{\nu}}[f(t)]=\{f(0),L_{\nu}f(0),L_{\nu}^{2}f(0),\dots\}.\]
Note the similarity of the series \(f(t)=f(0)+a_{1}f_{1,\nu}(t)+a_{2}f_{2,\nu}(t)+\cdots\) with the Maclaurin series. The coincidence is not developed further; for our purposes, the existence of the transform \({\cal T}_{B_{\nu}}\) requires only the existence of the limit (19). There are many functions for which the transform can be computed by applying the \(L_{\nu}\) operator repeatedly, for example \(\sqrt{\frac{2}{\pi t}}\sinh t\), \(\sqrt{\frac{2}{\pi t}}\cosh t\), \(\sqrt{\frac{2}{\pi t}}\left(\sin t+\frac{\cos t}{t}\right)\), and many others. The reader can verify directly by applying the operator that the transform \({\cal T}_{B_{\nu}}\) exists, but a direct proof follows easily from the formulas for Bessel functions of order equal to half an odd integer [12, p. 138 and p. 140]. For example, with formula 24.58 in [12] we have \(I_{1/2}(t)=\sqrt{\frac{2}{\pi t}}\sinh t\), and therefore
\[{\cal T}_{B_{1/2}}\left[\sqrt{\frac{2}{\pi t}}\sinh t\right]=\{1,1,1,\dots\}.\]
2. _Transforms method for Bessel operators._ After separation of variables of the Plum equation \(\Delta^{2}u-\gamma\Delta u-\frac{4\gamma}{r^{2}}u=\Lambda u\), where \(\Delta\) is the laplacian in polar coordinates [6], the resulting fourth order equation \((ty^{\prime\prime})^{\prime\prime}-((9t^{-1}+8\mu^{-1}t)y^{\prime})^{\prime}= \Lambda ty\) may be solved by using the transforms of example 1. In fact, last equation may be written in the form equivalent to [1, eq. 45] \[\left[L_{2}^{2}-\frac{8}{\mu}L_{2}-\left(\lambda^{2}+\frac{8}{\mu}\right) \lambda^{2}\right]y(t)=0.\] (20) Applying \({\cal T}_{B_{2}}\) to equation (20) we obtain \[\{Y_{n+2}\}-\frac{8}{\mu}\{Y_{n+1}\}-\left(\lambda^{2}+\frac{8}{\mu}\right) \lambda^{2}\{Y_{n}\}=0,\] (21) where \(\{Y_{n}\}={\cal T}_{B_{2}}[y(t)]\), \({\cal T}_{B_{2}}[L_{2}^{2}y(t)]\), and \({\cal T}_{B_{2}}[L_{2}y(t)]\) are obtained after using formula (7). Multiplying (21) by \(s^{2}\) and applying formula (8) we have \[\{Y_{n}\}-Y_{0}-sY_{1}-s\frac{8}{\mu}(\{Y_{n}\}-Y_{0})-s^{2}\left(\lambda^{2 }+\frac{8}{\mu}\right)\lambda^{2}\{Y_{n}\}=0,\] and hence \[\{Y_{n}\}=\frac{Y_{0}+s\left(Y_{1}-\frac{8}{\mu}Y_{0}\right)}{(1+\lambda^{2} s)\left(1-\left(\frac{8}{\mu}+\lambda^{2}\right)s\right)}\] (22) and after partial fraction decomposition in (22) \[\{Y_{n}\}=\frac{1}{2(4+\lambda^{2}\mu)}\left(\frac{8Y_{0}-\mu Y_{1}+Y_{0} \lambda^{2}\mu}{1+\lambda^{2}s}+\frac{Y_{1}\mu+Y_{0}\lambda^{2}\mu}{1-(8/\mu+ \lambda^{2})s}\right).\] (23) By using a general form of formulas (13) and (14), i. e. \[{\cal T}_{B_{\nu}}[J_{\nu}(\sqrt{\lambda}t)] = \lambda^{\nu/2}\{\lambda^{k}\}=\frac{\lambda^{\nu/2}}{1-\lambda s}\] (24) \[{\cal T}_{B_{\nu}}[I_{\nu}(\sqrt{\lambda}t)] = \lambda^{\nu/2}\{(-\lambda)^{k}\}=\frac{\lambda^{\nu/2}}{1+ \lambda s}\] (25)
we have \[{\cal T}_{B_{2}}[J_{2}(\lambda t)] = \lambda^{2}\{1,-1,1,-1,\dots\}=\frac{1}{1-(-\lambda^{2})s}\] \[{\cal T}_{B_{2}}\left[I_{2}\left(\sqrt{\frac{8}{\mu}+\lambda^{2}} \,t\right)\right] = \left(\frac{8}{\mu}+\lambda^{2}\right)\{1,1,1,1,\dots\}=\frac{1}{ 1-\left(\frac{8}{\mu}+\lambda^{2}\right)s}.\] With the last two equations, it is possible to take inverse transform in equation (23) to solve the initial value problem (20) with initial conditions \(y(0)=Y_{0}\), and \(\lim_{t\to 0}L_{2}y(t)=Y_{1}\). The reader may notice that with the identities \(J_{2}(u)=-J_{0}(u)+\frac{2}{u}J_{1}(u)\) and \(I_{2}(u)=I_{0}(u)-\frac{2}{u}I_{1}(u)\) we obtain the same solution as that given by formulas in [7] and in [1].
3. _Transform induced by Maclaurin series._ Consider the monomials of the form \(f_{n}(t)=t^{n}/n!\), so that, since \(\frac{d}{dt}t^{n}/n!=t^{n-1}/(n-1)!\), the left shift in this concrete realization is the derivative with respect to \(t\). Given a function \(f(t)\) for which \(\lim_{t\to 0}f^{(n)}(t)=f^{(n)}(0),\,n\geq 0\) does exist, the discrete transform of \(f\), say \({\cal T}_{M}[f(t)]=F(s)\) is, by definition, the sequence \[{\cal T}_{M}[f(t)] \stackrel{{ def}}{{=}} \{f(0),f^{\prime}(0),f^{(2)}(0),\dots,f^{(n)}(0),\dots\}\] \[= f(0)+f^{\prime}(0)s+f^{(2)}(0)s^{2}+\cdots,\] and, formally, \[{\cal T}_{M}[f^{\prime}(t)]=\{f^{\prime}(0),f^{(2)}(0),f^{(3)}(0),\dots\}=f ^{\prime}(0)+f^{(2)}(0)s+f^{(3)}(0)s^{2}+\cdots.\] In general if \({\cal T}_{M}[y(t)]=\{a_{0},a_{1},a_{2},\dots\}=\{a_{n}\},n\geq 0\), then \({\cal T}_{M}[y^{\prime}(t)]=\{a_{1},a_{2},\dots\}=\{a_{n+1}\},n\geq 0\). Consequently \({\cal T}_{M}[y^{(m)}(t)]=\{a_{m},a_{m+1},\dots\}=\{a_{n+m}\},n\geq 0\), so that the derivative of order \(m\) corresponds to the right shift of \(m\) places. Observe that, for instance, the sequence \(\{1,1,1,\dots\}\) corresponds to the exponential function, and by formula (3), \({\cal T}_{M}[\,e^{t}\,]=\{1,1,1,\dots\}=\frac{1}{1-s}\). Also, it is easy to see, according to formula (5), that \({\cal T}_{M}[\,\cos t\,]=\frac{1}{1+s^{2}}\). And, moreover, by formula (6), \({\cal T}_{M}[\,\sin t\,]=\frac{s}{1+s^{2}}\). With the formula (8) now it is possible to solve non-homogeneous differential equations with constant coefficients, as an example we solve \[y^{\prime\prime}-3y^{\prime}+2y=e^{3t};\] (26) \[y(0)=1,\;y^{\prime}(0)=0.\] (27)
We have by (7) that if \({\cal T}_{M}[y(t)]=\{y_{n}\},n\geq 0\) then in this realization, necessarily the derivative satisfies \[{\cal T}_{M}[y^{(m)}(t)]=l^{m}\{y_{n}\}=\{y_{n+m}\}.\] (28) Taking transforms in (26) and setting \({\cal T}_{M}[y(t)]=\{y_{n}\}=Y(s)\), we have by formula (28) \[\{y_{n+2}\}-3\{y_{n+1}\}+2\{y_{n}\}=\frac{1}{1-3s}.\] (29) Multiplying (29) by \(s^{2}\), taking into account the initial conditions (27) so that \({\cal T}_{M}[y(0)]=y_{0}=1\), \({\cal T}_{M}[y^{\prime}(0)]=y_{1}=0\), hence by formula (8), we obtain after simplification \[\{y_{n}\}(1-3s+2s^{2}) = \frac{s^{2}}{1-3s}+1-3s\] (30) \[\{y_{n}\} = \frac{1-6s+10s^{2}}{(1-3s)(s-1)(2s-1)}\] (31) \[Y(s) = \frac{\frac{1}{2}}{1-3s}+\frac{\frac{5}{2}}{1-s}-\frac{2}{1-2s}\] (32) \[y(t) = {\cal T}_{M}^{-1}[Y(s)]=\frac{1}{2}e^{3t}+\frac{5}{2}e^{t}-2e^{2t}.\] (33) Clearly (32) is obtained after partial fraction decomposition of (31), and (33) is obtained from (4), since in this realization \({\cal T}_{M}[e^{rt}]=\{r^{n}\}=\frac{1}{1-rs}\). Of course, as usual in transform methods, if \({\cal T}_{M}[y(t)]=Y(s)\) then we define \(y(t)\stackrel{{ def}}{{=}}{\cal T}_{M}^{-1}[Y(s)]\).
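The identities used in these examples can be checked symbolically; a minimal SymPy sketch that verifies formula (11) of example 1 for a fixed order and a few values of \(n\), and that the function (33) obtained in example 3 indeed solves the initial value problem (26)-(27):

```python
import sympy as sp

t = sp.symbols("t", positive=True)

def f(n, nu):
    # the monomials f_{n,nu}(t) = (t/2)^{2n+nu} / (Gamma(nu+n+1) n!)
    return (t / 2) ** (2 * n + nu) / (sp.gamma(nu + n + 1) * sp.factorial(n))

def L(expr, nu):
    # the Bessel operator L_nu = (1/t) D t D - nu^2 / t^2
    return sp.diff(t * sp.diff(expr, t), t) / t - nu**2 / t**2 * expr

nu = sp.Rational(3, 2)   # any fixed order works; 3/2 keeps the gamma values explicit
for n in range(1, 4):
    assert sp.simplify(L(f(n, nu), nu) - f(n - 1, nu)) == 0   # formula (11)

# the solution (33) of the initial value problem (26)-(27): y'' - 3y' + 2y = e^{3t}, y(0)=1, y'(0)=0
y = sp.Rational(1, 2) * sp.exp(3 * t) + sp.Rational(5, 2) * sp.exp(t) - 2 * sp.exp(2 * t)
assert sp.simplify(sp.diff(y, t, 2) - 3 * sp.diff(y, t) + 2 * y - sp.exp(3 * t)) == 0
assert y.subs(t, 0) == 1 and sp.diff(y, t).subs(t, 0) == 0
print("operator identity and example solution verified")
```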
## 4 The \(Z\) transform case
A correspondence between the \(Z\) transform and the Maclaurin transform studied in this article is established now. For a function \(G\in{\cal C}^{\infty}\) a sequence \(\{g_{n}\}\) may be defined by the formula
\[g_{n}\stackrel{{ def}}{{=}}\lim_{x\to 0}\frac{1}{n!}\frac{d^{n}}{ dx^{n}}G\left(\frac{1}{x}\right) \tag{34}\]
if and only if the limit does exist. With the sequence \(\{g_{n}\}\), the zeta transform of the sequence is defined as
\[{\cal T}_{Z}[\{g_{n}\}]=F(z)=g_{0}+\frac{g_{1}}{z}+\frac{g_{2}}{z^{2}}+\cdots= \sum_{n=0}^{\infty}\frac{g_{n}}{z^{n}}, \tag{35}\]
if the series (35) converges for at least one \(z\in\mathbb{C}\). Properties of the zeta transform are well known (see for instance [13]). Notice that the correspondence
\[z^{-n}\leftrightarrow s^{n} \tag{36}\]
establishes a one-to-one correspondence, hence an equivalence between the zeta transform and the Maclaurin transform. The principal difference between these transforms is that the left shift \(s\) is associated with a differential operator in all the other transforms studied in this paper (in particular, for the Maclaurin transform \(\mathcal{T}_{M}\), \(s\) corresponds to the derivative), whereas by differentiating \(z^{-n}\) it is not possible to recover the \(g_{n}\) from a given Laurent series \(\sum_{n=0}^{\infty}\frac{g_{n}}{z^{n}}\) by evaluating at \(z=0\); this is, however, possible by complex integration.
Another difference is that, as already mentioned, the convergence of a given series is not decisive for the existence of the Maclaurin transform. For instance, the sequence \(\{1,1,1,\dots\}\) has the \(Z\) transform
\[\mathcal{T}_{Z}[\{1,1,\dots\}]=F(z)=1+\frac{1}{z}+\frac{1}{z^{2}}+\dots= \frac{z}{z-1} \tag{37}\]
which exists only if \(|z|>1\), but
\[\mathcal{T}_{M}[\{1,1,\dots\}]=\frac{1}{1-s} \tag{38}\]
is well defined. So, while in (37) the \(Z\) transform \(F(z)\) exists only for \(|z|>1\), formula (38) indicates that the multiplicative inverse with respect to the Cauchy product of the sequence \(\{1,1,1,\dots\}\) is the sequence \(\{1,-1,0,0,\dots\}=1-s\), as the reader may easily verify.
**Example [Equivalence with \(Z\) transform].** As an illustration of the equivalence between the \(Z\) transform and the Maclaurin transform we solve an initial value problem:
\[y_{k+1}-3y_{k}=4,\qquad y_{0}=1.\tag{39}\]
which is already solved using \(Z\) transforms in [9, Example 3.42]. By the properties of the \(Z\) transform [13, Chapter 3, Section 3.7], the equation and initial condition in (39) are transformed into
\[Y(z)={\cal T}_{Z}[\{y_{n}\}]=\frac{-2z}{z-1}+\frac{3z}{z-3}\tag{40}\]
\[=-2\sum_{n=0}^{\infty}\frac{1}{z^{n}}+3\sum_{n=0}^{\infty}\frac{3^{n}}{z^{n}},\tag{41}\]
where (41) is obtained by expanding (40) in a Laurent series. Considering the correspondence (36) and applying it to (41), it is possible to find the Maclaurin transform of (39), given by
\[Y(s)={\cal T}_{M}[\{y_{n}\}]=\frac{-2}{1-s}+\frac{3}{1-3s},\]
\[\{y_{k}\}={\cal T}_{M}^{-1}[Y(s)]=-2\{1,1,\dots\}+3\{1,3,3^{2},\dots\}.\]
So the difference problem (39) has the solution \(y_{k}=-2+3^{k+1}\), as in [9].
Of course, problem (39) can be solved directly with the Maclaurin transform, as shown below; the previous argument was made only to emphasize the correspondence (36). To solve (39) directly with the Maclaurin transform \(Y(s)={\cal T}_{M}[\{y_{n}\}]\), we have, by the equation in problem (39),
\[\{y_{k+1}\}-3\{y_{k}\}=4\{1,1,\dots\},\]
multiplying the last equation by \(s\), applying formula (8), and taking \(y_{0}=1\), which corresponds to the initial value in problem (39), we obtain
\[\{y_{k}\}-1-3s\{y_{k}\}=4s\{1,1,\dots\},\]
\[(1-3s)Y(s)=\frac{4s}{1-s}+1,\]
\[Y(s)=\frac{4s}{(1-s)(1-3s)}+\frac{1}{1-3s},\]
consequently, after partial fraction decomposition
\[Y(s)=\frac{-2}{1-s}+\frac{2}{1-3s}+\frac{1}{1-3s}=\frac{-2}{1-s}+\frac{3}{1-3s}.\]
So by taking inverse transform and using formula (4)
\[\{y_{k}\}=-2\{1,1,\dots\}+3\{3^{k}\},k>0,\]
so, \(y_{k}=-2+3^{k+1},k>0\) and \(y_{0}=1\) which coincides with the \(Z\) transform solution given before.
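A quick numerical check of this closed form against the recurrence (39):

```python
y = 1                                  # y_0
for k in range(10):
    assert y == -2 + 3 ** (k + 1)      # closed form, which also gives y_0 = 1
    y = 3 * y + 4                      # recurrence y_{k+1} = 3 y_k + 4
print("closed-form solution matches the recurrence")
```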
## 5 Conclusions
The striking similarity between the ber and bei functions and the \(\cos\) and \(\sin\) functions, respectively, has been noticed for a long time (see for instance [5, p. 136]) but, to the best of my knowledge, it has not been completely understood. In the approach of this article, the Ber and Bei functions and the \(\cos\) and \(\sin\) functions are in correspondence with the same transform (or sequence): Ber and Bei for the \(L_{\nu}\) operator, and \(\cos\) and \(\sin\) for the \(d/dt\) operator, respectively. Table (1) shows the exact correspondence between these functions.
So, for instance, Table (1) shows that the sequence \(\{1,1,\dots\}\) is in correspondence with two different functions, the Bessel function \(I_{\nu}\) and the exponential function. Thus they have the same transform, even though it is obtained with different transform methods; of course, the operators associated with these functions are \(L_{\nu}\) and \(D\), respectively. So the coincidence between the two different methods is fully explained within the context of the transform method studied in this article.
|
2302.07849 | Zero-Shot Anomaly Detection via Batch Normalization | Anomaly detection (AD) plays a crucial role in many safety-critical
application domains. The challenge of adapting an anomaly detector to drift in
the normal data distribution, especially when no training data is available for
the "new normal," has led to the development of zero-shot AD techniques. In
this paper, we propose a simple yet effective method called Adaptive Centered
Representations (ACR) for zero-shot batch-level AD. Our approach trains
off-the-shelf deep anomaly detectors (such as deep SVDD) to adapt to a set of
inter-related training data distributions in combination with batch
normalization, enabling automatic zero-shot generalization for unseen AD tasks.
This simple recipe, batch normalization plus meta-training, is a highly
effective and versatile tool. Our theoretical results guarantee the zero-shot
generalization for unseen AD tasks; our empirical results demonstrate the first
zero-shot AD results for tabular data and outperform existing methods in
zero-shot anomaly detection and segmentation on image data from specialized
domains. Code is at https://github.com/aodongli/zero-shot-ad-via-batch-norm | Aodong Li, Chen Qiu, Marius Kloft, Padhraic Smyth, Maja Rudolph, Stephan Mandt | 2023-02-15T18:34:15Z | http://arxiv.org/abs/2302.07849v4 | # Zero-Shot Anomaly Detection without Foundation Models
###### Abstract
Anomaly detection (AD) tries to identify data instances that deviate from the norm in a given data set. Since data distributions are subject to distribution shifts, our concept of "normality" may also drift, raising the need for zero-shot adaptation approaches for anomaly detection. However, the fact that current zero-shot AD methods rely on foundation models that are restricted in their domain (natural language and natural images), are costly, and oftentimes proprietary, asks for alternative approaches. In this paper, we propose a simple and highly effective zero-shot AD approach compatible with a variety of established AD methods. Our solution relies on training an off-the-shelf anomaly detector (such as a deep SVDD) on a set of inter-related data distributions in combination with batch normalization. This simple recipe-batch normalization plus meta-training-is a highly effective and versatile tool. Our results demonstrate the first zero-shot anomaly detection results for tabular data and SOTA zero-shot AD results for image data from specialized domains.
## 1 Introduction
Anomaly detection (AD)--the task of identifying data instances deviating from the norm (Ruff et al., 2021)--is significant in many application domains, ranging from fake review identification and bot detection in social networks to tumor recognition and industrial fault detection. AD is especially important in safety-critical applications: failing to recognize anomalies in a chemical plant or a self-driving car may put lives at stake.
The notion of an "anomaly" inherently depends on our notion of "normal" data. However, the notion of normality depends on the context, and in particular, what we consider normal may drift over time. For example, when monitoring network traffic for intrusions, normal data may differ from user to user and day to day. Medical imaging data depends on the patient and the laboratory equipment employed.
Adapting an anomaly detector to drift in the normal data distribution is the task in _zero-shot AD_(Liznerski et al., 2022; Esmaeilpour et al., 2022; Schwartz et al., 2022). To date, zero-shot AD has primarily been dealt with by using _foundation models_--large neural networks trained on massive unlabeled data at scale by self-supervised learning (Radford et al., 2021; He et al., 2022).
Foundation models have proven impactful in many areas, especially in vision and NLP (Radford et al., 2021; Brown et al., 2020; Yu et al., 2022). However, many AD applications involve data from specialized domains, such as data from industrial fault detection, network intrusion detection, bot detection, healthcare, medical imaging, and other applications. Besides text and images, AD commonly involves time series and tabular data for which no foundation models are currently available. Furthermore, foundation models have a large carbon footprint and are currently owned by a few major companies, raising questions about their free availability to the broader public.
This paper challenges the common assumption that one needs foundation models for zero-shot AD. The main contribution of this paper is Adaptive Centered Representations (ACR), a new lightweight model for zero-shot AD. ACR is theoretically grounded, simple, domain-independent, and easy to implement and use. It can be employed for zero-shot AD using data from any domain, whether it is time series or tabular data, DNA sequences, or graphs.
ACR relies on a simple idea, namely, _training an anomaly detector on a meta-set of related data distributions using batch normalization layers_. In this paper, we will show that this simple modification to the existing training paradigm will allow the model to automatically adapt to data from new but related distributions, i.e., do zero-shot learning. This approach applies to a variety of backbone models commonly used in deep AD (Ruff et al., 2018; Qiu et al., 2021).
We exemplify our approach using the DSVDD model for
AD. DSVDD is trained to map high-dimensional data, such as images, to a single point (called "the center") in a lower-dimensional feature space. At test time, DSVDD scores anomalies based on their \(\ell_{2}\) distance to the center, i.e., distant points are considered anomalies.
Fig. 1 illustrates our training and testing setups for DSVDD. Here, we observe four related datasets for training, each one equipped with abnormal (scattered) and normal samples (clustered and encircled). We train a single DSVDD, using batch normalization, to map normal samples into the vicinity of a center in a feature space (blue region)1. For a new dataset (blue frame), our method will instantaneously adapt to the new data distribution and map all normal samples to the same blue region in the feature space.
Footnote 1: and map abnormal samples away from the center where applicable.
Why does this method work? Batch normalization is a mechanism to re-calibrate the distribution of intermediate features in a mini-batch based on the _majority_ of data in the batch. Since we assume that this majority will be representative of each data distribution's "normal" component, we can train a single one-class AD model on multiple data distributions (with relative distribution shifts) simultaneously.
Our contributions can be summarized as follows:
* **A simple but effective new method.** Our results for the first time show that training off-the-shelf deep anomaly detectors on a meta-training set, using batch normalization layers, gives automatic zero-shot generalization for AD.
* **Zero-shot AD on tabular data.** To the best of our knowledge, we provide the first results for zero-shot AD results on tabular data. We show that our adaptation approach retains a high degree of accuracy.
* **Competitive results for images.** Our results show that we achieve state-of-the-art zero-shot AD results on non-natural images and competitive results on natural images.
## 2 Related Work
**Deep AD.** Many recent advances in AD built on deep learning methods (Ruff et al., 2021). One early strategy was to use autoencoder- (Principi et al., 2017; Zhou and Paffenroth, 2017; Chen and Konukoglu, 2018) or density-based (Schlegl et al., 2017; Deecke et al., 2018) models. Another pioneering stream of research combines one-class classification (Scholkopf et al., 2001) with deep learning (Ruff et al., 2018; Qiu et al., 2022). Many other approaches to deep AD are self-supervised. They employ a self-supervised loss function to train the detector and score anomalies (Golan and El-Yaniv, 2018; Hendrycks et al., 2019; Sohn et al., 2020; Bergman and Hoshen, 2020; Qiu et al., 2021; Schneider et al., 2022; Shenkar and Wolf, 2021).
All these approaches assume that the data distribution will not change too much at test time. However, in many practical scenarios, there will be significant shifts in the abnormal distribution and even the normal distribution. For example, Dragoi et al. (2022) observed that existing AD methods fail in detecting anomalies when distribution shifts occur in network intrusion detection.
**Few-shot AD.** Several recent works have studied adapting an anomaly detector to shifts by fine-tuning on a few test samples. One stream of research applies model-agnostic meta learning (MAML) (Finn et al., 2017) to various deep AD models, including one-class classification (Frikha et al., 2021), generative adversarial networks (Lu et al., 2020), autoencoder (Wu et al., 2021), graph deviation networks (Ding et al., 2021), and supervised classifiers (Zhang et al., 2020; Feng et al., 2021). Some approaches extend prototypical networks to few-shot AD (Kruspe, 2019; Chen et al., 2022). Kozerawski and Turk (2018) learn a linear SVM with a few samples on top of a frozen pre-trained feature extractor, while Sheynin et al. (2021) learn a hierarchical generative model from a few normal samples for image AD. Wang et al. (2022) learn an energy model for AD. The anomalies are scored by the error of reconstructing their embeddings from a set of normal features that are adapted with a few test samples.
In contrast to all few-shot AD methods, we propose a zero-shot AD method and demonstrate that the learned AD model can adapt itself to new tasks without any support samples.
Figure 1: Illustration of training (black) and testing (blue) exemplified for DSVDD. During training, the model learns to solve all tasks jointly, a) learning separable features for samples from different distributions and b) learning to map the samples from the major (normal) distribution to a shared learned center in embedding space while mapping other samples away from the center. At test time, the learned model exploits the learned inductive bias to map the normal (majority) samples to the center of embedding space while mapping anomalous samples away from the center.
Zero-shot AD.Foundation models pre-trained on massive training samples have achieved remarkable results on zero-shot tasks on images (Radford et al., 2021; Yu et al., 2022; Jia et al., 2021; Yuan et al., 2021). For example, contrastive language-image pre-training (CLIP) (Radford et al., 2021) is a pre-trained language-vision model learned by aligning images and their paired text descriptions. One can achieve zero-shot image classification with CLIP by searching for the best-aligned text description of the test images. Esmaeilpour et al. (2022) extend CLIP with a learnable text description generator for out-of-distribution detection. Liznerski et al. (2022) apply CLIP for zero-shot AD and score the anomalies by comparing the alignment of test images with the correct text description of normal samples. Schwartz et al. (2022) study the performance of masked autoencoder (He et al., 2022), a pre-trained vision foundation model, on few-shot and zero-shot image AD using the reconstruction error as the anomaly score.
However, foundation models are not available for all data types. Foundation models do not exist for, e.g., tabular data, the arguably most general and flexible data type and significant in applications such as network security and industrial fault detection. Also, existing adaptations of foundation models for AD (e.g., CLIP) may generalize poorly to specific domains that have not been covered in their massive training samples. For example, Liznerski et al. (2022) observed that CLIP performs poorly on non-natural images, such as MNIST digits. In contrast, ACR does not rely on a powerful pre-trained foundation model, enabling zero-shot AD on various data types.
## 3 Method
We begin by describing our problem statement in Sec. 3.1 and then present our proposed solution in Sec. 3.2. Finally, we discuss an important extension of our method that leads to improved performance in Sec. 3.3.
### Problem Statement
We consider a distribution of interrelated data distributions, a standard assumption in meta-learning and zero-shot learning (Baxter, 2000). Let \(\mathcal{Q}\) be a (meta-)distribution from which we sample \(K\) training distributions \(P_{1},\dots,P_{K}\) and a test distribution \(P_{*}\):
\[P_{1},\cdots,P_{K},P_{*}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{Q}. \tag{1}\]
We assume that the distributions in \(\mathcal{Q}\) share some common structure, such that training a model on one distribution has the potential to aid in deploying the model on another distribution. For example, the data \(\mathbf{x}\) could be radiology images from patients, and each \(P_{j}\) or \(P_{*}\) could be a distribution of images from a specific hospital. These distributions share similarities but differ systematically because of differences in radiology equipment, calibration, and patient demographics.
Our goal is to use data from only the distributions \(P_{1},...,P_{K}\) to learn an _anomaly detector_, i.e., a model that learns to distinguish samples compatible with \(P_{j}\) from samples not coming from \(P_{j}\) (see details below).
After training, we expect our anomaly detector to _instantly adapt_ (in a zero-shot fashion) to the test distribution \(P_{*}\) without further training, i.e., discover anomalies with respect to the "new normal" distribution \(P_{*}\). Details on how to accomplish this goal will be described next.
### Adaptively Centered Representations
As common in AD, we propose to learn an _anomaly score function_\(S_{\theta}(\mathbf{x}|P)\), characterized by learnable parameters \(\theta\) and a distribution \(P\in\{P_{*},P_{1},\dots,P_{K}\}\) such that
\[S_{\theta}(\mathbf{x}|P)\rightarrow\begin{cases}\text{small}&\text{if }\mathbf{x}\sim P\\ \text{large}&\text{if }\mathbf{x}\not\sim P\end{cases}. \tag{2}\]
At training, the parameters \(\theta\) are learned to satisfy Eq. (2) with \(P=P_{j}\) on samples \(x\) from the training distributions \(P_{1},\dots,P_{K}\). Since the training and test distribution(s) are drawn from the same meta distribution (Eq. (1)), the test distribution \(P=P_{*}\) should approximately satisfy Eq. (2) too, if \(K\) is sufficiently large. In other words, \(S_{\theta}(\mathbf{x}|P_{*})\) is a reasonable anomaly score at test time.
In any practical training or testing environment, we encounter a mixture of normal samples and anomalies. To simulate the fact that each \(P_{j}\) during training is not "pure", we contaminate it by admixing a fraction \((1-\pi)\) of data points from a _complementing_ distribution \(\bar{P}_{j}\), representative of anomalies. Since we do not know what \(\bar{P}_{j}\) is, we will approximate it (heuristically) during training with a mixture over the other components in the training data. This results in the following corrupted version of \(P_{j}\):
\[P_{j}^{\pi}:=\pi P_{j}+(1-\pi)\bar{P}_{j},\qquad\bar{P}_{j}:=\frac{1}{K-1} \sum_{i\neq j}P_{i} \tag{3}\]
We thereby choose \(\pi\) such that \(\frac{1}{2}\ll\pi\leq 1\). Analogously, we can define a test mixture distribution \(P_{*}^{\pi}\). This notation is fairly general and, in particular, captures the case where the training distribution is free of anomalies (\(\pi=1\)). In Sec. 3.3, we will present a loss function that exploits artificial anomalies from \(\bar{P}_{j}\) during training.
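A minimal sketch of how such a contaminated mini-batch can be drawn from a class-labeled dataset (the batch size, \(\pi\), and the toy data below are illustrative assumptions, not the exact pipeline of our experiments):

```python
import numpy as np

def sample_task_batch(X, labels, normal_class, batch_size=64, pi=0.8, rng=None):
    """Draw a mini-batch from P_j^pi: a fraction pi from class j ("normal"),
    the rest uniformly from all other classes ("anomalies"), with anomaly labels y."""
    rng = np.random.default_rng() if rng is None else rng
    normal_idx = np.where(labels == normal_class)[0]
    anom_idx = np.where(labels != normal_class)[0]
    n_normal = int(round(pi * batch_size))
    idx = np.concatenate([rng.choice(normal_idx, n_normal, replace=True),
                          rng.choice(anom_idx, batch_size - n_normal, replace=True)])
    y = np.concatenate([np.zeros(n_normal), np.ones(batch_size - n_normal)])
    perm = rng.permutation(batch_size)
    return X[idx[perm]], y[perm]

# toy usage: 5 "training" classes, 16-dimensional features
X = np.random.randn(1000, 16)
labels = np.random.randint(0, 5, size=1000)
xb, yb = sample_task_batch(X, labels, normal_class=2, batch_size=64, pi=0.8)
print(xb.shape, yb.mean())   # (64, 16), roughly 0.2 of the batch are anomalies
```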
**Training Objective.** We assume that optimizing our anomaly score \(S_{\theta}(\mathbf{x}|P_{j})\) can be achieved by minimizing a corresponding loss function \(L_{\theta}(\mathbf{x}|P_{j})\). Its parameters \(\theta\) are shared across the different AD tasks \(j\). In the simplest setup, this loss function is identical to the anomaly score,
i.e., \(L_{\theta}(\mathbf{x}|P_{j})=S_{\theta}(\mathbf{x}|P_{j})\), minimized over "normal" samples, but we also consider more sophisticated setups below. We thus study the following minimization problem:
\[\min_{\theta}\sum_{j=1}^{K}E_{\mathbf{x}\sim P_{j}^{\pi}}[L_{\theta}(\mathbf{x} |P_{j})]. \tag{4}\]
Typical choices for self-supervised training losses are DSVDD (Ruff et al., 2019) or neural transformation learning (NTL) (Qiu et al., 2021). Details will follow in Sec. 3.3.
**Adapting to New Data Distributions.** So far, we have left out how the anomaly score or loss function can depend on a training or test distribution \(P\). We stress that our work assumes that, after training, we are not allowed to adjust any parameters \(\theta\) to newly encountered distributions.
The key idea of our approach is to evaluate the anomaly scores not individually for single data points, but jointly based on a _mini-batch_\(\mathbf{x}_{1:B}\stackrel{{ iid}}{{\sim}}P\):
\[S_{\theta}(\mathbf{x}|P)\approx S_{\theta}(\mathbf{x}|\mathbf{x}_{1:B}),\qquad L_{\theta}(\mathbf{x}|P)\approx L_{\theta}(\mathbf{x}|\mathbf{x}_{1:B}). \tag{5}\]
Since distributions \(P\) typically encountered in AD practice dominantly consist of "normal" samples, information about \(P\) can be extracted from the mini-batch. The minimization problem then becomes
\[\min_{\theta}\sum_{j=1}^{K}E_{\mathbf{x}_{1:B}\overset{\text{iid}}{\sim}P_{j}^{\pi}}\left[\sum_{i=1}^{B}L_{\theta}(\mathbf{x}_{i}|\mathbf{x}_{1:B})\right]. \tag{6}\]
Next, we discuss another ingredient of the proposed method: batch normalization. We start with a motivating example.
**Example.** For illustration, let us first consider a simple outlier detector free of parameters \(\theta\):
\[S(\mathbf{x}|\mathbf{x}_{1:B})=\|\mathbf{x}-\tfrac{1}{B}\sum_{j=1}^{B}\mathbf{ x}_{j}\|_{2}^{2}. \tag{7}\]
If the \(\mathbf{x}_{i}\) lie in an informative feature space, anomalies will have a higher-than-usual distance to the mean, making the approach a simple, adaptive AD method.
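In code, this parameter-free detector is essentially one line; a small NumPy sketch on synthetic data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = np.vstack([rng.normal(5.0, 1.0, size=(60, 2)),    # majority: "normal" cluster
                   rng.normal(-5.0, 1.0, size=(4, 2))])   # a few anomalies far away

# Eq. (7): squared distance of each sample to the batch mean
scores = np.sum((batch - batch.mean(axis=0)) ** 2, axis=1)
print(np.argsort(-scores)[:4])   # the four largest scores are the four anomalies (indices 60-63)
```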
While the example provides a proof of concept, in practice, the normal samples typically do not concentrate around their mean in the raw data space. Next, we develop an approach that learns to encode the samples (of potentially unseen test distributions) into a feature space where the intuition behind this example can be exploited for zero-shot AD.
**Zero-shot Adaptation by Batch Normalization.** The proposed method uses as anomaly score a neural network \(S_{\theta}(\mathbf{x}|P)=f_{\theta}(\mathbf{x})\) with _batch normalization layers_, with \(f_{\theta}\) obtained by the meta-training of Eq. (6).
The key idea--illustrated in Fig. 2--is to (batch-)normalize the normal data in each task (separately) so that each task is approximately centered around the origin (zero) and has variance one in neural feature space. In that way, the neural network adapts to the majority of data in a batch (the "new normal") to accomplish its training task.
The batch statistics \(\{(\mathbf{\mu}_{l},\mathbf{\sigma}_{l})\}_{l=1}^{L}\) in all \(L\) layers are adaptive with \(\{\mathbf{x}_{i}\}_{i=1}^{B}\). For every batch normalization layer with inputs \(\{f_{\theta,l}(\mathbf{x}_{i})\}_{i=1}^{B}\):
\[\mathbf{\mu}_{l}=\sum_{i=1}^{B}f_{\theta,l}(\mathbf{x}_{i})/B,\quad\mathbf{\sigma}_{l }=\left(\sum_{i=1}^{B}\big{(}f_{\theta,l}(\mathbf{x}_{i})-\mathbf{\mu}_{l}\big{)} ^{2}/B\right)^{1/2}.\]
All computations are point-wise. To preserve the adaptability of the batch normalization layers, all batch statistics \(\{(\mathbf{\mu}_{l},\mathbf{\sigma}_{l})\}_{l=1}^{L}\) are computed on the fly from the training/test samples. Since normal samples make up the majority of the batch, the computation of \(\mathbf{\mu}_{l}\) is dominated by normal samples.
As a result, regardless of the task, the distance to the origin defines a reasonable anomaly score. Remarkably, we obtain as a result an anomaly detector generalizing to unseen distributions without the need to adjust any model parameters.
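A minimal PyTorch sketch of this scoring scheme (the architecture and sizes are illustrative assumptions, not the exact configuration used in our experiments): a small network with batch normalization layers that always use the current batch statistics, scoring each sample by its squared distance to the origin of the feature space, jointly over the mini-batch.

```python
import torch
import torch.nn as nn

class ACRDeepSVDD(nn.Module):
    """Feature extractor f_theta with batch-norm layers; anomaly score = ||f_theta(x)||^2."""
    def __init__(self, in_dim, hidden=128, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.BatchNorm1d(hidden, track_running_stats=False),  # always use current batch statistics
            nn.ReLU(),
            nn.Linear(hidden, out_dim),
            nn.BatchNorm1d(out_dim, track_running_stats=False),
        )

    def score(self, x_batch):
        # scores are computed jointly for the whole batch: the batch-norm layers
        # re-center the features on the batch majority (the "new normal")
        z = self.net(x_batch)
        return (z ** 2).sum(dim=1)

model = ACRDeepSVDD(in_dim=16)
x = torch.randn(64, 16)           # a test batch from an unseen distribution
scores = model.score(x)           # higher score = more anomalous w.r.t. this batch
print(scores.shape)               # torch.Size([64])
```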
### Meta Outlier Exposure
We now discuss an important aspect of our approach that avoids trivial solutions in meta-learning. The approach builds on using labeled anomalies from \(\bar{P}_{j}\) during training.
As discussed in (Hendrycks et al., 2018; Qiu et al., 2022), many anomaly scores \(S_{\theta}(\mathbf{x}|P)\) allow for easily constructing a score \(A_{\theta}(\mathbf{x}|P)\) that behaves inversely. That means, we expect \(A_{\theta}(\mathbf{x}|P)\) to be _large_ when evaluated on normal samples, and small for anomalies. Importantly, both scores share the same parameters. In the context of DSVDD, we define \(S_{\theta}(\mathbf{x})=1/A_{\theta}(\mathbf{x})\), but other definitions are possible for alternative losses (Ruff et al., 2018, 2019; Qiu et al., 2022b). Using the inverse score, we can construct a supervised AD loss on the meta training set as follows.

Figure 2: Illustration of batch normalization for AD with two tasks \(P_{1}^{\pi}\) and \(P_{2}^{\pi}\). The method (batch-)normalizes the data in \(P_{j}^{\pi}\) separately. If each \(P_{j}^{\pi}\) consists mainly of normal samples, most samples will be shifted close to the origin (by subtracting the respective task’s mean). As a result, samples from all tasks concentrate around the origin in a joint feature space (gray area) and thus can be tightly enclosed using, e.g., one-class classification. Samples from the test task are batch normalized in the same way.
We define a task-sample indicator variable \(y_{i,j}\) as
\[y_{i,j}=\begin{cases}1&\text{if }\mathbf{x}_{i}\in\bar{P}_{j}\\ 0&\text{if }\mathbf{x}_{i}\in P_{j}\end{cases}, \tag{8}\]
which is also called an anomaly label. A natural choice for the loss in Eq. (6) is therefore
\[L_{\theta}(\mathbf{x}_{i}|\mathbf{x}_{1:B})=(1-y_{i,j})S_{\theta}(\mathbf{x}_ {i}|\mathbf{x}_{1:B})+y_{i,j}A_{\theta}(\mathbf{x}_{i}|\mathbf{x}_{1:B}).\]
The loss function resembles the outlier exposure loss of Hendrycks et al. (2018), but as opposed to using synthetically generated samples (typically only available for images), we use samples from the complement \(\bar{P}_{j}\) at training time to synthesize outliers.
In addition to DSVDD, we also study backbone models such as binary classifiers and NTL (Qiu et al., 2021). For NTL, we adopt the \(\mathbf{S}_{\theta}\) and \(\mathbf{A}_{\theta}\) used by Qiu et al. (2022b). For binary classifiers, we set \(\mathbf{S}_{\theta}(\mathbf{x})=-\log\left(1-\sigma(f_{\theta}(\mathbf{x}))\right)\) and \(\mathbf{A}_{\theta}(\mathbf{x})=-\log\sigma(f_{\theta}(\mathbf{x}))\).
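Continuing the DSVDD case, a hedged sketch of one meta-training step with this loss, using \(A_{\theta}=1/S_{\theta}\) (the small constant added for numerical stability, the tiny network, and the toy batch are illustrative assumptions):

```python
import torch
import torch.nn as nn

# a tiny stand-in feature extractor with batch norm (cf. the DSVDD sketch above)
f_theta = nn.Sequential(nn.Linear(16, 32), nn.BatchNorm1d(32, track_running_stats=False))

def score(x):                               # S_theta(x | x_{1:B}) = ||f_theta(x)||^2, batch-wise
    z = f_theta(x)
    return (z ** 2).sum(dim=1)

def meta_oe_loss(s, y, eps=1e-6):
    """(1 - y_{i,j}) * S + y_{i,j} * A with A = 1/S; A is large for normal samples, small for anomalies."""
    return ((1.0 - y) * s + y / (s + eps)).mean()

opt = torch.optim.Adam(f_theta.parameters(), lr=1e-3)
x = torch.randn(64, 16)                     # one contaminated batch drawn from a training task P_j^pi
y = (torch.rand(64) < 0.2).float()          # anomaly labels y_{i,j}; about 1 - pi of the batch
loss = meta_oe_loss(score(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```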
**Meta Outlier Exposure avoids trivial solutions.** The benefit of the outlier exposure loss in meta-training is that the learning algorithm cannot simply learn a model on the _average_ data distribution, i.e., without learning to adapt. This failure to adapt is a common problem in meta-learning. Our solution relies on using each training sample \(\mathbf{x}_{i}\) in different contexts: depending on the value of \(y_{i,j}\), data point \(\mathbf{x}_{i}\) is considered normal (when drawn from \(P_{j}\)) or anomalous (when drawn from \(\bar{P}_{j}\)). This ambiguity prevents the model from learning an average model over the meta data set and forces it to adapt to individual distributions instead.
## 4 Experiments
We evaluate the proposed method ACR on both image and tabular data when distribution shifts occur at test time. We compare ACR with established baselines from deep AD, zero-shot AD, and few-shot AD. The experiments show that our method is suitable for different data types, applicable to diverse AD models, robust to various anomaly ratios, and significantly outperforms existing baselines.
We report results on image and tabular data in Sec. 4.1 and Sec. 4.2, and perform ablation studies in Sec. 4.3.
**Practical Training and Testing.** We construct training and test distributions using labeled datasets2, where all \(\mathbf{x}\) from the same class \(j\) (e.g., all 0's in MNIST) are considered samples from the same \(P_{j}\). The dataset \(\mathcal{Q}\) (e.g., MNIST as a whole) is the meta-set of all these distributions.
Footnote 2: these are either classification datasets (which have labels) or datasets where one of the covariates is binned to provide classes.
For training and testing, we split the meta-dataset into disjoint subsets. In the MNIST example, we define \(P_{0},...,P_{4}\) as the distributions of images with digits \(0-4\) and use them for training. For testing, we select a single distribution of digits not seen during training (e.g., digit \(5\)) as the "new normal" distribution \(P_{*}\) to which we adapt the model. The remaining digits (\(6-9\) in this example) are used as test-time anomalies. To reduce variance, we rotate the roles among digits \(5-9\), using each digit as a test distribution once.3
Footnote 3: This is the popular “one-vs-rest” testing set-up, which is standard in AD benchmarking. (e.g., (Ruff et al., 2021))
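As an illustration of this set-up, the sketch below draws one meta-training task from a labelled dataset: the majority of the batch comes from the chosen normal class and the remainder is sampled from the complement. The function names are illustrative, and treating \(\pi\) as the fraction of the batch drawn from \(P_{j}\) is an assumption about how Eq. (3) mixes the distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_meta_task(X, labels, normal_class, batch_size=128, pi=0.8):
    """Draw one training task: a batch whose majority (fraction pi) comes from P_j
    (the class `normal_class`) and whose remainder comes from the complement,
    together with anomaly labels y (0 = normal, 1 = anomaly)."""
    n_normal = int(round(pi * batch_size))
    normal_idx = np.where(labels == normal_class)[0]
    complement_idx = np.where(labels != normal_class)[0]
    idx = np.concatenate([
        rng.choice(normal_idx, n_normal, replace=True),
        rng.choice(complement_idx, batch_size - n_normal, replace=True),
    ])
    y = np.concatenate([np.zeros(n_normal), np.ones(batch_size - n_normal)])
    perm = rng.permutation(batch_size)
    return X[idx][perm], y[perm]

# Meta-training on MNIST digits 0-4: pick a normal class per step and draw a task.
# X_batch, y_batch = sample_meta_task(X_train, y_train, normal_class=rng.integers(0, 5))
```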
### Experiments on Images
We evaluate ACR on images when applied to two simple backbone models: DSVDD and a binary classifier. The evaluation demonstrates that our method achieves superior zero-shot AD results on natural images, hand-written characters, and medical images.
**Image Datasets.** We study four image datasets: CIFAR100 (Krizhevsky et al., 2009)/CIFAR100-C (Hendrycks and Dietterich, 2019), Omniglot (Lake et al., 2015), MNIST (LeCun et al., 1998), and OrganA (Yang et al., 2021). CIFAR100 contains 100 classes of natural images, while the other datasets contain non-natural images. CIFAR100-C is the noise-corrupted version of CIFAR100's test data, thus considered as distributionally shifted data. We train using all training images from CIFAR100 and test all models on CIFAR100-C. Omniglot is a benchmark dataset for meta-learning. It has 1623 classes of hand-written characters, where each class comprises 20 images. All models are trained on the first 1200 classes and tested on the unseen 423 classes. MNIST has ten classes of hand-written digits. OrganA is a medical image dataset with 11 classes (for various body organs). On both MNIST and OrganA, we leave two successive classes out for testing and use the other classes for training. We repeat the evaluation on all combinations of two consecutive classes.
**Image Baselines.** We compare our proposed method with a state-of-the-art deep anomaly detector, a state-of-the-art zero-shot AD baseline, and a few-shot AD baseline.
Anomaly detection with an inductive bias (ADIB) (Deecke et al., 2021) is a state-of-the-art deep anomaly detector fine-tuning a pre-trained ResNet with outlier exposure (Hendrycks et al., 2018). It achieves an AUC of \(99\%\) on CIFAR-10, the highest reported number in the literature. CLIP-AD (Liznerski et al., 2022) is a zero-shot method based on the foundation model CLIP (Radford et al., 2021). CLIP-AD detects anomalies by comparing a test image
to a normal object's text description in a semantic space. Notice that running CLIP-AD requires a language description of the normal class, which can be a severe limitation in practice. One-class model-agnostic meta learning (OC-MAML) (Frikha et al., 2021) is a few-shot AD method trained with MAML (Finn et al., 2017). At test time, OC-MAML requires a few normal samples to update the model parameters. We implement OC-MAML using their officially released code with the same model architecture as our method. We always compare to 1-shot OC-MAML in our experiments. Feat+BN is a baseline for zero-shot AD on images, where we extract image features from a pre-trained ResNet and then apply batch normalization on the output to score anomalies based on the distance to the center. See Supp. A for more details.
**Implementation Details.** We use \(\pi=0.8\) in Eq. (3) to mix the training distributions from the different classes. For each approach, we train a single model and test it on different anomaly ratios. Two backbone models are implemented: DSVDD (ACR-DSVDD) and a binary classifier with cross entropy loss (ACR-BCE). More details are given in Supp. C.2.
**Results.** We report the results in terms of the AUROC averaged over five independent test runs with standard deviation. We apply the model to tasks with different anomaly ratios to study the robustness of ACR to the anomaly ratio at test time. The results on CIFAR100-C in Tab. 1 indicate that ACR outperforms ADIB, Feat+BN, and OC-MAML significantly. ACR achieves results competitive with CLIP-AD under various anomaly ratios. Although ADIB achieves good results when no distribution shift occurs (see results in Deecke et al. (2021)), ADIB is not able to generalize its performance to test data with distribution shifts. While the few-shot method OC-MAML relies on a sufficiently large set of normal data for the adaptation, ACR requires no normal data at test time and achieves better results without any parameter updates. CLIP-AD has strong performance on CIFAR100-C, presumably because it is trained on massive amounts of natural images from the internet (likely covering CIFAR100/CIFAR100-C-related images) rather than because of its adaptation ability. Also, CLIP-AD requires a text
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{MNIST} & \multicolumn{3}{c}{OrganA} & Omniglot & \\ \cline{2-9} & 1\% & 5\% & 10\% & 1\% & 5\% & 10\% & 5\% & 10\% & 20\% \\ \cline{2-9} ADIB & 50.4\(\pm\)2.0 & 49.4\(\pm\)1.7 & 49.4\(\pm\)2.0 & 49.9\(\pm\)6.3 & 50.3\(\pm\)2.4 & 50.2\(\pm\)1.3 & 50.8\(\pm\)1.7 & 49.5\(\pm\)0.6 & 49.7\(\pm\)0.4 \\ Feat + BN & 80.0\(\pm\)1.9 & 78.4\(\pm\)1.5 & 74.9\(\pm\)0.3 & 54.2\(\pm\)1.7 & 53.5\(\pm\)0.8 & 52.9\(\pm\)0.3 & 88.1\(\pm\)0.8 & 86.7\(\pm\)0.5 & 84.4\(\pm\)0.6 \\ OC-MAML & 83.7\(\pm\)3.5 & 86.0\(\pm\)2.3 & 86.4\(\pm\)2.8 & 73.7\(\pm\)4.7 & 72.2\(\pm\)2.6 & 74.2\(\pm\)2.4 & 98.6\(\pm\)0.3 & 98.4\(\pm\)0.2 & 98.5\(\pm\)0.1 \\ CLIP-AD & 53.9\(\pm\)1.4 & 53.7\(\pm\)0.9 & 53.9\(\pm\)0.8 & 52.6\(\pm\)0.8 & 51.9\(\pm\)0.6 & 51.5\(\pm\)0.2 & N/A & N/A & N/A \\ \hline ACR-DSVDD & **91.9\(\pm\)0.8** & **90.4\(\pm\)0.2** & **88.8\(\pm\)0.2** & 79.0\(\pm\)1.0 & 77.7\(\pm\)0.4 & 76.3\(\pm\)0.3 & **99.1\(\pm\)0.2** & **99.1\(\pm\)0.2** & **99.2\(\pm\)0.0** \\ ACR-BCE & 88.7\(\pm\)0.6 & 87.8\(\pm\)0.4 & 86.5\(\pm\)0.3 & **81.1\(\pm\)0.8** & **79.5\(\pm\)0.4** & **78.3\(\pm\)0.3** & 98.5\(\pm\)0.2 & **98.9\(\pm\)0.1** & **99.1\(\pm\)0.1** \\ \hline \hline \end{tabular}
\end{table}
Table 2: AUC (\(\%\)) with standard deviation for anomaly detection on non-natural images: Omniglot, MNIST, and OrganA. ACR with both backbone models outperforms all baselines on all datasets. In comparison, CLIP-AD performs much worse on non-natural images.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Gaussian Noise} & \multicolumn{3}{c}{Gaussian Blur} \\ \cline{2-9} & 1\% & 5\% & 10\% & 20\% & 1\% & 5\% & 10\% & 20\% \\ \cline{2-9} ADIB & 50.9\(\pm\)2.4 & 50.5\(\pm\)0.9 & 50.6\(\pm\)0.9 & 50.2\(\pm\)0.5 & 50.1\(\pm\)1.4 & 51.1\(\pm\)1.4 & 49.9\(\pm\)1.0 & 49.8\(\pm\)0.3 \\ Feat + BN & 62.5\(\pm\)3.1 & 61.8\(\pm\)1.7 & 61.2\(\pm\)0.6 & 60.2\(\pm\)0.4 & 64.9\(\pm\)1.5 & 65.3\(\pm\)1.2 & 64.0\(\pm\)0.9 & 62.7\(\pm\)0.4 \\ OC-MAML & 53.0\(\pm\)3.6 & 54.1\(\pm\)1.9 & 55.8\(\pm\)0.6 & 57.1\(\pm\)1.0 & 55.6\(\pm\)3.6 & 56.6\(\pm\)0.6 & 56.8\(\pm\)1.1 & 57.6\(\pm\)0.6 \\ CLIP-AD & 82.3\(\pm\)1.1 & 82.6\(\pm\)0.9 & 82.3\(\pm\)0.9 & 82.6\(\pm\)0.1 & **91.9\(\pm\)0.8** & **92.7\(\pm\)0.5** & **92.1\(\pm\)0.5** & **92.3\(\pm\)0.2** \\ \hline ACR-DSVDD & **87.7\(\pm\)1.4** & **86.3\(\pm\)0.9** & **85.9\(\pm\)0.4** & **85.6\(\pm\)0.4** & 88.5\(\pm\)1.1 & 88.5\(\pm\)0.7 & 88.7\(\pm\)0.4 & 88.6\(\pm\)0.3 \\ ACR-BCE & 84.3\(\pm\)2.2 & **86.0\(\pm\)0.3** & **86.0\(\pm\)0.2** & **85.7\(\pm\)0.4** & 85.6\(\pm\)1.3 & 85.0\(\pm\)0.6 & 85.0\(\pm\)0.9 & 84.7\(\pm\)0.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: AUC (\(\%\)) with standard deviation for anomaly detection on CIFAR100-C with Gaussian noise or Gaussian blur (Hendrycks and Dietterich, 2019). ACR with both backbone models performs best on images with Gaussian noise and outperforms all baselines except CLIP-AD on images with Gaussian blur.
Figure 3: 2D visualization (after PCA) of the adaptively centered representations for two test tasks in the Omniglot dataset. The same learned DSVDD model adapts with our proposed method and maps samples from the majority class (class 1 (left) and class 2 (right)) to the same center in the embedding space in both tasks.
description of the normal class and therefore receives more annotation information at test time than ACR. Still, ACR outperforms CLIP-AD when the test images are corrupted with Gaussian noise.
We also evaluate the performance on non-natural images and report the results in Tab. 2. We can see that ACR consistently achieves the best results and significantly outperforms CLIP-AD. Since non-natural images are not included in the training set of CLIP, CLIP-AD does not perform well even on the simple MNIST dataset. Also, CLIP-AD cannot be applied on Omniglot since there is no available text description of the characters.
We provide a visualization of the learned representations from DSVDD on the Omniglot dataset as qualitative evidence in Fig. 3. We observe that even though the normal and abnormal data classes flip between the two plots, the model learns to center the samples from the majority class and to map the samples from the minority class away from the center in the embedding space. In conclusion, ACR is an easy-to-use zero-shot AD method and achieves superior zero-shot AD results on different types of images. The performance of ACR is also robust against the test anomaly ratios.
### Experiments on Tabular Data
Tabular data is widely used in many real-world AD applications, e.g., network intrusion detection and malware detection. Distribution shifts in such data occur naturally over time (e.g., as new malware emerges), especially over large time spans. We evaluate ACR on tabular AD when applied to DSVDD and NTL. The evaluation shows that ACR achieves a new state of the art for zero-shot AD on tabular data when distribution shifts occur.
**Tabular Datasets.** We evaluate all methods on two real-world tabular AD datasets: Anoshift (Dragoi et al., 2022) and Malware (Huynh et al., 2017).
Anoshift is a traffic dataset for network intrusion detection collected over ten years (2006-2015). We follow the preprocessing procedure and train/test split suggested in Dragoi et al. (2022). The model is trained on normal data collected from 2006 to 2010, validated on a mixture of normal and abnormal samples collected from 2006 to 2010, and tested on a mixture of normal and abnormal samples (with anomaly ratios varying from \(1\%\) to \(20\%\)) collected from 2011 to 2015. Dragoi et al. (2022) has observed that there are gradual distribution shifts in 2014 and 2015.
Malware is a dataset of malicious and benign computer programs, collected from 11/2010 to 07/2014. Malware attacks are designed adversarially, thus leading to shifts in both normal and abnormal data. We adopt the data reader from Li et al. (2021). We follow the preprocessing of (Huynh et al., 2017) and convert the real-valued probabilities \(p\) of being malware to binary labels (labeled one if \(p>0.6\) and zero if \(p<0.4\)). The samples with probabilities between \(0.4\) and \(0.6\) are discarded. The model is trained on normal samples collected from 01/2011 to 12/2013, validated on normal and abnormal samples from 11/2010 to 12/2010, and tested on normal and abnormal samples from 01/2014 to 07/2014 (the anomaly ratios vary between \(1\%\) and \(20\%\)).
**Tabular Baselines.** We compare with state-of-the-art deep and shallow detectors for tabular AD (Dragoi et al., 2022; Alvarez et al., 2022; Han et al., 2022) and study their performance under test distribution shifts. The shallow AD baselines include OC-SVM (Scholkopf et al., 1999), IForest (Liu et al., 2012), LOF (Breunig et al., 2000), and KNN (Ramaswamy et al., 2000). The deep AD baselines include DSVDD (Ruff et al., 2018), Autoencoder (AE) (Aggarwal, 2017), LUNAR (Goodge et al., 2022), internal contrastive learning (ICL) (Shenkar and Wolf, 2021), NTL (Qiu et al., 2021), and BERT (Kenton and Toutanova, 2019; Dragoi et al., 2022). We adopt the implementations from PyOD (Han et al., 2022) or their official repositories.
**Implementation Details.** Since Anoshift and Malware do not have class labels, we use the collection date as class labels for separating the data into training distributions \(P_{j}\) (year for Anoshift and month for Malware). The training tasks are mixed with ratio \(\pi=0.8\), as in the image experiments. To create more training tasks, we augment the data using attribute permutations, resulting in additional training distributions; a sketch of this augmentation is given below. These attribute permutations increase the variability of training tasks and encourage the model to learn permutation-invariant features. In testing tasks, the attributes are not permuted.
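The following lines sketch the attribute-permutation augmentation mentioned above; the number of permuted views is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def permute_attributes(X, n_views=4):
    """Create additional training distributions by applying a fixed random
    column (attribute) permutation per view; test data is left unpermuted."""
    views = []
    for _ in range(n_views):
        perm = rng.permutation(X.shape[1])
        views.append(X[:, perm])
    return views
```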
ACR-NTL has the same model architecture as the baseline NTL, and ACR-DSVDD adds one additional batch normalization layer on top of the DSVDD baseline. Details of the model architectures are provided in Supp. B. Our algorithm is applicable to the existing backbone models without complex modifications.
**Results.** In Tab. 3, we report the results on Anoshift split into AVG (data from 2011 to 2015) and FAR (data from 2014 and 2015). The two splits allow us to compare average detection accuracy (AVG) with detection results after longer time intervals (FAR) which is expected to suffer from more distribution shift. For evaluating ACR's robustness to variations in the anomaly ratio, we report results on test data with a ratio varying from \(1\%\) to \(20\%\). We report average AUC with standard deviation over five independent test runs.
The results in Tab. 3 show that ACR outperforms all baselines on both FAR and AVG under all anomaly ratios. ACR is the only method that clearly outperforms random guessing on the FAR split. All baselines perform worse than random on the FAR split even though they achieve strong results when there is no distribution shift (see results in Dragoi
et al. (2022); Alvarez et al. (2022); Han et al. (2022)).
Although ACR achieves the best results at all anomaly ratios, its performance degrades as the ratio increases. ACR-NTL is more robust to high anomaly ratios than ACR-DSVDD. As the anomaly ratio increases, it becomes harder to identify the majority among the mixture of normal samples and anomalies.
We report the results on Malware in Tab. 4. ACR-NTL achieves the best results under all anomaly ratios. All baselines except ICL perform worse than random guessing, meaning that the malware successfully fools most baselines.
### Ablation Study
We perform two ablation studies, one to demonstrate the benefit of our proposed meta outlier exposure loss, and one to study the behavior of batch normalization during training.
To show that meta outlier exposure is a favorable option, we compare it against the one-class classification loss and a fine-tuned version of Feat+BN, where the pretrained ImageNet features are fine-tuned on domain-specific training data. Tab. 5 shows that our approach outperforms the two alternatives on two image datasets. To analyze batch normalization variants, we train and test models with different combinations of batch normalization usage detailed in Tab. 6. We find that for effective zero-shot AD, the batch normalization statistics should be computed on the fly both during testing and training. More details are in Supp. C.1.
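One way to realise "batch statistics computed on the fly during both training and testing" in PyTorch is to disable running-statistics tracking, so that every forward pass normalises with the statistics of the current, majority-normal batch. The toy architecture below is only a sketch of this mechanism, not the authors' implementation.

```python
import torch.nn as nn

def make_detector(in_dim, hidden=128, out_dim=32):
    # BatchNorm layers that always use the statistics of the current batch,
    # during training and at test time alike (no running averages are tracked).
    return nn.Sequential(
        nn.Linear(in_dim, hidden),
        nn.BatchNorm1d(hidden, affine=True, track_running_stats=False),
        nn.ReLU(),
        nn.Linear(hidden, out_dim),
        nn.BatchNorm1d(out_dim, affine=True, track_running_stats=False),
    )
```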
## 5 Conclusion
We studied the problem of adapting a learned AD method to a new data distribution, where the concept of "normality" changed. Our method is a zero-shot approach and requires no training or fine-tuning on a new data set. We developed a new meta-training approach, where we trained an off-the-shelf deep AD method on a (meta-) set of interrelated datasets, adopting batch normalization in every layer, and used samples from the meta-set as either normal samples or anomalies, depending on the context. We showed that the approach robustly generalized to new, unseen anomalies.
Our experiments on image and tabular data demonstrated state-of-the-art zero-shot adaptation performance when no foundation model was available. We stress that this is an important result since many, if not most, AD applications in the real world rely on specialized datasets: medical images, data from industrial assembly lines, malware data, network intrusion data, etc. Existing foundation models often do not capture these data, as we showed. Ultimately, our analysis shows that relatively small modifications to model training (meta-learning, batch normalization, and providing artificial anomalies from the meta-set) enable the deployment of existing models in zero-shot AD tasks.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & 1\% & 5\% & 10\% & 20\% \\ \hline OC-SVM & 19.5\(\pm\)5.6 & 20.5\(\pm\)1.4 & 20.3\(\pm\)0.9 & 20.3\(\pm\)0.8 \\ IForest & 22.8\(\pm\)2.9 & 22.9\(\pm\)1.2 & 23.3\(\pm\)0.6 & 23.4\(\pm\)0.8 \\ LOF & 22.3\(\pm\)4.9 & 23.2\(\pm\)1.8 & 23.3\(\pm\)1.3 & 23.2\(\pm\)0.4 \\ KNN & 21.6\(\pm\)6.3 & 22.5\(\pm\)1.6 & 22.7\(\pm\)0.9 & 22.6\(\pm\)0.9 \\ \hline DSVDD & 25.4\(\pm\)3.3 & 27.4\(\pm\)1.7 & 28.9\(\pm\)0.9 & 28.3\(\pm\)0.8 \\ AE & 48.8\(\pm\)2.4 & 49.1\(\pm\)1.2 & 49.4\(\pm\)0.6 & 49.3\(\pm\)0.5 \\ LUNAR & 23.1\(\pm\)4.5 & 23.8\(\pm\)1.2 & 24.1\(\pm\)0.7 & 24.2\(\pm\)0.6 \\ ICL & 83.5\(\pm\)1.9 & 81.0\(\pm\)1.0 & 82.9\(\pm\)0.8 & 83.1\(\pm\)0.9 \\ NTL & 25.9\(\pm\)4.8 & 25.4\(\pm\)1.3 & 24.5\(\pm\)1.3 & 25.0\(\pm\)0.8 \\ \hline ACR-DSVDD & 73.1\(\pm\)2.8 & 69.5\(\pm\)3.3 & 69.4\(\pm\)3.3 & 66.4\(\pm\)4.0 \\ ACR-NTL & **85.0\(\pm\)1.3** & **84.5\(\pm\)0.8** & **85.1\(\pm\)1.2** & **84.0\(\pm\)0.8** \\ \hline \hline \end{tabular}
\end{table}
Table 4: AUC (%) with standard deviation for anomaly detection on Malware (Huynh et al., 2017). ACR-NTL achieves the best results on various anomaly ratios.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{2}{c}{1\%} & \multicolumn{2}{c}{5\%} & \multicolumn{2}{c}{10\%} & \multicolumn{2}{c}{20\%} \\ \cline{2-9} & FAR & AVG & FAR & AVG & FAR & AVG & FAR & AVG \\ \cline{2-9} OC-SVM & 49.6\(\pm\)0.2 & 62.6\(\pm\)0.1 & 49.6\(\pm\)0.2 & 62.6\(\pm\)0.1 & 49.5\(\pm\)0.1 & 62.7\(\pm\)0.1 & 49.5\(\pm\)0.1 & 62.6\(\pm\)0.1 \\ IForest & 25.8\(\pm\)0.4 & 54.6\(\pm\)0.2 & 26.1\(\pm\)0.1 & 54.7\(\pm\)0.1 & 26.0\(\pm\)0.1 & 54.6\(\pm\)0.1 & 26.0\(\pm\)0.1 & 54.7\(\pm\)0.1 \\ LOF & 37.3\(\pm\)0.5 & 59.6\(\pm\)0.3 & 37.0\(\pm\)0.1 & 59.5\(\pm\)0.1 & 37.0\(\pm\)0.1 & 59.5\(\pm\)0.1 & 37.1\(\pm\)0.1 & 59.5\(\pm\)0.1 \\ KNN & 45.0\(\pm\)0.3 & 70.8\(\pm\)0.1 & 45.3\(\pm\)0.2 & 70.9\(\pm\)0.1 & 45.1\(\pm\)0.1 & 70.8\(\pm\)0.1 & 45.2\(\pm\)0.1 & 70.8\(\pm\)0.1 \\ \hline DSVDD & 34.6\(\pm\)0.3 & 62.3\(\pm\)0.2 & 34.7\(\pm\)0.1 & 62.5\(\pm\)0.1 & 34.7\(\pm\)0.2 & 62.5\(\pm\)0.1 & 34.7\(\pm\)0.1 & 62.5\(\pm\)0.1 \\ AE & 18.6\(\pm\)0.2 & 25.3\(\pm\)0.1 & 18.7\(\pm\)0.2 & 25.5\(\pm\)0.1 & 18.7\(\pm\)0.1 & 25.5\(\pm\)0.1 & 18.7\(\pm\)0.1 & 25.5\(\pm\)0.1 \\ LUNAR & 24.5\(\pm\)0.4 & 38.3\(\pm\)0.4 & 24.6\(\pm\)0.1 & 38.6\(\pm\)0.2 & 24.7\(\pm\)0.1 & 38.7\(\pm\)0.1 & 24.6\(\pm\)0.1 & 38.6\(\pm\)0.1 \\ ICL & 20.6\(\pm\)0.3 & 50.5\(\pm\)0.2 & 20.7\(\pm\)0.2 & 50.4\(\pm\)0.1 & 20.7\(\pm\)0.1 & 50.4\(\pm\)0.1 & 20.8\(\pm\)0.1 & 50.4\(\pm\)0.1 \\ NTL & 40.7\(\pm\)0.3 & 57.0\(\pm\)0.1 & 40.9\(\pm\)0.2 & 57.1\(\pm\)0.1 & 41.0\(\pm\)0.1 & 57.1\(\pm\)0.1 & 41.0\(\pm\)0.1 & 57.1\(\pm\)0.1 \\ BERT & 28.6\(\pm\)0.3 & 64.6\(\pm\)0.2 & 28.7\(\pm\)0.1 & 64.6\(\pm\)0.1 & 28.7\(\pm\)0.1 & 64.6\(\pm\)0.1 & 28.7\(\pm\)0.1 & 64.7\(\pm\)0.1 \\ \hline ACR-DSVDD & 62.0\(\pm\)0.5 & **74.0\(\pm\)0.2** & 61.3\(\pm\)0.1 & **73.3\(\pm\)0.1** & 60.4\(\pm\)0.1 & 72.5\(\pm\)0.1 & 59.1\(\pm\)0.1 & 71.2\(\pm\)0.1 \\ ACR-NTL & **62.5\(\pm\)0.2** & 73.4\(\pm\)0.1 & **62.2\(\pm\)0.1** & **73.2\(\pm\)0.1** & **62.3\(\pm\)0.1** & **73.1\(\pm\)0.1** & **62.0\(\pm\)0.1** & **72.7\(\pm\)0.1** \\ \hline \hline \end{tabular}
\end{table}
Table 3: AUC (%) with standard deviation for anomaly detection on Anoshift (Dragoi et al., 2022). ACR with both backbone models outperforms all baselines on average over time spans other than the training set. In particular, ACR is the only method that performs clearly better than random guessing on the FAR split, where distribution shift occurs.
**Limitations & Societal Impacts.** Our method is primarily limited by the assumption that a meta-dataset is available that is related to the new dataset of interest. If this assumption is broken, zero-shot adaptation cannot be assured.
Anomaly detectors are trained to detect atypical/under-represented data in a data set. Therefore, deploying an anomaly detector, e.g., in video surveillance, may ultimately discriminate against under-represented groups. Anomaly detection methods should therefore be critically reviewed when deployed on human data.
## Acknowledgements
SM acknowledges support by the National Science Foundation (NSF) under an NSF CAREER Award, award numbers 2003237 and 2007719, by the Department of Energy under grant DE-SC0022331, by the HPI Research Center in Machine Learning and Data Science at UC Irvine, and by gifts from Qualcomm and Disney. Part of this work was conducted within the DFG research unit FOR 5359 on Deep Learning on Sparse Chemical Process Data. MK acknowledges support by the Carl-Zeiss Foundation, the DFG awards KL 2698/2-1, KL 2698/5-1, KL 2698/6-1, and KL 2698/7-1, and the BMBF awards 03--B0770E and 01--S21010C. We thank Eliot Wong-Toi for helpful feedback on the manuscript.
The Bosch Group is carbon neutral. Administration, manufacturing, and research activities no longer leave a carbon footprint. This also includes the GPU clusters on which the experiments have been performed.
|
2305.05021 | Boosting fluxons for ballistic-logic power using an Aharonov-Casher ring | Superconducting logic is fast and energy-efficient relative to CMOS, but also
fundamental studies are needed to scale up circuits for greater utility.
Recently, ballistic shift registers for single-flux quanta (SFQ) bits were
shown in simulation to allow high-efficiency superconducting gates. However,
these gates are unpowered such that the bits slow after each gate operation and
only a short sequence of gates is possible without added power. Here we show
that a circuit based on an Aharonov-Casher ring can power these shift registers
by boosting the bit velocity to a constant value, despite their unusual bit
states constituted by two polarities of SFQ. As a step in its operation, each
bit state is forced into a different ring arm and then accelerated. The circuit
dynamics depend on various circuit parameters and choices of how to merge the
bit-state paths. One design from each merge design choice is proposed to enable
scaling up to an array of gates by adding serial biasing in a relatively simple
way. We find adequate performance for ballistic logic in terms of boosted
velocity, energy efficiency, and parameter margins. We also discuss the
circuit's classical barriers; in a different regime this relates to the
Aharonov-Casher effect. | Waltraut Wustmann, Kevin Daniel Osborn | 2023-05-08T19:57:31Z | http://arxiv.org/abs/2305.05021v3 | # Boosting Fluxons for Improved Digital Logic using an Aharonov-Casher Ring
###### Abstract
Superconducting logic is efficient and fast relative to CMOS, but needs fundamental studies on how to scale to very large circuits. Recently, ballistic shift registers for single-flux quanta (SFQ) bits were shown in simulation to allow high energy efficiency and clockless operation. However, the gates are unpowered such that the bit inertia can only allow a finite number of sequential gate executions because the bits slow slightly after each gate operation. Here we show that a structure based on an Aharonov-Casher ring can power these shift registers by boosting the bit velocity to a constant value, in spite of unusual bit states constituted by two polarities of SFQ. We discuss the design of the structure that we consider to be scalable, explain its operation, and study its performance in terms of boosted velocity, efficiency, and parameter margins.
Superconducting digital logic [1] has the potential for high-performance computing due to its high energy efficiency and speed relative to CMOS logic, a motivation underscored by the large power consumed by today's computer networks and data centers. For this purpose, single flux quanta (SFQ) [2; 3; 4] are used as bits in arithmetic logic units [5; 6] and processors [7; 8; 9]. SFQ logic is also used in analog-to-digital converters or broadband digital receivers [10; 11; 12; 13], and Josephson voltage standards are closely related [14; 15; 16; 17].
SFQ logic sometimes refers to the historic logic family named Rapid Single Flux Quantum (RSFQ) logic [18], and its variants which are made for higher efficiency [19; 20; 21; 22; 23]. In RSFQ, an SFQ of a fixed polarity represents the bit state "1", and its absence the "0". In this type, dc-currents power the SFQ to propagate conditionally in a known direction during the execution of the logic. Additionally, ac-powered logic families named Reciprocal Quantum Logic [24] and Adiabatic Quantum Flux Parametron [25] are studied for higher efficiency. However, the most extreme methods for high efficiency borrow concepts from thermodynamic reversibility, which in principle allow the energy cost to scale below ln(2)\(k_{B}T\) per gate [26; 27; 28; 29].
Recent work for SFQ logic aims to scale up the size of circuits, e.g. ref. [31]. Related to this, memory is studied for large-capacity RAM [32; 33; 34] or in-logic memory named registers [35; 36; 37]. While RSFQ power is traditionally sourced to the chip ground plane, a method to reuse bias current, called bias current recycling [38], provides a scaling opportunity because the bias currents do not necessarily increase with the number of gates, and this may avoid a problem analogous to power-density limitations in CMOS [39]. Scalability is also helped by clockless gates, which are novel in SFQ logic because most gates are synchronous and thus require clocking for almost every computing step [40; 41].
For the reversible logic type named Reversible Fluxon Logic (RFL) [29; 30; 42; 43], we have recently studied ballistic shift registers (BSRs) [43] which allow fully asynchronous inputs as long as a minimum time delay exists between input bits, and thus they have potential to be clockless. RFL uses degenerate (equal energy) bit states, where each polarity represents a state. Some of its gates use ballistically moving SFQ in long Josephson junctions (LJJs) such that the gates are solely powered by the input bit inertia. However, to make BSRs practical, power must be supplied to the SFQ every so often to scale up to a useful register size.
In this work, we take inspiration from the Aharonov-Casher effect (ACE) [44; 45; 46; 47] for its ring structure and its relation to fluxons. Various fluxon types exist in research [48; 49; 50; 51; 52; 53; 54; 55; 56]; however, the ACE is fundamentally a self-interference effect of a single fluxon, and it is extremely rare in LJJ structures due to various difficult simultaneous requirements, including the need for charge control at the ring center.
The challenge for our digital logic is the use of different-polarity bit states, and that gates require a minimum input velocity. RFL logic uses fluxons in a classical regime for reliability, but more interestingly we can borrow the above ring structure with its two connecting LJJs. Fortunately, we find that we can power the bipolar bits with only two separate current biases, modified LJJs, and damping resistors. Furthermore, we argue that our specific implementation can enable a large array of these power sources and this allows a scalable shift register memory.
The design principles of the booster follow from the science of LJJs and SFQ of two polarities. RFL gates use LJJs, and in the input and output LJJs of the gates the SFQ, or fluxons, constitute instances of dispersionless solitons of sine-Gordon type [57; 58]. It is known that a dc-current bias through an LJJ will exert a force on a fluxon, where the force direction is determined by the relative sign of the current direction and fluxon polarity. A given bias current, therefore, accelerates (boosts) a fluxon, but decelerates an antifluxon. In our RFL logic the bit states are represented by the two fluxon polarities. Thus, a dc-current bias applied to a regular LJJ is inadequate to
boost our bits in RFL because one of the bit states will gain energy while the other one loses energy. Here we propose a fluxon booster which solves this problem and allows one to accelerate slow fluxons to a high nominal velocity, irrespective of their polarity, as needed for the operation of ballistic RFL gates in sequence.
The operation principle of the fluxon booster is illustrated in Fig. 1. The booster consists of a ring LJJ which is connected to an input and an output LJJ, where each connection area forms topological S-Branches. For reference, let us first discuss the dynamics in an unpowered S-branch: in this situation, a fluxon coming in on one of the three LJJs will penetrate evanescently into both of the arms of the ring LJJ. However, it can not enter both of them fully, since this would require the local creation of a second fluxon, which could only occur through the local creation of a fluxon-antifluxon pair. At moderate fluxon energies that are relevant for RFL, this is energetically forbidden. Classically, it can also not enter one of the arms exclusively since they are equivalent and neither provides an energetic advantage over the other. Therefore, a typical S-branch [59; 60] presents a potential barrier on which a fluxon reflects back into the input LJJ.
However, in our fluxon booster, the dynamics at the input branch is modified by dc-currents, which bias the two arms of the LJJ ring, as shown in Fig. 1. The two bias currents create two (overlapping) potential steps in the ring LJJ, each of the magnitude \(\Phi_{0}I_{b}\). This results in a force on a fluxon (antifluxon) that is directed counterclockwise (clockwise). These forces act on the front of the fluxon coming in from the input LJJ and thus draw it into the ring LJJ. Depending on the force direction (and hence the fluxon polarity) this will be either the upper or the lower arm, as illustrated by panels (a) and (b). Inside the ring LJJ the fluxon is further accelerated while in the vicinity of the bias ports. Eventually, the accelerated fluxon reaches the merge point. As this is another S-branch and is unpowered, one would expect the fluxon to normally back-reflect in the manner described above. In order to avoid this behavior and to enable the fluxon to smoothly enter the output LJJ, the arms of the ring LJJ are set thereabout to have increased critical currents. In the discrete booster circuit, this is implemented by larger JJ areas of the two merge-side JJs of the S-branch cell, cf. Fig. 2(c). For a fluxon approaching the merge point from the upper arm of the ring LJJ, the large JJ on the lower arm essentially acts as a small inductance that connects the inner booster ring and the lower rail of the output LJJ, and allows the fluxon to enter into the output LJJ. A short delay in this transmission through the merge cell is caused by the potential barrier of the large JJ on the upper arm.
Our proposed designs for booster circuits are shown in Fig. 2. Standard LJJs used in this work are composed of discrete cells, each containing an undamped JJ with characteristics \(I_{c}\) and \(C_{J}\), and a geometric inductance \(L\). In the input and output LJJs, both rails contribute \(L/2\) to the total cell inductance, whereas the inner and outer inductor rails of the ring LJJ may contribute different fractions, \(L=L_{i}+L_{o}\). The relative discreteness of all LJJs is chosen equal, and this is given by the parameter \(a/\lambda_{J}=\sqrt{L/L_{J}}=(2\pi LI_{c}/\Phi_{0})^{1/2}\), where \(a\) corresponds to the cell width, and \(\lambda_{J}\) is the Josephson penetration depth. Our simulations of the schematics use sufficient lengths of input and output LJJs such that boundary effects are negligible and \(L_{J}/L=\Phi_{0}/(2\pi I_{c}L)=7\) for low but practical discreteness. The fluxons have a characteristic length of \(\lambda_{J}\) and an inverse plasma frequency of \(\omega_{J}^{-1}=(2\pi I_{c}/(\Phi_{0}C_{J}))^{-1/2}\). The energy of a fluxon is \(E_{\text{fl}}(v)=8E_{0}/\sqrt{1-(v/c)^{2}}\), where \(E_{0}=\Phi_{0}I_{c}\lambda_{J}/(2\pi a)\) and the velocity is bounded by \(|v|<c=\lambda_{J}\omega_{J}\).
While most JJs in the booster circuit follow nominal parameters, exceptions are made for JJs at the branch and merge cells, where they differ by area, but the ratio of critical current to capacitance, \(I_{c}/C_{J}\), is the same. The upper and lower arms of the ring LJJ are symmetric such that the fluxon booster will boost equally for either polarity of the input fluxon. In each arm, the 2nd JJ from the branch cell is current-biased and has a shunt resistor \(R_{J}^{b}\) in parallel which serves to dissipate fluctuations generated during the passage of the fluxon through the bias point or the branch and merge cell. In our designs the added damping is subcritical, \(R_{J}^{b}=5R_{J,\text{crit}}^{b}\), where \(R_{J,\text{crit}}^{b}=\sqrt{\Phi_{0}/(2\pi C_{J}I_{c})}\) is the resistance for critical JJ damping. However, we find that the booster also works with \(R_{J}^{b}\lesssim R_{J,\text{crit}}^{b}\), albeit with somewhat reduced boost efficiency. Table 1 shows the parameter values of our boosters.
The three booster designs shown in panel (a) are very similar, but differ by small series resistors on the ring-LJJ rails: these are either absent (D1), or are present only in the branch cell (D2), or are present also near the merge cell (D3). While our circuit simulations show that all designs can boost an input fluxon, we argue that D3 is scalable, as discussed next.
It has previously been shown [42; 43; 29] that unpowered (ballistic) RFL gates can operate with high efficiency, e.g., the output fluxon energy can be above 90 % of the input fluxon energy. This is very high compared to irreversible
Figure 1: Illustration of booster operation for fluxons of (a) positive or (b) negative (antifluxon) polarity. An input LJJ branches into two LJJ arms (which constitute the ring) which then merge into an output LJJ. Bias dc-currents \(I_{b}\) are applied in both arms near the branch point, and the bias generates anticlockwise (clockwise) force on an incoming fluxon (antifluxon). This force allows the fluxon to enter one of the arms, as determined by the force, and accelerate it. The critical current of the ring LJJ is increased near the merge point (not shown) to allow the fluxon to exit the booster ring.
gates which usually dissipate \(\sim I_{\rm c}\Phi_{0}\) per switching of a powered JJ. Nevertheless, the gradual energy loss of a fluxon traversing through a sequence of RFL gates will eventually be sufficiently large that a subsequent ballistic gate will fail. Therefore, we need to periodically include boosters, which restore the fluxon energy to a nominal value, e.g., to \(E_{\rm fl}(v)=10E_{0}\), corresponding to fluxon velocity \(v=0.6c\). As an example, Fig. 2(d) shows a \(2\times 2\) register with boosters. In this example, the gates could be BSRs with two input and output ports, where input fluxons are powered by the left boosters from left to right, and the memory could also be accessed from top to bottom. A key feature of the scalable design is that the boosters are all powered from the same two current sources.
For several boosters to share the same current sources, it is necessary that the dc-bias supercurrents through the device are at least isolated from one another, even in a circuit with slight asymmetries resulting from fabrication uncertainties. To that end, we introduce series resistors into the rails of the booster's ring LJJs, as shown in the designs D2 and D3 of Fig. 2(a). In principle, the location of series resistors may be chosen differently, as long as they isolate the superconducting path between the bias current ports. However, in our circuit simulations we find that the booster performance (output velocity, booster efficiency, margins) is negatively affected when series resistors are introduced in the merge cell (not shown). That is why our design D3 for a fully isolated booster introduces them in the adjacent ring cells.
Numerical simulations of the fluxon dynamics in the booster circuits are executed to obtain performance data. Figure 3 illustrates the performance of the three booster designs of Fig. 2(a), which all are found to have similar characteristics.
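To indicate the kind of model such simulations integrate, the sketch below time-steps a plain discrete sine-Gordon (JJ-ladder) equation with a fluxon initial condition in normalised units. It describes only a uniform, unbiased LJJ: the booster's branch and merge cells, the enlarged merge JJs, the bias ports, the shunt resistors, and the series resistors would enter as modified per-node terms. Lattice size, time step, damping, and boundary treatment are illustrative choices.

```python
import numpy as np

# Normalised units: time in 1/omega_J, length in lambda_J, currents in I_c, velocity c = 1.
LJ_over_L = 7.0                          # lambda_J^2 / a^2 = L_J / L
da = 1.0 / np.sqrt(LJ_over_L)            # cell width a in units of lambda_J
n_cells, dt = 200, 0.01
x = da * np.arange(n_cells)

def fluxon(x0, v):
    """Continuum sine-Gordon fluxon, used as the initial condition on the lattice."""
    u = (x - x0) / np.sqrt(1.0 - v**2)
    phi = 4.0 * np.arctan(np.exp(u))
    phidot = -2.0 * v / np.sqrt(1.0 - v**2) / np.cosh(u)
    return phi, phidot

phi, phidot = fluxon(x0=5.0, v=0.3)
alpha = 0.0                              # per-junction damping (only the biased JJs carry shunts)
ib = np.zeros(n_cells)                   # per-cell bias current in units of I_c

for _ in range(5000):                    # simple explicit time stepping
    lap = np.empty(n_cells)
    lap[1:-1] = phi[2:] - 2.0 * phi[1:-1] + phi[:-2]
    lap[0], lap[-1] = phi[1] - phi[0], phi[-2] - phi[-1]   # open-ended boundaries
    phiddot = LJ_over_L * lap - np.sin(phi) - alpha * phidot + ib
    phidot += dt * phiddot
    phi += dt * phidot
```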
Panel (a) shows the boosted output velocity \(v_{f}\) of the fluxon as a function of the bias current \(I_{\rm b}\). The booster has a threshold in \(I_{\rm b}\), below which an incoming fluxon is not transmitted through the entire booster. Below the threshold, the fluxon may become pinned when first reaching the branch cell (due to nearby resistors \(R_{J}^{b}\)), or when reaching the biased and damped JJ, or otherwise, it can reflect off the merge cell and eventually pin at the bias JJ on its second encounter. Once \(I_{\rm b}\) exceeds the threshold, the boosted velocity \(v_{f}\) first increases with \(I_{\rm b}\), as one would expect. The boosted velocity \(v_{f}\) reaches a maximum in the current range \(I_{\rm b}/I_{c}\approx 2.5-3.0\), depending on the design, before starting to decrease for larger \(I_{\rm b}\). From our simulations, we conclude that this decrease is a combined effect of the finite (short) length of the booster arms on the one hand and the discreteness of the LJJ cells on the other hand1.
Footnote 1: \(\sqrt{L/C_{J}}\) is the characteristic impedance of the LJJ.
The formation of a \(v_{f}\)-maximum at moderate \(I_{\rm b}\) is one of the reasons to keep the booster arms relatively short. In Fig. 3 and Table 1 we have chosen 10 JJs in each arm of the booster ring, and we note that the arms then form "short LJJs" which are longer than but comparable to \(\lambda_{J}\). Making the arms even shorter appears to be detrimental to the performance at large initial velocities: in our simulations we observe a rapid decrease of \(v_{f}\) as a
\begin{table}
\begin{tabular}{c c c c c c c c} \(\frac{L_{o}}{L}\) & \(\frac{L_{i}}{L}\) & \(\frac{I_{c}^{B1}}{I_{c}}\) & \(\frac{I_{c}^{B2}}{I_{c}}\) & \(\frac{I_{c}^{M1}}{I_{c}}\) & \(\frac{I_{c}^{M2}}{I_{c}}\) & \(\frac{R_{J}^{b}}{R_{J,\rm crit}^{b}}\) & \(\frac{R_{s}}{\sqrt{L/C_{J}}}\) \\ \hline
0.7 & 0.3 & 1.5 & 1.1 & 1.7 & 4.2 & 5.0 & \((0.019)\)2 \\ \(-52\) \% & \(\frac{-100}{+83}\) \% & \(\frac{-100}{+50}\) \% & & \(\frac{-27}{+39}\) \% & & \\ \end{tabular}
\end{table}
Table 1: Special circuit parameters for booster of Fig. 2, with \(L^{B}=L^{M}=0.1L\). The margins are calculated for D3, using 10 JJs per ring arm and \(I_{\rm b}/I_{c}=2.8\), and with the criterion \(v_{f}\geq 0.5c\) for \(v_{i}=0.1c\). They are determined by varying a single circuit parameter, keeping all others fixed. JJ critical current and capacitance are varied separately. Most parameter margins range from \(-100\%\) to (far) above \(+200\%\), except those given below.
Figure 2: (a) Circuit schematics of the booster, in three different designs: without series resistors (D1), with series resistors for galvanic isolation in branch cell alone (D2), or also near merge cell (D3). (b,c) Schematics near branch and merge cell indicating regular LJJ (black) and special (gray) circuit elements. The top and bottom arm of the booster are fully symmetric. (d) A scalable sequence of boosters and its possible use in an array of ballistic gates, e.g., shift register gates, especially using D3.
function of \(v_{i}\), for \(v_{i}\gtrsim 0.5c\). For \(N=10\), in contrast, as Fig. 3(c) illustrates, the output velocity of the booster is relatively uniform up to the break-even point \(v_{f}\approx v_{i}\) (gray dashed line). This covers the entire range of relevant input velocities \(v_{i}<0.6c\), given that the nominal operation velocity in most RFL gates is set to \(v\approx 0.6c\). Making the booster arms longer (\(N>10\)) allows \(v_{f}\) to saturate before exiting the ring, but at a larger value of \(I_{\rm b}\), where the booster efficiency is much reduced (see below).
The threshold for booster operation typically lies above \(I_{\rm b}>I_{c}\), cf. Fig. 3(a). This is quite different from RSFQ circuits, where the critically damped JJs are typically biased with \(I_{\rm b}\approx 0.7I_{c}\). In our case, however, the circuit forms (weakly) discrete LJJs, where the small cell inductance \(L\) causes the bias current to spread over several JJs. The undamped (underdamped) JJs are thus biased subcritically when the bias current \(I_{\rm b}\) is in a steady state. Additionally, the \(I_{c}\) has been previously designed [30] to be much smaller than in RSFQ, such that a single subcritical bias current should also be smaller than in RSFQ.
We calculate the booster efficiency \(\eta=\Delta E_{\rm fl}/W\) as the ratio of the fluxon's energy gain \(\Delta E_{\rm fl}=E_{\rm fl}(v_{f})-E_{\rm fl}(v_{i})\) to the energy cost of the current source \(W\), where \(W\) is calculated numerically from all of the circuit elements. In cases of a successful fluxon boost, where one of the two biased JJs undergoes a \(2\pi\)-phase change during the fluxon's passage, the total energy cost is consistent with the formula \(W=I_{\rm b}\Phi_{0}\). Thus the energy cost formula of the booster is the same as for JJ switching in an RSFQ gate. However, in the proposed architecture of RFL, the SFQ receive only occasional boosts and thus much fewer biases are required per logic gate sequence than in RSFQ.
Using sine-Gordon perturbation theory [61], we also find \(I_{\rm b}\Phi_{0}\) as the height of the potential energy step experienced by a fluxon in a continuous LJJ with a local current bias. The booster efficiency \(\eta\) is shown in Fig. 3(b) as a function of \(I_{\rm b}\). Over a range of moderate \(I_{\rm b}\), it increases together with \(v_{f}\) and \(\Delta E_{\rm fl}\). However, \(\eta\) assumes a maximum when \(v_{f}\) and \(\Delta E_{\rm fl}\) no longer grow substantially with \(I_{\rm b}\), and afterwards it decreases because of the \(I_{\rm b}\)-proportional energy cost \(W\).
As a function of the initial velocity \(v_{i}\), the efficiency \(\eta\) decreases, as seen in Fig. 3(d). This follows since \(W=I_{\rm b}\Phi_{0}\) is independent of \(v_{i}\) while the \(v_{i}\)-insensitive output velocity \(v_{f}\) implies that \(\Delta E_{\rm fl}\) is largest for small \(v_{i}\). The black dashed line shows an approximation for \(\mathtt{D3}\), using \(\Delta E_{\rm fl}=E_{\rm fl}(\bar{v}_{f})-E_{\rm fl}(v_{i})\) with the mean output velocity \(\bar{v}_{f}=0.68c\) in the range \(v_{i}/c\leq 0.6\). For the data shown in panel (b), which is for \(v_{i}=0.3c\), and at the operation point \(I_{\rm b}/I_{c}=2.8\), the output velocity is \(v_{f}=0.67c\) and the efficiency is \(\eta=0.36\) (\(\Delta E_{\rm fl}=2.5E_{0}\), \(W=6.7E_{0}\), \(E_{\rm diss}=3.4E_{0}\)). This can be compared with the performance of the (polarity-dependent) fluxon boost in a regular (discrete) LJJ. Using the same values of \(a/\lambda_{J}\), \(R_{J}^{b}\), \(I_{\rm b}=2.8I_{c}\), and \(v_{i}=0.3c\) we find that a fluxon in a regular LJJ is boosted to \(v_{f}=0.79c\) with \(\eta=0.72\) (\(\Delta E_{\rm fl}=4.8E_{0}\), \(W=6.7E_{0}\), \(E_{\rm diss}=0.7E_{0}\)). Thus, the efficiency in our scalable ring booster is reduced by a factor of \(\approx 1/2\) relative to the regular LJJ (for known fluxon polarity), owing to the dynamics at the branch and merge cells, the boundary conditions imposed by these cells during the fluxon boost, and the added series resistors.
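The quoted energy balance can be checked directly from the continuum fluxon-energy formula. The short script below is only a sanity check using the stated ratios \(L_{J}/L=7\) and \(I_{\rm b}=2.8I_{c}\) together with the quoted velocities; it reproduces the numbers up to rounding and the small corrections due to lattice discreteness, which the continuum formula ignores.

```python
import numpy as np

def E_fl(v):                                  # fluxon energy in units of E0, velocity in units of c
    return 8.0 / np.sqrt(1.0 - v**2)

a_over_lamJ = np.sqrt(1.0 / 7.0)              # a/lambda_J = sqrt(L/L_J) with L_J/L = 7
W = 2.8 * 2.0 * np.pi * a_over_lamJ           # W = I_b * Phi_0 in units of E0, for I_b = 2.8 I_c

vi, vf = 0.3, 0.67                            # booster input/output velocities (design D3)
dE = E_fl(vf) - E_fl(vi)
print(f"booster:     W = {W:.1f} E0, dE = {dE:.1f} E0, eta = {dE / W:.2f}")   # ~6.6, ~2.4, 0.36

vf_reg = 0.79                                 # same bias applied to a regular LJJ (known polarity)
dE_reg = E_fl(vf_reg) - E_fl(vi)
print(f"regular LJJ: dE = {dE_reg:.1f} E0, eta = {dE_reg / W:.2f}")           # ~4.7, ~0.70
```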
In conclusion, SFQ logic needs better scaling of logic and memory. Also, a previously studied shift-register gate for SFQ computing did not require synchronous input bits and had the potential to impact SFQ broadly in terms of clocking and scaling. In the gate, and other gates from the same logic family, the bit states are unconventional. In this letter, boosters are proposed to power and scale-up register memory from the individual ballistic shift registers. Here we report on the first boosters; they were built up from a classical version of an Aharonov-Casher ring structure. We explain the operation and performance of the boosters related to two fluxon polarities, fluxon dynamics, JJ modifications, and resistors added to enable scalability. The data show uniformly high output velocity with good efficiency, reasonable margins, and simplicity of design. Briefly put, the boosters are efficient in the proposed scalable register architecture because the SFQ bit only needs to pass through one biased JJ per gate sequence, which contrasts standard logic in which an SFQ passes through many biased JJs per gate.
###### Acknowledgements.
KDO is grateful for scientific discussions on digital logic with Q. Herr, A. Herr, M. Frank, I. Sutherland, B. Sarabi, C. Richardson, G. Herrera, and N. Yoshikawa,
Figure 3: Booster performance: \(I_{\rm b}\)-dependence of (a) fluxon output velocity \(v_{f}\) and (b) booster efficiency \(\eta\), and (c,d) their dependence on fluxon input velocity \(v_{i}\), for the three booster designs of Fig. 2. In (a,b) the input velocity of the fluxon is set to \(v_{i}=0.3c\), for which a maximum in \(v_{f}\) appears at \(I_{\rm b}=2.8I_{c}\) in the scalable design \(\mathtt{D3}\). In (c,d) the bias is fixed at that value, \(I_{\rm b}=2.8I_{c}\), and we note that \(v_{f}\) is \(v_{i}\)-insensitive over a wide \(v_{i}\)-range, before decreasing near where \(v_{f}\equiv v_{i}\), marked by the dashed line in (c). The dashed line in (d) shows an estimate for \(\eta\) assuming constant \(v_{f}\).
and also broader scientific discussions with B. Palmer, B. Butera, F. Gaitan, R. Lewis, and A. Murphy. Both KDO and WW would like to thank H. Cai for recent discussions of experiments. WW would like to thank the University of Otago Physics Department for hosting her.
|
2304.05308 | A priori data-driven robustness guarantees on strategic deviations from
generalised Nash equilibria | In this paper we focus on noncooperative games with uncertain constraints
coupling the agents' decisions. We consider a setting where bounded deviations
of agents' decisions from the equilibrium are possible, and uncertain
constraints are inferred from data. Building upon recent advances in the so
called scenario approach, we propose a randomised algorithm that returns a
nominal equilibrium such that a pre-specified bound on the probability of
violation for yet unseen constraints is satisfied for an entire region of
admissible deviations surrounding it, thus supporting neighbourhoods of
equilibria with probabilistic feasibility certificates. For the case in which
the game admits a potential function, whose minimum coincides with the social
welfare optimum of the population, the proposed algorithmic scheme opens the
road to achieve a trade-off between the guaranteed feasibility levels of the
region surrounding the nominal equilibrium, and its system-level efficiency.
Detailed numerical simulations corroborate our theoretical results. | George Pantazis, Filiberto Fele, Kostas Margellos | 2023-04-11T16:07:09Z | http://arxiv.org/abs/2304.05308v2 | # A priori data-driven robustness guarantees on strategic deviations from generalised Nash equilibria
###### Abstract
In this paper we focus on noncooperative games with uncertain constraints coupling the agents' decisions. We consider a setting where bounded deviations of agents' decisions from the equilibrium are possible, and uncertain constraints are inferred from data. Building upon recent advances in the so called scenario approach, we propose a randomised algorithm that returns a nominal equilibrium such that a _pre-specified_ bound on the probability of violation for yet unseen constraints is satisfied for an entire region of admissible deviations surrounding it--thus supporting neighbourhoods of equilibria with probabilistic feasibility certificates. For the case in which the game admits a potential function, whose minimum coincides with the social welfare optimum of the population, the proposed algorithmic scheme opens the road to achieve a trade-off between the guaranteed feasibility levels of the region surrounding the nominal equilibrium, and its system-level efficiency. Detailed numerical simulations corroborate our theoretical results.
Advances in the scenario approach were leveraged for the first time in a game-theoretic context, for the formulation of distribution-free probabilistic feasibility guarantees for randomised Nash equilibria. These works provide guarantees for one specific equilibrium point (often assumed to be unique); this was extended in [36, 37], by providing _a posteriori_ feasibility guarantees for the entire domain. Besides the game-theoretic context, alternative methodologies for set-oriented probabilistic feasibility guarantees have been proposed in the seminal works [5, 15], which a priori characterise probabilistic feasibility regions constructed out of sampled constraints using statistical learning theoretic results. More recently, the so called probabilistic scaling [4, 31] has been proposed to obtain a posteriori guarantees on the probability that a polytope generated out of samples is a subset of some chance-constrained feasibility region. Following an approach similar to [36], the works [16, 17] deliver tighter probabilistic feasibility guarantees by focusing on variational-inequality (VI) solution sets.
The results above follow a standard approach in the game-theoretic literature, where a strict behavioural assumption--the so called _rationality_--is imposed on the players' decision making. Namely, the players are viewed as rational agents wishing to maximize their profit (expressed by some given cost function). However, studies have shown that this is unrealistic in practice [27, 38, 39, 45] and that agents usually exhibit a _boundedly rational_ behaviour [39], i.e., their decisions can deviate from rationality due to individual biases, behavioural inertia, restricted computational power/time, etc. The consequences of this become relevant in engineering applications, as the human role in technical systems evolves beyond mere users and consumers to active agents, operators, decision-makers and enablers of efficient, resilient and sustainable infrastructures [30].
To bridge this gap between real-world applications and the cognate literature, here we study games with uncertain constraints, where deviations from a _nominal_ equilibrium are explicitly considered. We follow a randomised approach to approximate the coupling constraints by means of data. In this more general setting, where deviations are considered, providing guarantees for a single solution is devoid of any meaning: indeed, repetition of the game might lead to a different solution in a neighbourhood around the nominal equilibrium, irrespective of the employed dataset. Technically speaking, this renders the identification of the data samples that support the solution (cf. sample compression [32]) a challenging task. Focussing on the class of generalised Nash equilibrium seeking problems [18], we contribute to the provision of data-driven robustness guarantees for the collection of possible deviations from the equilibrium as follows:
1. Adopting a scenario-theoretic paradigm, we establish a methodology for the provision of _a posteriori_ probabilistic feasibility guarantees for a domain around the randomised equilibrium of the game under study.
2. We design a data-driven equilibrium-seeking algorithm that converges to a solution that meets an _a priori_ defined level of probabilistic feasibility for a fixed admissible region surrounding this equilibrium. This can model possible deviations from a nominal equilibrium that the designer wishes to take into account when incentivising a certain operation profile. The strength of the provided feasibility guarantees depends on a prespecified quantity, which in turn can affect the location of the nominal equilibrium and the volume of the region for which these probabilistic guarantees hold. Furthermore, when the game under study admits a potential function--whose minimum coincides with some social welfare optimum--our methodology provides a new perspective for trading off the probabilistic feasibility of the region surrounding the nominal equilibrium and its system-level efficiency.
The rest of the paper is organized as follows. In Section 2 we provide fundamentals of game theory and the scenario approach which will be used as main ingredients for the subsequent developments. In Section 3.1 we show how the feasibility guarantees for a region around the game solution can be a posteriori quantified. In Section 3.2 we propose a data driven algorithm and prove its convergence to an equilibrium such that the considered neighbourhood of strategic deviations can satisfy prespecified probabilistic feasibility requirements. An illustrative example in Section 4 corroborates our theoretical analysis. Section 5 concludes the paper and presents future research directions. To streamline the presentation of our results, some proofs are deferred to the Appendix.
## 2 Preliminaries
_Notation_: All vectors are column unless otherwise indicated. We denote by \(\mathbb{R}^{n}_{+}\) the nonnegative orthant in \(\mathbb{R}^{n}\). When a matrix \(A\) is positive definite we write \(A\succ 0\); similarly, positive semi-definitess is denoted as \(A\succeq 0\). Note that our definition of (semi-)definiteness does not require the matrix to be symmetric. We denote by \(\mathbf{0}_{q\times r}\) a \(q\times r\) matrix full of zeros, by \(I_{r}\) the \(r\times r\) identity matrix, and by \(\mathbf{1}_{r}\) the vector of \(r\) ones; dimensions can be omitted when clear from the context. \(e_{q}\) is the unit vector whose \(q\)-th element is \(1\) and all other elements are \(0\), \(\|\cdot\|_{p}\) the \(p\)-norm operator, and \((\cdot)_{r}\) denotes the \(r\)-th component of its vector argument. \(\mathbb{B}_{p}(x,\rho)=\{y\in\mathbb{R}^{d}:\|y-x\|_{p}<\rho\}\) is the open \(p\)-normed ball centred at \(x\) with radius \(\rho\); when \(p\) is omitted, any choice of norm is valid. For a set \(S\), \(|S|\) denotes its cardinality, while \(2^{S}\) denotes its power set, i.e., the collection of all subsets of \(S\). Finally, given \(D\succ 0\), \(\operatorname{proj}_{K,D}[x]:=\arg\min_{y\in K}(y-x)^{\intercal}D(y-x)\) is the skewed projection of \(x\) onto the set \(K\).
### Games with uncertain constraints
We consider a population of agents with index set \(\mathcal{N}=\{1,\ldots,N\}\). The decision vector \(x_{i}\) of each agent \(i\in\mathcal{N}\) takes value in the set \(X_{i}\subseteq\mathbb{R}^{n}\), while \(x=(x_{i})_{i=1}^{N}\in X=\prod_{i=1}^{N}X_{i}\subseteq\mathbb{R}^{nN}\) is the global decision vector that is formed by concatenating the decisions of the entire population. The vector \(x_{-i}\in\mathbb{R}^{n(N-1)}\) comprises all agents' decisions except for those of agent \(i\). In our setup, the cost incurred by agent \(i\in\mathcal{N}\) is expressed by a real-valued function \(J_{i}(x_{i},x_{-i})\) that depends on local decisions as well as on the decisions from other agents \(j\in\mathcal{N}\setminus\{i\}\). In the following, with a slight abuse of notation, we can exchange \(x\) for \((x_{i},x_{-i})\) to single out agent \(i\)'s decision from the global decision vector. Furthermore, we consider _uncertain_ constraints coupling the agents' decisions. These can be expressed in the form 1
Footnote 1: This formulation can describe deterministic and/or local constraints as special cases.
\[C_{\delta}=\{x\in X:g(x,\delta)\leq 0\},\;\delta\in\Delta, \tag{1}\]
where \(g:\mathbb{R}^{nN}\times\Delta\to\mathbb{R}\) depends on some uncertain parameter \(\delta\) taking values in a support set \(\Delta\) according to a probability measure \(\mathbb{P}\).
Feasible collective decisions under this setup can be found by letting every agent \(i\in\mathcal{N}\) solve the following optimization program, where the decisions \(x_{-i}\) of all other agents are given,
\[\left.\begin{aligned} G:&\min_{x_{i}\in X_{i}}J_{i}(x_{i},x_{-i})\\ &\text{subject to }x_{i}\in\bigcap_{\delta\in\Delta}C_{ \delta}^{i}(x_{-i})\end{aligned}\right\}\;\forall i\in\mathcal{N}, \tag{2}\]
where \(C_{\delta}^{i}(x_{-i})=\{x_{i}\in X_{i}:g(x_{i},x_{-i},\delta)\leq 0\}\) denotes the projection of the coupling constraint on \(X_{i}\) for fixed \(x_{-i}\) and uncertain realization \(\delta\in\Delta\). The collection of coupled optimization programs in (2) for all \(i\in\mathcal{N}\) constitutes an _uncertain noncooperative game_; we denote it as \(G\).
Note that \(G\) follows a worst-case paradigm, taking into account all possible coupling constraints that can be realised by variations of the uncertain parameter \(\delta\in\Delta\). This can render the solutions of \(G\) rather conservative. Furthermore, it is in general not possible to compute a solution for \(G\) without an accurate knowledge of and/or additional assumptions on the support set \(\Delta\) and the probability distribution \(\mathbb{P}\). To circumvent these limitations, we follow a data-driven paradigm and approximate \(G\) by means of a finite number of samples drawn from \(\Delta\), namely the \(K\)-multisample \(\boldsymbol{\delta}_{K}=(\delta_{1},\ldots,\delta_{K})\in\Delta^{K}\). In the remainder of this document, we hold on to the standing assumption that these samples are independent and identically distributed (i.i.d.). Apart from this, no other knowledge on the support set \(\Delta\) and the probability distribution \(\mathbb{P}\) of the uncertain parameter is required. Then, for given multi-sample \(\boldsymbol{\delta}_{K}\) and other agents' decisions \(x_{-i}\), agent \(i\in\mathcal{N}\) solves the optimization program
\[\left.\begin{aligned} G_{K}:\;&\min_{x_{i}\in X_{i}}J_{i}(x_{i},x_{-i})\\ &\text{subject to }x_{i}\in\bigcap_{k=1}^{K}C_{\delta_{k}}^{i}(x_{-i})\end{aligned}\right\}\;\forall i\in\mathcal{N}. \tag{3}\]
Instead of considering all possible uncertainty realizations \(\delta\in\Delta\) as in (2), we let the data encoded in \(\boldsymbol{\delta}_{K}\) lead agents to their decision by solving (3). We refer to the collection of coupled optimization programs in (3) as the _scenario game_\(G_{K}\), where the subscript \(K\) implies dependence on the drawn multi-sample \(\boldsymbol{\delta}_{K}\). Under standard assumptions, a solution to the scenario game \(G_{K}\) exists and the problem is, in contrast to \(G\), tractable using state-of-the-art equilibrium seeking algorithms.
### Variational inequalities and game equilibria
Notably--under certain assumptions detailed next--solutions to the game \(G_{K}\) can be retrieved as solutions to a variational inequality (VI), for specific choices of the mapping \(F:\,X\to\mathbb{R}^{nN}\)[18, Thm 3.9]:
\[\begin{aligned} \text{VI}_{K}:&\text{Find }x^{*}\in\Pi_{K}\text{ such that}\\ &(x-x^{*})^{\intercal}F(x^{*})\geq 0\text{ for any }x\in\Pi_{K},\end{aligned} \tag{4}\]
where \(\Pi_{K}:=X\cap\bigcap_{k=1}^{K}C_{\delta_{k}}\) denotes the problem domain. A classic game solution concept, which finds wide application in the literature, is the Nash equilibrium (NE) [33]. At an NE, no agent can decrease their cost by unilaterally changing their decision. Formally, this can be stated as follows.
**Definition 1**.: _A point \(x^{*}=(x^{*}_{i},x^{*}_{-i})\in\Pi_{K}\) is called a generalised Nash equilibrium (GNE) of \(G_{K}\) if, for all \(i\in\mathcal{N}\),_
\[J_{i}\left(x^{*}_{i},x^{*}_{-i}\right)\leq J_{i}(y_{i},x^{*}_{-i}),\;\forall y _{i}\in X_{i}\cap\bigcap_{k=1}^{K}C_{\delta_{k}}^{i}(x^{*}_{-i}).\]
For our analysis, we rely on the following conditions:
**Assumption 1**.: _For all \(i\in\mathcal{N}\), \(J_{i}(x_{i},x_{-i})\) is convex and continuously differentiable in \(x_{i}\) for any fixed \(x_{-i}\)._
**Assumption 2**.:
1. _For any multi-sample_ \(\boldsymbol{\delta}_{K}\in\Delta^{K}\)_, the domain_ \(\Pi_{K}\) _is non-empty._
2. _The set_ \(X=\prod_{i=1}^{N}X_{i}\) _is compact, polytopic and convex._
3. _For any_ \(\delta\in\Delta\)_,_ \(g\) _is an affine function of the form_ \(g(x,\delta)=a(\delta)^{\intercal}x-b(\delta)\)_, where_ \(a:\Delta\to\mathbb{R}^{nN}\) _and_ \(b:\Delta\to\mathbb{R}\)_._
Under these assumptions, we can determine a GNE as in Definition 1 by solving (4) with
\[F(x)=F_{\mathrm{NE}}(x):=\begin{bmatrix}\nabla_{x_{1}}J_{1}(x_{1},x_{-1})\\ \vdots\\ \nabla_{x_{N}}J_{N}(x_{N},x_{-N})\end{bmatrix}. \tag{5}\]
A class of problems of common interest can be modelled by the so-called _aggregative_ games [1, 26, 28], where the cost incurred by agents depends on some aggregate measure--typically the average--of the decision of the entire population. Formally, such a cost can be expressed in (3) by the real-valued function \(J_{i}(x_{i},\sigma(x))\), where the aggregate \(\sigma:\mathbb{R}^{nN}\to\mathbb{R}^{n}\) is defined as the mapping \(x\mapsto\frac{1}{N}\sum_{i=1}^{N}x_{i}\). A solution frequently linked to this class of games is the Wardrop equilibrium (WE), a concept akin to the NE but specifically defined in the context of transportation networks [6]. The variational WEs of \(G_{K}\) can be expressed by using \(F(x)=F_{\mathrm{WE}}(x):=\big[\nabla_{x_{i}}J_{i}(x_{i},z)\big|_{z=\sigma(x)}\big]_{i\in\mathcal{N}}\); notice that in this case the second argument of \(J_{i}\) is fixed and set to \(\sigma(x)\), consistently with the notion of WE where agents neglect the impact of their decision on others.
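To make the distinction between the NE and WE mappings tangible, the following minimal Python sketch writes both pseudo-gradients for a quadratic aggregative cost of the form \(J_{i}(x_{i},\sigma(x))=x_{i}^{\intercal}(C\sigma(x)+d)\); this particular cost is only an illustrative stand-in (a game of this type reappears in Section 4), and the names `F_NE` and `F_WE` are ours.

```python
import numpy as np

def F_NE(x, C, d, n, N):
    """Pseudo-gradient (5) for the illustrative quadratic aggregative cost
    J_i(x_i, x_-i) = x_i^T (C sigma(x) + d)."""
    X = x.reshape(N, n)
    sigma = X.mean(axis=0)
    # each agent accounts for its own (1/N) influence on the aggregate
    return np.concatenate([C @ sigma + (C.T @ X[i]) / N + d for i in range(N)])

def F_WE(x, C, d, n, N):
    """Wardrop variant: the second argument of J_i is frozen at sigma(x)."""
    X = x.reshape(N, n)
    sigma = X.mean(axis=0)
    return np.tile(C @ sigma + d, N)
```

The only difference between the two mappings is the \(1/N\) correction term through which each agent accounts for its own influence on the aggregate; it vanishes in the Wardrop variant.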
We restrict the considered class of variational mappings as follows:
**Assumption 3**.: _The mapping \(F\) is_
1. \(\alpha\)_-strongly monotone, i.e.,_ \((x-y)^{\intercal}(F(x)-F(y))\geq\alpha\|x-y\|^{2}\) _for any_ \(x,y\in X\)_,_
2. \(L_{F}\)_-Lipschitz continuous, i.e.,_ \(\|F(x)-F(y)\|\leq L_{F}\|x-y\|\) _for any_ \(x,y\in X\)_._
Assumptions 1 and 3 are standard in the game-theoretic literature [19, 43]. Assumption 2 is relatively mild; the affine form of the constraints is exploited in the proposed algorithm (see Section 3) for the convergence to an equilibrium bearing the desired robustness properties.
We point out that in general only a subset of solutions to \(G_{K}\) can be retrieved through (4): these are referred to as _variational equilibria_, and enjoy favourable properties over nonvariational ones, as with the former the coupling constraints' burden is equally split among agents [29, 24]. The following lemma, adapted from [19, Thm. 2.3.3], formalises the connection between the solutions to \(\mathrm{VI}_{K}\) and the GNEs (or GWEs) of \(G_{K}\).
**Lemma 1**.: _Under Assumptions 1, 2 and 3, \(\mathit{VI}_{K}\) has a unique solution that is also an equilibrium of \(G_{K}\)._
For the considered class of VIs, several algorithms from the literature can be employed to obtain a variational equilibrium of \(G_{K}\); see, e.g., [18, 35]. We remark that, even if not explicitly shown for ease of notation, any solution \(x^{*}\) to \(G_{K}\) is itself a function of the drawn multisample \(\boldsymbol{\delta}_{K}\in\Delta^{K}\). Probabilistic feasibility guarantees for the unique solution of \(\mathrm{VI}_{K}\) can then be provided both in an _a priori_ and _a posteriori_ fashion by resorting to the results in [21, 22, 34]. However, these results are tailored to the provision of probabilistic feasibility guarantees for a single point (namely the solution of a VI): any strategic deviation from the equilibrium is not covered by such guarantees. We cover this issue in Section 3. First, we provide some background on the scenario approach.
### Basic concepts in the scenario approach
A fundamental notion in the scenario approach is the _probability of violation_ of an uncertain constraint.
**Definition 2**.:
1. _The probability of violation_ \(\mathcal{V}:\mathbb{R}^{nN}\to[0,1]\) _of a point_ \(x\in\Pi_{K}\) _is defined as the probability that a new yet unseen sample_ \(\delta\in\Delta\) _will give rise to a constraint_ \(C_{\delta}\) _(as defined in (_1_)) such that_ \(x\notin C_{\delta}\)_, i.e.,_ \[\mathcal{V}(x):=\mathbb{P}\{\delta\in\Delta:x\notin C_{\delta}\}.\]
2. _The probability of violation_ \(\mathbb{V}:2^{\mathbb{R}^{nN}}\to[0,1]\) _of a set_ \(S\subseteq\Pi_{K}\) _is defined as the worst-case_ \(\mathcal{V}\) _among all the points in_ \(S\)_, i.e.,_ \[\mathbb{V}(S)=\sup_{x\in S}\mathbb{P}\{\delta\in\Delta:x\notin C_{\delta}\}.\]
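For intuition only, the probability of violation of a candidate point can be estimated empirically from fresh validation samples; the Python sketch below assumes hypothetical helpers `a(delta)` and `b(delta)` describing affine constraints as in Assumption 2, and a user-supplied sampler `sample_delta`. The scenario bounds derived in Section 3 are precisely what make such validation data unnecessary.

```python
import numpy as np

def empirical_violation(x, sample_delta, a, b, n_val=10_000, rng=None):
    """Monte Carlo estimate of V(x) = P{delta : g(x, delta) > 0} for
    affine constraints g(x, delta) = a(delta)^T x - b(delta)."""
    if rng is None:
        rng = np.random.default_rng(0)
    violations = 0
    for _ in range(n_val):
        delta = sample_delta(rng)           # fresh, unseen uncertainty sample
        if a(delta) @ x - b(delta) > 0.0:   # x falls outside C_delta
            violations += 1
    return violations / n_val
```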
Formally, a data-driven decision-making process can be characterized by a mapping--the _algorithm_--that takes as input the data encoded by the samples and returns a set of decisions.
**Definition 3**.: _An algorithm is a function \(\mathcal{A}:\Delta^{l}\to 2^{\mathbb{R}^{nN}}\times\mathbb{R}^{nN}\) that takes as input an \(l\)-multisample and returns the pair \((S^{*}_{l},x^{*})\), namely, a solution set \(S^{*}_{l}\) and a point \(x^{*}\in S^{*}_{l}\)._
In the following, we interpret the above definition as context-dependent, in that the size \(l\) of the input multisample is allowed to vary--all else remaining fixed for a given algorithm \(\mathcal{A}\). A key notion, strongly linked to that of algorithm, is the _minimal compression set_ [32]. This concept springs from the observation that typically only a subset of the sampled data is relevant to a decision or set of decisions, and all other samples are redundant.
**Definition 4** (Compression set).: _Consider an algorithm \(\mathcal{A}\) as in Definition 3. A subset of samples \(I\subseteq\boldsymbol{\delta}_{K}\) is called a _compression_ for \(\mathcal{A}(\boldsymbol{\delta}_{K})\) if \(\mathcal{A}(I)=\mathcal{A}(\boldsymbol{\delta}_{K})\).2 As multiple subsets of samples can exist that fulfil this property, the ones with minimal cardinality are called minimal compression sets._
Footnote 2: With some abuse of notation, in the remainder the symbol \(\boldsymbol{\delta}_{K}\) is interpreted as either the i.i.d. sample vector \(\boldsymbol{\delta}_{K}\in\Delta^{K}\), or the set comprising its components, i.e., \(\boldsymbol{\delta}_{K}=\{\delta_{1},\ldots,\delta_{K}\}\subseteq\Delta\), depending on the context.
If we feed the algorithm with the set of samples corresponding to a compression, then the same decision (in our case a set and a point in the set) will be returned as when we feed the algorithm with the entire multi-sample. The compression set is related to the notion of support samples that is presented below.
**Definition 5** (Support sample).: _A sample \(\delta_{i}\in\boldsymbol{\delta}_{K}\) is a support sample if its removal changes the decision returned by \(\mathcal{A}\), i.e., if \(\mathcal{A}(\boldsymbol{\delta}_{K})\neq\mathcal{A}(\boldsymbol{\delta}_{K}\setminus\{\delta_{i}\})\)._
Notice that since the output of \(\mathcal{A}\) is a pair, it follows directly from the previous definition that if a support sample is removed, at least one of the two elements of the pair must change. A sample whose removal changes the \(x^{*}\) component of \(\mathcal{A}(\boldsymbol{\delta}_{K})\) is called a support sample for \(x^{*}\). We link the two previous concepts by imposing the following _non-degeneracy_ assumption.
**Assumption 4**.: _With \(\mathbb{P}^{K}\)-probability 1, if \(I^{*}\) is the set of support samples of \((S^{*}_{K},x^{*})=\mathcal{A}(\boldsymbol{\delta}_{K})\), then \(\mathcal{A}(\boldsymbol{\delta}_{K})=\mathcal{A}(I^{*})\)._
In other words, non-degeneracy implies that the solution returned by algorithm \(\mathcal{A}\) when fed with all the drawn samples is the same as the one obtained by using only the support samples of the problem. Under this assumption, the set of support samples and the minimal compression set coincide--and the latter is unique. This avoids degenerate cases where there is an accumulation of constraints active at the solution.
## 3 Probabilistic feasibility of sets around equilibria
### A first a posteriori result
Returning to the scenario game \(G_{K}\) in (3), we now consider a more general setup where agents are allowed to deviate from \(x^{*}\) following, e.g., unmodelled changes in their cost functions; while we suppose that these deviations are feasible with respect to the local constraints, we want to study the feasibility as regards the coupling constraints obtained through sampling. Specifically, the region in which agents' strategies can deviate from the nominal equilibrium is assumed to lie within a predefined open ball \(\mathbb{B}(x^{*},\rho)\), where \(\rho>0\) is a fixed radius that denotes the maximum possible distance of agents' deviations from \(x^{*}\); the latter is assumed to be unique as per Lemma 1. As such, the region of interest is \(S^{*}_{K}=\Pi_{K}\cap\mathbb{B}(x^{*},\rho)\).
This is pictorially illustrated in Figure 1 using the \(\infty\)-norm (note that any other norm could have been used instead): an algorithm \(\mathcal{A}\) (see Sec. 2.3) takes as input a multi-sample \(\boldsymbol{\delta}_{K}\) and returns the region \(S^{*}_{K}\) around the solution \(x^{*}\in\mathbb{R}^{2}\) of a game with two players whose decisions are defined as scalar quantities. We assume in this case that \(\Pi_{K}\) is shaped exclusively by sampled coupling constraints: only two constraints (in solid blue) are of support for the set, as the removal of any of the constraints depicted by dashed red lines will not change \(S^{*}_{K}\). However, depending on the given multi-sample \(\boldsymbol{\delta}_{K}\) and the location of the resulting equilibrium, more constraints could intersect the surrounding region \(\mathbb{B}(x^{*},\rho)\), thus increasing the cardinality of the support sample set.
We can quantify the number of samples that support \(S^{*}_{K}\) in an _a posteriori_ fashion as established in Theorem 1. To this end, for a fixed confidence \(\beta\in(0,1)\), let the violation level be defined as a function \(\epsilon:\{0,\ldots,K\}\rightarrow[0,1]\) satisfying
\[\epsilon(K)=1\text{ and }\sum_{i=0}^{K-1}\binom{K}{i}(1-\epsilon(i))^{K-i}=\beta, \tag{6}\]
where \(K\) is the size of the multisample; see, e.g., [13, Eq. (7)].
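For a concrete feel of (6), one admissible choice of \(\epsilon(\cdot)\) splits the confidence budget \(\beta\) evenly over the \(K\) summands, which yields a closed form; the Python sketch below computes this particular choice (any other function satisfying (6) would do equally well, and the numbers in the example are arbitrary).

```python
from math import comb

def epsilon(k, K, beta):
    """A violation-level function satisfying (6): each of the K binomial
    terms is set equal to beta / K."""
    if k == K:
        return 1.0
    return 1.0 - (beta / (K * comb(K, k))) ** (1.0 / (K - k))

# e.g., with K = 500 samples and confidence 1 - beta = 1 - 1e-6, a set whose
# compression size is 12 receives violation level epsilon(12).
print(epsilon(12, 500, 1e-6))
```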
**Theorem 1**.: _Under Assumptions 1-4, consider some algorithm \(\mathcal{A}\) returning a pair \((S^{*}_{K},x^{*})\) where \(S^{*}_{K}\) is parametrised by \(x^{*}\). Fix a confidence parameter \(\beta\in(0,1)\) and a violation level \(\epsilon:\{0,\ldots,K\}\rightarrow[0,1]\) that satisfies (6). We have that_
\[\mathbb{P}^{K}\left\{\boldsymbol{\delta}_{K}\in\Delta^{K}:\ \mathbb{V}(S^{*}_{K})> \epsilon(s^{*}+M)\right\}\leq\beta,\]
_where \(s^{*}\) is the number of samples that support the equilibrium \(x^{*}\) and \(M\) is the number of facets of \(\Pi_{K}\) that intersect \(S^{*}_{K}\)._

Figure 1: Illustration of the region \(S^{*}_{K}\) (in dark green) as the intersection of the set of deviations \(\mathbb{B}_{\infty}(x^{*},\rho)\) around the unique equilibrium \(x^{*}\) (red dot) with the domain \(\Pi_{K}\).
_Proof_: Let \((S^{*}_{K},x^{*})\) be the solution returned by \(\mathcal{A}\) for some given \(\boldsymbol{\delta}_{K}\), according to Definition 3. We aim at determining a compression set for \(\mathcal{A}(\boldsymbol{\delta}_{K})\), as this would in turn allow us to obtain the theorem's conclusion by means of Theorem 2 in [37]. Under Assumption 4, this is equivalent to determining an upper bound on the number of support samples for \(\mathcal{A}(\boldsymbol{\delta}_{K})\) (see Definition 5). Since by construction \(S^{*}_{K}\) is parametrised by \(x^{*}\), the support samples of \(\mathcal{A}(\boldsymbol{\delta}_{K})\) are given by the union of (i) the samples that support the equilibrium \(x^{*}\), and (ii) the samples that do not support \(x^{*}\), but whose removal can still lead to a change of the region \(S^{*}_{K}\). In the first case, removing a support sample for \(x^{*}\) would move the entire region (e.g., by its centre, as in Fig. 1); we denote the number of such samples as \(s^{*}\). On the other hand, the removal of a sample corresponding to the second case yields a larger region \(S^{*}_{K}\), leaving \(x^{*}\) unaffected; the number of such samples can be upper bounded by the \(M\) facets of \(\Pi_{K}\) that intersect \(S^{*}_{K}\). Hence, the number of samples that form a compression set for \(\mathcal{A}(\boldsymbol{\delta}_{K})\) is bounded by \(s^{*}+M\). Existence of a compression set \(I\) with a bound on its cardinality is sufficient for the application of Theorem 2 in [37]. The fact that for the minimal compression set \(|I^{*}|\leq|I|\leq s^{*}+M\) always holds leads then to the statement of this theorem. \(\blacksquare\)
It is important to stress that the application of Theorem 1 is agnostic on the choice of the equilibrium seeking algorithm. To use the result of Theorem 1, one needs to quantify (an upper bound of) the number of samples \(s^{*}\) that support the randomised equilibrium \(x^{*}\) and (an upper bound of) the number \(M\) of coupling constraints that correspond to facets of \(S^{*}_{K}\). While \(s^{*}\leq nN\) under Assumptions 1-3,3 an upper bound for \(M\) can in general only be achieved _a posteriori_, i.e., once \(\boldsymbol{\delta}_{K}\) is sampled. Then an important question naturally arises: _When we allow for bounded deviations about some equilibrium, is it possible to devise an algorithm that converges to a solution such that an a priori defined robustness for all points in the region of admissible deviations is achieved?_ We aim at answering this question next.
Footnote 3: By arguments similar to those in [37] it can be shown that a tighter bound \(s^{*}\leq n\) holds for the game \(G_{K}\) in case coupling constraints only concern the aggregate variable; see also [42].
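As a rough illustration of the a posteriori bookkeeping required by Theorem 1, the sketch below (Python; the inputs `A`, `b`, `x_star`, `rho` are assumed to be available from the sampled game) upper-bounds \(M\) by counting the sampled hyperplanes \(a_{\ell}^{\intercal}x=b_{\ell}\) whose boundary crosses the open \(\infty\)-norm ball of radius \(\rho\) around \(x^{*}\); such a hyperplane meets the ball iff \(|a_{\ell}^{\intercal}x^{*}-b_{\ell}|<\rho\|a_{\ell}\|_{1}\), since the dual of the \(\infty\)-norm is the \(1\)-norm.

```python
import numpy as np

def facet_count_bound(A, b, x_star, rho):
    """Upper bound on M: number of sampled hyperplanes a_l^T x = b_l crossing
    the open infinity-norm ball of radius rho centred at x_star. Some of these
    hyperplanes may be redundant for Pi_K, hence this only bounds M from above."""
    slack = b - A @ x_star                   # nonnegative whenever x_star is feasible
    reach = rho * np.abs(A).sum(axis=1)      # max of a_l^T (y - x_star) over the ball
    return int(np.sum(np.abs(slack) < reach))
```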
### A priori probabilistic certificates
Consider the scenario game \(G_{K}\) and suppose that bounded deviations from the solution are allowed. We model such deviations as a ball of radius \(\rho\) around the equilibrium, as in Section 3.1. In contrast to the a posteriori nature of the result therein, our goal here is to achieve an a priori bound. Namely, we aim at establishing the main statement of Theorem 1 with a _prespecified_ violation level, which does not depend on the given multi-sample \(\boldsymbol{\delta}_{K}\). In other words, we seek a statement--holding with known confidence--of the form \(\mathbb{V}(\Pi_{K}\cap\mathbb{B}(x^{*},\rho))<\bar{\epsilon}\), with \(\bar{\epsilon}\in(0,1)\) a priori fixed.
To achieve this, we build upon the previous conclusions, which expose a link between the probability of constraint violation and the number \(M\) of uncertainty samples constituting facets of \(\Pi_{K}\) that \(\mathbb{B}(x^{*},\rho)\) intersects. In particular, a monotonic relationship follows from (6): the smaller the number \(M\) of facets of the uncertain domain intersected by \(\mathbb{B}(x^{*},\rho)\), the better, i.e., less conservative, the theoretical feasibility guarantees on constraint violation for the strategies belonging to the feasible region \(S^{*}_{K}\) surrounding the equilibrium. Moreover, in certain cases (as illustrated next) a smaller value of \(M\) is associated with a larger region for which the guarantees of Theorem 1 hold--due to a smaller portion of \(\mathbb{B}(x^{*},\rho)\) being cut off by the intersection with \(\Pi_{K}\). This motivates us to study the role of \(M\) as a modulating parameter for the robustness of the feasibility certificates offered for the region \(S^{*}_{K}\), as well as the extent of deviation from the nominal equilibrium covered by such certificates. These concepts are leveraged in the algorithm proposed next.
#### 3.2.1 GNE-seeking algorithm with a priori robustness guarantees
We consider an iterative scheme to determine a solution of \(\mathrm{VI}_{K}\) in (4). In particular, since we address a NE problem characterised by coupling constraints, we build our Algorithm 1 upon a primal-dual scheme, where constraint satisfaction is achieved by the use of Lagrange multipliers. As deterministic constraints do not play a role in the evaluation of the robustness guarantees, suppose for ease of exposition that \(\Pi_{K}\) only comprises uncertain coupling constraints. Let \(A\in\mathbb{R}^{m\times nN}\) and \(b\in\mathbb{R}^{m}\) such that
\[\Pi_{K}=\{x\in X:Ax\leq b\}, \tag{7a}\] \[\|a_{\ell}\|_{2}=1,\;\ell=1,\ldots,m, \tag{7b}\]
where \(a_{\ell}\) denotes the \(\ell\)-th row of \(A\). Eq. (7) is the irredundant \(H\)-representation of the polytopic feasibility region \(\Pi_{K}\) defined in (4), where the rows of matrix \(A\) are unit vectors. Property (7b) is key to the second statement in Lemma 2. It entails no loss of generality, since for any \(A,b\) forming an equivalent \(H\)-representation of \(\Pi_{K}\), (7) can be obtained by normalising each row of \(A\) and the corresponding component of \(b\) by the row-vector norm. Thus, the pair \((A,b)\) encodes the set of randomised coupling constraints that constitute facets of \(\Pi_{K}\). Formally, \(A:\Delta^{K}\to\mathbb{R}^{m\times nN}\) and \(b:\Delta^{K}\to\mathbb{R}^{m}\) are mappings from the \(K\)-multisample to the space of real \(m\times nN\) matrices and \(m\)-dimensional vectors, respectively. Although we do not reflect this in the notation for simplicity, we will consider this to be implicit in the remainder of the paper.
Now, to understand the mechanism underlying Algorithm 1, first note that tightening an affine constraint by a distance corresponding to the radius \(\rho\) is equivalent to preventing the original constraint from intersecting \(\mathbb{B}(x^{*},\rho)\). Based on this and the idea illustrated in previous paragraphs, we leverage the information carried by Lagrange multipliers to control the intersection between \(\Pi_{K}\) and \(\mathbb{B}(x^{(\kappa)},\rho)\) while the algorithm is running. In particular, all but the coupling constraints that correspond to the \(M\) largest multipliers are tightened in Algorithm 1 by a distance \(\rho\); as a result, the number of facets of \(\Pi_{K}\) intersecting \(\mathbb{B}(x^{*},\rho)\) is at most \(M\). This enables us to obtain an _a priori_ estimate on the number of support samples, that in turn allows us to provide a priori bounds on \(\mathbb{V}(\Pi_{K}\cap\mathbb{B}(x^{*},\rho))\) (see Theorem 3).
It is worth pointing out that the constraints corresponding to the \(M\) largest multipliers contribute the most in reducing the nominal cost if \(x^{*}\) is allowed to lie at their boundaries. It can therefore be understood that while smaller values for \(M\) can result in a more robust--and possibly larger--region \(S^{*}_{K}\), they can also move the location of the nominal equilibrium \(x^{*}\) to a less efficient point towards the interior of \(\Pi_{K}\). As we will demonstrate numerically in the sequel, this is indeed the case with _potential_ games [20].
We build upon the class of asymmetric projection algorithms (APA) [19, Ch. 12] to seek a solution of the game \(G_{K}\). This involves an iterative scheme, whose trajectory is characterised by the dynamics in line 3 of Algorithm 1, where \(y:=(x^{\intercal},\mu^{\intercal})^{\intercal}\in\mathbb{R}^{nN+m}\) is the concatenation of the global decision vector \(x\) and the Lagrange multipliers \(\mu=(\mu_{\ell})_{\ell=1}^{m}\in\mathcal{M}\subseteq\mathbb{R}_{+}^{m}\), \(D\succ 0\) is the so called _asymmetric projection_ matrix, and the mapping \(T:\mathbb{R}^{nN}\times\mathbb{R}^{m}\times\mathbb{R}\times\mathbb{N}\to \mathbb{R}^{nN+m}\) is given by
\[T(y,\rho,M)=\begin{bmatrix}F(x)+A^{\intercal}\mu\\ -\left(Ax-b+Q(\mu,M)\boldsymbol{\rho}\right)\end{bmatrix}, \tag{8}\]
where \(\boldsymbol{\rho}=c\rho\mathbf{1}_{m}\), and \(c,\rho,M\) are fixed during the execution of the algorithm. Note that \(T\) follows from the primal-dual conditions of the game solution; see [18, Sec. 4.2], [19, Sec. 1.4.1]. In (8), \(F\) is defined as in Section 2.2, and \(A,b\) are as in (7). Moreover, Algorithm 1 relies on the mapping \(Q:\mathbb{R}_{+}^{m}\times\mathbb{N}\to\{0,1\}^{m\times m}\) which allows convergence to possibly different nominal solutions \(x^{*}\), according to the specified parameters \(M\) and \(\rho\). \(Q\)--formally introduced in the next subsection--allows to perform constraint tightening along the trajectory of the algorithm, following the concepts illustrated above.
As illustrated in Figure 2, \(\mathcal{M}\) is the union of a finite number of disjoint convex sets, hence the projection in line 3 of Algorithm 1 is computationally viable (e.g., projecting on each convex subset of the union and then setting \(y^{(\kappa+1)}\) to be the solution among these projections that results in the minimum distance from \(y^{(\kappa)}-D^{-1}T(y^{(\kappa)},\rho,M)\)). Finally, it should be noted that Algorithm 1 could be performed in a distributed manner, although this is outside the scope of the present work.
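To make the primal-dual update referred to as line 3 of Algorithm 1 concrete, the following is a minimal Python sketch of one iteration under simplifying assumptions that are ours: the skewed projection \(\operatorname{proj}_{X\times\mathcal{M},D}\) is replaced by a Euclidean one, \(X\) is taken to be a box, the multipliers are projected onto \(\mathbb{R}^{m}_{+}\) rather than onto \(\mathcal{M}\), and `tighten_Q` stands for the mapping \(Q(\cdot,M)\boldsymbol{\rho}\) introduced in the next subsection.

```python
import numpy as np

def primal_dual_step(x, mu, F, A, b, tighten_Q, tau, x_lo, x_hi):
    """One simplified iteration y+ = proj[ y - D^{-1} T(y, rho, M) ],
    with T as in (8) and D as in (15), so that
    D^{-1} = [[tau*I, 0], [2*tau^2*A, tau*I]]."""
    grad_x = F(x) + A.T @ mu                       # first block of T
    x_new = np.clip(x - tau * grad_x, x_lo, x_hi)  # primal step + box projection
    resid = A @ x - b + tighten_Q(mu)              # tightened constraint residual
    mu_new = mu - (2 * tau**2 * (A @ grad_x) - tau * resid)
    return x_new, np.maximum(mu_new, 0.0)          # dual step + projection on R_+^m
```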
#### 3.2.2 Constraint tightening via mapping \(Q\)
We define the mapping \(Q\) as
\[Q(\mu,M)=P^{\intercal}(\mu)R(M), \tag{9}\]
where
* \(P:\mathbb{R}^{m}\to\{0,1\}^{m\times m}\) returns a permutation matrix such that \(P(\mu)\mu\) is the vector composed by the elements of \(\mu\) arranged in decreasing order.
* \(R:\mathbb{N}\to\{0,1\}^{m\times m}\) takes as input the number of coupling constraints \(M\leq m\) we allow \(\mathbb{B}(x^{*},\rho)\) to intersect with and returns as output the matrix \[R(M)=\begin{bmatrix}\mathbf{0}_{m\times M}&\begin{bmatrix}\mathbf{0}_{M\times(m-M)}\\ I_{m-M}\end{bmatrix}\end{bmatrix}. \tag{10}\] Compatibly with the definition of \(P(\cdot)\), \(R(M)P(\mu)\boldsymbol{\rho}=(\mathbf{0}_{M}{}^{\intercal}\;c\rho\mathbf{1}_{m-M}{}^{\intercal})^{\intercal}=R(M)\boldsymbol{\rho}\), where the last equality holds since all components of \(\boldsymbol{\rho}\) are equal.
It can be seen from (8) and the above definition that \(Q(\cdot,M)\) allows to tighten the constraints corresponding to the smallest \(m-M\) multipliers. In particular, following the discussion in Section 3.2.1, we consider tightening these constraints by an amount equal to the radius of the sphere that circumscribes \(\mathbb{B}(x^{*},\rho)\), for any choice of norm. This amount is equal to \(\boldsymbol{\rho}_{\ell}=c\rho\|a_{\ell}\|_{2}=c\rho\), where the last equality is due to (7b); \(c=1\) if \(\mathbb{B}(\cdot,\rho)\) is expressed by a \(p\)-norm with \(p\leq 2\), \(c=\sqrt{n}\) otherwise. Conversely, at most \(M\) constraints can intersect \(\mathbb{B}(x^{*},\rho)\) upon convergence of the algorithm. Let \(\mathcal{L}(M)\subseteq\{1,\ldots,m\}\) contain the indices of the \(M\) largest multipliers. Then, \(\ell\in\mathcal{L}(M)\Leftrightarrow(Q(\mu,M)\boldsymbol{\rho})_{\ell}=0\), and the second block row of \(T\) in (8) expresses
\[\begin{cases}a_{\ell}{}^{\intercal}x\leq b_{\ell}&\text{if }(Q(\mu,M) \boldsymbol{\rho})_{\ell}=0,\\ a_{\ell}{}^{\intercal}x\leq b_{\ell}-c\rho&\text{if }(Q(\mu,M)\boldsymbol{\rho})_{\ell}=c\rho. \end{cases} \tag{11}\]
The following example further clarifies this concept.
Illustrative example:Suppose \(\Pi_{K}\) is composed of 3 uncertain coupling constraints and that we allow the region of strategic deviations \(\mathbb{B}(\cdot,\rho)\) to intersect at most \(M=1\) of them. It follows from (10) that \(R(M)=\left[\begin{smallmatrix}0&0&0\\ 0&1&0\\ 0&0&1\end{smallmatrix}\right]\). Now, suppose that at some iteration \(\kappa\) of Algorithm 1 the multiplier vector is \(\mu^{(\kappa)}=(\mu^{(\kappa)}_{\ell})^{3}_{\ell=1}\) with
\[\mu^{(\kappa)}_{2}>\mu^{(\kappa)}_{1}>\mu^{(\kappa)}_{3}. \tag{12}\]
Then, \(P(\mu^{(\kappa)})=\left[\begin{smallmatrix}0&1&0\\ 1&0&0\\ 0&0&1\end{smallmatrix}\right]\) is a permutation matrix such that \(P(\mu^{(\kappa)})\mu^{(\kappa)}=(\mu^{(\kappa)}_{2}\ \mu^{(\kappa)}_{1}\ \mu^{(\kappa)}_{3})^{\intercal}\). Finally, we have \(Q(\mu^{(\kappa)},M)\boldsymbol{\rho}=P^{\intercal}(\mu^{(\kappa)})R(M)\boldsymbol{\rho}=(c\rho\ 0\ c\rho)^{\intercal}\), where \(P^{\intercal}(\cdot)\) applies the correct ordering to the vector \(R(M)\boldsymbol{\rho}\). Therefore, in case (12) continues to hold for all \(j\geq\kappa\), the region \(\mathbb{B}(x^{(j)},\rho)\) will intersect the constraint associated with the largest multiplier from some iteration \(\hat{j}\geq\kappa\) until convergence.
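The example can be reproduced numerically; the following Python sketch implements \(P\), \(R\) and the product \(Q(\mu,M)\boldsymbol{\rho}\) as in (9)-(10), and recovers \((c\rho\ 0\ c\rho)^{\intercal}\) for an ordering as in (12) (the numeric values of \(\mu^{(\kappa)}\) are arbitrary and only serve to satisfy (12)).

```python
import numpy as np

def P(mu):
    """Permutation matrix such that P(mu) @ mu is sorted in decreasing order."""
    order = np.argsort(-mu)
    return np.eye(len(mu))[order]

def R(M, m):
    """R(M) as in (10): first M columns are zero, the remaining columns stack
    an M-row zero block on top of the identity I_{m-M}."""
    Rm = np.zeros((m, m))
    Rm[M:, M:] = np.eye(m - M)
    return Rm

def Q_rho(mu, M, c_rho):
    m = len(mu)
    return P(mu).T @ R(M, m) @ (c_rho * np.ones(m))

mu = np.array([0.5, 0.9, 0.1])    # mu_2 > mu_1 > mu_3, as in (12)
print(Q_rho(mu, M=1, c_rho=1.0))  # -> [1. 0. 1.], i.e. (c*rho, 0, c*rho)
```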
### Convergence analysis and main result
In this section we show convergence of Algorithm 1. Before proceeding, we provide details on the set \(\mathcal{M}\subseteq\mathbb{R}^{m}_{+}\) onto which the Lagrange multiplier updates are projected (line 3, Algorithm 1). Let \(\mathcal{Z}:=[\zeta,+\infty)\cup\{0\}\), for some small \(\zeta>0\), i.e., \(\mathcal{Z}\subset\mathbb{R}\) contains zero together with all scalars no smaller than \(\zeta\).
**Assumption 5**.: \(\mathcal{M}\) _is compact and admits the form_
\[\mathcal{M}:=\{\mu\in\mathbb{R}^{m}:(P(\mu)\mu)_{\ell+1}<(P(\mu) \mu)_{\ell}-\zeta,\\ \forall\ell=1,\ldots,m-1\}\cap\mathcal{Z}^{m}. \tag{13}\]
Recalling that \(P(\mu)\mu\) rearranges the multipliers in descending order, the set \(\mathcal{M}\) contains all vectors where the difference between every pair of strictly positive components--and the distance of the smallest of these from zero--exceeds \(\zeta\). We note that (13) can be expressed as the disjoint union of \(q=m!+m+1\) convex subsets of \(\mathbb{R}^{m}_{+}\), each of which we denote in the following as \(\mathcal{M}_{j}\), i.e., \(\mathcal{M}=\bigcup_{j=1}^{q}\mathcal{M}_{j}\). Figure 2 provides an illustration of this set for the case \(m=3\).
Restricting the Lagrange multipliers to \(\mathcal{M}\) facilitates convergence of the algorithm in cases where the \(\text{VI}(X\times\mathcal{M},T)\) has multiple solutions. In particular, it (i) alleviates discontinuity issues of the mapping \(T\), and (ii) endows the latter with desired nonexpansiveness properties (see proof of Lemma 3). In the numerical implementation of the algorithm, ensuring \(\mu\in\mathcal{M}\) can possibly introduce small perturbations in the multipliers--compared to standard formulations where \(\mu\in\mathbb{R}^{m}_{+}\)--which in turn could produce a slight violation of the constraints (this can be controlled through the magnitude of \(\zeta\)). We further note that assuming compactness of \(\mathcal{M}\) is only required for the proof of Theorem 2 below; the assumption can be numerically satisfied by intersecting (13) with an arbitrarily large compact set. While necessary for a formal statement of convergence, we note that in many practical instances, including our numerical study, the desired solution can be attained without the need of imposing such an assumption on \(\mathcal{M}\).
We are now ready to introduce the following lemmas as an intermediate step towards proving convergence of Algorithm 1.
**Lemma 2**.: _Define \(T\) as in (8)-(9), where \(A,b\) satisfy (7). Then, for any \(\mu,\mu^{\prime}\in\mathcal{M}\), \(\mu\neq\mu^{\prime}\), there exists an integer \(0\leq h\leq M\) such that_
\[(\mu-\mu^{\prime})^{\intercal}(Q(\mu,M)-Q(\mu^{\prime},M))\boldsymbol{\rho} \leq-h\zeta c\rho. \tag{14}\]
The construction of \(h\) can be found in the proof of Lemma 2; nonetheless, its exact value does not play a role in the subsequent analysis.
**Lemma 3**.: _Consider \(T\) as in (8)-(9), where \(A,b\) satisfy (7) and \(\mathcal{M}=\bigcup_{j=1}^{q}\mathcal{M}_{j}\) as in (13). For each \(j=1,\ldots,q\), let \(\text{VI}(X\times\mathcal{M}_{j},T)\) denote the VI problem defined by the map \(T\) restricted to the subdomain \(X\times\mathcal{M}_{j}\). Under Assumptions 1-3 the following holds:_
1. \(T\) _is continuous on_ \(X\times\mathcal{M}\)_._
2. _Let_ \[D=\begin{bmatrix}\frac{1}{\tau}I_{nN}&0\\ -2A&\frac{1}{\tau}I_{m}\end{bmatrix},\] (15)
_and set \(\tau>0\) such that_

\[\tau<\min\left\{\frac{-L_{F}^{2}+\sqrt{L_{F}^{4}+4\alpha^{2}\|A\|^{2}}}{2\alpha\|A\|^{2}},\;\frac{-\rho(1+\|A\|^{2})+\sqrt{\rho^{2}(1+\|A\|^{2})^{2}+16\zeta^{2}\|A\|^{2}}}{4\zeta\|A\|^{2}}\right\}. \tag{16}\]

_Then, for any \(j=1,\ldots,q\), Algorithm 1 converges to a solution of VI\((X\times\mathcal{M}_{j},T)\), when the gradient step in line 3 is projected on the corresponding subdomain, for any \(y^{(0)}\in X\times\mathcal{M}_{j}\)._

Figure 2: Illustration of the domain \(\mathcal{M}\) of the Lagrange multipliers associated to the coupling constraints, for the case \(\zeta=0.2\) and \(m=3\). This results in \(q=10\) convex subsets, including the origin and a portion of the axes.
The second part of Lemma 3 provides an admissible range of values for \(\tau\) such that Algorithm 1 converges to a solution of VI\((X\times\mathcal{M}_{j},T)\) if at each iteration the projection in line 3 is performed on the (convex) subdomain \(\mathcal{M}_{j}\subset\mathcal{M}\), \(j\in\{1,\ldots,q\}\). However, we are interested in establishing convergence on the entire domain \(\mathcal{M}\), so at each iteration the projected solution might belong to a different subdomain \(\mathcal{M}_{j}\), \(j\in\{1,\ldots,q\}\). This does not trivially follow from the second part of Lemma 3; therefore, we capitalize on Lemmas 2 and 3 to establish an additional condition on \(\tau\) such that Algorithm 1 retrieves a solution of VI\((X\times\mathcal{M},T)\). This is formally established in the following theorem.
**Theorem 2**.: _Consider Assumptions 1, 2, 3, and 5. For fixed \(0\leq M\leq m\), assume the domain \(\Pi_{K}\) is nonempty for any of the \(\binom{m}{M}\) combinations of constraints tightened as in (11). Let \(D\succ 0\) be defined as in (15), where \(\tau\) satisfies (16) and_
\[\tau<\frac{-(\bar{C}+\bar{R})+\sqrt{(\bar{C}+\bar{R})^{2}+2\zeta\bar{R}}}{2 \bar{R}}, \tag{17}\]
_where \(\bar{R}=\max\big{\{}\sup_{x\in X}\sup_{\mu\in\mathcal{M}}\|2A(F(x)+A^{\intercal }\mu)\|,\)\(\sup_{x\in X}\|Ax-b\|\big{\}}\) and \(\bar{C}=c\rho\sqrt{m-\bar{M}}\)._
_Then Algorithm 1 converges to a solution of VI\((X\times\mathcal{M},T)\) for any given multi-sample \(\boldsymbol{\delta}_{K}\in\Delta^{K}\) and initial condition \(y^{(0)}\in X\times\mathcal{M}\)._
Note that as \(\mu^{(\kappa)}\to\mu^{*}\), we have \(Q(\mu^{(\kappa)})\to Q(\mu^{*})=:Q^{*}\). Then, the solution returned by Algorithm 1 is the equilibrium of a variant of \(G_{K}\) with \(m-M\) tightened constraints--as it follows from (11) with \(Q(\mu)\) replaced by \(Q^{*}\). The next result accompanies the region \(S^{*}_{K}=\Pi_{K}\cap\mathbb{B}(x^{*},\rho)\) of strategic deviations from the equilibrium \(x^{*}\) with _a priori_ probabilistic feasibility guarantees that can be tuned by means of \(M\). It should be noted that Theorem 2 establishes that there exists a choice of \(\tau\) to guarantee convergence of Algorithm 1. The exact admissible range for \(\tau\)--albeit explicit via (16) and (17)--can be difficult to quantify due to \(\bar{R}\). Numerical evidence suggests that selecting a small enough value is sufficient to achieve convergence.
**Theorem 3**.: _Consider Assumptions 1-5. Let \(x^{*}\) and \(S^{*}_{K}=\Pi_{K}\cap\mathbb{B}(x^{*},\rho)\) be returned by Algorithm 1; fix \(\overline{\epsilon}\in(0,1)\) and \(M\). We then have that_
\[\mathbb{P}^{K}\Big{\{}\boldsymbol{\delta}_{K}\in\Delta^{K}:\; \mathbb{V}(S^{*}_{K})\leq\overline{\epsilon}\Big{\}}\\ \geq 1-\sum_{i=0}^{nN+M-1}\binom{K}{i}\overline{\epsilon}^{i}(1- \overline{\epsilon})^{K-i}. \tag{18}\]
By Definition 2, Theorem 3 guarantees that for any point in \(S^{*}_{K}\), the probability of constraint violation is bounded by \(\bar{\epsilon}\), with confidence at least \(1-\sum_{i=0}^{nN+M-1}\binom{K}{i}\overline{\epsilon}^{i}(1-\overline{\epsilon} )^{K-i}\). The dependence of this term on \(M\) gives us an additional degree of freedom in trading the robustness of the solution for its associated probabilistic confidence. The choice of \(M\) can also have an effect on the size of \(S^{*}_{K}\), as well as on the location of \(x^{*}\), thus resulting in a trade-off between performance and robustness.
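To make the trade-off governed by \(M\) concrete, the right-hand side of (18) is straightforward to evaluate; the Python sketch below computes the confidence attached to a prescribed violation level \(\bar{\epsilon}\) for a few values of \(M\) (the problem sizes used are arbitrary).

```python
from math import comb

def a_priori_confidence(eps, K, nN, M):
    """Confidence 1 - sum_{i=0}^{nN+M-1} C(K,i) eps^i (1-eps)^(K-i), cf. (18)."""
    tail = sum(comb(K, i) * eps**i * (1 - eps) ** (K - i) for i in range(nN + M))
    return 1.0 - tail

# Example: n*N = 20 decision variables, K = 2000 samples, eps = 0.05.
for M in (0, 2, 5, 10):
    print(M, a_priori_confidence(0.05, 2000, 20, M))
```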
For the case in which the coupling constraints concern exclusively the aggregate variable, it can be shown that the support rank for all points in the region \(S^{*}_{K}\) is upper-bounded by \(n+M-1\). This allows to state (18) with a much higher confidence of \(1-\sum_{i=0}^{n+M-1}\binom{K}{i}\overline{\epsilon}^{i}(1-\overline{\epsilon} )^{K-i}\); for details, we refer the reader to [37] and [42].
## 4 Numerical example
Consider a game with \(N\) agents whose decisions are subject to deterministic local constraints and uncertain coupling constraints on the aggregate decision:
\[\left.\begin{array}{l}\min_{x_{i}\in X_{i}}\;x_{i}{}^{\intercal}(C\sigma(x)+ d)\\ \mbox{subject to}\;\;\underline{b}_{\delta_{k}}\leq\sigma(x)\leq\overline{b}_{ \delta_{k}},\\ k=1,\ldots,K\end{array}\right\}\;\forall i\in\mathcal{N}, \tag{19}\]
where \(C\succ\alpha I_{n}\), for some \(\alpha>0\), and \(d\in\mathbb{R}^{n}\). We assume no knowledge of \(\Delta\) and \(\mathbb{P}\); we rely instead on a scenario-based approximation of the game, whereby each sample \(\delta_{k}\in\boldsymbol{\delta}_{K}\) gives rise to the bounds \(\underline{b}_{\delta_{k}},\overline{b}_{\delta_{k}}\). Eq. (19) is an _aggregative_ game in the form of (3).
In this instance, we assume each agent's action has negligible effect on the aggregate, and accordingly consider a WE-seeking problem. Following the definition of \(F_{\mathrm{WE}}\) (Sec. 2.2), we get \(F(x)=F_{\mathrm{WE}}(x)=[C\sigma(x)+d]_{i\in\mathcal{N}}\). It can be verified that \(F\) is Lipschitz continuous and strongly monotone with respect to \(\sigma\): by [19, Thm. 2.3.3], (19) admits a unique aggregate equilibrium \(\sigma^{*}=\sigma(x^{*})\).4
We employ Algorithm 1 to seek a WE \(x^{*}\) such that, by fixing \(M\), a prespecified theoretical violation level is guaranteed for the set \(\Pi_{K}\cap\mathbb{B}(x^{*},\rho)\). Due to uniqueness of \(\sigma^{*}\), all sets \(\mathbb{B}(x^{*},\rho)\)--parametrised by any \(x^{*}\) solving (19)--are projected into the unique ball \(\mathbb{B}(\sigma^{*},\rho/N)\) in the aggregate space. Also note that by definition of \(\sigma\), at most \(n\) non-redundant samples will contribute to define the domain \(\Pi_{K}^{\sigma}:=\frac{1}{N}(\mathbf{1}_{N}{}^{\intercal}\otimes I_{n})X\cap\left(\bigcap_{k=1}^{K}C_{\delta_{k}}\right)\) in (19). For the derivation of the robustness guarantees, we can thus restrict our attention to \(S_{K}^{*}=\Pi_{K}^{\sigma}\cap\mathbb{B}(\sigma^{*},\rho/N)\subseteq\mathbb{R}^{n}\). As remarked at the end of Section 3.3, we can apply (18) from Theorem 3 with tighter confidence, since in this case the support rank for all points belonging to \(S_{K}^{*}\) is bounded by \(n+M-1\). For the case \(n=2\), \(N=50\), and for different choices of \(M\), Figure 3 depicts the iterates \(\{\sigma(x^{(\kappa)})\}\), \(\kappa=1,2,\ldots\), generated by our algorithm, i.e., the projection of the algorithm iterates on the space \(\Pi_{K}^{\sigma}\). It can be observed how the region \(S_{K}^{*}\) changes as the value of \(M\) is modified.
It is worth noting that in this case \(F(x)\) is _integrable_--this can be inferred by [19, Thm. 1.3.1] since the Jacobian of the game is symmetric, i.e., \(\nabla_{x}F(x)=\nabla_{x}F(x){}^{\intercal}\). Therefore, a WE \(x^{*}\) can also be obtained by solving
\[\begin{split}&\min_{x\in X}\;\sigma(x){}^{\intercal}C\sigma(x)+d {}^{\intercal}\sigma(x)\\ &\text{subject to }\underline{b}_{\delta_{k}}\leq\sigma(x)\leq \overline{b}_{\delta_{k}},\;\;k=1,\ldots,K.\end{split} \tag{20}\]
In other words, this game admits a _potential function_ \(E(x):=\sigma(x){}^{\intercal}C\sigma(x)+d{}^{\intercal}\sigma(x)\), whose minimizers correspond to a WE. \(E\) can be interpreted as the total cost incurred by the population of agents, and its minimization leads to the optimum social welfare. The contour lines of \(E\) are depicted in Figure 3: since \(x^{*}\) minimises \(E(\cdot)\), \(\sigma^{*}\) lies on the contour associated to the minimum value of \(E\) within the feasible domain. Lower values of \(M\) result in larger regions for which guarantees are provided. Figure 4 shows how the sequence \(\{E(x^{(\kappa)})\}_{\kappa=1,2,\ldots}\) converges to the minimum _potential_ within the possibly tightened feasibility region. It can be observed how in this case the efficiency of the equilibrium decreases as smaller values of \(M\) are chosen. The three panels in Figure 4 show the trade-off between system-level efficiency and the guaranteed robustness levels. The lower the value of \(M\), the lower the empirical constraint violation--corresponding to a better confidence bound in the right-hand side of (18).
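A compact way to reproduce the aggregate-space behaviour of this example is to minimise the potential (20) directly over \(\sigma\), since the sampled box constraints on \(\sigma\) intersect into a single box; the Python sketch below does exactly this with a projected-gradient loop. The problem data are randomly generated stand-ins, and the deviation-radius tightening performed by Algorithm 1 is deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 2, 50
C = np.array([[2.0, 0.3], [0.3, 1.5]])          # C > alpha*I, symmetric
d = np.array([-1.0, 0.5])

# Sampled box constraints  b_lo(delta_k) <= sigma <= b_hi(delta_k);
# their intersection is again a box.
b_lo = rng.uniform(-1.0, -0.2, size=(K, n)).max(axis=0)
b_hi = rng.uniform(0.4, 1.2, size=(K, n)).min(axis=0)

def potential(s):                                # E(sigma) = sigma^T C sigma + d^T sigma
    return s @ C @ s + d @ s

sigma = np.zeros(n)
step = 0.1
for _ in range(500):                             # projected gradient on the box
    grad = (C + C.T) @ sigma + d
    sigma = np.clip(sigma - step * grad, b_lo, b_hi)

print("aggregate WE sigma*:", sigma, " potential:", potential(sigma))
```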
## 5 Concluding remarks
This work proposes a data-driven equilibrium-seeking algorithm such that probabilistic feasibility guarantees are provided for a region surrounding a game equilibrium. These guarantees are a priori and the region that is accompanied with such a probabilistic certificate is tunable. For games that admit a potential function, the proposed scheme is shown to achieve a trade-off between cost and the level of probabilistic feasibility guarantees. In fact, numerical evidence suggests that our scheme returns the most efficient equilibrium for which the predefined guarantees are achieved; proving this conjecture is left for future work.
## 6 Appendix
### Proof of Lemma 2
Let \(\mu,z\) be arbitrary vectors in \(\mathcal{M}\) and, as in the proof of Lemma 3, define \(\vec{\mu},\vec{z}\) as the vectors composed by rearranging the elements of \(\mu,z\) in decreasing order. According to this arrangement, let \(\mathcal{I}_{\mu}=\{i_{1},i_{2},\ldots,i_{m}\}\) be the ordered set of indices of \(\mu\), i.e., \(i_{k}:\,\mu_{i_{k}}=\vec{\mu}_{k}\), \(k=1,\ldots,m\); as a result, \(i_{1}\) and \(i_{m}\) will be the indices of the largest and smallest components of \(\mu\), respectively. Applying a similar definition to \(z\), we denote the corresponding set \(\mathcal{I}_{z}:=\{j_{1},j_{2},\ldots,j_{m}\}\). Then, the first \(M\) indices in \(\mathcal{I}_{\mu}\) and \(\mathcal{I}_{z}\), denoted as \(\mathcal{L}_{\mu}\) and \(\mathcal{L}_{z}\), respectively, are relative to the constraints not tightened by the application of \(Q(\cdot,M)\). In other words, for all \(\ell\in\mathcal{L}_{\mu}\), \((Q(\mu,M)\boldsymbol{\rho})_{\ell}=0\)--and similarly for \(z\). Vice versa, the complementary sets \(\mathcal{L}_{\mu}^{c}=\mathcal{I}_{\mu}\setminus\mathcal{L}_{\mu}\) and \(\mathcal{L}_{z}^{c}=\mathcal{I}_{z}\setminus\mathcal{L}_{z}\) are such that for all \(\ell\in\mathcal{L}_{\mu}^{c}\), \((Q(\mu,M)\boldsymbol{\rho})_{\ell}=c\rho\), and for all \(\ell\in\mathcal{L}_{z}^{c}\), \((Q(z,M)\boldsymbol{\rho})_{\ell}=c\rho\). Let \(q=[Q(\mu,M)-Q(z,M)]\boldsymbol{\rho}\). We distinguish between the following cases:
1. \(\ell\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}\): we have \((Q(\mu,M)\boldsymbol{\rho})_{\ell}=c\rho\) since \(\ell\in\mathcal{L}_{\mu}^{c}\), while \((Q(z,M)\boldsymbol{\rho})_{\ell}=0\) as \(\ell\in\mathcal{L}_{z}\). Then, \(q_{\ell}=c\rho\).
2. \(\ell\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}\): from \(\ell\in\mathcal{L}_{z}^{c}\) we have \((Q(z,M)\boldsymbol{\rho})_{\ell}=c\rho\). On the other hand, since \(\ell\in\mathcal{L}_{\mu}\), \((Q(\mu,M)\boldsymbol{\rho})_{\ell}=0\). This results in \(q_{\ell}=-c\rho\).
3. \(\ell\in(\mathcal{L}_{\mu}\cap\mathcal{L}_{z})\cup(\mathcal{L}_{\mu}^{c}\cap \mathcal{L}_{z}^{c})\). If \(\ell\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}\) then \((Q(\cdot,M)\boldsymbol{\rho})_{\ell}=0\) for both \(\mu\) and \(z\). Therefore, \(q_{\ell}=0\). Conversely, if \(\ell\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}^{c}\), then \((Q(\cdot,M)\boldsymbol{\rho})_{\ell}=c\rho\) for both \(\mu\) and \(z\), which results again in \(q_{\ell}=0\).
The sets \(\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}\), \(\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}\), \((\mathcal{L}_{\mu}\cap\mathcal{L}_{z})\cup(\mathcal{L}_{\mu}^{c}\cap\mathcal{L }_{z}^{c})\) are pairwise disjoint and exhaust the set \(\{1,\ldots,m\}\). Hence we can write
\[\begin{split} U&=(\mu-z)^{\intercal}(Q(\mu,M)-Q(z,M))\boldsymbol{\rho}=\sum_{\ell=1}^{m}(\mu_{\ell}-z_{\ell})q_{\ell}\\ &=\sum_{i\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}}(\mu_{i}-z_{i})c\rho+\sum_{j\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}}(\mu_{j}-z_{j})\cdot(-c\rho)\\ &=c\rho\Big(\underbrace{\sum_{i\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}}\mu_{i}-\sum_{j\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}}\mu_{j}}_{=:U_{1}}+\underbrace{\sum_{j\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}}z_{j}-\sum_{i\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}}z_{i}}_{=:U_{2}}\Big).\end{split} \tag{21}\]
Now, notice that for any \(i\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}\subseteq\mathcal{L}_{\mu}^{c}\) and \(j\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}\subseteq\mathcal{L}_{\mu}\), we have by definition of \(\mathcal{L}_{\mu}\) and \(\mathcal{L}_{\mu}^{c}\) that \(\mu_{i}\leq\mu_{j}\) (which by (13) only holds with equality if \(\mu_{i}=\mu_{j}=0\)). With analogous reasoning, we have \(z_{i}\geq z_{j}\) for any \(i\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}\subseteq\mathcal{L}_{z}\) and \(j\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}\subseteq\mathcal{L}_{z}^{c}\). Let \(h_{1}\) be the cardinality of the set \(\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}\), and \(h_{2}\) that of \(\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}\). Then,
\[h_{1}=|\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}|\stackrel{{ (a)}}{{=}}|\mathcal{L}_{z}\setminus\mathcal{L}_{\mu}|=|\mathcal{L}_{z}|-| \mathcal{L}_{\mu}\cap\mathcal{L}_{z}|\] \[\stackrel{{(b)}}{{=}}|\mathcal{L}_{\mu}|-|\mathcal{ L}_{z}\cap\mathcal{L}_{\mu}|=|\mathcal{L}_{\mu}\setminus\mathcal{L}_{z}|=| \mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}|=h_{2},\]
where \((a)\) holds since \(\mathcal{L}_{\mu},\mathcal{L}_{z}\subseteq\{1,\ldots,M\}\), and \((b)\) follows from \(|\mathcal{L}_{\mu}|=|\mathcal{L}_{z}|=M\). Therefore \(h_{1}=h_{2}=:h\) and \(0\leq h\leq M\), which implies \(U_{1}\leq 0\) and \(U_{2}\leq 0\) in (21). We can observe that \(U_{1}<0\) and \(U_{2}<0\) if \(\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}\) and \(\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}\) are nonempty and the corresponding components of \(\mu\) and \(z\) are nonzero. In such a case \(h\geq 1\) and we can write
\[U_{1}=\sum_{i\in\mathcal{L}_{\mu}^{c}\cap\mathcal{L}_{z}}\mu_{i}-\sum_{j\in\mathcal{L}_{\mu}\cap\mathcal{L}_{z}^{c}}\mu_{j}\leq-h\zeta, \tag{22}\]
where the inequality follows from (13) and the above discussion. A similar reasoning holds for \(U_{2}\). Lastly, note that if \(\mu\neq z\) and \(h\geq 1\), then at least one of \(U_{1}\leq-h\zeta\) and \(U_{2}\leq-h\zeta\) will hold. By (21), we can thus conclude \(U\leq-h\zeta c\rho\) for any \(\mu,z\in\mathcal{M}\), \(\mu\neq z\).
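As a quick numerical sanity check of (14) (not part of the proof), one can evaluate \(U\) for specific vectors in \(\mathcal{M}\); the short Python sketch below mirrors the \(P\), \(R\), \(Q\) construction of Section 3.2.2 with \(\zeta=0.2\), \(c\rho=1\) and \(M=1\), where the chosen vectors respect the \(\zeta\)-gaps required by (13).

```python
import numpy as np

def Q_rho(mu, M, c_rho=1.0):
    order = np.argsort(-np.asarray(mu))
    Pm = np.eye(len(mu))[order]        # P(mu): sorts mu in decreasing order
    Rm = np.zeros((len(mu), len(mu)))
    Rm[M:, M:] = np.eye(len(mu) - M)   # R(M) as in (10)
    return Pm.T @ Rm @ (c_rho * np.ones(len(mu)))

zeta, c_rho, M = 0.2, 1.0, 1
mu = np.array([1.0, 0.5, 0.0])
z = np.array([0.3, 0.9, 0.0])
U = (mu - z) @ (Q_rho(mu, M, c_rho) - Q_rho(z, M, c_rho))
print(U)                               # -1.1, indeed <= -h*zeta*c*rho = -0.2 (here h = 1)
```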
### Proof of Lemma 3
Part (1): To prove that the mapping \(T\) is continuous on its domain, we first notice that \(T\) is by construction continuous on \(X\times\mathcal{M}\) when the operator \(Q(\cdot,M)\) is continuous on \(\mathcal{M}\) (as the parameter \(M\) is fixed). Therefore, it is sufficient to show that for any \(\mu,z\in\mathcal{M}\) and any \(\eta>0\), there exists \(\delta>0\) such that
\[\|\mu-z\|<\delta\ \Rightarrow\ \|Q(\mu,M)-Q(z,M)\|\|\boldsymbol{\rho}\|<\eta, \tag{23}\]
where \(\boldsymbol{\rho}=c\rho\mathbf{1}_{m}\neq\mathbf{0}\). To this end, consider any \(\mu,z\in\mathcal{M}\) such that \(\|\mu-z\|<\frac{\zeta}{2}\), with \(\zeta\) as defined in (13).5 Let \(\vec{\mu}\) and \(\vec{z}\) denote the vectors \(\mu\) and \(z\) sorted in decreasing order; thus, \(\vec{\mu}_{\ell}\) is the \(\ell\)-th largest element of \(\mu\) (and similarly for \(z\)). For any given \(\ell\), let \(i:\,\mu_{i}=\vec{\mu}_{\ell}\), \(j:\,z_{j}=\vec{z}_{\ell}\), and \(\bar{\ell}:=\min\{\ell\in\{1,\ldots,m\}:i\neq j\}\). In words, \(\bar{\ell}\) is the smallest index for which the \(\ell\)-th largest elements of \(\mu\) and \(z\) do not appear at the same row of their respective vectors. We then let \(\mathcal{I}\) be the set of indices for which the ordering of the elements of \(\mu\) and \(z\) agrees, i.e., for all \(k\in\mathcal{I}\), there exists \(\ell<\bar{\ell}\) such that \(i=j=k\), with \(i:\,\mu_{i}=\vec{\mu}_{\ell}\) and \(j:\,z_{j}=\vec{z}_{\ell}\).

Figure 4: Potential function \(E(x^{(\kappa)})\) evaluated along the iterations of Algorithm 1. Lower values of \(M\) yield better confidence on the theoretical robustness certificates for the considered region (see Thm. 3), which results in a lower empirical probability of constraint violation. On the other hand, the system-level efficiency of the equilibrium increases for higher values of \(M\).
We prove our statement by contradiction. Suppose there exist \(i,j\notin\mathcal{I}\) such that \(i:\,\mu_{i}=\vec{\mu}_{\ell}\) and \(j:\,z_{j}=\vec{z}_{\ell}\) for some \(\ell>\bar{\ell}\), where \(\mu_{i}<\mu_{j}\) and \(z_{i}>z_{j}\). First, we note that such an instance exists by hypothesis, as otherwise the only possible case is where \(i=j\), which contradicts \(i,j\notin\mathcal{I}\) and implies \(Q(\mu,M)=Q(z,M)\). Since \(z\in\mathcal{M}\), it further holds \(z_{j}<z_{i}-\zeta\), which by \(|\mu_{i}-z_{i}|\leq\|\mu-z\|<\frac{\zeta}{2}\) implies
\[z_{j}<\mu_{i}+\frac{\zeta}{2}-\zeta. \tag{24}\]
We bound (24) from below by noting \(z_{j}>\mu_{j}-\frac{\zeta}{2}\), which holds since \(\|\mu_{j}-z_{j}\|<\frac{\zeta}{2}\), obtaining
\[\mu_{j}-\frac{\zeta}{2}<\mu_{i}+\frac{\zeta}{2}-\zeta,\]
or equivalently \(\mu_{j}<\mu_{i}\), which contradicts our hypothesis. Hence the elements of any pair of vectors \(\mu,z\in\mathcal{M}\) such that \(\|\mu-z\|<\frac{\zeta}{2}\) must follow the same ordering. By definition of \(P(\cdot)\), this implies \(P(\mu)=P(z)\) and, in turn, \(\|Q(\mu,M)-Q(z,M)\|=0\). This validates (23) with \(\delta=\frac{\zeta}{2}\) and any \(\eta>0\), establishing the continuity of \(Q(\cdot,M)\) on \(\mathcal{M}\) and concluding the proof of the first part.
Part (2): We show here that the mapping \(T\) fulfils certain nonexpansiveness properties required for the convergence of Algorithm 1, for compatible choices of \(\tau\). In particular, we provide here a sufficient condition for which the iteration
\[y^{(\kappa+1)}=\operatorname{proj}_{X\times\mathcal{M}_{j},D}\left[y^{(\kappa )}-D^{-1}T(y^{(\kappa)},\rho,M)\right], \tag{25}\]
converges to a solution of \(\operatorname{VI}(X\times\mathcal{M}_{j},T)\), where \(j\in\{1,\ldots,q\}\) is fixed, for any \(y^{(0)}\in X\times\mathcal{M}_{j}\). Notice that in (25) the skew projection is performed on the convex subdomain \(X\times\mathcal{M}_{j}\). Then (25) is the solution of the \(\operatorname{VI}(X\times\mathcal{M}_{j},T_{D}^{(\kappa)})\) (see [19, Sec. 12.5.1]), where \(T_{D}^{(\kappa)}(y):=T(y^{(\kappa)},\rho,M)+D(y-y^{(\kappa)})\) is strongly monotone due to \(D\succ 0\) and \((T(y,\rho,M)-T(y^{\prime},\rho,M))^{\intercal}(y-y^{\prime})\geq 0\), for all \(y,y^{\prime}\in X\times\mathcal{M}\), which in turn follows from Assumption 3 and Lemma 2. The fixed-point iteration (25) is an instance of the forward-backward splitting method: we thus resort to standard results in the literature to prove its convergence. Following the notation in [19, Sec. 12.5.1], let \(\tilde{D}:=D_{s}^{-1/2}(D-D_{s})D_{s}^{-1/2}\), where \(D_{s}:=\frac{D+D^{\intercal}}{2}\). Also, let \(\mathcal{U}_{j}:=\{D_{s}^{1/2}y:\,y\in X\times\mathcal{M}_{j}\}\), \(\mathcal{U}=\bigcup_{j=1}^{q}\mathcal{U}_{j}\), and \(\tilde{T}(w):=D_{s}^{-1/2}T(D_{s}^{-1/2}w,\rho,M)\), for all \(w\in\mathcal{U}\). To ease notation, we drop the dependence of \(\tilde{T}\) and \(\tilde{T}_{D}\) on \(\rho,M\), as they remain fixed throughout the proof. According to [19, Thm. 12.5.2] (see also [46, Sec. 4.3]), to ensure convergence of (25) to a solution of the \(\operatorname{VI}(X\times\mathcal{M}_{j},T)\) it is sufficient to show that \(\tilde{T}_{D}=\tilde{T}-\tilde{D}\) is \(\beta\)-cocoercive on \(\mathcal{U}_{j}\), i.e.,
\[(\tilde{T}_{D}(v)-\tilde{T}_{D}(w))^{\intercal}(v-w)\geq\beta\|\tilde{T}_{D}(v )-\tilde{T}_{D}(w)\|^{2}, \tag{26}\]
for some \(\beta>\frac{1}{2}\) and all \(v,w\in\mathcal{U}_{j}\), \(j\in\{1,\ldots,q\}\). In fact, we will go a step further and demonstrate here that \(\tilde{T}_{D}\) is co-coercive on \(\mathcal{U}\) with \(\beta>\frac{1}{2}\). Due to the particular saddle problem structure of the mapping in (8), we adopt the procedure in [19, Prop. 12.5.4] and define \(D\) as in (15) (see also [35]). It then follows from the above definitions that \(\tilde{T}_{D}(w)\), for any \(w\in\mathcal{U}\), reduces to
\[\tilde{T}_{D}(D_{s}^{1/2}y)=D_{s}^{-1/2}\begin{bmatrix}F(x)\\ b-Q(\mu,M)\boldsymbol{\rho}\end{bmatrix},\;\forall y\in X\times\mathcal{M}, \tag{27}\]
which can be easily seen by rewriting (8) as
\[T(y,\rho,M)=\begin{bmatrix}F(x)\\ 0\end{bmatrix}+\underbrace{\begin{bmatrix}0&A^{\intercal}\\ -A&0\end{bmatrix}}_{D-D_{s}}y+\begin{bmatrix}0\\ b-Q(\mu,M)\boldsymbol{\rho}\end{bmatrix}.\]
Define \(W:=(D_{s}^{-1/2})^{\intercal}D_{s}^{-1/2}=D_{s}^{-1}\), and let \(\vec{Q}(\cdot)\) be a shorthand for \(Q(\cdot,M)\boldsymbol{\rho}\) (as \(M\) is a fixed parameter). Then, for any \(w_{a},w_{b}\in\mathcal{U}\), we can expand (26) by using (27), obtaining
\[(w_{a}-w_{b})^{\intercal}(\tilde{T}_{D}(w_{a})-\tilde{T}_{D}(w_{b}))-\beta\|\tilde{T}_{D}(w_{a})-\tilde{T}_{D}(w_{b})\|^{2}\] \[=(D_{s}^{-1/2}w_{a}-D_{s}^{-1/2}w_{b})^{\intercal}\begin{bmatrix}F(x_{a})-F(x_{b})\\ \vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\end{bmatrix}\] \[-\beta\left\|D_{s}^{-1/2}\begin{bmatrix}F(x_{a})-F(x_{b})\\ \vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\end{bmatrix}\right\|^{2}\] \[=\begin{bmatrix}x_{a}-x_{b}\\ \mu_{a}-\mu_{b}\end{bmatrix}^{\intercal}\begin{bmatrix}F(x_{a})-F(x_{b})\\ \vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\end{bmatrix}\] \[-\beta\begin{bmatrix}F(x_{a})-F(x_{b})\\ \vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\end{bmatrix}^{\intercal}W\begin{bmatrix}F(x_{a})-F(x_{b})\\ \vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\end{bmatrix}, \tag{28}\]
for all \(y_{a},y_{b}\in X\times\mathcal{M}\), where the last equality follows from the definition of \(\mathcal{U}_{j}\) and by expanding the norm. Matrix \(W\) can be written as \(W=\begin{bmatrix}W_{11}&W_{12}\\ W_{21}&W_{22}\end{bmatrix}\), where
\(W_{11}\in\mathbb{R}^{nN\times nN}\), \(W_{12}\in\mathbb{R}^{nN\times m}\), \(W_{22}\in\mathbb{R}^{m\times m}\) are:
\[W_{11} =\tau(I_{n}-\tau^{2}A^{\intercal}A)^{-1},\] \[W_{12} =W_{21}{}^{\intercal}=\tau^{2}(I_{n}-\tau^{2}A^{\intercal}A)^{-1}A ^{\intercal},\] \[W_{22} =\tau I_{m}+\tau^{3}A(I_{n}-\tau^{2}A^{\intercal}A)^{-1}A^{ \intercal}.\]
Expanding the inner product in (28) with respect to the matrix blocks \(W_{11},W_{12},W_{21},W_{22}\) we obtain
\[\beta(F(x_{a})-F(x_{b}))^{\intercal}\big{[}\tfrac{1}{\beta}(x_{a}-x_{b})\] \[\quad-W_{11}(F(x_{a})-F(x_{b}))-2W_{12}(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))\big{]}\] \[\quad+\beta(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))^{\intercal}\big{[}\tfrac{1}{\beta}(\mu_{a}-\mu_{b})\] \[\qquad\qquad\qquad-W_{22}(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))\big{]}\] \[=(F(x_{a})-F(x_{b}))^{\intercal}(x_{a}-x_{b})\] \[\quad-\beta(F(x_{a})-F(x_{b}))^{\intercal}W_{11}(F(x_{a})-F(x_{b}))\] \[\quad-2\beta(F(x_{a})-F(x_{b}))^{\intercal}W_{12}(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))\] \[\quad+(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))^{\intercal}(\mu_{a}-\mu_{b})\] \[\quad-\beta(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))^{\intercal}W_{22}(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})).\]
Setting \(p_{\tau}:=(I-\tau^{2}A^{\intercal}A)^{-1/2}(F(x_{a})-F(x_{b}))\) and \(q_{\tau}:=\tau(I-\tau^{2}A^{\intercal}A)^{-1/2}A^{\intercal}(\vec{Q}(\mu_{b}) -\vec{Q}(\mu_{a}))\) above we obtain
\[(F(x_{a})-F(x_{b}))^{\intercal}(x_{a}-x_{b})\] \[\quad+(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))^{\intercal}(\mu_{a}- \mu_{b})\] \[\quad-\beta\tau(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))^{\intercal}( \vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})) \tag{29}\] \[\quad-\beta\tau(p_{\tau}+q_{\tau})^{\intercal}(p_{\tau}+q_{\tau})\] \[\quad\geq\alpha\|x_{a}-x_{b}\|^{2}+2h\zeta c\rho\] \[\qquad\qquad\qquad-2\beta\tau h(c\rho)^{2}-2\beta\tau(p_{\tau}{}^ {\intercal}p_{\tau}+q_{\tau}{}^{\intercal}q_{\tau}),\]
where for the last inequality we used, in order, \((i)\) strong monotonicity of \(F\) (cf. Assumption 3), \((ii)\) Lemma 2, \((iii)\)\(\|\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\|^{2}\leq 2h(c\rho)^{2}\)--which follows from the same arguments used in the proof of Lemma 2--and \((iv)\)\((p_{\tau}+q_{\tau})^{\intercal}(p_{\tau}+q_{\tau})\leq 2(p_{\tau}{}^{\intercal}p_{ \tau}+q_{\tau}{}^{\intercal}q_{\tau})\). Expanding the term containing \(p_{\tau},q_{\tau}\) in (29) we get
\[\alpha\|x_{a}-x_{b}\|^{2}+2h\zeta c\rho-2\beta\tau h(c\rho)^{2}\] \[\quad-2\beta\tau(F(x_{a})-F(x_{b}))^{\intercal}(I_{n}-\tau^{2}A^ {\intercal}A)^{-1}(F(x_{a})-F(x_{b}))\] \[\quad-2\beta\tau^{3}(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))^{\intercal}\] \[\qquad\qquad\qquad\qquad\cdot A(I_{n}-\tau^{2}A^{\intercal}A)^{- 1}A^{\intercal}(\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a}))\] \[\stackrel{{(a)}}{{\geq}}\alpha\|x_{a}-x_{b}\|^{2}+2h \zeta c\rho-2\beta\tau h(c\rho)^{2}\] \[\quad-2\beta\tau\|F(x_{a})-F(x_{b})\|^{2}\cdot\|(I_{n}-\tau^{2}A ^{\intercal}A)^{-1}\|\] \[\quad-2\beta\tau^{3}\|\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\|^{2} \cdot\|(I_{n}-\tau^{2}A^{\intercal}A)^{-1}\|\cdot\|A\|^{2}\] \[\stackrel{{(b)}}{{\geq}}(\alpha-2\beta\tau L_{F}^{2} \|(I_{n}-\tau^{2}A^{\intercal}A)^{-1}\|)\|x_{a}-x_{b}\|^{2}\] \[\quad+2h\zeta c\rho-2\beta\tau h(c\rho)^{2}\bigg{(}1+\frac{2\tau^ {2}}{1-\tau^{2}\|A\|^{2}}\|A\|^{2}\bigg{)}, \tag{30}\]
where \((a)\) is obtained by applying the Cauchy-Schwarz inequality, and in \((b)\) we use the Lipschitz continuity of \(F\) (cf. Assum. 3), \(\|\vec{Q}(\mu_{b})-\vec{Q}(\mu_{a})\|^{2}\leq 2h(c\rho)^{2}\), and the triangle inequality. Now, notice that for the last term in (30),
\[2\beta\tau h(c\rho)^{2}\bigg{(}1+\frac{2\tau^{2}}{1-\tau^{2}\|A \|^{2}}\|A\|^{2}\bigg{)}\] \[\quad=2\beta\tau h(c\rho)^{2}\frac{1+\tau^{2}\|A\|^{2}}{1-\tau^{2} \|A\|^{2}}\leq 2\beta\tau h(c\rho)^{2}\frac{1+\|A\|^{2}}{1-\tau^{2}\|A\|^{2}}, \tag{31}\]
holds for any choice of \(\tau\in\big{(}0,\min\big{\{}\tfrac{1}{\|A\|},1\big{\}}\big{)}\). Recall that by invoking [19, Thm. 12.5.2], our objective is to show that (26) holds for some \(\tau>0\) and \(\beta>\frac{1}{2}\). Then, by inspecting (30) and using (31), to achieve this it is sufficient to guarantee
\[\alpha-\tau L_{F}^{2}\|(I_{n}-\tau^{2}A^{\intercal}A)^{-1}\|>0,\] \[2h\zeta c\rho-\tau h(c\rho)^{2}\frac{1+\|A\|^{2}}{1-\tau^{2}\|A\| ^{2}}>0,\text{ if }1\leq h\leq M.\]
Solving the quadratic expressions above with respect to \(\tau\) results in the admissible range of values in (16) (these values also satisfy \(\tau\in\big{(}0,\min\big{\{}\tfrac{1}{\|A\|},1\big{\}}\big{)}\), required for (31) to hold). Therefore, for any \(\tau\) satisfying this condition, \(\tilde{T}_{D}\) is co-coercive with \(\beta>\frac{1}{2}\) on the entire domain \(\mathcal{U}\), which in turn implies that co-coercivity of \(\tilde{T}_{D}\) holds on each subdomain \(\mathcal{U}_{j}\), \(j=1,\ldots,q\), with the same modulus. By [19, Thm. 12.5.2], this is sufficient to guarantee the convergence of (25) to a solution of the VI\((X\times\mathcal{M}_{j},T)\), thus concluding the proof.
### Proof of Theorem 2
Fix any \(\tau\) satisfying the conditions of Lemma 3 and (17). The sequence of iterates \(\{y^{(\kappa)}\}_{\kappa=1,2,\ldots}\) (where \(y^{(\kappa)}=(x^{(\kappa)},\mu^{(\kappa)})\)) generated by Algorithm 1 lives in a bounded set since \(X\) and \(\mathcal{M}\) are assumed to be compact (see Assumption 5). As such there exist convergent subsequences, or in other words, the set
\[\Omega:=\Big{\{}\bar{y}=(\bar{x},\bar{\mu})\colon\ \exists\text{ subsequence }\{\kappa_{i}\}_{i\in\mathbb{N}}\\ \text{ such that }\lim_{i\to\infty}\kappa_{i}=\infty,\lim_{i\to\infty}y^{( \kappa_{i})}=\bar{y}\Big{\}}, \tag{32}\]
containing the limit points of \(\{y^{(\kappa)}\}\) is non-empty; see, e.g., [40, p. 48]. In particular, we will show that \(\Omega\) is a singleton for any \(\tau\) satisfying (16)-(17), which implies that the iterates generated by Algorithm 1 have a unique limit point, hence they converge. To achieve this, we argue by contradiction and suppose that there exist two distinct limit points \(\bar{y}_{1}=(\bar{x}_{1},\bar{\mu}_{1})\in\Omega\) and \(\bar{y}_{2}=(\bar{x}_{2},\bar{\mu}_{2})\in\Omega\) with \(\bar{\mu}_{1}\in\mathcal{M}_{i}\) and \(\bar{\mu}_{2}\in\mathcal{M}_{j}\) for some
\(i\neq j\). Note that if this were not the case, then we would be in a trivial case where \(\bar{y}_{1}=\bar{y}_{2}\), due to co-coercivity of \(T\) (see Lemma 3)--by which Algorithm 1 converges to a unique solution when restricted to any convex subdomain \(X\times\mathcal{M}_{j}\), \(j=1,\ldots,q\). To ease the notation in the remainder of the proof, we assume without loss of generality that \(\bar{\mu}_{1}\in\mathcal{M}_{1}\), \(\bar{\mu}_{2}\in\mathcal{M}_{2}\) (see Fig. 5). By (32) there exist an infinite-length subsequence \(\{\kappa_{i}\}_{i\in\mathbb{N}}\) of the iterates generated by Algorithm 1 whose elements get arbitrarily close to \(\bar{\mu}_{1}\) while staying in \(\mathcal{M}_{1}\) where this cluster point belongs (similarly for \(\bar{\mu}_{2}\)). We then have that for any \(\delta>0\), there exists \(\tilde{\kappa}\) such that for all \(\kappa_{i}\geq\tilde{\kappa}\), \(\|y^{(\kappa_{i})}-\bar{y}_{1}\|\leq\delta\); this implies \(\|x^{(\kappa_{i})}-\bar{x}_{1}\|\leq\delta\) and \(\|\mu^{(\kappa_{i})}-\bar{\mu}_{1}\|\leq\delta\).
Due to our contradiction hypothesis (recall that \(\{\kappa_{i}\}_{i\in\mathbb{N}}\) is a subsequence), the sequence of iterates generated by Algorithm 1 would be leaving \(\mathcal{M}_{1}\) towards \(\mathcal{M}_{2}\) infinitely often. Denote then by \(\overline{\kappa}>\tilde{\kappa}\) the smallest index of the subsequence such that \(\mu^{(\overline{\kappa})}\in\mathcal{M}_{1}\), but \(\mu^{(\overline{\kappa}+1)}\in\mathcal{M}_{2}\), i.e., after the \(\overline{\kappa}\)-th iterate the original sequence would jump to \(\mathcal{M}_{2}\) for the first time after \(\tilde{\kappa}\).
For this jump to occur, the unprojected solution for the Lagrange multipliers must be "closer" to \(\mathcal{M}_{2}\) than to any other sub-domain of \(\mathcal{M}\). To see this more formally, let \(D_{\mu}^{-1}\) denote the lower block-row of \(D^{-1}=\left[\begin{smallmatrix}\tau I_{nN}&0\\ 2A\tau^{2}&\tau I_{m}\end{smallmatrix}\right]\), corresponding to the Lagrange multiplier update in line 3 of Algorithm 1. By definition of \(\mathcal{M}\), such a jump requires the distance between the unprojected gradient step at \(\overline{\kappa}+1\) and \(\mu^{(\overline{\kappa})}\) to satisfy
\[\|\mu^{(\overline{\kappa})}-D_{\mu}^{-1}T(y^{(\overline{\kappa})},\rho,M)- \mu^{(\overline{\kappa})}\|>\zeta/2, \tag{33}\]
where \(\zeta\) is the worst-case (minimum) distance between any two subdomains of \(\mathcal{M}\)--corresponding to the case where one of these is the origin (in this case \(\mathcal{M}_{2}\)). Figure 5 provides a pictorial illustration of this construction. However, we have that
\[\|\mu^{(\overline{\kappa})}-D_{\mu}^{-1}T(y^{(\overline{\kappa})},\rho,M)-\mu^{(\overline{\kappa})}\|\] \[=\tau\|-2\tau A(F(x^{(\overline{\kappa})})+A^{\intercal}\mu^{( \overline{\kappa})})\] \[\quad+Ax^{(\overline{\kappa})}-b+Q(\mu^{(\overline{\kappa})},M) \boldsymbol{\rho}\|\] \[=\tau\|-2\tau A(F(x^{(\overline{\kappa})})-F(\bar{x}_{1})+A^{ \intercal}(\mu^{(\overline{\kappa})}-\bar{\mu}_{1}))\] \[\quad+A(x^{(\overline{\kappa})}-\bar{x}_{1})+Q(\mu^{(\overline{ \kappa})},M)\boldsymbol{\rho}\] \[\quad-2\tau A(F(\bar{x}_{1})+A^{\intercal}\bar{\mu}_{1})+(A\bar {x}_{1}-b)\|\] \[\leq\tau^{2}\|2A(F(\bar{x}_{1})+A^{\intercal}\bar{\mu}_{1})\|+ \tau\|A\bar{x}_{1}-b\|\] \[\quad+\tau\|Q(\mu^{(\overline{\kappa})},M)\boldsymbol{\rho}\|+\tau \|A\|\|x^{(\overline{\kappa})}-\bar{x}_{1}\|\] \[\quad+2\tau^{2}\big{(}\|A(F(x^{(\overline{\kappa})})-F(\bar{x}_{ 1}))\|+\|AA^{\intercal}(\mu^{(\overline{\kappa})}-\bar{\mu}_{1})\|\big{)}\] \[\leq(\tau^{2}+\tau)\bar{R}+\tau c\rho\sqrt{m-M}\] \[\quad+\tau\delta\big{(}2\tau(L_{F}\|A\|+\|AA^{\intercal}\|)+\|A \|\big{)} \tag{34}\]
where the first equality follows from the definition of \(D_{\mu}^{-1}\) and \(T\), and the second one by adding and subtracting \(F(\bar{x}_{1})\), \(A^{\intercal}\bar{\mu}_{1}\) and \(A\bar{x}_{1}\). The first inequality is due to the triangle inequality, while the last one follows from the previous one by upper-bounding (i) the first two terms using the definition of \(\bar{R}\); (ii) \(\|Q(\mu^{(\overline{\kappa})},M)\boldsymbol{\rho}\|\) by \(c\rho\sqrt{m-M}\) based on its definition; and (iii) the last three terms using \(\|F(x^{(\overline{\kappa})})-F(\bar{x}_{1})\|\leq L_{F}\|x^{(\overline{\kappa} )}-\bar{x}_{1}\|\) by Assumption 3, and \(\|x^{(\overline{\kappa})}-\bar{x}_{1}\|\leq\delta\), \(\|\mu^{(\overline{\kappa})}-\bar{\mu}_{1}\|\leq\delta\).
By (34), and choosing \(\tau\) as in (17), we have that
\[\|\mu^{(\overline{\kappa})}-D_{\mu}^{-1}T(y^{(\overline{\kappa})},\rho,M)-\mu^ {(\overline{\kappa})}\|<\frac{\zeta}{2}+\bar{K}\delta \tag{35}\]
where \(\bar{K}\) is a constant, emanating from the coefficient of \(\delta\) in (34) when substituting for \(\tau\) the upper-bound in (17). Since \(\delta\) is arbitrary, taking \(\limsup_{\delta\to 0}\) in (35) establishes a contradiction with (33). The latter then implies that \(\bar{\mu}_{2}\) must belong to the same subdomain \(\mathcal{M}_{1}\) as \(\bar{\mu}_{1}\), i.e., all cluster points should be in the same subdomain of \(\mathcal{M}\). However, by Lemma 3, due to co-coercivity of \(T\) on each subdomain \(X\times\mathcal{M}_{j}\), \(j=1,\ldots,q\), we can only have one cluster point, i.e., \(\Omega\) is a singleton, implying that Algorithm 1 converges, thus concluding the proof. \(\blacksquare\)

Figure 5: Domain \(\mathcal{M}\) of the Lagrange multipliers associated to the coupling constraints, for the case \(m=2\). Notice that the minimum distance \(\zeta\) between any two subdomains of \(\mathcal{M}\) involves the origin as one of the subdomains.
### Proof of Theorem 3
The elements of the minimal compression set \(I\) of Algorithm 1 can belong to one or both of the following subsets:
1. The subset \(I_{1}\) of samples that support the solution \(x^{*}\). Note that since Algorithm 1 converges to the point \((x^{*},\mu^{*})\) for a fixed choice of \(M\), \(Q(\mu^{*},M)\) will be a fixed quantity. As such, Algorithm 1 will converge to a solution of
\[\text{Find }x^{*}\in\widehat{\Pi}_{K}\text{ such that}\] \[F(x^{*})^{\intercal}(x-x^{*})\geq 0\text{ for any }x\in\widehat{\Pi}_{K}, \tag{36}\]
where \(\widehat{\Pi}_{K}\) denotes the polytope obtained from \(\Pi_{K}\) by tightening at most \(M\) coupling constraints, as dictated by (11) with \(Q(\mu^{*},M).\) The constraints in (36) are equivalent to \(F(x^{*})^{\intercal}x\geq F(x^{*})^{\intercal}x^{*}\) for all \(x\in\widehat{\Pi}_{K}.\) Then, \(x^{*}\) is the minimiser of
\[\min_{x\in\mathbb{R}^{nN}}F(x^{*})^{\intercal}x\] \[\text{subject to }x\in\widehat{\Pi}_{K}, \tag{37}\]
which is unique due to Lemma 1. Since the cost function is linear in \(x\) and \(\widehat{\Pi}_{K}\) is convex by Assumption 2, we obtain a scenario program as in [10]. Applying [10, Prop. 1] to (37), we have that \(|I_{1}|\leq nN,\) i.e., the number of support samples of \(x^{*}\) is bounded by the dimension of the decision vector \(nN\).
2. The subset \(I_{2}\) of samples whose corresponding coupling constraints intersect \(\mathbb{B}(x^{*},\rho).\) By construction of Algorithm 1 we have that \(|I_{2}|\leq M\).
As such, we have that \(I=I_{1}\cup I_{2}\) is a compression set with cardinality \(|I|=|I_{1}\cup I_{2}|\leq|I_{1}|+|I_{2}|\leq nN+M\). Then, from Corollary 2 in [32] it follows that
\[\mathbb{P}^{K}\Big{\{}\delta_{K}\in\Delta^{K} :\ \mathbb{V}(S^{*}_{K})>\overline{\epsilon}\Big{\}}\] \[\leq\sum_{i=0}^{nN+M-1}\binom{K}{i}\overline{\epsilon}^{i}(1- \overline{\epsilon})^{K-i}, \tag{38}\]
which concludes the proof.
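For a rough sense of the magnitude of the bound in (38), the sketch below evaluates the binomial tail on its right-hand side; the values of \(n\), \(N\), \(M\), \(K\) and \(\overline{\epsilon}\) used at the bottom are invented for illustration only.

```python
from math import comb

def violation_bound(nN, M, K, eps):
    """Right-hand side of (38): sum_{i=0}^{nN+M-1} C(K,i) eps^i (1-eps)^(K-i)."""
    d = nN + M  # cardinality bound of the compression set
    return sum(comb(K, i) * eps**i * (1 - eps)**(K - i) for i in range(d))

# Illustrative numbers (not from the paper): 5 agents with 2-dimensional decisions,
# at most M = 3 tightened coupling constraints, K = 500 samples.
print(violation_bound(nN=5 * 2, M=3, K=500, eps=0.1))
```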
|
2307.04546 | Safety Analysis of Parameterised Networks with Non-Blocking Rendez-Vous | We consider networks of processes that all execute the same finite-state
protocol and communicate via a rendez-vous mechanism. When a process requests a
rendez-vous, another process can respond to it and they both change their
control states accordingly. We focus here on a specific semantics, called
non-blocking, where the process requesting a rendez-vous can change its state
even if no process can respond to it. In this context, we study the
parameterised coverability problem of a configuration, which consists in
determining whether there is an initial number of processes and an execution
allowing to reach a configuration bigger than a given one. We show that this
problem is EXPSPACE-complete and can be solved in polynomial time if the
protocol is partitioned into two sets of states, the states from which a
process can request a rendez-vous and the ones from which it can answer one. We
also prove that the problem of the existence of an execution bringing all the
processes in a final state is undecidable in our context. These two problems
can be solved in polynomial time with the classical rendez-vous semantics. | Lucie Guillou, Arnaud Sangnier, Nathalie Sznajder | 2023-07-10T13:24:37Z | http://arxiv.org/abs/2307.04546v1 | # Safety Analysis of Parameterised Networks with Non-Blocking Rendez-Vous
###### Abstract
We consider networks of processes that all execute the same finite-state protocol and communicate via a rendez-vous mechanism. When a process requests a rendez-vous, another process can respond to it and they both change their control states accordingly. We focus here on a specific semantics, called non-blocking, where the process requesting a rendez-vous can change its state even if no process can respond to it. In this context, we study the parameterised coverability problem of a configuration, which consists in determining whether there is an initial number of processes and an execution allowing to reach a configuration bigger than a given one. We show that this problem is EXPSPACE-complete and can be solved in polynomial time if the protocol is partitioned into two sets of states, the states from which a process can request a rendez-vous and the ones from which it can answer one. We also prove that the problem of the existence of an execution bringing all the processes in a final state is undecidable in our context. These two problems can be solved in polynomial time with the classical rendez-vous semantics.
Keywords: Parameterised verification, Coverability, Counter machines
## 1 Introduction
**Verification of distributed/concurrent systems.** Because of their ubiquitous use in applications we rely on constantly, the development of formal methods to guarantee the correct behaviour of distributed/concurrent systems has become one of the most important research directions in the field of computer systems verification in the last two decades. Unfortunately, such systems are difficult to analyse for several reasons. Among others, we can highlight two aspects that make the verification process tedious. First, these systems often generate a large number of different executions due to the various interleavings generated by the concurrent behaviours of the entities involved. Understanding how these interleavings interact is a complex task which can often lead to errors at the design-level or make the model of these systems very complex. Second, in some cases, the number of participants in a distributed system may be unbounded and not known a priori. To fully guarantee the correctness of such systems, the analysis would have to be performed for all possible instances of the system, i.e., an infinite number of times. As a consequence, classical techniques to verify finite state systems, like testing or model-checking, cannot be easily adapted to distributed systems and it is often necessary to develop new techniques.
**Parameterised verification.** When designing systems with an unbounded number of participants, one often provides a schematic program (or protocol) intended to be implemented by multiple identical processes, parameterised by the number of participants. In general, even if the verification problem is decidable for a given instance of the parameter, verifying all
possible instances is undecidable ([3]). However, several settings come into play that can be adjusted to allow automatic verification. One key aspect to obtain decidability is to assume that the processes do not manipulate identities and use simple communication mechanisms like pairwise synchronisation (or rendez-vous) [13], broadcast of a message to all the entities [10] (which can as well be lossy in order to simulate mobility [6]), shared register containing values of a finite set [11], and so on (see [9] for a survey). In every aforementioned case, all the entities execute the same protocol given by a finite state automaton. Note that parameterised verification, when decidable like in the above models, is also sometimes surprisingly easy, compared to the same problem with a fixed number of participants. For instance, liveness verification of parameterised systems with shared memory is Pspace-complete for a fixed number of processes and in NP when parameterised [7].
**Considering rendez-vous communication.** In one of the seminal papers for the verification of parameterised networks [13], German and Sistla (and since then [4, 14]) assume that the entities communicate by "rendez-vous", a synchronisation mechanism in which two processes (the _sender_ and the _receiver_) agree on a common action by which they jointly change their local state. This mechanism is synchronous and symmetric, meaning that if no process is ready to receive a message, the sender cannot send it. However, in some applications, such as Java Thread programming, this is not exactly the primitive that is implemented. When a Thread is suspended in a waiting state, it is woken up by the reception of a message notify sent by another Thread. However, the sender is not blocked if there is no suspended Thread waiting for its message; in this case, the sender sends the notify anyway and the message is simply lost. This is the reason why Delzanno et al. have introduced _non-blocking_ rendez-vous in [5]: a communication primitive in which the sender of a message is not blocked if no process receives it. One of the problems of interest in parameterised verification is the coverability problem: is it possible that, starting from an initial configuration, (at least) one process reaches a bad state? In [5], and later in [19], the authors introduce variants of Petri nets to handle this type of communication. In particular, the authors investigate in [19] the coverability problem for an extended class of Petri nets with non-blocking arcs, and show that for this model the coverability problem is decidable using the techniques of Well-Structured Transition Systems [1, 2, 12]. However, since their model is an extension of Petri nets, the latter problem is Expspace-hard [16] (no upper bound is given). Relying on Petri nets to obtain algorithms for parameterised networks is not always a good option. In fact, the coverability problem for parameterised networks with rendez-vous is in P [13], while it is Expspace-complete for Petri nets [18, 16]. Hence, no upper bound or lower bound can be directly deduced for the verification of networks with non-blocking rendez-vous from [19].
**Our contributions.** We show that the coverability problem for parameterised networks with _non-blocking rendez-vous communication_ over a finite alphabet is Expspace-complete. To obtain this result, we consider an extension of counter machines (without zero test) where we add non-blocking decrement actions and edges that can bring back the machine to its initial location at any moment. We show that the coverability problem for these extended counter machines is Expspace-complete (Section 3) and that it is equivalent to our problem over parameterised networks (Section 4). We consider then a subclass of parameterised networks - _wait-only protocols_ - in which no state can allow to both request a rendez-vous and wait for one. This restriction is very natural to model concurrent programs since when a thread is waiting, it cannot perform any other action. We show that the coverability problem can then be solved in polynomial time (Section 5). Finally, we show that the synchronization problem, where we look for a reachable configuration with all the processes in a given state, is undecidable in our framework, even for wait-only protocols (Section 6).
Due to lack of space, some proofs are only given in the appendix.
## 2 Rendez-vous Networks with Non-Blocking Semantics
For a finite alphabet \(\Sigma\), we let \(\Sigma^{*}\) denote the set of finite sequences over \(\Sigma\) (or words). Given \(w\in\Sigma^{*}\), we let \(|w|\) denote its length: if \(w=w_{0}\ldots w_{n-1}\in\Sigma^{*}\), then \(|w|=n\). We write \(\mathbb{N}\) to denote the set of natural numbers and \([i,j]\) to represent the set \(\{k\in\mathbb{N}\mid i\leq k\text{ and }k\leq j\}\) for \(i,j\in\mathbb{N}\). For a finite set \(E\), the set \(\mathbb{N}^{E}\) represents the multisets over \(E\). For two elements \(m,m^{\prime}\in\mathbb{N}^{E}\), we denote \(m+m^{\prime}\) the multiset such that \((m+m^{\prime})(e)=m(e)+m^{\prime}(e)\) for all \(e\in E\). We say that \(m\leq m^{\prime}\) if and only if \(m(e)\leq m^{\prime}(e)\) for all \(e\in E\). If \(m\leq m^{\prime}\), then \(m^{\prime}-m\) is the multiset such that \((m^{\prime}-m)(e)=m^{\prime}(e)-m(e)\) for all \(e\in E\). Given a subset \(E^{\prime}\subseteq E\) and \(m\in\mathbb{N}^{E}\), we denote by \(||m||_{E^{\prime}}\) the sum \(\Sigma_{e\in E^{\prime}}m(e)\) of elements of \(E^{\prime}\) present in \(m\). The size of a multiset \(m\) is given by \(||m||=||m||_{E}\). For \(e\in E\), we use sometimes the notation \(\{e\}\) for the multiset \(m\) verifying \(m(e)=1\) and \(m(e^{\prime})=0\) for all \(e^{\prime}\in E\setminus\{e\}\) and, to represent for instance the multiset with four elements \(a,b,b\) and \(c\), we will also use the notations \(\{a,b,b,c\}\) or \(\{a,2\cdot b,c\}\).
### Rendez-Vous Protocols
We can now define our model of networks. We assume that all processes in the network follow the same protocol. Communication in the network is pairwise and is performed by _rendez-vous_ through a finite communication alphabet \(\Sigma\). Each process can either perform an internal action using the primitive \(\tau\), or request a rendez-vous by sending the message \(m\) using the primitive \(!m\) or answer to a rendez-vous by receiving the message \(m\) using the primitive \(?m\) (for \(m\in\Sigma\)). Thus, the set of primitives used by our protocols is \(RV(\Sigma)=\{\tau\}\cup\{?m,!m\mid m\in\Sigma\}\).
[Rendez-vous protocol] A _rendez-vous protocol_ (shortly protocol) is a tuple \(\mathcal{P}=(Q,\Sigma,q_{in},q_{f},T)\) where \(Q\) is a finite set of states, \(\Sigma\) is a finite alphabet, \(q_{in}\in Q\) is the initial state, \(q_{f}\in Q\) is the final state and \(T\subseteq Q\times RV(\Sigma)\times Q\) is the finite set of transitions.
For a message \(m\in\Sigma\), we denote by \(R(m)\) the set of states \(q\) from which the message \(m\) can be received, i.e. states \(q\) such that there is a transition \((q,?m,q^{\prime})\in T\) for some \(q^{\prime}\in Q\).
A _configuration_ associated to the protocol \(\mathcal{P}\) is a non-empty multiset \(C\) over \(Q\) for which \(C(q)\) denotes the number of processes in the state \(q\) and \(||C||\) denotes the total number of processes in the configuration \(C\). A configuration \(C\) is said to be _initial_ if and only if \(C(q)=0\) for all \(q\in Q\setminus\{q_{in}\}\). We denote by \(\mathcal{C}(\mathcal{P})\) the set of configurations and by \(\mathcal{I}(\mathcal{P})\) the set of initial configurations. Finally for \(n\in\mathbb{N}\setminus\{0\}\), we use the notation \(\mathcal{C}_{n}(\mathcal{P})\) to represent the set of configurations of size \(n\), i.e. \(\mathcal{C}_{n}(\mathcal{P})=\{C\in\mathcal{C}(\mathcal{P})\mid||C||=n\}\). When the protocol is made clear from the context, we shall write \(\mathcal{C}\), \(\mathcal{I}\) and \(\mathcal{C}_{n}\).
We explain now the semantics associated with a protocol. For this matter we define the relation \(\xrightarrow{}_{\mathcal{P}}\subseteq\bigcup_{n\geq 1}\mathcal{C}_{n}\times \left(\{\tau\}\cup\Sigma\cup\{\mathbf{nb}(m)\mid m\in\Sigma\}\right)\times \mathcal{C}_{n}\) as follows (here \(\mathbf{nb}(\cdot)\) is a special symbol). Given \(n\in\mathbb{N}\setminus\{0\}\) and \(C,C^{\prime}\in\mathcal{C}_{n}\) and \(m\in\Sigma\), we have:
1. \(C\xrightarrow{\tau}_{\mathcal{P}}C^{\prime}\) iff there exists \((q,\tau,q^{\prime})\in T\) such that \(C(q)>0\) and \(C^{\prime}=C-\{q\}+\{q^{\prime}\}\) **(internal)**;
2. \(C\xrightarrow{m}_{\mathcal{P}}C^{\prime}\) iff there exists \((q_{1},!m,q_{1}^{\prime})\in T\) and \((q_{2},?m,q_{2}^{\prime})\in T\) such that \(C(q_{1})>0\) and \(C(q_{2})>0\) and \(C(q_{1})+C(q_{2})\geq 2\) (needed when \(q_{1}=q_{2}\)) and \(C^{\prime}=C-\{q_{1},q_{2}\}+\{q_{1}^{\prime},q_{2}^{\prime}\}\) **(rendez-vous)**;
3. \(C\xrightarrow{\mathbf{nb}(m)}_{\mathcal{P}}C^{\prime}\) iff there exists \((q_{1},!m,q_{1}^{\prime})\in T\), such that \(C(q_{1})>0\) and \((C-\{q_{1}\})(q_{2})=0\) for all \((q_{2},?m,q_{2}^{\prime})\in T\) and \(C^{\prime}=C-\{q_{1}\}+\{q_{1}^{\prime}\}\) **(non-blocking request)**.
Intuitively, from a configuration \(C\), we allow the following behaviours: either a process takes an internal transition (labeled by \(\tau\)), or two processes synchronize over a rendez-vous \(m\), or a process requests a rendez-vous to which no process can answer (non-blocking sending).
This allows us to define \(S_{\mathcal{P}}\) the transition system \((\mathcal{C}(\mathcal{P}),\xrightarrow{}_{\mathcal{P}})\) associated to \(\mathcal{P}\). We will write \(C\xrightarrow{}_{\mathcal{P}}C^{\prime}\) when there exists \(a\in\{\tau\}\cup\Sigma\cup\{\mathbf{nb}(m)\mid m\in\Sigma\}\) such that \(C\xrightarrow{a}_{\mathcal{P}}C^{\prime}\) and denote by \(\xrightarrow{}_{\mathcal{P}}^{*}\) the reflexive and transitive closure of \(\xrightarrow{}_{\mathcal{P}}\). Furthermore, when made clear from the context, we might simply write \(\xrightarrow{}\) instead of \(\xrightarrow{}_{\mathcal{P}}\). An _execution_ is a finite sequence of configurations \(\rho=C_{0}C_{1}\dots\) such that, for all \(0\leq i<|\rho|\), \(C_{i}\xrightarrow{}_{\mathcal{P}}C_{i+1}\). The execution is said to be initial if \(C_{0}\in\mathcal{I}(\mathcal{P})\).
Figure 1 provides an example of a rendez-vous protocol where \(q_{in}\) is the initial state and \(q_{1}\) the final state. A configuration associated to this protocol is for instance the multiset \(\{2\cdot q_{1},1\cdot q_{4},1\cdot q_{5}\}\) and the following sequence represents an initial execution: \(\{2\cdot q_{in}\}\xrightarrow{\mathbf{nb}(a)}\{q_{in},q_{5}\}\xrightarrow{b }\{q_{1},q_{6}\}\xrightarrow{c}\{2\cdot q_{2}\}\). When we only allow behaviours of type **(internal)** and **(rendez-vous)**, this semantics corresponds to the classical rendez-vous semantics ([13, 4, 14]). In opposition, we will refer to the semantics defined here as the _non-blocking semantics_ where a process is not _blocked_ if it requests a rendez-vous and no process can answer to it. Note that all behaviours possible in the classical rendez-vous semantics are as well possible in the non-blocking semantics but the converse is false.
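To make the three rules above concrete, here is a small Python sketch (one possible encoding, not part of the paper) that enumerates the one-step successors of a configuration under the non-blocking semantics. Configurations are multisets of states, and the tiny protocol at the bottom is made up for the example; it is not the protocol of Figure 1.

```python
from collections import Counter

# A transition is (source, action, target); an action is ("tau", None), ("send", m) or ("recv", m).
def successors(T, C):
    """All configurations reachable from multiset C in one step (internal, rendez-vous, non-blocking)."""
    succs = []
    for (q, (kind, m), q1) in T:
        if kind == "tau" and C[q] > 0:                          # (internal)
            succs.append(C - Counter([q]) + Counter([q1]))
        if kind == "send" and C[q] > 0:
            partners = [(p, p1) for (p, (k2, m2), p1) in T
                        if k2 == "recv" and m2 == m and (C - Counter([q]))[p] > 0]
            if partners:                                        # (rendez-vous)
                for (p, p1) in partners:
                    succs.append(C - Counter([q, p]) + Counter([q1, p1]))
            else:                                               # (non-blocking request)
                succs.append(C - Counter([q]) + Counter([q1]))
    return succs

# Toy protocol (made up for illustration, not Figure 1 of the paper).
T = [("qin", ("send", "a"), "q1"), ("qin", ("recv", "a"), "q2"), ("q1", ("tau", None), "qin")]
for C1 in successors(T, Counter({"qin": 2})):
    print(dict(C1))
```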
### Verification Problems
We now present the problems studied in this work. For this matter, given a protocol \(\mathcal{P}=(Q,\Sigma,q_{in},q_{f},T)\), we define two sets of final configurations. The first one \(\mathcal{F}_{\exists}(\mathcal{P})=\{C\in\mathcal{C}(\mathcal{P})\ \mid\ C(q_{f})>0\}\) characterises the configurations where one of the processes is in the final state. The second one \(\mathcal{F}_{\forall}(\mathcal{P})=\{C\in\mathcal{C}(\mathcal{P})\ \mid\ C(Q \setminus\{q_{f}\})=0\}\) represents the configurations where all the processes are in the final state. Here again, when the protocol is clear from the context, we might use the notations \(\mathcal{F}_{\exists}\) and \(\mathcal{F}_{\forall}\). We study three problems: the _state coverability problem_ (SCover), the _configuration coverability_ problem (CCover) and the _synchronization problem_ (Synchro), which all take as input a protocol \(\mathcal{P}\) and can be stated as follows:
| **Problem name** | **Question** |
| --- | --- |
| SCover | Are there \(C_{0}\in\mathcal{I}\) and \(C_{f}\in\mathcal{F}_{\exists}\), such that \(C_{0}\xrightarrow{}^{*}C_{f}\)? |
| CCover | Given \(C\in\mathcal{C}\), are there \(C_{0}\in\mathcal{I}\) and \(C^{\prime}\geq C\), such that \(C_{0}\xrightarrow{}^{*}C^{\prime}\)? |
| Synchro | Are there \(C_{0}\in\mathcal{I}\) and \(C_{f}\in\mathcal{F}_{\forall}\), such that \(C_{0}\xrightarrow{}^{*}C_{f}\)? |
SCover expresses a safety property: if \(q_{f}\) is an error state and the answer is negative, then for any number of processes, no process will ever be in that error state. Synchro, on the other hand, is a liveness property: if \(q_{f}\) is a deadlock state (a state in which no action is possible), and the answer is negative, then for any number of processes, all processes together are never blocked at the same time.

Figure 1: Example of a rendez-vous protocol \(\mathcal{P}\)
The difficulty in solving these problems lies in the fact that we are seeking for an initial configuration allowing a specific execution but the set of initial configurations is infinite. The difference between SCover and Synchro is that in the first one we ask for at least one process to end up in the final state whereas the second one requires all the processes to end in this state. Note that SCover is an instance of CCover but Synchro is not.
The rendez-vous protocol of Figure 1 is a positive instance of SCover, as shown in Example 2. However, this is not the case for Synchro: if an execution brings a process in \(q_{2}\), this process cannot be brought afterwards to \(q_{1}\). If \(q_{2}\) is the final state, \(\mathcal{P}\) is now a positive instance of Synchro (see Example 2). Note that if the final state is \(q_{4}\), \(\mathcal{P}\) is not a positive instance of SCover anymore. In fact, the only way to reach a configuration with a process in \(q_{4}\) is to put (at least) two processes in state \(q_{5}\) as this is the only state from which one process can send the message \(b\). However, this cannot happen, since from an initial configuration, the only available action consists in sending the message \(a\) as a non-blocking request. Once there is one process in state \(q_{5}\), any other attempt to put another process in this state will induce a reception of message \(a\) by the process already in \(q_{5}\), which will hence leave \(q_{5}\). Finally, note that for any \(n\in\mathbb{N}\), the configuration \(\{n\cdot q_{3}\}\) is coverable, even if \(\mathcal{P}\) with \(q_{3}\) as final state is not a positive instance of Synchro.
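Because the set of initial configurations is infinite, a direct search can only handle one instance at a time. The sketch below, which reuses the successors function and the toy protocol T from the previous snippet, decides SCover for a fixed number \(n\) of processes by exhaustive forward exploration, which is precisely the kind of per-instance analysis that the parameterised problems above are meant to avoid.

```python
from collections import Counter

def covers_state(T, q_in, q_f, n):
    """Exhaustive forward search over configurations of size n: can some process reach q_f?"""
    start = Counter({q_in: n})
    seen, todo = {frozenset(start.items())}, [start]
    while todo:
        C = todo.pop()
        if C[q_f] > 0:
            return True
        for C1 in successors(T, C):                     # successors() from the previous snippet
            key = frozenset(C1.items())
            if key not in seen:
                seen.add(key)
                todo.append(C1)
    return False

# With the toy protocol T above: one process alone cannot cover q2, two processes can.
print(covers_state(T, "qin", "q2", 1), covers_state(T, "qin", "q2", 2))
```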
## 3 Coverability for Non-Blocking Counter Machines
We first detour into new classes of counter machines, which we call _non-blocking counter machines_ and _non-blocking counter machines with restore_, in which a new way of decrementing the counters is added to the classical one: a non-blocking decrement, which is an action that can always be performed. If the counter is strictly positive, it is decremented; otherwise it is let to \(0\). We show that the coverability of a control state in this model is Expspace-complete, and use this result to solve coverability problems in rendez-vous protocols.
To define counter machines, given a set of integer variables (also called counters) \(X\), we use the notation \(\mathsf{CAct}(X)\) to represent the set of associated actions given by \(\{\mathtt{x}+,\mathtt{x}-,\mathtt{x}=0\mid\mathtt{x}\in X\}\cup\{\bot\}\). Intuitively, \(\mathtt{x}+\) increments the value of the counter \(\mathtt{x}\), while \(\mathtt{x}-\) decrements it and \(\mathtt{x}=0\) checks if it is equal to \(0\). We are now ready to state the syntax of this model.
A _counter machine_ (shortly CM) is a tuple \(M=(\mathit{Loc},X,\Delta,\ell_{\mathit{im}})\) such that \(\mathit{Loc}\) is a finite set of locations, \(\ell_{\mathit{im}}\in\mathit{Loc}\) is an initial location, \(X\) is a finite set of counters, and \(\Delta\subseteq\mathit{Loc}\times\mathsf{CAct}(X)\times\mathit{Loc}\) is finite set of transitions.
We will say that a CM is test-free (shortly test-free CM) whenever \(\Delta\cap\mathrm{Loc}\times\{\mathtt{x}=0\mid\mathtt{x}\in X\}\times\mathrm{ Loc}=\emptyset\). A configuration of a CM \(M=(\mathrm{Loc},X,\Delta,\ell_{\mathit{im}})\) is a pair \((\ell,v)\) where \(\ell\in\mathrm{Loc}\) specifies the current location of the CM and \(v\in\mathbb{N}^{X}\) associates to each counter a natural value. The size of a CM \(M\) is given by \(|M|=|\mathrm{Loc}|+|X|+|\Delta|\). Given two configurations \((\ell,v)\) and \((\ell^{\prime},v^{\prime})\) and a transition \(\delta\in\Delta\), we define \((\ell,v)\overset{\delta}{\rightsquigarrow_{M}}(\ell^{\prime},v^{\prime})\) if and only if \(\delta=(\ell,op,\ell^{\prime})\) and one of the following holds:
* \(op=\bot\) and \(v=v^{\prime}\);
* \(op=\mathtt{x}+\) and \(v^{\prime}(\mathtt{x})=v(\mathtt{x})+1\) and \(v^{\prime}(\mathtt{x}^{\prime})=v(\mathtt{x}^{\prime})\) for all \(\mathtt{x}^{\prime}\in X\setminus\{\mathtt{x}\}\);
* \(op=\mathtt{x}-\) and \(v(\mathtt{x})>0\) and \(v^{\prime}(\mathtt{x})=v(\mathtt{x})-1\) and \(v^{\prime}(\mathtt{x}^{\prime})=v(\mathtt{x}^{\prime})\) for all \(\mathtt{x}^{\prime}\in X\setminus\{\mathtt{x}\}\);
* \(op=\mathtt{x}=0\) and \(v(\mathtt{x})=0\) and \(v^{\prime}=v\).
In order to simulate the non-blocking semantics of our rendez-vous protocols with counter machines, we extend the class of test-free CM with non-blocking decrement actions.
**Definition 3.2**.: _A non-blocking test-free counter machine (shortly NB-CM) is a tuple \(M=(\mathit{Loc},X,\Delta_{b},\Delta_{nb},\ell_{in})\) such that \((\mathit{Loc},X,\Delta_{b},\ell_{in})\) is a test-free CM and \(\Delta_{nb}\subseteq\mathit{Loc}\times\{nb(\mathtt{x}-)\mid\mathtt{x}\in X\}\times \mathit{Loc}\) is a finite set of non-blocking transitions._
Observe that in an NB-CM, both blocking and non-blocking decrements are possible, according to the definition of the transition relation. Again, a configuration is given by a pair \((\ell,v)\in\mathrm{Loc}\times\mathbb{N}^{X}\). Given two configurations \((\ell,v)\) and \((\ell^{\prime},v^{\prime})\) and \(\delta\in\Delta_{b}\cup\Delta_{nb}\), we extend the transition relation \((\ell,v)\stackrel{\delta}{\leadsto}_{M}(\ell^{\prime},v^{\prime})\) over the set \(\Delta_{nb}\) in the following way: for \(\delta=(\ell,nb(\mathtt{x}-),\ell^{\prime})\in\Delta_{nb}\), we have \((\ell,v)\stackrel{\delta}{\leadsto}_{M}(\ell^{\prime},v^{\prime})\) if and only if \(v^{\prime}(\mathtt{x})=\max(0,v(\mathtt{x})-1)\), and \(v^{\prime}(\mathtt{x}^{\prime})=v(\mathtt{x}^{\prime})\) for all \(\mathtt{x}^{\prime}\in X\setminus\{\mathtt{x}\}\).
We say that \(M\) is an NB-CM _with restore_ (shortly NB-R-CM) when \((\ell,\bot,\ell_{in})\in\Delta_{b}\) for all \(\ell\in\mathrm{Loc}\), i.e. from each location, there is a transition leading to the initial location with no effect on the counter values.
For a CM \(M\) with set of transitions \(\Delta\) (resp. an NB-CM with sets of transitions \(\Delta_{b}\) and \(\Delta_{nb}\)), we will write \((\ell,v)\leadsto_{M}(\ell^{\prime},v^{\prime})\) whenever there exists \(\delta\in\Delta\) (resp. \(\delta\in\Delta_{b}\cup\Delta_{nb}\)) such that \((\ell,v)\stackrel{{\delta}}{{\leadsto}}_{M}(\ell^{\prime},v^{ \prime})\) and use \(\leadsto_{M}^{*}\) to represent the reflexive and transitive closure of \(\leadsto_{M}\). When the context is clear we shall write \(\leadsto\) instead of \(\leadsto_{M}\). We let \(\mathbf{0}_{X}\) be the valuation such that \(\mathbf{0}_{X}(\mathtt{x})=0\) for all \(\mathtt{x}\in X\). An execution is a finite sequence of configurations \((\ell_{0},v_{0})\leadsto(\ell_{1},v_{1})\leadsto\ldots\leadsto(\ell_{k},v_{k})\). It is said to be initial if \((\ell_{0},v_{0})=(\ell_{in},\mathbf{0}_{X})\). A configuration \((\ell,v)\) is called reachable if \((\ell_{in},\mathbf{0}_{X})\leadsto^{*}(\ell,v)\).
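A minimal interpreter makes the difference between blocking and non-blocking decrements explicit. The encoding below (tuples for transitions, a dictionary for the valuation) is an illustrative choice, and the two-transition machine at the end is made up for the example.

```python
def step(delta_b, delta_nb, loc, v):
    """Yield every configuration (loc', v') reachable in one step from (loc, v)."""
    for (l, op, l1) in delta_b:
        if l != loc:
            continue
        if op == "noop":                                  # \bot : no effect on the counters
            yield l1, dict(v)
        elif op[1] == "+":                                # increment x+
            w = dict(v); w[op[0]] += 1
            yield l1, w
        elif op[1] == "-" and v[op[0]] > 0:               # (blocking) decrement x-
            w = dict(v); w[op[0]] -= 1
            yield l1, w
    for (l, x, l1) in delta_nb:                           # non-blocking decrement nb(x-)
        if l == loc:
            w = dict(v); w[x] = max(0, v[x] - 1)
            yield l1, w

# Made-up machine: increment x in l0, then a non-blocking decrement of y (which is already 0).
delta_b = [("l0", ("x", "+"), "l1")]
delta_nb = [("l1", "y", "l2")]
print(list(step(delta_b, delta_nb, "l1", {"x": 1, "y": 0})))
```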
We shall now define the coverability problem for (non-blocking test-free) counter machines, which asks whether a given location can be reached from the initial configuration. We denote this problem \(\textsc{Cover}[\mathcal{M}]\), for \(\mathcal{M}\in\{\mathrm{CM},\mathrm{test-free}\ \mathrm{CM},\mathrm{NB-CM},\mathrm{NB-R-CM}\}\). It takes as input a machine \(M\) in \(\mathcal{M}\) (with initial location \(\ell_{in}\) and working over a set \(X\) of counters) and a location \(\ell_{f}\) and it checks whether there is a valuation \(v\in\mathbb{N}^{X}\) such that \((\ell_{in},\mathbf{0}_{X})\leadsto^{*}(\ell_{f},v)\).
In the rest of this section, we will prove that \(\textsc{Cover}[\mathrm{NB-R-CM}]\) is \(\textsc{Expspace}\)-complete. To this end, we first establish that \(\textsc{Cover}[\mathrm{NB-CM}]\) is in \(\textsc{Expspace}\), by an adaptation of Rackoff's proof which shows that coverability in Vector Addition Systems is in \(\textsc{Expspace}\) [18]. This also gives the upper bound for NB-R-CM, since any NB-R-CM is an NB-CM. This result is established by the following theorem, whose proof is omitted due to lack of space.
**Theorem 3.3**.: \(\textsc{Cover}[\mathrm{NB-CM}]\) _and \(\textsc{Cover}[\mathrm{NB-R-CM}]\) are in \(\textsc{Expspace}\)._
To obtain the lower bound, inspired by Lipton's proof showing that coverability in Vector Addition Systems is \(\textsc{Expspace}\)-hard [8, 16], we rely on \(2\textsc{Exp}\)-bounded test-free CM. We say that a CM \(M=(\mathrm{Loc},X,\Delta,\ell_{in})\) is \(2\textsc{Exp}\)_-bounded_ if there exists \(n\in O(|M|)\) such that any reachable configuration \((\ell,v)\) satisfies \(v(\mathtt{x})\leq 2^{2^{n}}\) for all \(\mathtt{x}\in X\). We use then the following result.
**Theorem 3.4** ([8, 16]).: \(\textsc{Cover}[\textsc{2Exp}\)-bounded test-free CM] _is \(\textsc{Expspace}\)-hard._
We now show how to simulate a \(2\textsc{Exp}\)-bounded test-free CM by a NB-R-CM, by carefully handling restore transitions that may occur at any point in the execution. We will ensure that each restore transition is followed by a reset of the counters, so that we can always extract from an execution of the NB-R-CM a correct initial execution of the original test-free CM. The way we enforce resetting of the counters is inspired by the way Lipton simulates 0-tests of a CM in a test-free CM. As in [16, 8], we will describe the final NB-R-CM by means of several submachines. To this end, we define _procedural non-blocking counter machines_ that are NB-CM with several identified _output states_: formally, a procedural-NB-CM is a tuple \(N=(\mathrm{Loc},X,\Delta_{b},\Delta_{nb},\ell_{in},L_{out})\) such that \((\mathrm{Loc},X,\Delta_{b},\Delta_{nb},\ell_{in})\) is a NB-CM, \(L_{out}\subseteq\mathrm{Loc}\), and there is no outgoing transitions from states in \(L_{out}\).
Now fix a 2Exp-bounded test-free CM \(M=(\operatorname{Loc},X,\Delta,\ell_{in})\), \(\ell_{f}\in\operatorname{Loc}\) the location to be covered. There is some \(c\), such that, any reachable configuration \((\ell,v)\) satisfies \(v(\mathbf{x})<2^{2^{c|M|}}\) for all \(\mathbf{x}\in X\), fix \(n=c|M|\). We build a NB-R-CM \(N\) as pictured in Figure 2. The goal of the procedural NB-CM RstInc is to ensure that all counters in \(X\) are reset. Hence, after each restore transition, we are sure that we start over a fresh execution of the test-free CM \(M\). We will need the mechanism designed by Lipton to test whether a counter is equal to \(0\). So, we define two families of sets of counters \((Y_{i})_{0\leq i\leq n}\) and \((\overline{Y}_{i})_{0\leq i\leq n}\) as follows. Let \(Y_{i}=\{\mathbf{y}_{i},\mathbf{z}_{i},\mathbf{s}_{i}\}\) and \(\overline{Y}_{i}=\{\overline{\mathbf{y}}_{i},\overline{\mathbf{z}}_{i}, \overline{\mathbf{s}}_{i}\}\) for all \(0\leq i<n\) and \(Y_{n}=X\) and \(\overline{Y}_{n}=\emptyset\) and \(X^{\prime}=\bigcup_{0\leq i\leq n}Y_{i}\cup\overline{Y}_{i}\). All the machines we will describe from now on will work over the set of counters \(X^{\prime}\).
_Procedural_-NB-CM TestSwap\({}_{i}(\mathbf{x})\). We use a family of procedural-NB-CM defined in [16, 8]: for all \(0\leq i<n\), for all \(\overline{\mathbf{x}}\in\overline{Y}_{i}\), TestSwap\({}_{i}(\overline{\mathbf{x}})\) is a procedural-NB-CM with an initial location \(\ell_{in}^{\mathtt{R},i,\mathbf{x}}\), and two output locations \(\ell_{z}^{\mathtt{R},i,\mathbf{x}}\) and \(\ell_{nz}^{\mathtt{R},i,\mathbf{x}}\). It tests if the value of \(\overline{\mathbf{x}}\) is equal to \(0\), using the fact that the sum of the values of \(\mathbf{x}\) and \(\overline{\mathbf{x}}\) is equal to \(2^{2^{i}}\). If \(\overline{\mathbf{x}}=0\), it swaps the values of \(\mathbf{x}\) and \(\overline{\mathbf{x}}\), and the execution ends in the output location \(\ell_{z}^{\mathtt{R},i,\mathbf{x}}\). Otherwise, counters values are left unchanged and the execution ends in \(\ell_{nz}^{\mathtt{R},i,\mathbf{x}}\). In any case, other counters are not modified by the execution. Note that TestSwap\({}_{i}(\mathbf{x})\) makes use of variables in \(\bigcup_{1\leq j<i}Y_{i}\cup\overline{Y}_{i}\).
_Procedural_ NB-CM Rst\({}_{i}\). We use these machines to define a family of procedural-NB-CM \((\mathtt{Rst}_{i})_{0\leq i\leq n}\) that reset the counters in \(Y_{i}\cup\overline{Y}_{i}\), assuming that their values are less than or equal to \(2^{2^{i}}\). Let \(0\leq i\leq n\), we let \(\mathtt{Rst}_{i}=(\operatorname{Loc}^{\mathtt{R},i},X^{\prime},\Delta_{b}^{ \mathtt{R},i},\Delta_{nb}^{\mathtt{R},i},\ell_{in}^{\mathtt{R},i},\{\ell_{out}^ {\mathtt{R},i}\})\). The machine Rst\({}_{0}\) is pictured Figure 3. For all \(0\leq i<n\), the machine Rst\({}_{i+1}\) uses counters from \(Y_{i}\cup\overline{Y}_{i}\) and procedural-NB-CM Testswap\({}_{i}(\overline{\mathbf{z}}_{i})\) and Testswap\({}_{i}(\overline{\mathbf{y}}_{i})\) to control the number of times variables from \(Y_{i+1}\) and \(\overline{Y}_{i+1}\) are decremented. It is pictured Figure 4. Observe that since \(Y_{n}=X\), and \(\overline{Y}_{n}=\emptyset\), the machine Rst\({}_{n}\) will be a bit different from the picture: there will only be non-blocking decrements over counters from \(Y_{n}\), that is over counters \(X\) from the initial test-free CM \(M\). If \(\overline{\mathbf{y}}_{i}\), \(\overline{\mathbf{z}}_{i}\) (and \(\overline{\mathbf{s}}_{i}\)) are set to \(2^{2^{i}}\) and \(\mathbf{y}_{i}\), \(\mathbf{z}_{i}\) (and \(\overline{\mathbf{s}}_{i}\)) are set to \(0\), then each time this procedural-NB-CM takes an outer loop, the variables of \(Y_{i+1}\cup\overline{Y}_{i+1}\) are decremented (in a non-blocking fashion) \(2^{2^{i}}\) times. This is ensured by the properties of TestSwap\({}_{i}(\mathbf{x})\). Moreover, the location \(\ell_{z}^{\mathtt{R},i,\mathbf{y}}\) will only be reached when the counter \(\overline{\mathbf{y}}_{i}\) is set to \(0\), and this will happen after \(2^{2^{i}}\) iterations of the outer loop, again thanks to the properties of TestSwap\({}_{i}(\mathbf{x})\). So, all in all, variables from \(Y_{i}\) and \(\overline{Y}_{i+1}\) will take a non-blocking decrement \(2^{2^{i}}.2^{2^{i}}\) times, that is \(2^{2^{i+1}}\).
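The counting argument behind \(\mathtt{Rst}_{i+1}\) is easy to check numerically: \(2^{2^{i}}\) outer iterations, each triggering \(2^{2^{i}}\) non-blocking decrements, give \(2^{2^{i+1}}\) decrements in total. The snippet below only mirrors this arithmetic, not the machines themselves.

```python
for i in range(4):
    outer = inner = 2 ** (2 ** i)          # iterations of the outer loop / decrements per iteration
    total = outer * inner                  # non-blocking decrements applied to counters of level i+1
    assert total == 2 ** (2 ** (i + 1))
    print(f"i={i}: {outer} * {inner} = {total} = 2^(2^{i+1})")
```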
For all \(\mathbf{x}\in X^{\prime}\), we say that \(\mathbf{x}\) is _initialized_ in a valuation \(v\) if \(\mathbf{x}\in Y_{i}\) for some \(0\leq i\leq n\) and \(v(\mathbf{x})=0\), or \(\mathbf{x}\in\overline{Y}_{i}\) for some \(0\leq i\leq n\) and \(v(\mathbf{x})=2^{2^{i}}\). For \(0\leq i\leq n\), we say that a valuation \(v\in\mathbb{N}^{X^{\prime}}\) is _i-bounded_ if for all \(\mathbf{x}\in Y_{i}\cup\overline{Y}_{i}\), \(v(\mathbf{x})\leq 2^{2^{i}}\).
The construction ensures that when one enters Rst\({}_{i}\) with a valuation \(v\) that is _i-bounded_, and in which all variables in \(\bigcup_{0\leq j<i}Y_{j}\cup\overline{Y}_{j}\) are initialized, the location \(\ell_{out}^{\mathtt{R},i}\) is reached with a valuation \(v^{\prime}\) such that: \(v^{\prime}(\mathbf{x})=0\) for all \(\mathbf{x}\in Y_{i}\cup\overline{Y}_{i}\) and \(v^{\prime}(\mathbf{x})=v(\mathbf{x})\) for all \(\mathbf{x}\notin Y_{i}\cup\overline{Y}_{i}\). Moreover, if \(v\) is \(j\)-bounded for all \(0\leq j\leq n\), then any valuation reached during the execution
remains \(j\)-bounded for all \(0\leq j\leq n\).

Figure 2: The NB-R-CM \(N\)
_Procedural_ NB-CM \(\mathtt{Inc}_{i}\).: The properties we seek for \(\mathtt{Rst}_{i}\) are ensured whenever the variables in \(\bigcup_{0\leq j<i}Y_{j}\cup\overline{Y}_{j}\) are initialized. This is taken care of by a family of procedural-NB-CM introduced in [16, 8]. For all \(0\leq i<n\), \(\mathtt{Inc}_{i}\) is a procedural-NB-CM with initial location \(\ell_{in}^{\mathtt{Inc},i}\), and unique output location \(\ell_{out}^{\mathtt{Inc},i}\). They enjoy the following property: for \(0\leq i<n\), when one enters \(\mathtt{Inc}_{i}\) with a valuation \(v\) in which all the variables in \(\bigcup_{0\leq j<i}Y_{j}\cup\overline{Y}_{j}\) are initialized and \(v(\mathtt{x})=0\) for all \(\mathtt{x}\in\overline{Y}_{i}\), then the location \(\ell_{out}^{\mathtt{Inc}_{i}}\) is reached with a valuation \(v^{\prime}\) such that \(v^{\prime}(\mathtt{x})=2^{2^{i}}\) for all \(\mathtt{x}\in\overline{Y}_{i}\), and \(v^{\prime}(\mathtt{x})=v(\mathtt{x})\) for all other \(\mathtt{x}\in X^{\prime}\). Moreover, if \(v\) is \(j\)-bounded for all \(0\leq j\leq n\), then any valuation reached during the execution remains \(j\)-bounded for all \(0\leq j\leq n\).
_Procedural_ NB-CM \(\mathtt{RstInc}\).: Finally, let \(\mathtt{RstInc}\) be a procedural-NB-CM with initial location \(\ell_{a}\) and output location \(\ell_{b}\), over the set of counters \(X^{\prime}\) and built as an alternation of \(\mathtt{Rst}_{i}\) and \(\mathtt{Inc}_{i}\) for \(0\leq i<n\), finished by \(\mathtt{Rst}_{n}\). It is depicted in Figure 5. Thanks to the properties of the machines \(\mathtt{Rst}_{i}\) and \(\mathtt{Inc}_{i}\), in the output location of each \(\mathtt{Inc}_{i}\) machine, the counters in \(\overline{Y}_{i}\) are set to \(2^{2^{i}}\), which allow counters in \(Y_{i+1}\cup\overline{Y}_{i+1}\) to be set to \(0\) in the output location of \(\mathtt{Rst}_{i+1}\). Hence, in location \(\ell_{out}^{\mathtt{Inc},n}\), counters in \(Y_{n}=X\) are set to \(0\).
From [16, 8], each procedural machine \(\mathtt{TestSwap}_{i}(\mathtt{x})\) and \(\mathtt{Inc}_{i}\) has size at most \(C\times n^{2}\) for some constant \(C\). Hence, observe that \(N\) is of size at most \(B\) for some \(B\in O(|M|^{3})\). One can show that \((\ell_{in},\mathbf{0}_{X})\leadsto_{M}^{*}(\ell_{f},v)\) for some \(v\in\mathbb{N}^{X}\), if and only if \((\ell_{in}^{\prime},\mathbf{0}_{X^{\prime}})\leadsto_{N}^{*}(\ell_{f},v^{\prime})\) for some \(v^{\prime}\in\mathbb{N}^{X^{\prime}}\). Using Theorem 3.4, we obtain:
**Theorem 3.5**.: Cover[NB-R-CM] _is Expspace-hard._
## 4 Coverability for Rendez-Vous Protocols
In this section we prove that the SCover and CCover problems are both Expspace-complete for rendez-vous protocols. To this end, we present the following reductions: CCover reduces to Cover[NB-CM] and Cover[NB-R-CM] reduces to SCover. This will prove that CCover is in Expspace and SCover is Expspace-hard (from Theorem 3.3 and Theorem 3.5). As SCover is an instance of CCover, the two reductions suffice to prove Expspace-completeness for both problems.
### From Rendez-vous Protocols to \(\mathrm{NB}\)-CM
Let \(\mathcal{P}=(Q,\Sigma,q_{in},q_{f},T)\) a rendez-vous protocol and \(C_{F}\) a configuration of \(\mathcal{P}\) to be covered. We shall also decompose \(C_{F}\) as a sum of multisets \(\{\mathbf{q}_{1}\}+\{\mathbf{q}_{2}\}+\cdots+\{\mathbf{q}_{s}\}\). Observe that there might be \(\mathbf{q}_{i}=\mathbf{q}_{j}\) for \(i\neq j\). We build the \(\mathrm{NB}\)-CM \(M=(\mathrm{Loc},X,\Delta_{b},\Delta_{nb},\ell_{in})\) with \(X=Q\). A configuration \(C\) of \(\mathcal{P}\) is meant to be represented in \(M\) by \((\ell_{in},v)\), with \(v(q)=C(q)\) for all \(q\in Q\). The only meaningful location of \(M\) is then \(\ell_{in}\). The other ones are here to ensure correct updates of the counters when simulating a transition. We let \(\mathrm{Loc}=\{\ell_{in}\}\cup\{\ell_{(t,t^{\prime})}^{1},\ell_{(t,t^{\prime})} ^{2},\ell_{(t,t^{\prime})}^{3},\ell_{(t,t^{\prime})}^{3}\mid t=(q,!a,q^{\prime }),t^{\prime}=(p,?a,p^{\prime})\in T\}\cup\{\ell_{t},\ell_{t,p_{1}}^{a},\cdots, \ell_{t,p_{k}}^{a}\mid t=(q,!a,q^{\prime})\in T,R(a)=\{p_{1},\ldots,p_{k}\} \}\cup\{\ell_{q}\mid t=(q,\tau,q^{\prime})\in T\}\cup\{\ell_{1}\ldots\ell_{s}\}\), with final location \(\ell_{f}=\ell_{s}\), where \(R(m)\) for a message \(m\in\Sigma\) has been defined in Section 2. The sets \(\Delta_{b}\) and \(\Delta_{nb}\) are shown Figures 6-10. Transitions pictured Figures 6-8 and 10 show how to simulate a rendez-vous protocol with the classical rendez-vous mechanism. The non-blocking rendez-vous are handled by the transitions pictured Figure 9. If the \(\mathrm{NB}\)-CM \(M\) faithfully simulates \(\mathcal{P}\), then this loop of non-blocking decrements is taken when the values of the counters in \(R(a)\) are equal to \(0\), and the configuration reached still corresponds to a configuration in \(\mathcal{P}\). However, it could be that this loop is taken in \(M\) while some counters in \(R(a)\) are strictly positive. In this case, a blocking rendez-vous has to be taken in \(\mathcal{P}\), e.g. \((q,!a,q^{\prime})\) and \((p,?a,p^{\prime})\) if the counter \(p\) in \(M\) is strictly positive. Therefore, the value of the reached configuration \((\ell_{in},v)\) and the corresponding configuration \(C\) in \(\mathcal{P}\) will be different: first, \(C(p^{\prime})>v(q^{\prime})\), since the process in \(p\) has moved in the state \(p^{\prime}\) in \(\mathcal{P}\) when there has been no increment of \(p^{\prime}\) in \(M\). Furthermore, all other non-blocking decrements of counters in \(R(a)\) in \(M\) may have effectively decremented the counters, when in \(\mathcal{P}\) no other process has left a state of \(R(a)\). However, this ensures that \(C\geq v\). The reduction then ensures that if \((\ell_{in},v)\) is reachable in \(M\), then a configuration \(C\geq v\) is reachable in \(\mathcal{P}\). Then, if it is possible to reach a configuration \((\ell_{in},v)\) in \(M\) whose counters are high enough to cover \(\ell_{F}\), then the corresponding initial execution in \(\mathcal{P}\) will reach a configuration \(C\geq v\), which hence covers \(C_{F}\).
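The shape of this translation can be sketched programmatically: each internal step becomes a decrement/increment gadget, each pair of matching send/receive transitions becomes a four-step gadget, and each send additionally gets a branch of non-blocking decrements over \(R(a)\). The Python sketch below is only in the spirit of Figures 6-10: the location names, the flat gadget shape and the example protocol are simplifications invented here, not the paper's exact construction.

```python
def translate(protocol_T, R):
    """Turn protocol transitions into NB-CM transitions, in the spirit of Figures 6-10.
    Counters are protocol states; 'lin' is the only meaningful location (simplified sketch)."""
    delta_b, delta_nb, fresh = [], [], [0]

    def new_loc():
        fresh[0] += 1
        return f"l{fresh[0]}"

    for (q, act, q1) in protocol_T:
        if act[0] == "tau":                               # internal step: move one token from q to q1
            mid = new_loc()
            delta_b += [("lin", (q, "-"), mid), (mid, (q1, "+"), "lin")]
        elif act[0] == "send":
            a = act[1]
            for (p, act2, p1) in protocol_T:              # one gadget per possible receiver (p, ?a, p1)
                if act2 == ("recv", a):
                    l1, l2, l3 = new_loc(), new_loc(), new_loc()
                    delta_b += [("lin", (q, "-"), l1), (l1, (p, "-"), l2),
                                (l2, (q1, "+"), l3), (l3, (p1, "+"), "lin")]
            cur = new_loc()                               # non-blocking branch: nb-decrement all of R(a)
            delta_b.append(("lin", (q, "-"), cur))
            for p in R.get(a, []):
                nxt = new_loc()
                delta_nb.append((cur, p, nxt))
                cur = nxt
            delta_b.append((cur, (q1, "+"), "lin"))
    return delta_b, delta_nb

# Tiny illustrative protocol: qin requests a, qw can answer it.
T_example = [("qin", ("send", "a"), "q1"), ("qw", ("recv", "a"), "q2")]
db, dnb = translate(T_example, {"a": ["qw"]})
print(len(db), "blocking and", len(dnb), "non-blocking transitions")
```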
### From \(\mathrm{NB}\)-R-CM to Rendez-Vous Protocols
The reduction from Cover[NB-R-CM] to SCover in rendez-vous protocols mainly relies on the mechanism that can ensure that at most one process evolves in some given set of states, as explained in Example 2.5. This will allow to somehow select a "leader" among
the processes that will simulate the behaviour of the NB-R-CM whereas other processes will simulate the values of the counters. Let \(M=(\mathrm{Loc},X,\Delta_{b},\Delta_{nb},\ell_{in})\) a NB-R-CM and \(\ell_{f}\in\mathrm{Loc}\) a final target location. We build the rendez-vous protocol \(\mathcal{P}\) pictured in Figure 11, where \(\mathcal{P}(M)\) is the part that will simulate the NB-R-CM \(M\). The locations \(\{1_{\mathtt{x}}\mid\mathtt{x}\in X\}\) will allow to encode the values of the different counters during the execution: for a configuration \(C\), \(C(1_{\mathtt{x}})\) will represent the value of the counter \(\mathtt{x}\). We give then \(\mathcal{P}(M)=(Q_{M},\Sigma_{M},\ell_{in},\ell_{f},T_{M})\) with \(Q_{M}=\mathrm{Loc}\cup\{\ell_{\delta}\mid\delta\in\Delta_{b}\}\), \(\Sigma_{M}=\{\mathrm{inc}_{\mathtt{x}},\overline{\mathrm{inc}}_{\mathtt{x}}, \mathrm{dec}_{\mathtt{x}},\overline{\mathrm{dec}}_{\mathtt{x}},\mathrm{nbdec}_ {\mathtt{x}}\mid\mathtt{x}\in X\}\), and \(T_{M}=\{(\ell_{i},\mathrm{inc}_{\mathtt{x}},\ell_{\delta}),(\ell_{\delta}, \overline{\mathrm{inc}}_{\mathtt{x}},\ell_{j})\mid\delta=(\ell_{i},\mathtt{x} +,\ell_{j})\in\Delta_{b}\}\cup\{(\ell_{i},\mathrm{dec}_{\mathtt{x}},\ell_{ \delta}),(\ell_{\delta},\overline{\mathrm{dec}}_{\mathtt{x}},\ell_{j})\mid \delta=(\ell_{i},\mathtt{x}-,\ell_{j})\in\Delta_{b}\}\cup\{(\ell_{i},\mathrm{ nbdec}_{\mathtt{x}},\ell_{j})\mid(\ell_{i},\mathtt{nb(x}-),\ell_{j})\in\Delta_{ nb}\}\cup\{(\ell_{i},\tau,\ell_{j})\mid(\ell_{i},\mathtt{l},\ell_{j})\in\Delta_{b}\}\). Here, the reception of a message \(\overline{\mathrm{inc}}_{\mathtt{x}}\) (respectively \(\overline{\mathrm{dec}}_{\mathtt{x}}\)) works as an acknowledgement, ensuring that a process has indeed received the message \(\mathrm{inc}_{\mathtt{x}}\) (respectively \(\mathrm{dec}_{\mathtt{x}}\)), and that the corresponding counter has been incremented (resp. decremented). For non-blocking decrement, obviously no acknowledgement is required. The protocol \(\mathcal{P}=(Q,\Sigma,q_{in},\ell_{f},T)\) is then defined with \(Q=Q_{M}\cup\{1_{\mathtt{x}},q_{\mathtt{x}},q^{\prime}_{\mathtt{x}}\mid \mathtt{x}\in X\}\cup\{q_{in},q,q_{\perp}\}\), \(\Sigma=\Sigma_{M}\cup\{L,R\}\) and \(T\) is the set of transitions \(T_{M}\) along with the transitions pictured in Figure 11. Note that there is a transition \((\ell,?L,q_{\perp})\) for all \(\ell\in Q_{M}\).
With two non-blocking transitions on \(L\) and \(R\) at the beginning, protocol \(\mathcal{P}\) can faithfully simulate the NB-R-CM \(M\) without further ado, provided that the initial configuration contains enough processes to simulate all the counters values during the execution: after having sent a process in state \(\ell_{in}\), any transition of \(M\) can be simulated in \(\mathcal{P}\). Conversely, an initial execution of \(\mathcal{P}\) can send multiple processes into the \(\mathcal{P}(M)\) zone, which can mess up the simulation. However, each new process entering \(\mathcal{P}(M)\) will send the message \(L\), which will send the process already in \(\{q\}\cup Q_{M}\) in the deadlock state \(q_{\perp}\), and send the message \(R\), which will be received by any process in \(\{q_{\mathtt{x}},q^{\prime}_{\mathtt{x}}\mid\mathtt{x}\in X\}\). Moreover, the construction of the protocol ensures that there can only be one process in the set of states \(\{q_{\mathtt{x}},q^{\prime}_{\mathtt{x}}\mid\mathtt{x}\in X\}\). Then, if we have reached a configuration simulating the configuration \((\ell,v)\) of \(M\), sending a new process in the \(\mathcal{P}(M)\) zone will lead to a configuration \((\ell_{in},v)\), and hence simply mimicks a restore transition of \(M\). So every initial execution of \(\mathcal{P}\) corresponds to an initial execution of \(M\).
Figure 11: The rendez-vous protocol \(\mathcal{P}\) built from the NB-R-CM \(M\). Note that there is one gadget with states \(\{q_{\mathtt{x}},\,q^{\prime}_{\mathtt{x}},\,1_{\mathtt{x}}\}\) for each counter \(\mathtt{x}\in X\).

**Theorem 4.2**.: SCover _and CCover over rendez-vous protocols are Expspace-complete._
## 5 Coverability for Wait-Only Protocols
In this section, we study a restriction on rendez-vous protocols in which we assume that a process waiting to answer a rendez-vous cannot perform another action by itself. This allows for a polynomial time algorithm for solving CCover.
### Wait-Only Protocols
We say that a protocol \(\mathcal{P}=(Q,\Sigma,q_{in},q_{f},T)\) is _wait-only_ if the set of states \(Q\) can be partitioned into \(Q_{A}\) -- the _active states_ -- and \(Q_{W}\) -- the _waiting_ states -- with \(q_{in}\in Q_{A}\) and:
* for all \(q\in Q_{A}\), for all \((q^{\prime},?m,q^{\prime\prime})\in T\), we have \(q^{\prime}\neq q\);
* for all \(q\in Q_{W}\), for all \((q^{\prime},!m,q^{\prime\prime})\in T\), we have \(q^{\prime}\neq q\) and for all \((q^{\prime},\tau,q^{\prime\prime})\in T\), we have \(q^{\prime}\neq q\).
From a waiting state, a process can only perform receptions (if it can perform anything), whereas in an active state, a process can only perform internal actions or send messages. Examples of wait-only protocols are given by Figures 12 and 13.
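Deciding whether a protocol is wait-only is a simple syntactic check: no state may both receive a message and request one (or perform an internal action), and the initial state must be active. A possible check, using the same transition encoding as the earlier snippets, is sketched below.

```python
def is_wait_only(T, q_in):
    """A protocol is wait-only if no state has both a reception and a send/internal action,
    and the initial state performs no reception (so it can be placed among the active states)."""
    receives = {q for (q, (kind, _), _) in T if kind == "recv"}
    acts     = {q for (q, (kind, _), _) in T if kind in ("send", "tau")}
    return q_in not in receives and not (receives & acts)

print(is_wait_only([("qin", ("send", "a"), "q1"), ("q1", ("recv", "a"), "q2")], "qin"))  # True
```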
In the sequel, we will often refer to the paths of the underlying graph of the protocol. Formally, a _path_ in a protocol \(\mathcal{P}=(Q,\Sigma,q_{in},q_{f},T)\) is either a control state \(q\in Q\) or a finite sequence of transitions in \(T\) of the form \((q_{0},a_{0},q_{1})(q_{1},a_{1},q_{2})\ldots(q_{k},a_{k},q_{k+1})\), the first case representing a path from \(q\) to \(q\) and the second one from \(q_{0}\) to \(q_{k+1}\).
### Abstract Sets of Configurations
To solve the coverability problem for wait-only protocols in polynomial time, we rely on a sound and complete abstraction of the set of reachable configurations. In the sequel, we consider a wait-only protocol \(\mathcal{P}=(Q,\Sigma,q_{in},q_{f},T)\) whose set of states is partitioned into a set of active states \(Q_{A}\) and a set of waiting states \(Q_{W}\). An _abstract set of configurations_\(\gamma\) is a pair \((S,\textit{Toks})\) such that:
* \(S\subseteq Q\) is a subset of states, and,
* \(\textit{Toks}\subseteq Q_{W}\times\Sigma\) is a subset of pairs composed of a waiting state and a message, and,
* \(q\not\in S\) for all \((q,m)\in\textit{Toks}\).
We then abstract the set of reachable configurations as a set of states of the underlying protocol. However, as we have seen, some states, like states in \(Q_{A}\), can host an unbounded number of processes together (this will be the states in \(S\)), while some states can only host a bounded number (in fact, 1) of processes together (this will be the states stored in _Toks_). This happens when a waiting state \(q\) answers a rendez-vous \(m\), that has necessarily been requested for a process to be in \(q\). Hence, in _Toks_, along with a state \(q\), we remember the last message \(m\) having been sent in the path leading from \(q_{in}\) to \(q\), which is necessarily in \(Q_{W}\). Observe that, since several paths can lead to \(q\), there can be \((q,m_{1}),(q,m_{2})\in\textit{Toks}\) with \(m_{1}\neq m_{2}\). We denote by \(\Gamma\) the set of abstract sets of configurations.
Let \(\gamma=(S,\textit{Toks})\) be an abstract set of configurations. Before we go into the configurations represented by \(\gamma\), we need some preliminary definitions. We note \(\textsf{st}(\textit{Toks})\) the set \(\{q\in Q_{W}\mid\) there exists \(m\in\Sigma\) such that \((q,m)\in\textit{Toks}\}\) of control states appearing in _Toks_. Given a state \(q\in Q\), we let \(\text{Rec}(q)\) be the set \(\{m\in\Sigma\mid\) there exists \(q^{\prime}\in Q\) such that \((q,?m,q^{\prime})\in T\}\) of messages that can be received in state \(q\) (if \(q\) is not a waiting state, this set is empty). Given two different waiting states \(q_{1}\) and \(q_{2}\) in \(\textsf{st}(\textit{Toks})\), we say \(q_{1}\) and \(q_{2}\) are _conflict-free_ in \(\gamma\) if there exist \(m_{1},m_{2}\in\Sigma\) such that \(m_{1}\neq m_{2}\), \((q_{1},m_{1}),(q_{2},m_{2})\in\textit{Toks}\) and \(m_{1}\notin\text{Rec}(q_{2})\) and \(m_{2}\notin\text{Rec}(q_{1})\). We now say that a configuration \(C\in\mathcal{C}(\mathcal{P})\)_respects_\(\gamma\) if and only if for all \(q\in Q\) such that \(C(q)>0\) one of the following two conditions holds:
1. \(q\in S\), or,
2. \(q\in\textsf{st}(\textit{Toks})\) and \(C(q)=1\) and for all \(q^{\prime}\in\textsf{st}(\textit{Toks})\setminus\{q\}\) such that \(C(q^{\prime})=1\), we have that \(q\) and \(q^{\prime}\) are conflict-free.
Note that the condition is on states \(q\) such that \(C(q)>0\) and not all states \(q\in Q\) because it might be that some states don't appear in \(S\cup st(Toks)\) (non-reachable states for instance). Let \(\llbracket\gamma\rrbracket\) be the set of configurations respecting \(\gamma\). Note that in \(\llbracket\gamma\rrbracket\), for \(q\) in \(S\) there is no restriction on the number of processes that can be put in \(q\) and if \(q\) in \(\mathfrak{st}(\mathit{Toks})\), it can host at most one process. Two states from \(\mathfrak{st}(\mathit{Toks})\) can both host a process if they are conflict-free.
Finally, we will only consider abstract sets of configurations that are _consistent_. This property aims to ensure that concrete configurations that respect it are indeed reachable from states of \(S\). Formally, we say that an abstract set of configurations \(\gamma=(S,\mathit{Toks})\) is _consistent_ if \((i)\) for all \((q,m)\in\mathit{Toks}\), there exists a path \((q_{0},a_{0},q_{1})(q_{1},a_{1},q_{2})\ldots(q_{k},a_{k},q)\) in \(\mathcal{P}\) such that \(q_{0}\in S\) and \(a_{0}=\mathord{\upharpoonright}m\) and for all \(1\leq i\leq k\), we have that \(a_{i}=\mathord{\upharpoonright}m_{i}\) and that there exists \((q^{\prime}_{i},\mathord{\upharpoonright}m_{i},q^{\prime\prime}_{i})\in T\) with \(q^{\prime}_{i}\in S\), and \((ii)\) for two tokens \((q,m),(q^{\prime},m^{\prime})\in\mathit{Toks}\) either \(m\in\mathrm{Rec}(q^{\prime})\) and \(m^{\prime}\in\mathrm{Rec}(q)\), or, \(m\notin\mathrm{Rec}(q^{\prime})\) and \(m^{\prime}\notin\mathrm{Rec}(q)\). Condition \((i)\) ensures that processes in \(S\) can indeed lead to a process in the states from \(\mathfrak{st}(\mathit{Toks})\). Condition \((ii)\) ensures that if in a configuration \(C\), some states in \(\mathfrak{st}(\mathit{Toks})\) are pairwise conflict-free, then they can all host a process together.
Given \(\gamma\in\Gamma\) and a configuration \(C\), there exists \(C^{\prime}\in\llbracket\gamma\rrbracket\) such that \(C^{\prime}\geq C\) if and only if \(C\in\llbracket\gamma\rrbracket\). Checking that \(C\in\llbracket\gamma\rrbracket\) can be done in polynomial time.
### Computing Abstract Sets of Configurations
Our polynomial time algorithm is based on the computation of a polynomial length sequence of consistent abstract sets of configurations leading to a final abstract set that characterises, in a sound and complete manner with respect to the coverability problem, an abstraction of the set of reachable configurations. This will be achieved by a function \(F:\Gamma\to\Gamma\) that inductively computes this final abstract set starting from \(\gamma_{0}=(\{q_{\mathit{in}}\},\emptyset)\).
The formal definition of the function \(F\) relies on intermediate sets \(S^{\prime\prime}\subseteq Q\) and \(\mathit{Toks}^{\prime\prime}\subseteq Q_{W}\times\Sigma\), which are the smallest sets satisfying the conditions described in Table 1. Starting from \(S\) and \(\mathit{Toks}\), the rules described in Table 1 add states and tokens to \(S^{\prime\prime}\) and \(\mathit{Toks}^{\prime\prime}\) based on the outgoing transitions of the states in \(S\) and \(\textsf{st}(\mathit{Toks})\). Every state added to \(S^{\prime\prime}\) must be able to host an unbounded number of processes, and every state added to \(\mathit{Toks}^{\prime\prime}\) must be able to host at least one process; furthermore, two conflict-free states in \(\mathit{Toks}^{\prime\prime}\) should be able to host at least one process each at the same time.
**Example 5.2**.: Consider the wait-only protocol \(\mathcal{P}_{1}\) depicted in Figure 12. From \((\{q_{in}\},\emptyset)\), the rules described in Table 1 construct the following pair \((S^{\prime\prime}_{1},\mathit{Toks}^{\prime\prime}_{1})=(\{q_{in},q_{4}\},\{(q_{1},a)\), \((q_{1},b),(q_{5},c)\})\). In \(\mathcal{P}_{1}\), it is indeed possible to reach a configuration with as many processes as one wishes in the state \(q_{4}\) by repeating the transition \((q_{in},!d,q_{4})\) (rule 3a). On the other hand, it is possible to put _at most_ one process in the waiting state \(q_{1}\) (rule 3b), because any other attempt from a process in \(q_{in}\) will yield a reception of the message \(a\) (resp. \(b\)) by the process already in \(q_{1}\). Similarly, we can put at most one process in \(q_{5}\). Note that in \(\mathit{Toks}^{\prime\prime}_{1}\), the states \(q_{1}\) and \(q_{5}\) are conflict-free, and it is hence possible to have simultaneously one process in both of them.
If we apply the rules of Table 1 one more time to \((S^{\prime\prime}_{1},\mathit{Toks}^{\prime\prime}_{1})\), we get \(S^{\prime\prime}_{2}=\{q_{in},q_{2},q_{4},q_{6},q_{7}\}\) and \(\mathit{Toks}^{\prime\prime}_{2}=\{(q_{1},a),(q_{1},b),(q_{3},a),(q_{3},b),(q_{5},c)\}\). We can put at most one process in \(q_{3}\): to add one, a process must take the transition \((q_{1},?c,q_{3})\). Since \((q_{1},a)\), \((q_{1},b)\in\mathit{Toks}^{\prime\prime}_{1}\), there can be at most one process in state \(q_{1}\), and this process arrived by a path in which the last request of rendez-vous was \(!a\) or \(!b\). Since \(\{a,b\}\subseteq\mathrm{Rec}(q_{3})\), by rule 5b, \((q_{3},a)\) and \((q_{3},b)\) are added. On the other hand, we can put as many processes as we want in the state \(q_{7}\) (rule 5a): from a configuration with one process in state \(q_{5}\), successive non-blocking requests on letter \(c\) and rendez-vous on letter \(d\) allow one to increase the number of processes in state \(q_{7}\).
However, one can observe that \(q_{5}\) can in fact host an unbounded number of processes: once two processes have been put in states \(q_{1}\) and \(q_{5}\) respectively (remember that \(q_{1}\) and \(q_{5}\) are conflict-free in \((S^{\prime\prime}_{1},\mathit{Toks}^{\prime\prime}_{1})\)), iterating a rendez-vous on letter \(c\) (with transition \((q_{1},?c,q_{3})\)) and a rendez-vous on letter \(a\) puts as many processes as one wants in state \(q_{5}\).
This is why we need a second transformation, which turns \((S^{\prime\prime}_{2},\mathit{Toks}^{\prime\prime}_{2})\) into \(F(S^{\prime\prime}_{1},\mathit{Toks}^{\prime\prime}_{1})\). As we shall see, this transformation has no impact on \((S^{\prime\prime}_{1},\mathit{Toks}^{\prime\prime}_{1})\), and so it holds that \(F((\{q_{in}\},\emptyset))=(S^{\prime\prime}_{1},\mathit{Toks}^{\prime\prime}_{1})\).
Writing \(F(\gamma)=(S^{\prime},\mathit{Toks}^{\prime})\), Table 2 describes the construction of \(S^{\prime}\) from \((S^{\prime\prime},\mathit{Toks}^{\prime\prime})\), while \(\mathit{Toks}^{\prime}=\mathit{Toks}^{\prime\prime}\setminus(S^{\prime}\times\Sigma)\), i.e. all states added to \(S^{\prime}\) are removed from \(\mathit{Toks}^{\prime}\), so that a state belongs either to \(S^{\prime}\) or to \(\textsf{st}(\mathit{Toks}^{\prime})\).
The case of state \(q_{5}\) evoked in the previous example now leads to the application of rule 7, since \((q_{5},c),(q_{1},a),(q_{3},a)\in\mathit{Toks}^{\prime\prime}_{2}\) and \((q_{1},?c,q_{3})\in T\). Finally, \(F(F(\{q_{in}\},\emptyset))=(\{q_{in},q_{2},q_{4},q_{5},q_{6},q_{7}\},\{(q_{1},a),(q_{1},b),(q_{3},a),(q_{3},b)\})\). Since \(q_{1}\) and \(q_{3}\) are not conflict-free, no reachable configuration will host a process in both of them at the same time.
We now consider the wait-only protocol \(\mathcal{P}_{2}\) depicted in Figure 13. In that case, to compute \(F((\{q_{in}\},\emptyset))\) we first have \(S^{\prime\prime}=\{q_{in}\}\) and \(\mathit{Toks}^{\prime\prime}=\{(q_{1},a),(q_{2},b),(p_{1},m_{1}),(p_{2},m_{2}),\)\((p_{3},m_{3})\}\) (using rule 3b), to finally get \(F((\{q_{in}\},\emptyset))=(\{q_{in},q_{1},p_{1}\},\{(q_{2},b),(p_{2},m_{2}),(p_{3},m_{3})\})\). Applying rule 6 to the tokens \((q_{1},a)\) and \((q_{2},b)\) from \(\mathit{Toks}^{\prime\prime}\), we obtain that \(q_{1}\in S^{\prime}\): whenever one manages to obtain one process in state \(q_{2}\), this process can answer the requests on message \(a\) instead of the processes in state \(q_{1}\), allowing one to obtain as many processes as desired in state \(q_{1}\). Now since \((p_{1},m_{1})\), \((p_{2},m_{2})\) and \((p_{3},m_{3})\) are in \(\mathit{Toks}^{\prime\prime}\) and respect the conditions of rule 8, \(p_{1}\) is added to the set \(S^{\prime}\) of unbounded states. This case is a generalisation of the previous one, with 3 processes. Once one process has been put in state \(p_{2}\) from \(q_{in}\), iterating the following actions (a rendez-vous over \(m_{3}\), a rendez-vous over \(m_{1}\), and a non-blocking request of \(m_{2}\)) ensures as many processes as one wants in state \(p_{1}\). Finally, applying \(F\) successively, we get in this case the abstract set \((\{q_{in},q_{1},q_{3},p_{1},p_{2},p_{3},p_{4}\},\{(q_{2},b)\})\).
We show that \(F\) satisfies the following properties.
1. \(F(\gamma)\) is consistent and can be computed in polynomial time for all consistent \(\gamma\in\Gamma\).
2. If \((S^{\prime},\mathit{Toks}^{\prime})=F(S,\mathit{Toks})\) then either \(S\subsetneq S^{\prime}\) or \(\mathit{Toks}\subseteq\mathit{Toks}^{\prime}\).
3. For all consistent \(\gamma\in\Gamma\), if \(C\in[\![\gamma]\!]\) and \(C\xrightarrow{}C^{\prime}\) then \(C^{\prime}\in[\![F(\gamma)]\!]\).
4. For all consistent \(\gamma\in\Gamma\), if \(C^{\prime}\in[\![F(\gamma)]\!]\), then there exists \(C^{\prime\prime}\in\mathcal{C}\) and \(C\in[\![\gamma]\!]\) such that \(C^{\prime\prime}\geq C^{\prime}\) and \(C\xrightarrow{}^{*}C^{\prime\prime}\).
### Polynomial Time Algorithm
We now present our polynomial time algorithm to solve CCover for wait-only protocols. We define the sequence \((\gamma_{n})_{n\in\mathbb{N}}\) as follows: \(\gamma_{0}=(\{q_{in}\},\emptyset)\) and \(\gamma_{i+1}=F(\gamma_{i})\) for all \(i\in\mathbb{N}\). First note that \(\gamma_{0}\) is consistent and that \([\![\gamma_{0}]\!]=\mathcal{I}\) is the set of initial configurations. Using Lemma 5.4, we deduce that \(\gamma_{i}\) is consistent for all \(i\in\mathbb{N}\). Furthermore, each time we apply \(F\) to an abstract set of configurations \((S,\mathit{Toks})\), either \(S\) or \(\mathit{Toks}\) increases, or \((S,\mathit{Toks})\) stabilises. Hence for all \(n\geq|Q|^{2}\cdot|\Sigma|\), we have \(\gamma_{n+1}=F(\gamma_{n})=\gamma_{n}\). Let \(\gamma_{f}=\gamma_{|Q|^{2}\cdot|\Sigma|}\). Using Lemma 5.4, we get:
Given \(C\in\mathcal{C}\), there exists \(C_{0}\in\mathcal{I}\) and \(C^{\prime}\geq C\) such that \(C_{0}\xrightarrow{}^{*}C^{\prime}\) if and only if there exists \(C^{\prime\prime}\in[\![\gamma_{f}]\!]\) such that \(C^{\prime\prime}\geq C\).
We need to iterate the function \(F\) \(|Q|^{2}\cdot|\Sigma|\) times to compute \(\gamma_{f}\), and each computation of \(F\) can be done in polynomial time. Furthermore, checking whether there exists \(C^{\prime\prime}\in[\![\gamma_{f}]\!]\) such that \(C^{\prime\prime}\geq C\) for a configuration \(C\in\mathcal{C}\) can be done in polynomial time by Lemma 5.1; hence, using the previous lemma, we obtain the desired result.
CCover and SCover restricted to wait-only protocols are in Ptime.
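The resulting procedure can be summarised by the following skeleton, where apply_F stands for the transformation given by Tables 1 and 2 (not reproduced here) and in_gamma for the membership test discussed above; both are assumed helpers, so this is only a sketch of the overall structure, not an actual implementation.

```python
def ccover_wait_only(Q, Sigma, q_in, target, apply_F, in_gamma):
    """Decide CCover for a wait-only protocol: iterate F from ({q_in}, {}) until
    it stabilises (at most |Q|^2 * |Sigma| iterations), then test the target
    configuration against the final abstract set gamma_f."""
    gamma = (frozenset({q_in}), frozenset())
    for _ in range(len(Q) ** 2 * len(Sigma)):
        nxt = apply_F(gamma)
        if nxt == gamma:  # fixpoint reached early
            break
        gamma = nxt
    # By the earlier lemma, some C'' >= target lies in [[gamma_f]] iff target itself does.
    return in_gamma(target, gamma)
```

Each call to apply_F and the final membership test are polynomial, and at most \(|Q|^{2}\cdot|\Sigma|\) iterations are needed, which matches the Ptime bound stated above.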
## 6 Undecidability of Synchro
It is known that Cover[CM] is undecidable in its full generality [17]. This result holds for a very restricted class of counter machines, namely Minsky machines (Minsky-CM for short), which are CMs over 2 counters, \(\mathtt{x}_{1}\) and \(\mathtt{x}_{2}\). Actually, it is already undecidable whether there is an execution \((\ell_{in},\mathbf{0}_{\{\mathtt{x}_{1},\mathtt{x}_{2}\}})\leadsto^{*}(\ell_{f},\mathbf{0}_{\{\mathtt{x}_{1},\mathtt{x}_{2}\}})\). A reduction from this last problem gives the following result.
Synchro is undecidable, even for wait-only protocols.
Fix \(M=(\operatorname{Loc},\ell_{0},\{\mathtt{x}_{1},\mathtt{x}_{2}\},\Delta)\) with \(\ell_{f}\in\operatorname{Loc}\) the final state. W.l.o.g., we assume that there is no outgoing transition from state \(\ell_{f}\) in the machine. The protocol \(\mathcal{P}\) is described in Figures 15-15. The states \(\{0_{i},p_{i},1_{i},p^{\prime}_{i}\mid i=1,2\}\) will be visited by processes simulating the values of the counters, while the states in \(\operatorname{Loc}\) will be visited by a process simulating the different locations of the Minsky-CM. If at the end of the computation the counters are equal to \(0\), it means that each counter has been incremented and decremented the same number of times, so that all processes simulating the counters end up in the state \(\ell_{f}\). The first challenge is to appropriately check when a counter equals \(0\). This is achieved thanks to the non-blocking semantics: the process sends a message \(\mathsf{zero}_{i}\) to check whether counter \(i\) equals \(0\). If it does not, the message will be received by a process that will end up in the deadlock state \(\otimes\). The second challenge is to ensure that only one process simulates the Minsky-CM in the states in \(\operatorname{Loc}\). This is ensured by the states \(\{w,w^{\prime}\}\). Each time a process arrives in the \(\ell_{in}\) state, another must arrive in the \(w^{\prime}\) state, as a witness that the simulation has begun. This witness must reach \(\ell_{f}\) for the computation to certify a positive instance of Synchro, but it should be the first to do so, otherwise a process already in \(\ell_{f}\) will receive the message "\(\operatorname{w}\)" and reach the deadlock state \(\otimes\). Thus, if two processes simulate the Minsky-CM, there will be two witnesses, and they won't be able to reach \(\ell_{f}\) together.
## 7 Conclusion
We have introduced the model of parameterised networks communicating by non-blocking rendez-vous, and shown that safety analysis of such networks becomes much harder than in the framework of classical rendez-vous. Indeed, CCover and SCover become Expspace-complete and Synchro undecidable in our framework, while these problems are solvable in polynomial time in the framework of [13]. We have introduced a natural restriction of protocols, in which control states are partitioned between _active_ states (that allow requesting of rendez-vous) and _waiting_ states (that can only answer rendez-vous), and shown that CCover can then be solved in polynomial time. Future work includes finding further restrictions that would yield decidability of Synchro. A candidate would be protocols in which waiting states can only receive _one_ message. Observe that in that case, the reduction of Section 6 can be adapted to simulate a test-free CM; hence Synchro for this subclass of protocols is as hard as reachability in Vector Addition Systems with States, i.e. non-primitive recursive [15]. Decidability remains open though. |
2304.12112 | Coordinated Dynamic Spectrum Sharing Between Terrestrial and
Non-Terrestrial Networks in 5G and Beyond | The emerging Non-Terrestrial Networks (NTNs) can aid to provide 5G and beyond
services everywhere and anytime. However, the vast emergence of NTN systems
will introduce an unseen interference to both the existing satellite systems
and Terrestrial Networks (TNs). For that, there is a need for novel ideas on
how to efficiently utilize the co-existing systems with the ever-increasing
competition on scarce spectrum resources. Dynamic Spectrum Sharing (DSS) is a
promising technique in which different systems can operate on the same
spectrum, thus increasing the spectrum efficiency and offering better coverage
for the users. In this paper, we present a centralized scheme for achieving
coordinated DSS to protect the primary TN while providing NTN with sufficient
resources. The scheme is evaluated by system simulations in a scenario with a
TN and low earth orbit satellite. The results reveal that in a low traffic
demand situation, the primary TN users are not affected negatively while the
NTN can provide service to the rural area. In high-demand traffic situations,
the peak performance of the TN inevitably suffers but the TN cell edge and NTN
users' performance is improved. | Henrik Martikainen, Mikko Majamaa, Jani Puttonen | 2023-04-24T14:15:13Z | http://arxiv.org/abs/2304.12112v4 | Coordinated Dynamic Spectrum Sharing Between Terrestrial and Non-Terrestrial Networks in 5G and Beyond
###### Abstract
The emerging Non-Terrestrial Networks (NTNs) can aid to provide 5G and beyond services everywhere and anytime. However, the vast emergence of NTN systems will introduce an unseen interference to both the existing satellite systems and Terrestrial Networks (TNs). For that, there is a need for novel ideas on how to efficiently utilize the co-existing systems with the ever-increasing competition on scarce spectrum resources. Dynamic Spectrum Sharing (DSS) is a promising technique in which different systems can operate on the same spectrum, thus increasing the spectrum efficiency and offering better coverage for the users. In this paper, we present a centralized scheme for achieving coordinated DSS to protect the primary TN while providing NTN with sufficient resources. The scheme is evaluated by system simulations in a scenario with a TN and low earth orbit satellite. The results reveal that in a low traffic demand situation, the primary TN users are not affected negatively while the NTN can provide service to the rural area. In high-demand traffic situations, the peak performance of the TN inevitably suffers but the TN cell edge and NTN users' performance is improved.
Low Earth Orbit (LEO) satellite, spectral efficiency enhancement, satellite network simulator, spectrum allocation
DOI 10.1109/WoWMoMc57956.2023.00074
## I Introduction
New Radio (NR), the air interface of 5G, is the first 3GPP mobile communications standard that supports communications via Non-Terrestrial Networks (NTNs) from the outset. NTN standardization in 3GPP started in Releases 15 and 16 with study items for NR to support NTNs. Release 17, finalized in 2022, included basic functionalities to enable NR for NTNs. Release 18 marks the beginning of standardization toward 5G-Advanced (5G-A) and 6G. For NTNs, this means, for example, expanding coverage to higher frequencies and addressing mobility issues.
NTNs have attracted a lot of attention from industry and academia in recent years. The cost of such systems has gone down, which has attracted new players to the pool of satellite communication providers. Non-Geostationary Orbit (NGSO) satellite systems in particular have gained attention due to their relatively low price and deployment cost, and their shorter propagation delays (an order of magnitude shorter than those of traditional GSO satellites). The shorter propagation delays make them suitable for a plethora of applications that are impractical when GSO satellites are involved.
However, the vast emergence of NTN systems will introduce an unseen interference to both the existing satellite systems and Terrestrial Networks (TNs). For that, there is a need for novel ideas on how to efficiently utilize the co-existing systems with the ever-increasing competition on scarce spectrum resources. Dynamic Spectrum Sharing (DSS) is a promising technique in which different systems can operate on the same spectrum, thus increasing the Spectrum Efficiency (SE) and offering better coverage for the users.
The different TN/NTN systems can operate either on a licensed or an unlicensed spectrum. When operating on a licensed spectrum, the whole spectrum is primarily reserved for a single system, whereas an unlicensed spectrum may be utilized by any system. Licensed spectrum is typically auctioned by the local spectrum regulatory authority, e.g., the Federal Communications Commission (FCC) in the United States. Depending on the spectrum type, i.e., licensed/unlicensed, different DSS techniques can be utilized. For example, for the former, Licensed Spectrum Access (LSA) can be utilized, in which a Primary User (PU) is the incumbent user of the spectrum but Secondary Users (SUs) can access the spectrum when it is available. The availability of the spectrum can be deduced from a spectrum utilization database, or it can be measured with spectrum sensing techniques. Further, Concurrent Spectrum Access (CSA) can be utilized, in which the SUs may transmit concurrently with the PU, subject to limitations on transmit power. DSS for unlicensed spectrum can be achieved, e.g., by Listen Before Talk (LBT) mechanisms as in Wi-Fi.
Next, related literature is briefly surveyed. A DSS method between LEO and GEO satellites is introduced in [1]. In the scheme, one LEO satellite senses the spectrum while another (data) LEO satellite transmits based on the measurements. However, there may be difficulties and uncertainty associated with spectrum sensing approaches [2]. The authors in [3] survey database-assisted spectrum sharing in satellite communications. One of the potential spectrum sharing scenarios identified is an NTN acting as an SU of the spectrum. One of the problems identified is how to consider all the relevant information in spectrum allocations while protecting the PUs from interference and still offering enough capacity to SUs. In